[BUG] Getting STATUS_STACK_BUFFER_OVERRUN on any operations
RReverser opened this issue · 19 comments
Description
Details regarding the bug
Any operation, including a simple arrayfire::info() or Array::new(...), seems to take a very long time and eventually fails with:
error: process didn't exit successfully: `target\debug\examples\af.exe` (exit code: 0xc0000409, STATUS_STACK_BUFFER_OVERRUN)
Did you build ArrayFire yourself or did you use the official installers?
I used the official Windows installer for ArrayFire 3.8 with CUDA 11.2 from here: https://arrayfire.s3.amazonaws.com/3.8.0/ArrayFire-v3.8.0-CUDA-11.2.exe.
Which backend is experiencing this issue? (CPU, CUDA, OpenCL)
It's happening on CUDA backend.
Actually, while trying to see which backend is experiencing this issue, I noticed that the docs don't seem to match reality. The docs on Backend suggest that the default / first-choice backend would be OpenCL, falling back to the others. However, if I set it explicitly via arrayfire::set_backend(Backend::OPENCL), then everything works, so I guess the choice is made differently?
Do you have a workaround?
Yes, I can set backend to OPENCL or CPU and then everything works.
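For anyone else hitting this, the workaround can be generalized into a small fallback routine. The following is a plain-Rust sketch of the idea only; `Backend` and the `probe` closure here are stand-ins for the arrayfire crate's real API (e.g. `arrayfire::set_backend`), not its actual types:

```rust
// Plain-Rust sketch of a backend-fallback strategy (no arrayfire dependency).
// `Backend` and `probe` are hypothetical stand-ins for the real crate's API.
#[derive(Debug, Clone, Copy, PartialEq)]
enum Backend {
    Cuda,
    OpenCl,
    Cpu,
}

/// Return the first backend in `preference` that passes the `probe` check,
/// or `None` if none of them work.
fn pick_backend(preference: &[Backend], probe: impl Fn(Backend) -> bool) -> Option<Backend> {
    preference.iter().copied().find(|&b| probe(b))
}
```

With the real crate, `probe` would be replaced by an actual attempt to select and use each backend; the point is simply to try the preferred backend first and fall down the list instead of hard-failing.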
Can the bug be reproduced reliably on your system?
Yes, every time with every API I tried.
A clear and concise description of what you expected to happen.
I expected all of those calls to succeed.
Run your executable with AF_TRACE=all and AF_PRINT_ERRORS=1 environment variables set.
AF_PRINT_ERRORS=1 doesn't help / doesn't print anything.
AF_TRACE=all produces the following output:
[unified][1615582233][027972] [ ..\src\api\unified\symbol_manager.cpp(141) ] Attempting: Default System Paths
[unified][1615582233][027972] [ ..\src\api\unified\symbol_manager.cpp(144) ] Found: afcpu.dll
[unified][1615582233][027972] [ ..\src\api\unified\symbol_manager.cpp(151) ] Device Count: 1.
[unified][1615582233][027972] [ ..\src\api\unified\symbol_manager.cpp(141) ] Attempting: Default System Paths
[unified][1615582233][027972] [ ..\src\api\unified\symbol_manager.cpp(144) ] Found: afopencl.dll
[platform][1615582234][027972] [ ..\src\backend\common\DependencyModule.cpp(99) ] Attempting to load: forge.dll
[platform][1615582234][027972] [ ..\src\backend\common\DependencyModule.cpp(102) ] Found: forge.dll
[platform][1615582234][027972] [ ..\src\backend\opencl\device_manager.cpp(218) ] Found 2 OpenCL platforms
[platform][1615582234][027972] [ ..\src\backend\opencl\device_manager.cpp(230) ] Found 1 devices on platform NVIDIA CUDA
[platform][1615582234][027972] [ ..\src\backend\opencl\device_manager.cpp(235) ] Found device GeForce MX150 on platform NVIDIA CUDA
[platform][1615582234][027972] [ ..\src\backend\opencl\device_manager.cpp(230) ] Found 1 devices on platform Intel(R) OpenCL HD Graphics
[platform][1615582234][027972] [ ..\src\backend\opencl\device_manager.cpp(235) ] Found device Intel(R) UHD Graphics 620 on platform Intel(R) OpenCL HD Graphics
[platform][1615582234][027972] [ ..\src\backend\opencl\device_manager.cpp(240) ] Found 2 OpenCL devices
[platform][1615582235][027972] [ ..\src\backend\opencl\device_manager.cpp(335) ] Default device: 0
[unified][1615582235][027972] [ ..\src\api\unified\symbol_manager.cpp(151) ] Device Count: 2.
[unified][1615582235][027972] [ ..\src\api\unified\symbol_manager.cpp(141) ] Attempting: Default System Paths
[unified][1615582235][027972] [ ..\src\api\unified\symbol_manager.cpp(144) ] Found: afcuda.dll
[unified][1615582235][027972] [ ..\src\api\unified\symbol_manager.cpp(151) ] Device Count: 1.
[unified][1615582235][027972] [ ..\src\api\unified\symbol_manager.cpp(206) ] AF_DEFAULT_BACKEND: cuda
[platform][1615582235][027972] [ ..\src\backend\common\DependencyModule.cpp(99) ] Attempting to load: forge.dll
[platform][1615582235][027972] [ ..\src\backend\common\DependencyModule.cpp(102) ] Found: forge.dll
[platform][1615582235][027972] [ ..\src\backend\cuda\device_manager.cpp(468) ] CUDA Driver supports up to CUDA 11.0 ArrayFire CUDA Runtime 11.2
(...stuck here for a long time...)
error: process didn't exit successfully: `target\release\examples\af.exe` (exit code: 0xc0000409, STATUS_STACK_BUFFER_OVERRUN)
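For readers skimming traces like the one above, the decisive entry is the `AF_DEFAULT_BACKEND:` line, which names the backend the unified loader selected. A quick way to pull it out of a captured log (plain-Rust sketch with a hypothetical helper, not part of arrayfire):

```rust
/// Extract the backend named on the `AF_DEFAULT_BACKEND:` trace line, if any.
/// Hypothetical helper for inspecting captured AF_TRACE output.
fn default_backend_from_trace(log: &str) -> Option<&str> {
    log.lines()
        // Take the text after the marker on the first line that contains it.
        .find_map(|line| line.split("AF_DEFAULT_BACKEND:").nth(1))
        .map(str::trim)
}
```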
Reproducible Code and/or Steps
fn main() {
    arrayfire::info();
    // or: arrayfire::Array::new(&[1, 2, 3], dim4!(3));
    // or literally anything
}
System Information
Windows:
Download clinfo from https://github.com/Oblomov/clinfo
Number of platforms 2
Platform Name NVIDIA CUDA
Platform Vendor NVIDIA Corporation
Platform Version OpenCL 1.2 CUDA 11.0.228
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options
cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics
Platform Extensions function suffix NV
Platform Name Intel(R) OpenCL HD Graphics
Platform Vendor Intel(R) Corporation
Platform Version OpenCL 2.1
Platform Profile FULL_PROFILE
Platform Extensions cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_intel_subgroups cl_intel_required_subgroup_size cl_intel_subgroups_short cl_khr_spir cl_intel_accelerator cl_intel_driver_diagnostics cl_khr_priority_hints cl_khr_throttle_hints cl_khr_create_command_queue cl_intel_subgroups_char cl_intel_subgroups_long cl_khr_fp64 cl_khr_subgroups cl_khr_il_program cl_intel_spirv_device_side_avc_motion_estimation cl_intel_spirv_media_block_io cl_intel_spirv_subgroups cl_khr_spirv_no_integer_wrap_decoration cl_intel_unified_shared_memory_preview cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_intel_planar_yuv cl_intel_packed_yuv cl_intel_motion_estimation cl_intel_device_side_avc_motion_estimation cl_intel_advanced_motion_estimation cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_image2d_from_buffer cl_khr_depth_images cl_intel_media_block_io cl_khr_3d_image_writes cl_khr_gl_sharing cl_khr_gl_depth_images cl_khr_gl_event
cl_khr_gl_msaa_sharing cl_intel_dx9_media_sharing cl_khr_dx9_media_sharing cl_khr_d3d10_sharing cl_khr_d3d11_sharing cl_intel_d3d11_nv12_media_sharing cl_intel_unified_sharing cl_intel_simultaneous_sharing
Platform Extensions function suffix INTEL
Platform Host timer resolution 100ns
Platform Name NVIDIA CUDA
Number of devices 1
Device Name GeForce MX150
Device Vendor NVIDIA Corporation
Device Vendor ID 0x10de
Device Version OpenCL 1.2 CUDA
Driver Version 452.66
Device OpenCL C Version OpenCL C 1.2
Device Type GPU
Device Topology (NV) PCI-E, 0000:01:00.0
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 3
Max clock frequency 1531MHz
Compute Capability (NV) 6.1
Device Partition (core)
Max number of sub-devices 1
Supported partition types None
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 1024x1024x64
Max work group size 1024
Preferred work group size multiple (kernel) 32
Warp size (NV) 32
Preferred / native vector sizes
char 1 / 1
short 1 / 1
int 1 / 1
long 1 / 1
half 0 / 0 (n/a)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 2147483648 (2GiB)
Error Correction support No
Max memory allocation 536870912 (512MiB)
Unified memory for Host and Device No
Integrated memory (NV) No
Minimum alignment for any data type 128 bytes
Alignment of base address 4096 bits (512 bytes)
Global Memory cache type Read/Write
Global Memory cache size 147456 (144KiB)
Global Memory cache line size 128 bytes
Image support Yes
Max number of samplers per kernel 32
Max size for 1D images from buffer 268435456 pixels
Max 1D or 2D image array size 2048 images
Max 2D image size 16384x32768 pixels
Max 3D image size 16384x16384x16384 pixels
Max number of read image args 256
Max number of write image args 16
Local memory type Local
Local memory size 49152 (48KiB)
Registers per block (NV) 65536
Max number of constant args 9
Max constant buffer size 65536 (64KiB)
Max size of kernel argument 4352 (4.25KiB)
Queue properties
Out-of-order execution Yes
Profiling Yes
Prefer user sync for interop No
Profiling timer resolution 1000ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Kernel execution timeout (NV) Yes
Concurrent copy and kernel execution (NV) Yes
Number of async copy engines 1
printf() buffer size 1048576 (1024KiB)
Built-in kernels (n/a)
Device Extensions cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_fp64 cl_khr_byte_addressable_store cl_khr_icd cl_khr_gl_sharing cl_nv_compiler_options
cl_nv_device_attribute_query cl_nv_pragma_unroll cl_nv_d3d10_sharing cl_khr_d3d10_sharing cl_nv_d3d11_sharing cl_nv_copy_opts cl_nv_create_buffer cl_khr_int64_base_atomics cl_khr_int64_extended_atomics
Platform Name Intel(R) OpenCL HD Graphics
Number of devices 1
Device Name Intel(R) UHD Graphics 620
Device Vendor Intel(R) Corporation
Device Vendor ID 0x8086
Device Version OpenCL 2.1 NEO
Driver Version 27.20.100.8476
Device OpenCL C Version OpenCL C 2.0
Device Type GPU
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 24
Max clock frequency 1150MHz
Device Partition (core)
Max number of sub-devices 0
Supported partition types None
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 256x256x256
Max work group size 256
Preferred work group size multiple (kernel) 32
Max sub-groups per work group 32
Sub-group sizes (Intel) 8, 16, 32
Preferred / native vector sizes
char 16 / 16
short 8 / 8
int 4 / 4
long 1 / 1
half 8 / 8 (cl_khr_fp16)
float 1 / 1
double 1 / 1 (cl_khr_fp64)
Half-precision Floating-point support (cl_khr_fp16)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Correctly-rounded divide and sqrt operations Yes
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 3392032768 (3.159GiB)
Error Correction support No
Max memory allocation 1696016384 (1.58GiB)
Unified memory for Host and Device Yes
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing Yes
Fine-grained system sharing No
Atomics Yes
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 64 bytes
Global 64 bytes
Local 64 bytes
Max size for global variable 65536 (64KiB)
Preferred total size of global vars 1696016384 (1.58GiB)
Global Memory cache type Read/Write
Global Memory cache size 524288 (512KiB)
Global Memory cache line size 64 bytes
Image support Yes
Max number of samplers per kernel 16
Max size for 1D images from buffer 106001024 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 4 bytes
Pitch alignment for 2D image buffers 4 pixels
Max 2D image size 16384x16384 pixels
Max planar YUV image size 16384x16352 pixels
Max 3D image size 16384x16384x2048 pixels
Max number of read image args 128
Max number of write image args 128
Max number of read/write image args 128
Max number of pipe args 16
Max active pipe reservations 1
Max pipe packet size 1024
Local memory type Local
Local memory size 65536 (64KiB)
Max number of constant args 8
Max constant buffer size 1696016384 (1.58GiB)
Max size of kernel argument 1024
Queue properties (on host)
Out-of-order execution Yes
Profiling Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 131072 (128KiB)
Max size 67108864 (64MiB)
Max queues on device 1
Max events on device 1024
Prefer user sync for interop Yes
Number of simultaneous interops (Intel) 1
Simultaneous interops GL WGL D3D9 (KHR) D3D9 (INTEL) D3D9Ex (KHR) D3D9Ex (INTEL) DXVA (KHR) DXVA (INTEL) D3D10 D3D11
Profiling timer resolution 83ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels No
Sub-group independent forward progress Yes
IL version SPIR-V_1.2
SPIR versions 1.2
printf() buffer size 4194304 (4MiB)
Built-in kernels block_motion_estimate_intel;block_advanced_motion_estimate_check_intel;block_advanced_motion_estimate_bidirectional_check_intel;
Motion Estimation accelerator version (Intel) 2
Device-side AVC Motion Estimation version 1
Supports texture sampler use Yes
Supports preemption No
Device Extensions cl_khr_byte_addressable_store cl_khr_fp16 cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_icd cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_intel_subgroups cl_intel_required_subgroup_size cl_intel_subgroups_short cl_khr_spir cl_intel_accelerator cl_intel_driver_diagnostics cl_khr_priority_hints cl_khr_throttle_hints cl_khr_create_command_queue cl_intel_subgroups_char cl_intel_subgroups_long cl_khr_fp64 cl_khr_subgroups cl_khr_il_program cl_intel_spirv_device_side_avc_motion_estimation cl_intel_spirv_media_block_io cl_intel_spirv_subgroups cl_khr_spirv_no_integer_wrap_decoration cl_intel_unified_shared_memory_preview cl_khr_mipmap_image cl_khr_mipmap_image_writes cl_intel_planar_yuv cl_intel_packed_yuv cl_intel_motion_estimation cl_intel_device_side_avc_motion_estimation cl_intel_advanced_motion_estimation cl_khr_int64_base_atomics cl_khr_int64_extended_atomics cl_khr_image2d_from_buffer cl_khr_depth_images cl_intel_media_block_io cl_khr_3d_image_writes cl_khr_gl_sharing cl_khr_gl_depth_images cl_khr_gl_event
cl_khr_gl_msaa_sharing cl_intel_dx9_media_sharing cl_khr_dx9_media_sharing cl_khr_d3d10_sharing cl_khr_d3d11_sharing cl_intel_d3d11_nv12_media_sharing cl_intel_unified_sharing cl_intel_simultaneous_sharing
NULL platform behavior
clGetPlatformInfo(NULL, CL_PLATFORM_NAME, ...) No platform
clGetDeviceIDs(NULL, CL_DEVICE_TYPE_ALL, ...) No platform
clCreateContext(NULL, ...) [default] No platform
clCreateContext(NULL, ...) [other] Success [NV]
clCreateContextFromType(NULL, CL_DEVICE_TYPE_DEFAULT) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CPU) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_GPU) No platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ACCELERATOR) No devices found in platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_CUSTOM) Invalid device type for platform
clCreateContextFromType(NULL, CL_DEVICE_TYPE_ALL) No platform
NOTE: your OpenCL library only supports OpenCL 2.0,
but some installed platforms support OpenCL 2.1.
Programs using 2.1 features may crash
or behave unexpectedly
If you have NVIDIA GPUs, run nvidia-smi, usually located in C:\Program Files\NVIDIA Corporation\NVSMI.
Fri Mar 12 20:57:58 2021
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 452.66 Driver Version: 452.66 CUDA Version: 11.0 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 GeForce MX150 WDDM | 00000000:01:00.0 Off | N/A |
| N/A 56C P0 N/A / N/A | 141MiB / 2048MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 53732 C+G ...IA GeForce Experience.exe N/A |
+-----------------------------------------------------------------------------+
Checklist
- Using the latest available ArrayFire release
- GPU drivers are up to date (actually checking this now; regardless, I believe the fact that there is no human-readable error in any mode and instead it crashes internally, is a bug)
UPD: updating the driver helped, so at least that's good.
I've noticed that docs don't seem to match the reality. The docs on Backend suggest that default backend / first choice would be OpenCL, falling back to others.
@RReverser Can you please point me to this location in docs.
UPD: about driver.
I am confused, what do you mean? Did updating the driver resolve the problem? Or did it help partially but not completely? If so, can you please share how far it helped the program to progress?
Are you a Rust-only developer? If not, can you please try the C++ helloworld example from ArrayFire and let me know how that runs on your system.
@RReverser Can you please point me to this location in docs.
It's here: https://docs.rs/arrayfire/3.8.0/arrayfire/enum.Backend.html.
Default backend order: OpenCL -> CUDA -> CPU
I am confused, what do you mean? Did updating the driver resolve the problem?
Yes, it resolved the problem and ArrayFire works now (so I can't repro with C++ either), but as I said above:
regardless, I believe the fact that there is no human-readable error in any mode and instead it crashes internally, is a bug
Surely, there has to be a way to print a human-readable error in case of incompatible driver version or something like that instead of crashing.
It's here: https://docs.rs/arrayfire/3.8.0/arrayfire/enum.Backend.html.
Thanks, I will correct that.
Yes, it resolved the problem and ArrayFire works now (so I can't repro with C++ either), but as I said above:
The problem is resolved with driver update, cool.
Surely, there has to be a way to print a human-readable error in case of incompatible driver version or something like that instead of crashing.
We do some checks for driver and cuda runtime compatibility and log them too. I wonder if they are captured by cargo output 🤔
https://github.com/arrayfire/arrayfire/blob/master/src/backend/cuda/device_manager.cpp#L459
We do some checks for driver and cuda runtime compatibility and log them too. I wonder if they are captured by cargo output 🤔
https://github.com/arrayfire/arrayfire/blob/master/src/backend/cuda/device_manager.cpp#L459
I don't think it was, and either way it's probably worth encoding them as a runtime panic. Anything's better than a stack buffer overrun indicating a memory corruption in apparently safe Rust code.
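The suggestion above can be sketched as an explicit compatibility check that fails with a readable message instead of crashing. This is a plain-Rust illustration using the version numbers from the trace earlier in the thread (driver supports CUDA 11.0, build targets 11.2); it is not ArrayFire's actual code path:

```rust
/// Compare the CUDA version the driver supports against the version the
/// runtime was built for, and surface a human-readable error on mismatch.
/// Plain-Rust sketch, not ArrayFire's real implementation.
fn check_cuda_compat(driver: (u32, u32), runtime: (u32, u32)) -> Result<(), String> {
    // Tuple comparison gives lexicographic (major, minor) ordering.
    if driver < runtime {
        Err(format!(
            "driver supports up to CUDA {}.{}, but this build needs the CUDA {}.{} runtime; please update the GPU driver",
            driver.0, driver.1, runtime.0, runtime.1
        ))
    } else {
        Ok(())
    }
}
```

Surfacing this as a `Result` (or a panic with that message) would have turned the original hang-then-overrun into an immediate, actionable error.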
We do some checks for driver and cuda runtime compatibility and log them too. I wonder if they are captured by cargo output 🤔
https://github.com/arrayfire/arrayfire/blob/master/src/backend/cuda/device_manager.cpp#L459
I don't think it was, and either way it's probably worth encoding them as a runtime panic. Anything's better than a stack buffer overrun indicating a memory corruption in apparently safe Rust code.
I am not really sure that the cause of this is the ArrayFire code base. Can you tell me your driver version when it caused the issue?
Can you tell me your driver version when it caused the issue?
It's in the report above: 452.66
Can you tell me your driver version when it caused the issue?
It's in the report above: 452.66
Okay, thank you. I will check with that driver version and see if I can reproduce the issue. If I can, I shall move this issue upstream to address it correctly.
@RReverser I just realized that you are trying to use CUDA 11.2-based ArrayFire with a 450-series driver. CUDA 11.2 requires 460.82 minimum. It is not a bug; rather, the wrong driver version was being used with an ArrayFire build that was made with CUDA 11.2.
Okay. I still think it should give a better error message on version mismatch, but as it's not affecting me personally, I don't have a strong opinion on this.
That is true, it should give an error message rather than a silent seg fault. I was only letting you know that incorrect driver was the reason which I didn't realize until today when I was going through the conversation again.
I was only letting you know that incorrect driver was the reason
Oh yeah, I assumed it was the case as soon as updating the driver fixed the problem :)
@RReverser I have reviewed the issue and code once again today. We do throw an error here https://github.com/arrayfire/arrayfire/blob/master/src/backend/cuda/device_manager.cpp#L459 under this call. Oddly, somewhere in that call or in CUDA API calls, the hang is taking place. Can you please try running the code (with the old driver) through Visual Studio Debugger and share the call stack with me. I am interested in the line that is causing the hang.
@RReverser I have reviewed the issue and code once again today. We do throw an error here https://github.com/arrayfire/arrayfire/blob/master/src/backend/cuda/device_manager.cpp#L459 under this call. Oddly, somewhere in that call or in CUDA API calls, the hang is taking place. Can you please try running the code (with the old driver) through Visual Studio Debugger and share the call stack with me. I am interested in the line that is causing the hang.
@RReverser Did you get a chance to look into this ?
Can you please try running the code (with the old driver)
Yeah, sorry, as I said above I already upgraded the driver which resolved the blocker for me and wouldn't want to look for ways to downgrade it back or return to the issue at this point.
That is alright. I have checked our code base and the necessary log statements for the given scenario are present. I suspect it is the driver that is causing the hang, in which case we cannot gracefully catch the issue to give an appropriate error message.
Until we have some evidence that our log statements aren't working due to a logical error in our code base, I don't think we can investigate this further, so I am closing the issue for now. For anyone, current or future, who encounters a similar issue: if you think you have additional info to share, feel free to reopen the issue.
Thank you.
Fair enough, thanks.
Running into the same issue on Windows 10 right now. I tried updating my drivers and CUDA, and I tried both the 10.x and 11.x ArrayFire binaries, to no avail. I also tried manually setting the backend like OP, but it didn't seem to fix anything; OpenCL, CUDA, and CPU all throw STATUS_STACK_BUFFER_OVERRUN. In fact, it appears that simply running set_backend and nothing else causes an overrun.
The download link on GitHub for the clinfo binary seems to be down right now, but here's the output from nvidia-smi:
Fri Feb 4 02:19:12 2022
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 511.65 Driver Version: 511.65 CUDA Version: 11.6 |
|-------------------------------+----------------------+----------------------+
| GPU Name TCC/WDDM | Bus-Id Disp.A | Volatile Uncorr. ECC |
| Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. |
| | | MIG M. |
|===============================+======================+======================|
| 0 NVIDIA GeForce ... WDDM | 00000000:01:00.0 On | N/A |
| 0% 49C P0 126W / 350W | 1056MiB / 24576MiB | 2% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
+-----------------------------------------------------------------------------+
| Processes: |
| GPU GI CI PID Type Process name GPU Memory |
| ID ID Usage |
|=============================================================================|
| 0 N/A N/A 1580 C+G N/A |
| 0 N/A N/A 2872 C+G ...icrosoft VS Code\Code.exe N/A |
| 0 N/A N/A 5536 C+G N/A |
| 0 N/A N/A 6932 C+G ...kyb3d8bbwe\Calculator.exe N/A |
| 0 N/A N/A 7884 C+G C:\Windows\explorer.exe N/A |
| 0 N/A N/A 12184 C+G ...artMenuExperienceHost.exe N/A |
| 0 N/A N/A 12364 C+G ...ekyb3d8bbwe\YourPhone.exe N/A |
| 0 N/A N/A 12584 C+G ...5n1h2txyewy\SearchApp.exe N/A |
| 0 N/A N/A 13440 C+G ...4__htrsf667h5kn2\AWCC.exe N/A |
| 0 N/A N/A 14248 C+G ...ekyb3d8bbwe\HxOutlook.exe N/A |
| 0 N/A N/A 15588 C+G ...108.43\msedgewebview2.exe N/A |
| 0 N/A N/A 16044 C+G ...perience\NVIDIA Share.exe N/A |
| 0 N/A N/A 16160 C+G ...perience\NVIDIA Share.exe N/A |
| 0 N/A N/A 16676 C+G N/A |
| 0 N/A N/A 16712 C+G ...zilla Firefox\firefox.exe N/A |
| 0 N/A N/A 16968 C+G ...nputApp\TextInputHost.exe N/A |
| 0 N/A N/A 21680 C+G ...lack\app-4.23.0\slack.exe N/A |
| 0 N/A N/A 22944 C+G ...zilla Firefox\firefox.exe N/A |
| 0 N/A N/A 23416 C+G ...y\ShellExperienceHost.exe N/A |
| 0 N/A N/A 23752 C+G ...lPanel\SystemSettings.exe N/A |
+-----------------------------------------------------------------------------+
And here's AF_TRACE=all:
[unified][1643970448][5736] [ ..\src\api\unified\symbol_manager.cpp(141) ] Attempting: Default System Paths
[unified][1643970448][5736] [ ..\src\api\unified\symbol_manager.cpp(144) ] Found: afcpu.dll
[unified][1643970448][5736] [ ..\src\api\unified\symbol_manager.cpp(151) ] Device Count: 1.
[unified][1643970448][5736] [ ..\src\api\unified\symbol_manager.cpp(141) ] Attempting: Default System Paths
[unified][1643970448][5736] [ ..\src\api\unified\symbol_manager.cpp(144) ] Found: afopencl.dll
[platform][1643970448][5736] [ ..\src\backend\common\DependencyModule.cpp(99) ] Attempting to load: forge.dll
[platform][1643970448][5736] [ ..\src\backend\common\DependencyModule.cpp(102) ] Found: forge.dll
[platform][1643970448][5736] [ ..\src\backend\opencl\device_manager.cpp(218) ] Found 3 OpenCL platforms
[platform][1643970448][5736] [ ..\src\backend\opencl\device_manager.cpp(230) ] Found 1 devices on platform NVIDIA CUDA
[platform][1643970448][5736] [ ..\src\backend\opencl\device_manager.cpp(235) ] Found device NVIDIA GeForce RTX 3090 on platform NVIDIA CUDA
[platform][1643970448][5736] [ ..\src\backend\opencl\device_manager.cpp(230) ] Found 1 devices on platform Intel(R) OpenCL
[platform][1643970448][5736] [ ..\src\backend\opencl\device_manager.cpp(235) ] Found device 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz on platform Intel(R) OpenCL
[platform][1643970448][5736] [ ..\src\backend\opencl\device_manager.cpp(230) ] Found 1 devices on platform Experimental OpenCL 2.1 CPU Only Platform
[platform][1643970448][5736] [ ..\src\backend\opencl\device_manager.cpp(235) ] Found device 11th Gen Intel(R) Core(TM) i9-11900K @ 3.50GHz on platform Experimental OpenCL 2.1 CPU Only Platform
[platform][1643970448][5736] [ ..\src\backend\opencl\device_manager.cpp(240) ] Found 3 OpenCL devices
error: process didn't exit successfully: `target\debug\rust_playground.exe` (exit code: 0xc0000409, STATUS_STACK_BUFFER_OVERRUN)
I elected to file an issue since it seems like the root cause is different from OP's. Feel free to close the issue if it's a duplicate.