arrayfire/arrayfire-rust

[BUG] arrayfire fails on Intel OneAPI OpenCL CPU Runtime and POCL OpenCL CPU Runtime

BA8F0D39 opened this issue · 9 comments

Description

Arrayfire version: (3, 8, 0)
Name: Intel(R)_Core(TM)i5-8400_CPU@ 2.80GHz
Platform: OpenCL
Toolkit: Intel(R) CPU Runtime for OpenCL(TM) Applications
Compute: 2.1
Revision: d86edd18

Platform Name Intel(R) CPU Runtime for OpenCL(TM) Applications
Number of devices 1
Device Name Intel(R) Core(TM) i5-8400 CPU @ 2.80GHz
Device Vendor Intel(R) Corporation
Device Vendor ID 0x8086
Device Version OpenCL 2.1 (Build 0)
Driver Version 18.1.0.0920
Device OpenCL C Version OpenCL C 2.0
Device Type CPU
Device Profile FULL_PROFILE
Device Available Yes
Compiler Available Yes
Linker Available Yes
Max compute units 6
Max clock frequency 2800MHz
Device Partition (core)
Max number of sub-devices 6
Supported partition types by counts, equally, by names (Intel)
Supported affinity domains (n/a)
Max work item dimensions 3
Max work item sizes 8192x8192x8192
Max work group size 8192
Preferred work group size multiple 128
Max sub-groups per work group 1
Preferred / native vector sizes
char 1 / 32
short 1 / 16
int 1 / 8
long 1 / 4
half 0 / 0 (n/a)
float 1 / 8
double 1 / 4 (cl_khr_fp64)
Half-precision Floating-point support (n/a)
Single-precision Floating-point support (core)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero No
Round to infinity No
IEEE754-2008 fused multiply-add No
Support is emulated in software No
Correctly-rounded divide and sqrt operations No
Double-precision Floating-point support (cl_khr_fp64)
Denormals Yes
Infinity and NANs Yes
Round to nearest Yes
Round to zero Yes
Round to infinity Yes
IEEE754-2008 fused multiply-add Yes
Support is emulated in software No
Address bits 64, Little-Endian
Global memory size 33594986496 (31.29GiB)
Error Correction support No
Max memory allocation 8398746624 (7.822GiB)
Unified memory for Host and Device Yes
Shared Virtual Memory (SVM) capabilities (core)
Coarse-grained buffer sharing Yes
Fine-grained buffer sharing Yes
Fine-grained system sharing Yes
Atomics Yes
Minimum alignment for any data type 128 bytes
Alignment of base address 1024 bits (128 bytes)
Preferred alignment for atomics
SVM 64 bytes
Global 64 bytes
Local 0 bytes
Max size for global variable 65536 (64KiB)
Preferred total size of global vars 65536 (64KiB)
Global Memory cache type Read/Write
Global Memory cache size 262144 (256KiB)
Global Memory cache line size 64 bytes
Image support Yes
Max number of samplers per kernel 480
Max size for 1D images from buffer 524921664 pixels
Max 1D or 2D image array size 2048 images
Base address alignment for 2D image buffers 64 bytes
Pitch alignment for 2D image buffers 64 pixels
Max 2D image size 16384x16384 pixels
Max 3D image size 2048x2048x2048 pixels
Max number of read image args 480
Max number of write image args 480
Max number of read/write image args 480
Max number of pipe args 16
Max active pipe reservations 43690
Max pipe packet size 1024
Local memory type Global
Local memory size 32768 (32KiB)
Max number of constant args 480
Max constant buffer size 131072 (128KiB)
Max size of kernel argument 3840 (3.75KiB)
Queue properties (on host)
Out-of-order execution Yes
Profiling Yes
Local thread execution (Intel) Yes
Queue properties (on device)
Out-of-order execution Yes
Profiling Yes
Preferred size 4294967295 (4GiB)
Max size 4294967295 (4GiB)
Max queues on device 4294967295
Max events on device 4294967295
Prefer user sync for interop No
Profiling timer resolution 1ns
Execution capabilities
Run OpenCL kernels Yes
Run native kernels Yes
Sub-group independent forward progress No
IL version SPIR-V_1.0
SPIR versions 1.2
printf() buffer size 1048576 (1024KiB)
Built-in kernels (n/a)
Device Extensions cl_khr_icd cl_khr_global_int32_base_atomics cl_khr_global_int32_extended_atomics cl_khr_local_int32_base_atomics cl_khr_local_int32_extended_atomics cl_khr_byte_addressable_store cl_khr_depth_images cl_khr_3d_image_writes cl_intel_exec_by_local_thread cl_khr_spir cl_khr_fp64 cl_khr_image2d_from_buffer cl_intel_vec_len_hint


	let val_cpu: Vec<f32> = vec![1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0];
	let val = arrayfire::Array::new(&val_cpu, arrayfire::Dim4::new(&[val_cpu.len() as u64, 1, 1, 1]));

	let key_cpu: Vec<i32> = vec![0, 0, 1, 1, 1, 0, 0, 2, 2];
	let key = arrayfire::Array::new(&key_cpu, arrayfire::Dim4::new(&[key_cpu.len() as u64, 1, 1, 1]));

	let (_, val_ret) = arrayfire::sum_by_key::<i32, f32>(&key, &val, 0);

	arrayfire::print_gen("val_ret".to_string(), &val_ret, Some(6));

arrayfire::sum_by_key crashes on the OpenCL backend on an Intel CPU.

However, arrayfire::sum_by_key works on the CUDA backend.

arrayfire::sum_by_key also produces errors for large arrays on Intel OpenCL and on POCL (AMD CPU).

What are the errors? Could you please share the output of the program with the AF_TRACE environment variable set to all?

@BA8F0D39 Weird, I can't see your reply on the GitHub web UI at all, although I could see it in my email.

In any case, the error is quite evident, isn't it?

thread 'main' panicked at 'Error message: One of the function arguments is incorrect
Last error: In function reduce_promote_by_key
In file src/api/c/reduce.cpp:377
Invalid argument at index 2
Expected: kinfo.isVector()
0# 0x00007FE318930134 in /opt/array

The key array can't be two-dimensional: the keys identify unique elements along a single dimension, so the key array must be a vector. This is not a bug; it is the expected input for the *_by_key functions.
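For reference, if sum_by_key follows thrust-style reduce_by_key semantics (each run of consecutive equal keys forms one segment), the expected result for the keys and values above can be sketched in plain Rust. This is a host-side reference with no ArrayFire dependency; the function name is my own, not ArrayFire API:

```rust
// Plain-Rust sketch of reduce-by-key semantics: sums each run of
// consecutive equal keys into one output element.
fn sum_by_key_ref(keys: &[i32], vals: &[f32]) -> (Vec<i32>, Vec<f32>) {
    let mut out_keys: Vec<i32> = Vec::new();
    let mut out_vals: Vec<f32> = Vec::new();
    for (k, v) in keys.iter().zip(vals) {
        match out_keys.last() {
            // Same key as the previous element: accumulate into current segment.
            Some(&last) if last == *k => *out_vals.last_mut().unwrap() += v,
            // New segment starts here.
            _ => {
                out_keys.push(*k);
                out_vals.push(*v);
            }
        }
    }
    (out_keys, out_vals)
}

fn main() {
    let keys = [0, 0, 1, 1, 1, 0, 0, 2, 2];
    let vals = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0, 9.0];
    let (k, v) = sum_by_key_ref(&keys, &vals);
    // Segments: [1,2] [3,4,5] [6,7] [8,9]
    println!("{:?} {:?}", k, v); // [0, 1, 0, 2] [3.0, 12.0, 13.0, 17.0]
}
```

Note that the key 0 appears twice in the output because its two runs are not adjacent; the keys are not globally grouped.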

@9prady9

It seems my code has a race condition.

	let contents = fs::read_to_string(filename).expect("error"); // Read data from file
	let v0: Vec<f64> = parseData(contents); // Parse string into a vector
	let mut a0 = arrayfire::Array::new(&v0, arrayfire::Dim4::new(&[v0.len() as u64, 1, 1, 1])); // Copy vector into an ArrayFire array

	a0 = arrayfire::sigmoid(&a0); // Segfaults here
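As a sanity check, the device result can be compared against a host-side reference; a minimal plain-Rust sketch of the logistic sigmoid, 1 / (1 + e^(-x)), with no ArrayFire dependency:

```rust
// Host-side reference for the logistic sigmoid, useful for spot-checking
// suspect device results element by element.
fn sigmoid(x: f64) -> f64 {
    1.0 / (1.0 + (-x).exp())
}

fn main() {
    println!("{}", sigmoid(0.0)); // 0.5
    println!("{}", sigmoid(2.0)); // ~0.880797
}
```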

It seems arrayfire::Array::new is asynchronous and returns the array a0 before the data in a0 is valid.
a0 sometimes holds invalid data, which causes functions applied to it to fail.

Is arrayfire::Array::new synchronous or asynchronous?

@BA8F0D39 Array creation from host data is synchronous: the CUDA backend synchronizes on the stream it called cudaMemcpyAsync on, and the OpenCL backend does a blocking enqueueWriteBuffer. So I doubt the input buffer is going out of scope here. The issue must be something else.

@9prady9
I finally figured it out. It wasn't a bug in the ArrayFire code but a bug in the Intel OpenCL implementation.
On the CUDA and CPU backends, there aren't any bugs.

On the Intel OpenCL implementation, many kernels produce weird errors.


	let v0: Vec<f64> = vec![-2.6, -7.4, -3.1, -6.1, -7.0, -3.0];
	let mut a0 = arrayfire::Array::new(&v0, arrayfire::Dim4::new(&[2, 3, 1, 1]));

	arrayfire::print_gen("a0".to_string(), &a0, Some(6));

	let v1: Vec<f64> = vec![-4.4, -9.4, -12.4, -4.5, -9.5, -12.5, -4.6, -9.6, -12.6];
	let a1 = arrayfire::Array::new(&v1, arrayfire::Dim4::new(&[3, 3, 1, 1]));

	arrayfire::print_gen("a1".to_string(), &a1, Some(6));

	a0 = arrayfire::join(0, &a0, &a1);

	arrayfire::print_gen("a0".to_string(), &a0, Some(6));

Arrayfire version: (3, 8, 0)
Name: Intel(R)_Core(TM)i5-8400_CPU@ 2.80GHz
Platform: OpenCL
Toolkit: Intel(R) OpenCL
Compute: 2.1
Revision: d86edd18
a0
[2 3 1 1]
-2.600000 -3.100000 -7.000000
-7.400000 -6.100000 -3.000000

a1
[3 3 1 1]
-4.400000 -4.500000 -4.600000
-9.400000 -9.500000 -9.600000
-12.400000 -12.500000 -12.600000

a0
[5 3 1 1]
-2.600000 -4.500000 -4.600000
-7.400000 -9.500000 -9.600000
-4.400000 -4.500000 -4.600000
-9.400000 -9.500000 -9.600000
-12.400000 -12.500000 -12.600000
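The first two rows of the joined array are wrong: they repeat columns from a1 instead of keeping a0's columns. For comparison, a plain-Rust sketch of a column-major join along dim 0 (the function name is my own, not ArrayFire API) shows what the correct output should look like:

```rust
// Column-major concatenation along dim 0 (rows), mirroring what
// arrayfire::join(0, &a, &b) should produce for 2-D arrays.
fn join_dim0(a: &[f64], a_rows: usize, b: &[f64], b_rows: usize, cols: usize) -> Vec<f64> {
    let mut out = Vec::with_capacity((a_rows + b_rows) * cols);
    for c in 0..cols {
        // Each output column is a's column followed by b's column.
        out.extend_from_slice(&a[c * a_rows..(c + 1) * a_rows]);
        out.extend_from_slice(&b[c * b_rows..(c + 1) * b_rows]);
    }
    out
}

fn main() {
    let a = [-2.6, -7.4, -3.1, -6.1, -7.0, -3.0]; // dims [2 3], column-major
    let b = [-4.4, -9.4, -12.4, -4.5, -9.5, -12.5, -4.6, -9.6, -12.6]; // dims [3 3]
    let joined = join_dim0(&a, 2, &b, 3, 3); // dims [5 3]
    // Correct first column: [-2.6, -7.4, -4.4, -9.4, -12.4]
    println!("{:?}", &joined[..5]);
}
```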

What OpenCL implementation works for 3.8.0 and 3.8.1?
intel-oneapi-runtime-opencl 2021.2.0-610 and POCL 1.7 don't seem to work at all.

Which OpenCL runtime do you recommend?

Also, the CPU backend with openblas only uses 1 thread.

@BA8F0D39 Sorry about the delay.

We haven't tried the oneAPI Intel OpenCL runtime ourselves yet, although I would say we have had quite a few issues with the Intel OpenCL runtime on Windows. I am not aware of the current status of those bugs. I would suggest using an OpenCL runtime that has passed a good majority of the OpenCL conformance tests. I would have to get back to you on which one is better for Intel devices.

As far as openblas and the CPU backend go, I would suggest raising a query on the openblas project directly, because we essentially delegate calls to openblas (or any other BLAS implementation) with the necessary inputs. Parallelizing the execution depends entirely on the respective BLAS upstream, in this case openblas. In general, though, the CPU backend carries out serial execution on a separate thread - we use that separate thread to make the API consistently asynchronous across all backends.
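On the thread-count point: OpenBLAS's pool size is typically capped by the OPENBLAS_NUM_THREADS environment variable (a real OpenBLAS setting), so it is worth checking what your process sees. A sketch, with `env` standing in for your own arrayfire binary:

```shell
# OPENBLAS_NUM_THREADS caps OpenBLAS's internal thread pool.
# Replace `env | grep ...` with your actual arrayfire application.
OPENBLAS_NUM_THREADS=6 env | grep '^OPENBLAS_NUM_THREADS'
```

Whether a given call parallelizes at all still depends on openblas itself, as noted above.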

@BA8F0D39 I would suggest using this Intel OpenCL runtime for CPU. You need to fill out a form with your details to download it. I think it is an old runtime, though.

@9prady9
I gave up on Intel OpenCL and switched to POCL, since Intel OpenCL has more issues than POCL does.