Hi,
I'm trying to convert code written in CUDA to OpenCL and have run into some trouble. My final goal is to implement the code on an Odroid XU3 board with a Mali-T628 GPU.
In order to simplify the transition and save time debugging OpenCL kernels, I've taken the following steps:
I know that different architectures may have different optimizations, but that isn't my main concern for now. I managed to run the OpenCL code on my Nvidia GPU with no apparent issues, but I keep getting strange errors when trying to run it on the Odroid board. I know that different architectures handle exceptions etc. differently, but I'm not sure how to solve those issues.
Since the OpenCL code works great on my Nvidia GPU, I assume I made the correct transition from threads/blocks to work items/work groups etc. I have already fixed several issues related to CL_DEVICE_MAX_WORK_GROUP_SIZE, so that can't be the cause. When running the code I'm getting a CL_OUT_OF_RESOURCES error.
I've narrowed the cause of the error down to two lines in the code, but I'm not sure how to fix them.
The error is caused by the following lines in the attached kernel code:
Is there any tool that can help with debugging these issues on the Odroid? I saw that using printf inside the kernel isn't possible. Is there another available command?
Thanks
Yuval
Instead of using a fixed 256 (or 128, as Anthony recommended) work items per work group, I recommend using clGetKernelWorkGroupInfo to query CL_KERNEL_WORK_GROUP_SIZE. This way you will get the largest work group that the device allows for that particular kernel (so it can change between kernels too): on AMD you will probably get 256, on Nvidia 1024, on Intel GPUs 512 (CPUs can vary vastly), and on Mali it can vary between 64/128/256 depending on how complex your kernels are (more register usage per work item = smaller max work group). On Qualcomm Adreno it can also be any multiple of 16 (I constantly get annoyed by a work group of 80 or 192 threads, so I have to fix the code not to assume a power-of-two work group size). You're out of luck if the algorithm requires a predetermined work-group size, but that should be rare.
Hi lrdxgm,
What you say is mostly true; however, if your kernel is ALU-bound, you will benefit from forcing the local work-group size to 128, because the extra memory accesses caused by register spilling will be hidden by the ALU operations and GPU utilisation will be much better, resulting in better performance.
Hope this makes sense.