Hello everyone,
Based on the OpenCL developer guide for the Mali-T600 series GPUs, CL_MEM_ALLOC_HOST_PTR should be used to avoid data copies and improve performance.
Today I was measuring memory transfer time on an Arndale board, which has a Mali-T604 GPU. I compared CL_MEM_ALLOC_HOST_PTR against clEnqueueWriteBuffer and found that overall I do not get much performance improvement from CL_MEM_ALLOC_HOST_PTR, because clEnqueueMapBuffer takes almost the same time as clEnqueueWriteBuffer.
The test was done on a vector addition kernel.
How I tested:
Instead of creating a pointer with malloc and transferring the data to the device, I first created a buffer with CL_MEM_ALLOC_HOST_PTR and then mapped it with clEnqueueMapBuffer. This returns a pointer, and I filled the memory it points to with data. The mapping step itself takes time, almost as long as clEnqueueWriteBuffer, so in this example I did not get any significant improvement from CL_MEM_ALLOC_HOST_PTR.
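To make the flow clearer, here is a stripped-down sketch of the mapped path (variable names simplified from the attached code, error checking removed, element count taken from the test below):

#include <CL/cl.h>

/* Sketch: allocate a buffer whose backing memory the driver can share with
   the host, map it, and fill it on the CPU. Error checks omitted; context
   and queue are created as in the attached code. */
cl_int err;
size_t num_elements = 10000000;
size_t bytes = num_elements * sizeof(cl_float);

/* Let the driver allocate host-accessible memory for the buffer. */
cl_mem bufA = clCreateBuffer(context, CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR,
                             bytes, NULL, &err);

/* Map the buffer to obtain a host pointer; this is the step whose time
   I compare against clEnqueueWriteBuffer. */
cl_float *ptrA = (cl_float *)clEnqueueMapBuffer(queue, bufA, CL_TRUE,
                                                CL_MAP_WRITE, 0, bytes,
                                                0, NULL, NULL, &err);

/* Fill the mapped memory with the input data. */
for (size_t i = 0; i < num_elements; ++i)
    ptrA[i] = (cl_float)i;

/* Unmap before the kernel uses the buffer. */
clEnqueueUnmapMemObject(queue, bufA, ptrA, 0, NULL, NULL);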
My question is: why is the mapping time so large when I use CL_MEM_ALLOC_HOST_PTR?
Here are the performance measurements:
Element size: 10000000, kernel: vector addition, all times in microseconds
56987
I have also attached the three versions of the vector addition code to this post for your kind review.
Hi,
1. It is still the wrong way of doing things: in a real-life application you would allocate your buffers at initialisation time and then re-use them; you wouldn't free and allocate new ones every frame, because that would be really inefficient.
Your test application should be as close as possible to a real application; there isn't much value in benchmarking the initialisation and destruction of objects, as that is not what will affect your application's performance.
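As a rough illustration (the names here are placeholders, not taken from your attached code), the structure of a real application looks more like this:

#include <CL/cl.h>

/* Sketch: create the buffer once at initialisation, then map, fill, unmap
   and run the kernel for every frame; release the buffer only at shutdown.
   run_kernel() is a placeholder for enqueueing your vector-addition kernel;
   error checks omitted. */
void run_frames(cl_context ctx, cl_command_queue queue,
                size_t num_elements, int num_frames)
{
    cl_int err;
    size_t bytes = num_elements * sizeof(cl_float);

    /* Initialisation: allocate once. */
    cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_ONLY | CL_MEM_ALLOC_HOST_PTR,
                                bytes, NULL, &err);

    for (int frame = 0; frame < num_frames; ++frame) {
        /* Per frame: map, write the new input data, unmap, run. */
        cl_float *ptr = (cl_float *)clEnqueueMapBuffer(queue, buf, CL_TRUE,
                                                       CL_MAP_WRITE, 0, bytes,
                                                       0, NULL, NULL, &err);
        for (size_t i = 0; i < num_elements; ++i)
            ptr[i] = (cl_float)(frame + i);
        clEnqueueUnmapMemObject(queue, buf, ptr, 0, NULL, NULL);

        /* run_kernel(queue, buf, ...); -- placeholder for the kernel launch */
        clFinish(queue);
    }

    /* Shutdown: release once. */
    clReleaseMemObject(buf);
}

This way the allocation and destruction cost is paid once, and only the per-frame map/unmap and kernel time affect performance.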
1/2: My guess (it's only a guess, though) is that clEnqueueWriteBuffer is faster because it relies on memcpy, which pushes the CPU governor up.
Indeed, the mapping is done by the CPU, and its cost depends on the memory bus frequency, which is set by the same governor as the CPU; however, that governor only takes CPU activity into account.
So if the CPU was sitting idle while waiting for the GPU to complete a job, it will have been clocked down, and so will the memory bus; as a result the cache maintenance will be slow.
Ideally the governor would monitor memory bus activity too and clock both the CPU and the memory bus back up when needed; unfortunately this is not currently the case in the Linux kernel.
So the workaround is to set the CPU governor to "performance" so that both the CPU and memory bus stay clocked up.
3. cpufreq is the standard way of controlling the CPU frequency in the Linux kernel: https://www.kernel.org/doc/Documentation/cpu-freq/governors.txt
And here is an example of how to set the governor from userspace: CPU frequency scaling in Linux | iDebian's Weblog
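For example, from a shell you would normally write "performance" into /sys/devices/system/cpu/cpuN/cpufreq/scaling_governor for each core. If you want to do it from your test program instead, a minimal sketch (assuming the standard sysfs layout, a fixed core count, and root permissions) would be:

#include <stdio.h>

/* Sketch: set each CPU's cpufreq governor to "performance" via sysfs.
   Assumes the usual /sys/devices/system/cpu/cpuN/cpufreq layout and that
   the process runs as root; cores that don't exist are simply skipped. */
int main(void)
{
    char path[128];
    for (int cpu = 0; cpu < 8; ++cpu) {   /* adjust to your core count */
        snprintf(path, sizeof(path),
                 "/sys/devices/system/cpu/cpu%d/cpufreq/scaling_governor", cpu);
        FILE *f = fopen(path, "w");
        if (!f)
            continue;
        fputs("performance", f);
        fclose(f);
    }
    return 0;
}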
Hope this helps,
Anthony