Hi,
I wish to allocate a vector and use its data pointer to create a zero-copy buffer on the GPU. There is the cl_arm_import_memory extension which can be used to do this, but I am not sure whether it is supported by all Mali Midgard OpenCL drivers.
I was going through this link and I am quite puzzled by the following lines:

> If the extension string cl_arm_import_memory_host is exposed then importing from normal userspace allocations (such as those created via malloc) is supported.

What exactly do these lines mean? I am specifically working on Rockchip's RK3399 boards. Kindly help.
Right, calling clEnqueueMapBuffer on an imported buffer is forbidden by the spec but some older versions of the driver didn't reject the call. If the clImportMemoryARM function returned successfully, then your platform has support and I suggest you do the following:
- Use a custom allocator with your std::vectors that guarantees the memory is aligned to 64 bytes (cache line alignment, as per the extension specification).
- Import the pointer returned by vec.data() using clImportMemoryARM with CL_IMPORT_TYPE_HOST_ARM.
This obviously only works if the allocation backing the vector doesn't change after the import into OpenCL, so you need to reserve enough space upfront (e.g. push_back may end up reallocating). As for data consistency, there is nothing you need to do; it's all managed by the OpenCL runtime.
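For illustration, something along these lines should work (untested sketch, error handling omitted; aligned_allocator and make_imported_buffer are just illustrative names, not part of the extension):

```cpp
// Untested sketch: 64-byte aligned allocator + importing the vector's storage.
// CL/cl_ext.h declares clImportMemoryARM and CL_IMPORT_TYPE_HOST_ARM on ARM platforms.
#include <CL/cl_ext.h>
#include <cstdlib>   // posix_memalign (POSIX; fine on Linux / RK3399)
#include <new>
#include <vector>

template <typename T>
struct aligned_allocator {
    using value_type = T;
    static constexpr std::size_t alignment = 64;  // cache-line alignment required by the extension

    aligned_allocator() = default;
    template <typename U>
    aligned_allocator(const aligned_allocator<U>&) noexcept {}

    T* allocate(std::size_t n) {
        void* p = nullptr;
        if (posix_memalign(&p, alignment, n * sizeof(T)) != 0)
            throw std::bad_alloc();
        return static_cast<T*>(p);
    }
    void deallocate(T* p, std::size_t) noexcept { std::free(p); }
};
template <typename T, typename U>
bool operator==(const aligned_allocator<T>&, const aligned_allocator<U>&) noexcept { return true; }
template <typename T, typename U>
bool operator!=(const aligned_allocator<T>&, const aligned_allocator<U>&) noexcept { return false; }

// Illustrative helper: wrap the vector's storage in a cl_mem without copying.
// The vector must not reallocate (reserve/resize it first) while the cl_mem is in use.
cl_mem make_imported_buffer(cl_context ctx,
                            std::vector<float, aligned_allocator<float>>& vec,
                            cl_int* err)
{
    const cl_import_properties_arm props[] = {
        CL_IMPORT_TYPE_ARM, CL_IMPORT_TYPE_HOST_ARM, 0
    };
    return clImportMemoryARM(ctx, CL_MEM_READ_WRITE, props,
                             vec.data(), vec.size() * sizeof(float), err);
}
```

The same idea works for other element types; the key constraints are the 64-byte alignment and keeping the allocation alive and unchanged for the lifetime of the cl_mem.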
Regards,
Kévin
Thanks Kevin for the prompt reply. Really appreciate it.
Hi, I am seeing a performance difference between allocating a cl_mem with cl_arm_import_memory and allocating it with CL_MEM_ALLOC_HOST_PTR. The kernel execution time decreases by 10% when the buffer is allocated by passing the CL_MEM_ALLOC_HOST_PTR flag to clCreateBuffer(). Is this the expected behaviour, and is there any workaround for it?
This is expected behaviour. What you are likely measuring (I can confirm if you tell me exactly how you're measuring this) is the cost of maintaining data consistency between the CPU and GPU.
Conceptually, running a kernel on imported host memory has roughly the same cost as unmapping a buffer, running the kernel and mapping the buffer on the CPU again.
You can reduce that cost to a minimum by batching kernels into as few flush groups as possible. Later drivers are better at this.
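To make that analogy concrete, here is a rough sketch (untested, error checking omitted) of the equivalent CL_MEM_ALLOC_HOST_PTR flow: map for CPU writes, unmap, enqueue the batch of kernels, synchronise once, then map again to read the results. The kernels, queue, context and size n are assumed to exist already, and fill_input/consume_output are hypothetical CPU-side functions:

```cpp
// Untested sketch: CL_MEM_ALLOC_HOST_PTR buffer with explicit map/unmap around a batch of kernels.
// 'ctx', 'queue', 'kernelA', 'kernelB' and the element count 'n' (size_t) are assumed to exist.
cl_int err = CL_SUCCESS;
const size_t bytes = n * sizeof(float);

cl_mem buf = clCreateBuffer(ctx, CL_MEM_READ_WRITE | CL_MEM_ALLOC_HOST_PTR,
                            bytes, nullptr, &err);

// Map to fill the buffer from the CPU (zero-copy on Mali: this returns a pointer
// into the same allocation, no copy is made).
float* host = static_cast<float*>(
    clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_WRITE, 0, bytes,
                       0, nullptr, nullptr, &err));
fill_input(host, n);          // hypothetical CPU-side initialisation
clEnqueueUnmapMemObject(queue, buf, host, 0, nullptr, nullptr);

// Batch the kernels and synchronise once, rather than flushing after every enqueue.
clSetKernelArg(kernelA, 0, sizeof(cl_mem), &buf);
clEnqueueNDRangeKernel(queue, kernelA, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
clSetKernelArg(kernelB, 0, sizeof(cl_mem), &buf);
clEnqueueNDRangeKernel(queue, kernelB, 1, nullptr, &n, nullptr, 0, nullptr, nullptr);
clFinish(queue);              // one synchronisation point for the whole batch

// Map again to read the results on the CPU.
host = static_cast<float*>(
    clEnqueueMapBuffer(queue, buf, CL_TRUE, CL_MAP_READ, 0, bytes,
                       0, nullptr, nullptr, &err));
consume_output(host, n);      // hypothetical CPU-side use of the results
clEnqueueUnmapMemObject(queue, buf, host, 0, nullptr, nullptr);
```

This is only meant to illustrate the analogy above; with imported host memory the runtime performs roughly the equivalent consistency work around each flush, and the exact cache maintenance is implementation-specific.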
> Conceptually, running a kernel on imported host memory has roughly the same cost as unmapping a buffer, running the kernel and mapping the buffer on the CPU again.
Hi Kevin, can you explain this in more detail, such as why the unmap and map are needed?