
Optimised GPU convolution for low-memory integrated devices, such as Arm processors/GPUs?

I wish to implement convolution on Arm Mali GPUs, optimised for both speed and memory. What's the best way to do this? GEMM-based MCMK convolutions are not suitable because they use a lot of memory. Also, a direct implementation on the GPU is much slower than the corresponding CPU version. Any time spent on memory reshaping should be included in the timing measurements.
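
For context, here is a minimal NumPy sketch of what a GEMM-based (im2col) convolution does; the shapes and function names are purely illustrative and not tied to any Mali library. It shows where the memory cost comes from: the im2col buffer duplicates each input value roughly K×K times before the single GEMM runs.

```python
# Minimal sketch of GEMM-based (im2col) convolution. All names and shapes
# are illustrative assumptions, not from any particular library.
import numpy as np

def im2col_conv(x, w, stride=1):
    """x: input of shape (C, H, W); w: weights of shape (M, C, K, K)."""
    C, H, W = x.shape
    M, _, K, _ = w.shape
    out_h = (H - K) // stride + 1
    out_w = (W - K) // stride + 1

    # im2col buffer: every output position gets its own copy of a C*K*K patch,
    # so the input is duplicated roughly K*K times -- this is the memory
    # overhead that hurts on low-memory integrated devices.
    cols = np.empty((C * K * K, out_h * out_w), dtype=x.dtype)
    idx = 0
    for i in range(out_h):
        for j in range(out_w):
            patch = x[:, i*stride:i*stride+K, j*stride:j*stride+K]
            cols[:, idx] = patch.ravel()
            idx += 1

    # The convolution itself collapses into one GEMM:
    # (M, C*K*K) x (C*K*K, out_h*out_w)
    y = w.reshape(M, -1) @ cols
    return y.reshape(M, out_h, out_w)

# Example: a 3x3 kernel over a 64-channel 56x56 feature map inflates the
# input ~8-9x in the im2col buffer before the GEMM even starts.
x = np.random.rand(64, 56, 56).astype(np.float32)
w = np.random.rand(128, 64, 3, 3).astype(np.float32)
y = im2col_conv(x, w)
print(y.shape)                          # (128, 54, 54)
print(64 * 3 * 3 * 54 * 54 * 4 / x.nbytes)  # im2col buffer vs input, ~8.4x
```

The reshaping loop above is exactly the kind of memory-reshaping time that should be counted in any speed comparison, since on a real device it costs both bandwidth and working-set space.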