
Mali texture allocation transformation from OpenGL to GPU internal format

Is there any transformation/conversion when I allocate a texture from OpenGL application user space into some GPU internal format?

And is it possible to disable this transformation?

For a 16-, 24-, or 32-bit texture?

Are there any disadvantages?

For example, if I allocate:

some application memory block to the OpenGL API,

some application memory block to the EGL API,

a DMABUF to EGL,

or UMP to EGL?
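For reference, a minimal sketch of the first path in the list above, assuming it means a plain glTexImage2D upload from client memory under an OpenGL ES 2.0 context (the function name and parameters are illustrative, not from this thread). On this path the driver makes its own GPU-side copy of the pixels and may reorder them into whatever internal layout it prefers; the GL API does not expose a way to control or disable that layout conversion.

```c
/* Illustrative sketch: the "application memory block to the OpenGL API" path.
 * Assumes a current OpenGL ES 2.0 context; names and sizes are placeholders. */
#include <GLES2/gl2.h>
#include <stdlib.h>

GLuint upload_rgba_texture(int width, int height)
{
    /* Application-owned pixel data, linear RGBA8888 in user space. */
    unsigned char *pixels = calloc((size_t)width * height, 4);

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

    /* The driver copies from 'pixels' here; the client buffer can be freed
     * afterwards, and the GPU-side copy may live in a different internal
     * (e.g. tiled/swizzled) format. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    free(pixels);
    return tex;
}
```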

Parents
  • I am curious about the case where we use DMABUF/UMP for a video decoder's OUTPUT (big, frequently changing textures at 60 fps).

    The DMABUF and UMP buffers are already in kernel space, but I still want to run shaders on these buffers (textures).

    And all at 60 fps?

    Do you have some special case for this kind of texture?

Children
  • Do you have some special case for this kind of texture?

    EGL external images can be mapped directly into the GPU's memory view, so it is all zero-copy, although format negotiation is implementation-specific for video surfaces (as they are commonly YUV and so not really a native OpenGL ES texture type).

    HTH,
    Pete
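A minimal sketch of one common route to what Pete describes, assuming the driver exposes the EGL_EXT_image_dma_buf_import and GL_OES_EGL_image_external extensions (the function name, parameters and the single-plane layout below are illustrative placeholders, not a Mali-specific API; real YUV decoder buffers usually have several planes, each with its own fd/offset/pitch attributes). The decoder's dmabuf is wrapped in an EGLImage and bound to a GL_TEXTURE_EXTERNAL_OES texture, so the shader samples the decoder's buffer in place with no copy.

```c
/* Illustrative sketch: zero-copy import of a single-plane dmabuf from a
 * video decoder as an external GLES texture. Assumes a current EGL/GLES2
 * context and the EGL_EXT_image_dma_buf_import + GL_OES_EGL_image_external
 * extensions; fd, size, stride and fourcc come from the decoder. */
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>

GLuint import_dmabuf_texture(EGLDisplay dpy, int dmabuf_fd,
                             int width, int height, int stride,
                             unsigned int drm_fourcc)
{
    /* Extension entry points are resolved at runtime. */
    PFNEGLCREATEIMAGEKHRPROC pfnCreateImage =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC pfnImageTargetTexture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)
            eglGetProcAddress("glEGLImageTargetTexture2DOES");

    const EGLint attribs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      (EGLint)drm_fourcc,
        EGL_DMA_BUF_PLANE0_FD_EXT,     dmabuf_fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
        EGL_NONE
    };

    /* Wrap the dmabuf in an EGLImage: no pixel data is copied, the GPU
     * simply maps the existing buffer. */
    EGLImageKHR image = pfnCreateImage(dpy, EGL_NO_CONTEXT,
                                       EGL_LINUX_DMA_BUF_EXT, NULL, attribs);
    if (image == EGL_NO_IMAGE_KHR)
        return 0;

    /* Bind the image to an external texture target; image lifetime handling
     * (eglDestroyImageKHR) is omitted here for brevity. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    pfnImageTargetTexture(GL_TEXTURE_EXTERNAL_OES, (GLeglImageOES)image);
    return tex;
}
```

In the fragment shader the texture is then declared with `#extension GL_OES_EGL_image_external : require` and sampled through a `samplerExternalOES` uniform, which lets the driver deal with the YUV layout during sampling. The UMP path on older Mali drivers is platform-specific, but follows the same general EGLImage-import idea.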