Is there any transformation/conversion into some GPU-internal format when I allocate a texture (16-, 24- or 32-bit) from OpenGL application user space?
And is it possible to disable this transformation?
Are there any disadvantages?
For example, if I allocate with:
an application memory block passed to the OpenGL API,
an application memory block passed to the EGL API,
a DMA-BUF passed to EGL,
a UMP buffer passed to EGL?
Is there some special case for this kind of texture?
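To make the first case concrete, this is roughly what I mean by handing an application memory block to the GL API (a minimal OpenGL ES 2.0 sketch; the driver copies the data during the call and may convert it into whatever internal layout the GPU prefers):

```c
#include <GLES2/gl2.h>
#include <stdlib.h>

GLuint upload_rgba_texture(int width, int height)
{
    /* Application-side memory block, 32-bit RGBA. */
    unsigned char *pixels = malloc(width * height * 4);
    /* ... fill pixels ... */

    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_2D, tex);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);

    /* The copy (and any format conversion) happens here. */
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, width, height, 0,
                 GL_RGBA, GL_UNSIGNED_BYTE, pixels);

    /* GL now owns its own copy; the application block can be freed. */
    free(pixels);
    return tex;
}
```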
EGL external images can be mapped directly into the GPU's view of memory, so it's all zero-copy, although format negotiation is implementation-specific for video surfaces (as they are commonly YUV and so not really a native OpenGL ES texture type).
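For example, on implementations that expose EGL_EXT_image_dma_buf_import and GL_OES_EGL_image_external, a dma-buf can be wrapped roughly like this (a minimal sketch only; the fd, width, height, stride and fourcc come from whoever allocated the buffer, and the exact formats accepted vary by platform):

```c
#include <EGL/egl.h>
#include <EGL/eglext.h>
#include <GLES2/gl2.h>
#include <GLES2/gl2ext.h>
#include <drm_fourcc.h>
#include <stddef.h>

GLuint import_dmabuf_texture(EGLDisplay dpy, int fd,
                             int width, int height, int stride)
{
    /* Extension entry points are looked up at run time. */
    PFNEGLCREATEIMAGEKHRPROC createImage =
        (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
    PFNGLEGLIMAGETARGETTEXTURE2DOESPROC imageTargetTexture =
        (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress(
            "glEGLImageTargetTexture2DOES");

    const EGLint attribs[] = {
        EGL_WIDTH,                     width,
        EGL_HEIGHT,                    height,
        EGL_LINUX_DRM_FOURCC_EXT,      DRM_FORMAT_ARGB8888,
        EGL_DMA_BUF_PLANE0_FD_EXT,     fd,
        EGL_DMA_BUF_PLANE0_OFFSET_EXT, 0,
        EGL_DMA_BUF_PLANE0_PITCH_EXT,  stride,
        EGL_NONE
    };

    /* Wrap the dma-buf as an EGLImage; no pixel copy takes place. */
    EGLImageKHR image = createImage(dpy, EGL_NO_CONTEXT,
                                    EGL_LINUX_DMA_BUF_EXT, NULL, attribs);

    /* Bind the image to an external texture target for sampling. */
    GLuint tex;
    glGenTextures(1, &tex);
    glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
    imageTargetTexture(GL_TEXTURE_EXTERNAL_OES, image);
    return tex;
}
```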
HTH, Pete