Which of the two approaches is better, and why?
1. Three textures (one per Y/U/V plane) and conversion in shader code
2. A YUV image via GL_OES_EGL_image_external
Hi pulsar,
I don't know for certain, but my initial hunch would be that option 2 will be better. My thinking is that you could then do a single memory look-up to fetch a YUV pixel and make use of the powerful (and multiple) ALUs to do the maths. The alternative would require three memory read operations, so it would probably be bound by the memory loads whilst the ALUs sat idle waiting for data to work on.
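To illustrate, a rough sketch of what the three-texture route looks like in a fragment shader - the sampler names are made up, and the BT.601 full-range weights are only one possible set (see later in the thread):

    precision mediump float;

    // One single-channel texture per plane (hypothetical names).
    uniform sampler2D u_texY;
    uniform sampler2D u_texU;
    uniform sampler2D u_texV;
    varying vec2 v_texCoord;

    void main()
    {
        // Three separate memory reads, one per plane.
        float y = texture2D(u_texY, v_texCoord).r;
        float u = texture2D(u_texU, v_texCoord).r - 0.5;
        float v = texture2D(u_texV, v_texCoord).r - 0.5;

        // BT.601 full-range weights; the right weights depend on the source.
        gl_FragColor = vec4(y + 1.402 * v,
                            y - 0.344 * u - 0.714 * v,
                            y + 1.772 * u,
                            1.0);
    }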
HTH, Pete
> I don't know but my initial hunch would be that option 2 will be better.
Yes - undoubtedly the latter, if your platform supports it. The Mali-T600 series has native YUV sampling support, so you can load all YUV channels in a single operation, which will be lower power and faster than rolling your own conversion in shader code.
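On the shader side that single operation looks like this - a sketch assuming the GL_OES_EGL_image_external extension:

    #extension GL_OES_EGL_image_external : require
    precision mediump float;

    // The external sampler hides the plane layout: one sample returns an
    // RGB value with the YUV->RGB conversion already applied.
    uniform samplerExternalOES u_texExternal;
    varying vec2 v_texCoord;

    void main()
    {
        gl_FragColor = texture2D(u_texExternal, v_texCoord);
    }

(Note that external textures cannot be mipmapped - one of the restrictions the extension imposes.)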
The complexity comes from exposing this functionality to user applications: there is no concept of YUV surfaces in EGL or OpenGL ES, so the means to generate and import a YUV EGLImage relies on importing a raw data buffer which is pre-populated (e.g. by the device's camera or video decode hardware) and finding out the YUV format via an OS- and platform-specific side channel, such as special state flags on the Android gralloc memory handle.
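To make that concrete, an Android-flavoured sketch - treat the buffer source, the EGL_NATIVE_BUFFER_ANDROID target and the missing error handling as illustrative; other OSes use other import targets:

    #define EGL_EGLEXT_PROTOTYPES
    #define GL_GLEXT_PROTOTYPES
    #include <EGL/egl.h>
    #include <EGL/eglext.h>
    #include <GLES2/gl2.h>
    #include <GLES2/gl2ext.h>

    /* Wrap a pre-populated native buffer (e.g. from the camera HAL) as an
     * external texture. The buffer and its YUV format come from the
     * platform-specific side channel described above. In production code,
     * fetch eglCreateImageKHR / glEGLImageTargetTexture2DOES via
     * eglGetProcAddress. */
    GLuint import_yuv_buffer(EGLDisplay dpy, EGLClientBuffer native_buffer)
    {
        const EGLint attribs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
        EGLImageKHR image = eglCreateImageKHR(dpy, EGL_NO_CONTEXT,
                                              EGL_NATIVE_BUFFER_ANDROID,
                                              native_buffer, attribs);

        GLuint tex;
        glGenTextures(1, &tex);
        glBindTexture(GL_TEXTURE_EXTERNAL_OES, tex);
        glEGLImageTargetTexture2DOES(GL_TEXTURE_EXTERNAL_OES,
                                     (GLeglImageOES)image);
        glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_EXTERNAL_OES, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        return tex;
    }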
If you want to stick to "vanilla" OpenGL ES for portability reasons, then unfortunately the only option you really have is the former roll-your-own route.
Pete
Thanks for the help, guys.
So what I understand now is that reading the texture data from memory is optimal if we use YUV textures via the latter approach.
What about the YUV->RGB conversion part? How is the mathematics done?
Color conversion is applied automatically, as OpenGL ES only really understands RGB. As per my first post, note that there is no such thing as off-the-shelf "YUV" (or, more accurately, Y'CbCr) - every vendor's camera and video block does it slightly differently (BT.601, BT.709, wide gamut, narrow gamut, to name but a few of the options) - so the per-channel chromatic conversion weights are determined as part of the platform-specific integration.
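To make the variation concrete, here are two of the common weight sets side by side - illustrative values for 8-bit video-range (16-235) data, written as GLSL constants:

    // Columns are the Y, Cb (U) and Cr (V) weights. Which set applies
    // depends entirely on what produced the buffer.
    const mat3 kBt601 = mat3(1.164,  1.164,  1.164,   // Y
                             0.000, -0.392,  2.017,   // Cb
                             1.596, -0.813,  0.000);  // Cr
    const mat3 kBt709 = mat3(1.164,  1.164,  1.164,   // Y
                             0.000, -0.213,  2.112,   // Cb
                             1.793, -0.533,  0.000);  // Cr

    vec3 yuvToRgb(vec3 yuv, mat3 weights)
    {
        // Remove the video-range offsets before applying the weights.
        return weights * (yuv - vec3(16.0 / 255.0, 0.5, 0.5));
    }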
OK, so the chromatic conversion is done through some hardware logic while reading from memory itself, by applying the appropriate coefficients based on the YUV format?
We won't talk publicly about how the internals of our products work - as a developer you shouldn't need to know, if we do our job right. All we can confirm is the programmer-visible behavior: the shader code will receive RGB color values, and the per-platform conversion weights can be specified as part of the platform integration.