
glReadPixels

Note: This was originally posted on 9th March 2012 at http://forums.arm.com

Hi all,

I am trying to render a scene off screen while controlling where the pixels end up (the buffer pointer is allocated by my own code).
It seems that the Mali-400 does not support eglCreatePixmapSurface, the interface that would allow me to create a surface backed by pixels at the location I want.

Since that does not work, I have to fall back on glReadPixels, which is very slow compared to other GPUs.

So I am wondering whether there is a reason for such poor glReadPixels performance, and whether anyone knows a way, with Mali, to render a scene into a pixel buffer at an address of my choosing.

Thanks for your help.

BR

Seb
  • Note: This was originally posted on 13th June 2012 at http://forums.arm.com

    Hi guangx,

    do you mean you have just one image data buffer, in this format:

    http://www.fourcc.org/yuv.php#NV12

    that you are trying to sample as a texture and render onto some OpenGL-ES geometry?

    I don't think OpenGL-ES 2.0 understands the NV12 format directly, so this could be the problem.

    Whilst you can write a fragment shader to do YUV to RGB conversion, it is usually used with planar data. In other words, each input channel (Y, U, V) is in a separate memory buffer, and each is mapped to a separate OpenGL-ES texture. Then, the fragment shader can sample each channel and do the maths to convert.
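
    For example, a minimal planar version of the sampling stage might look like this (the uniform names here are just illustrative, and the YUV->RGB maths is left as a TODO, as in the NV12 shader further down):

    precision mediump float;

    // One LUMINANCE texture per plane. Y is full resolution, U and V are
    // half width and half height; the half-resolution mapping falls out of
    // the normalised texture coordinates automatically.
    uniform sampler2D u_s2dY;
    uniform sampler2D u_s2dU;
    uniform sampler2D u_s2dV;

    varying vec2 v_v2TexCoord;

    void main()
    {
        // Sample each plane with the same normalised coordinate.
        float fY = texture2D(u_s2dY, v_v2TexCoord).r;
        float fU = texture2D(u_s2dU, v_v2TexCoord).r;
        float fV = texture2D(u_s2dV, v_v2TexCoord).r;

        // TODO: Insert YUV->RGB conversion maths here.
        gl_FragColor = vec4(fY, fU, fV, 1.0);
    }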

    The NV12 format is tricky, because it contains both planar data (the Y component) and, immediately after, interleaved data (the U and V components). It may be possible to do this by treating the entire texture as a single color channel (e.g. a LUMINANCE or ALPHA format, with a single byte per texel), but there will then be some maths required to calculate the actual texture coordinates to sample to retrieve your 3 components (Y, U, V) from the texture.

    If this is what you are trying to achieve, I think something like the following fragment shader will calculate the right texture coordinates, but the YUV->RGB stage is still left to do:


    precision mediump float;

    // Pass in the entire texture here.
    // E.g. a 256x256 video frame will actually be:
    //
    //        256
    //     +--------+
    // 256 |   Y    |
    //     +--------+
    //     | UV..UV |
    // 128 | UV..UV |
    //     +--------+
    //
    // 256x384.
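    //
    // Upload it with GL_NEAREST filtering; linear filtering would blend
    // neighbouring U and V bytes together when sampling the UV rows.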
    uniform sampler2D u_s2dNV12;

    varying vec2 v_v2TexCoord;

    // This needs to be set from the application code.
    // E.g. for a 256 wide texture, it will be 1/256 = 0.00390625.
    uniform float u_fInverseWidth;

    void main()
    {
        // Calculate the texture coord for the Y sample.
        vec2 v2YTexCoord;
        v2YTexCoord.s = v_v2TexCoord.s;
        v2YTexCoord.t = v_v2TexCoord.t * 2.0 / 3.0;

        // Sample the NV12 texture to read the Y component.
        float fY = texture2D(u_s2dNV12, v2YTexCoord).r;

        // The U and V samples share a row in the interleaved UV plane.
        // Find the even texel column that starts the UV pair for this pixel,
        // working in texel units so that floor() has something to round.
        float fPairCol = floor(v_v2TexCoord.s / (2.0 * u_fInverseWidth)) * 2.0;

        // Calculate the texture coord for the U sample (first byte of the pair).
        vec2 v2UTexCoord;
        v2UTexCoord.s = (fPairCol + 0.5) * u_fInverseWidth;
        v2UTexCoord.t = v_v2TexCoord.t * 1.0 / 3.0 + 2.0 / 3.0;

        // Sample the NV12 texture to read the U component.
        float fU = texture2D(u_s2dNV12, v2UTexCoord).r;

        // Calculate the texture coord for the V sample (second byte of the pair).
        vec2 v2VTexCoord;
        v2VTexCoord.s = (fPairCol + 1.5) * u_fInverseWidth;
        v2VTexCoord.t = v2UTexCoord.t;

        // Sample the NV12 texture to read the V component.
        float fV = texture2D(u_s2dNV12, v2VTexCoord).r;

        // TODO: Insert YUV->RGB conversion maths here.

        gl_FragColor = vec4(fY, fU, fV, 1.0);
    }


    Please note I have not tested this code, so there may be bugs :-)
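
    For the YUV->RGB step left as a TODO above, the usual BT.601 conversion for video-range data is a reasonable starting point (the coefficients below are the standard ones for that case; full-range or BT.709 sources need different values, so check what your decoder actually produces). Something like this could replace the TODO and the gl_FragColor line:

        // BT.601, video range: shift Y by 16/255 and centre U/V around zero.
        float fYs = 1.164 * (fY - 16.0 / 255.0);
        float fUs = fU - 0.5;
        float fVs = fV - 0.5;

        gl_FragColor = vec4(fYs + 1.596 * fVs,
                            fYs - 0.813 * fVs - 0.391 * fUs,
                            fYs + 2.018 * fUs,
                            1.0);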

    Beware that with larger textures, the fragment processor may run out of precision to perform the coordinate maths. In this case, you should consider de-interleaving the UV data on the CPU and then passing 3 planar buffers to OpenGL-ES. Or, you could see whether your video source can generate other formats than NV12, such as planar (non-interleaved) YUV.

    HTH, Pete