
glReadPixels

Note: This was originally posted on 9th March 2012 at http://forums.arm.com

Hi all,

I am trying to render a scene off-screen while controlling where my pixels are stored (the buffer pointer is provided by me).
It seems that the Mali-400 does not support eglCreatePixmapSurface, the interface that would allow me to create a surface backed by a pixel buffer at the location I choose.

Since that is not working, I have to use glReadPixels, which is very, very slow compared to other GPUs.

So I am wondering whether there is a reason for such poor glReadPixels performance, and whether anyone knows a way on Mali to render a scene to the pixel buffer address you want.

Thanks for your help.

BR

Seb
  • Note: This was originally posted on 10th March 2012 at http://forums.arm.com

    Hi Seb,

    I am sorry that you are facing problems with pixmap surfaces. Could you let us know what errors you get when trying to use them? After every EGL call you can check whether an error has been generated using eglGetError; could you let me know what error it reports? It would also help us if you could share a snippet of the code where the pixmap surface is created.

    Also, with respect to glReadPixels: we generally advise programmers not to use that particular API call, because it causes a flush and breaks the rendering pipeline, which has a large performance impact.
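
    For reference, a minimal helper along these lines can be called after each EGL call to surface any error (an untested sketch; the logging macro is just a placeholder):

    #include <EGL/egl.h>
    #include <stdio.h>

    /* Placeholder logging macro; swap in your own (e.g. Android's ALOGE). */
    #define LOG_EGL_ERR(call, err) fprintf(stderr, "%s failed: EGL error 0x%04X\n", (call), (err))

    /* Returns EGL_TRUE if the previous EGL call succeeded, EGL_FALSE otherwise. */
    static EGLBoolean checkEglError(const char* callName)
    {
        EGLint err = eglGetError();
        if (err != EGL_SUCCESS)
        {
            LOG_EGL_ERR(callName, err);
            return EGL_FALSE;
        }
        return EGL_TRUE;
    }

    /* Usage:
     *   EGLSurface sur = eglCreatePixmapSurface(dpy, cfg, &pixmap, NULL);
     *   checkEglError("eglCreatePixmapSurface");
     */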

    BR,
    Karthik
  • Note: This was originally posted on 12th March 2012 at http://forums.arm.com

    Hi Karthik,

    thanks a lot for answering me.

    The error I get is 12298 -> 0x300A -> EGL_BAD_NATIVE_PIXMAP.
    For information, I am working on Android, using a Galaxy S2.

    The code I wrote is the following.
    On the native side:

    #define PIXEL_FORMAT_RGB_565 4

    typedef struct egl_native_pixmap_t
    {
        int32_t  version; /* must be 32 */
        int32_t  width;
        int32_t  height;
        int32_t  stride;
        uint8_t* data;
        uint8_t  format;
        uint8_t  rfu[3];
        union {
            uint32_t compressedFormat;
            int32_t  vstride;
        };
        int32_t  reserved;
    } egl_native_pixmap_t;


    JNIEXPORT void JNICALL
    Java_com_vmware_mvp_vm_handlers_gpu_PixelBuffer_createPixmapSurface(JNIEnv *_env, jobject _this, jobject out_sur,
            jobject display, jobject config, jobject native_pixmap,
            jintArray attrib_list)
    {
        jfieldID gBitmap_NativeBitmapFieldID;
        jfieldID gSurface_EGLSurfaceFieldID;
        //jfieldID gSurface_NativePixelRefFieldID;

        EGLDisplay dpy = getDisplay(_env, display);
        EGLConfig  cnf = getConfig(_env, config);
        jint* base = 0;

        jclass bitmap_class = _env->FindClass("android/graphics/Bitmap");
        gBitmap_NativeBitmapFieldID = _env->GetFieldID(bitmap_class, "mNativeBitmap", "I");

        jclass gSurface_class = _env->FindClass("com/google/android/gles_jni/EGLSurfaceImpl");
        gSurface_EGLSurfaceFieldID = _env->GetFieldID(gSurface_class, "mEGLSurface", "I");

        //gSurface_NativePixelRefFieldID = _env->GetFieldID(gSurface_class, "mNativePixelRef", "I");

        SkBitmap const * nativeBitmap =
                (SkBitmap const *)_env->GetIntField(native_pixmap,
                        gBitmap_NativeBitmapFieldID);

        egl_native_pixmap_t pixmap;
        pixmap.version = sizeof(pixmap);
        pixmap.width  = nativeBitmap->width();
        pixmap.height = nativeBitmap->height();
        pixmap.stride = nativeBitmap->rowBytes() / nativeBitmap->bytesPerPixel();
        pixmap.format = PIXEL_FORMAT_RGB_565;
        pixmap.data   = (uint8_t*)nativeBitmap->getPixels();

        if (attrib_list) {
            // XXX: if array is malformed, we should return an NPE instead of segfault
            base = (jint *)_env->GetPrimitiveArrayCritical(attrib_list, (jboolean *)0);
        }

        EGLSurface sur = eglCreatePixmapSurface(dpy, cnf, &pixmap, base);

        if (sur == EGL_NO_SURFACE)
        {
            int error = eglGetError();
            LogErr("Error (%d) Creating the Pixmap Surface.\n", error);
        }

        if (attrib_list) {
            _env->ReleasePrimitiveArrayCritical(attrib_list, base, JNI_ABORT);
        }

        if (sur != EGL_NO_SURFACE) {
            _env->SetIntField(out_sur, gSurface_EGLSurfaceFieldID, (int)sur);
        }
    }

    On the Android side, the declaration is done as follows:
       private native void createPixmapSurface(EGLSurface sur,
                                   EGLDisplay display, EGLConfig config,
                                   Object native_pixmap, int[] attrib_list);



    On the Android side, the call is done as follows:
      mNativePixmap = Bitmap.createBitmap(mWidth, mHeight,Bitmap.Config.RGB_565);
      
      MY_EGLSurfaceImpl sur = new MY_EGLSurfaceImpl();
      createPixmapSurface(sur,mEGLDisplay, mEGLConfig, mNativePixmap, null);
      mEGLSurface = sur;



    For information, I was told by some STE people that pixmap surfaces are not supported by the Mali-400 on Android, but I don't know whether that is really true.
    If it is the case, then I am wondering what I can use to remove the need to call glReadPixels.

    I need to be able to force where the 3D scene is rendered, and since the pixmap approach is not working I have to fall back to glReadPixels.
    I also tried an FBO, and even though I am able to render into a texture, I then face the same problem of getting access to the texture pixels, which again requires glReadPixels.

    As you said, performance seems very bad as soon as glReadPixels is involved, so I definitely need a way to bypass that call.

    Again thanks for your help.

    BR

    Seb
  • Note: This was originally posted on 13th March 2012 at http://forums.arm.com

    Thanks a lot for that very valuable reply. Let me implement it, and I will let you know the result.

    BR

    Seb
  • Note: This was originally posted on 14th March 2012 at http://forums.arm.com

    Hi Karthik,

    First, I wanted to thank you for the very good feedback you provided. I implemented the solution you gave and it works fine, with a very good level of performance compared to glReadPixels.

    For information, I should note that I had to adapt part of the step 6 code:
    you create a texture and also an EGLImageKHR. On my side, I modified your code to arrive at the following.

    glViewport(0, 0, w, h);
    checkGlError("glViewport");

    /* Initialize FBOs. */
    glGenFramebuffers(1, &iFBO);
    checkGlError("glGenFramebuffers");

    /* Initialize FBO texture. */
    glGenTextures(1, &iFBOTex);
    glBindTexture(GL_TEXTURE_2D, iFBOTex);

    _glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, pEGLImage);
    checkGlError("_glEGLImageTargetTexture2DOES");

    /* Attach texture to the framebuffer. */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, iFBOTex, 0);
    checkGlError("glFramebufferTexture2D");

    /* Check FBO is OK. */
    iResult = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (iResult != GL_FRAMEBUFFER_COMPLETE)
    {
        LOGE("Error: Framebuffer incomplete at %s:%i\n", __FILE__, __LINE__);
    }

    /* Render to framebuffer object. */
    /* Bind our framebuffer for rendering. */
    glBindFramebuffer(GL_FRAMEBUFFER, iFBO);
    checkGlError("glBindFramebuffer");

    /* Unbind framebuffer. */
    glBindFramebuffer(GL_FRAMEBUFFER, 0);
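
    (For context: the pEGLImage used above is an EGLImage created from an Android GraphicBuffer. A rough sketch of that step is below, with variable names assumed; a complete version appears in a later post in this thread.)

    // Sketch (untested): create an EGLImage backed by a GraphicBuffer so the GPU
    // renders into memory the CPU can later lock and read.
    GraphicBuffer* buffer = new GraphicBuffer(w, h, HAL_PIXEL_FORMAT_RGBA_8888,
            GraphicBuffer::USAGE_HW_TEXTURE | GRALLOC_USAGE_SW_READ_OFTEN);
    android_native_buffer_t* anb = buffer->getNativeBuffer();

    const EGLint attrs[] = { EGL_IMAGE_PRESERVED_KHR, EGL_TRUE, EGL_NONE };
    EGLImageKHR pEGLImage = _eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT,
            EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)anb, attrs);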




  • Note: This was originally posted on 13th June 2012 at http://forums.arm.com

    Hi all,
    When I try the code above, it fails for me.
    First, I create a GLSurfaceView in Java and set a renderer on it. In the renderer's onDrawFrame function, I want to use OpenGL ES to render NV12-format data held in a GraphicBuffer.
    I use eglCreateImageKHR to get an EGLImageKHR (img) from the GraphicBuffer, then I use glEGLImageTargetTexture2DOES to bind it, and I also tried the code above. In the end, the rendered output is only green.
    I suspect some extra EGL initialisation code is needed. Does anyone know how to fix the problem?

    thx,
    guangx
  • Note: This was originally posted on 13th June 2012 at http://forums.arm.com

    Hi guangx,

    do you mean you have just one image data buffer, in this format:

    http://www.fourcc.org/yuv.php#NV12

    that you are trying to sample as a texture and render onto some OpenGL-ES geometry?

    I don't think OpenGL-ES 2.0 understands the NV12 format directly, so this could be the problem.

    Whilst you can write a fragment shader to do YUV to RGB conversion, it is usually used with planar data. In other words, each input channel (Y, U, V) is in a separate memory buffer, and each is mapped to a separate OpenGL-ES texture. Then, the fragment shader can sample each channel and do the maths to convert.

    The NV12 format is tricky, because it contains both planar data (the Y component) and, immediately after, interleaved data (the U and V components). It may be possible to handle this by treating the entire texture as a single color channel (e.g. a LUMINANCE or ALPHA format, with a single byte per texel), but some maths is then required to calculate the actual texture coordinates to sample in order to retrieve your 3 components (Y, U, V) from the texture.

    If this is what you are trying to achieve, I think something like the following fragment shader will calculate the right texture coordinates, but the YUV->RGB stage is still left to do:


    precision mediump float;

    // Pass in the entire texture here.
    // E.g. a 256x256 video frame will actually be:
    //
    //        256
    //     +--------+
    // 256 |   Y    |
    //     +--------+
    //     + VU..VU |
    // 128 | VU..VU |
    //     +--------+
    //
    // 256x384.
    uniform sampler2D u_s2dNV12;

    varying vec2 v_v2TexCoord;

    // This needs to be set from the application code.
    // E.g. for a 256 wide texture, it will be 1/256 = 0.00390625.
    uniform float u_fInverseWidth;

    void main()
    {
        // Calculate the texture coord for the Y sample.
        vec2 v2YTexCoord;
        v2YTexCoord.s = v_v2TexCoord.s;
        v2YTexCoord.t = v_v2TexCoord.t * 2.0 / 3.0;

        // Sample the NV12 texture to read the Y component.
        float fY = texture2D(u_s2dNV12, v2YTexCoord).r;

        // Calculate the texture coord for the U sample.
        // Snap to the left edge of the two-texel VU pair, then step one texel
        // right to reach the U byte. The snap is done in texel units so it
        // works with normalised texture coordinates.
        vec2 v2UTexCoord;
        v2UTexCoord.s = floor(v_v2TexCoord.s / (2.0 * u_fInverseWidth)) * (2.0 * u_fInverseWidth) + u_fInverseWidth;
        v2UTexCoord.t = v_v2TexCoord.t * 1.0 / 3.0 + 2.0 / 3.0;

        // Sample the NV12 texture to read the U component.
        float fU = texture2D(u_s2dNV12, v2UTexCoord).r;

        // Calculate the texture coord for the V sample (the first texel of the pair).
        vec2 v2VTexCoord;
        v2VTexCoord.s = floor(v_v2TexCoord.s / (2.0 * u_fInverseWidth)) * (2.0 * u_fInverseWidth);
        v2VTexCoord.t = v_v2TexCoord.t * 1.0 / 3.0 + 2.0 / 3.0;

        // Sample the NV12 texture to read the V component.
        float fV = texture2D(u_s2dNV12, v2VTexCoord).r;

        // TODO: Insert YUV->RGB conversion maths here.

        gl_FragColor = vec4(fY, fU, fV, 1.0);
    }


    Please note I have not tested this code, so there may be bugs :-)
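
    As a side note, the u_fInverseWidth uniform mentioned in the comments would be set from application code along these lines (a sketch; 'program' and 'textureWidth' are assumed to exist in your code):

    // Pass 1/width of the NV12 texture to the fragment shader.
    glUseProgram(program);
    GLint loc = glGetUniformLocation(program, "u_fInverseWidth");
    if (loc != -1)
    {
        glUniform1f(loc, 1.0f / (GLfloat)textureWidth);
    }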

    Beware that with larger textures, the fragment processor may run out of precision to perform the coordinate maths. In this case, you should consider de-interleaving the UV data on the CPU and then passing 3 planar buffers to OpenGL-ES. Or, you could see whether your video source can generate other formats than NV12, such as planar (non-interleaved) YUV.
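
    If you do de-interleave on the CPU, a minimal sketch of that step could look like the following (function and variable names are placeholders; the chroma byte order follows the VU layout sketched above, so swap the two assignments if your source uses UV order):

    /* Split the interleaved chroma plane of an NV12/NV21 frame into separate U and V
     * buffers so each can be uploaded as its own single-channel texture.
     * 'frame' points to the full frame: the Y plane followed by the interleaved chroma. */
    void deinterleaveChroma(const unsigned char* frame, int width, int height,
                            unsigned char* uPlane, unsigned char* vPlane)
    {
        const unsigned char* chroma = frame + width * height; /* chroma starts after Y */
        int chromaPixels = (width / 2) * (height / 2);        /* 2x2 subsampled */

        for (int i = 0; i < chromaPixels; ++i)
        {
            vPlane[i] = chroma[2 * i];     /* first byte of each pair */
            uPlane[i] = chroma[2 * i + 1]; /* second byte of each pair */
        }
    }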

    HTH, Pete
  • Note: This was originally posted on 13th June 2012 at http://forums.arm.com

    Hi Pete,
    Thanks for your timely reply.

    We have the following environment:

    1) OMX HW decoder

    The graphic buffer used by the decoder is dequeued from an ANativeWindow, and that ANativeWindow is obtained from a SurfaceView.

    2) OpenGL ES renderer

    This is set on a different GLSurfaceView.

    3) After HW decoding, we get an NV12-format decoded result. Can we take a GraphicBuffer from the decoder, produce an EGLImageKHR from it using eglCreateImageKHR, and then bind it to the surface, even though the GraphicBuffer was not allocated from the renderer's surface?

    thx.

    regards,

    guangx

  • Note: This was originally posted on 13th June 2012 at http://forums.arm.com

    Hi guangx,

    is this a Mali GPU platform you're using?

    I'm not sure I understood the question, but on a Mali platform it should be possible to have the video decode hardware decode into an area of memory mapped by the Unified Memory Provider (UMP) module, which can then be mapped as a GL texture via EGLImage so the Mali can access it.

    HTH, Pete
  • Note: This was originally posted on 12th September 2012 at http://forums.arm.com

    When I run the above code on a Samsung Galaxy S3 device, it fails.

    Here is some of my code:

    void initEGL(int width, int height, SkBitmap bitmap)
    {
        //step 1. Load the EGLImage extension entry points.
        const char* const driver_absolute_path = "/system/lib/egl/libEGL_mali.so";
        void* dso = dlopen(driver_absolute_path, RTLD_LAZY);
        if (dso != 0)
        {
            LOGI("dlopen: SUCCEEDED");
            _eglCreateImageKHR = (PFNEGLCREATEIMAGEKHRPROC)dlsym(dso, "eglCreateImageKHR");
            _eglDestroyImageKHR = (PFNEGLDESTROYIMAGEKHRPROC)dlsym(dso, "eglDestroyImageKHR");
        }
        else
        {
            LOGI("dlopen: FAILED! Loading functions in common way!");
            _eglCreateImageKHR = (PFNEGLCREATEIMAGEKHRPROC)eglGetProcAddress("eglCreateImageKHR");
            _eglDestroyImageKHR = (PFNEGLDESTROYIMAGEKHRPROC)eglGetProcAddress("eglDestroyImageKHR");
        }

        if (_eglCreateImageKHR == NULL)
        {
            LOGE("Error: Failed to find eglCreateImageKHR at %s:%i\n", __FILE__, __LINE__);
            exit(1);
        }
        if (_eglDestroyImageKHR == NULL)
        {
            LOGE("Error: Failed to find eglDestroyImageKHR at %s:%i\n", __FILE__, __LINE__);
            exit(1);
        }
        _glEGLImageTargetTexture2DOES = (PFNGLEGLIMAGETARGETTEXTURE2DOESPROC)eglGetProcAddress("glEGLImageTargetTexture2DOES");
        if (_glEGLImageTargetTexture2DOES == NULL)
        {
            LOGE("Error: Failed to find glEGLImageTargetTexture2DOES at %s:%i\n", __FILE__, __LINE__);
            exit(1);
        }

        //step 2. Create the Android GraphicBuffer.
        GraphicBuffer* buffer = new GraphicBuffer(width, height,
                HAL_PIXEL_FORMAT_RGBA_8888,
                GraphicBuffer::USAGE_HW_TEXTURE |
                GraphicBuffer::USAGE_HW_2D |
                GRALLOC_USAGE_SW_READ_OFTEN |
                GRALLOC_USAGE_SW_WRITE_OFTEN);
        // Init the buffer
        status_t err = buffer->initCheck();
        if (err != NO_ERROR)
        {
            LOGE("Error: %s\n", strerror(-err));
            return;
        }

        // Retrieve the Android native buffer
        android_native_buffer_t* anb = buffer->getNativeBuffer();

        //step 3. Create the EGLImage.
        const EGLint attrs[] = {
            EGL_IMAGE_PRESERVED_KHR, EGL_TRUE,
            EGL_NONE, EGL_NONE
        };
        EGLImageKHR pEGLImage = _eglCreateImageKHR(eglGetCurrentDisplay(), EGL_NO_CONTEXT, EGL_NATIVE_BUFFER_ANDROID, (EGLClientBuffer)anb, attrs);
        if (pEGLImage == EGL_NO_IMAGE_KHR) {
            EGLint error = eglGetError();
            LOGE("Error (%#x): Creating EGLImageKHR at %s:%i\n", error, __FILE__, __LINE__);
        }

        //step 4. Set up the FBO with the EGLImage as target.
        GLuint iFBOTex;
        GLuint iFBO;

        glViewport(0, 0, width, height);
        checkGlError("glViewport");

        glGenFramebuffers(1, &iFBO);
        checkGlError("glGenFramebuffers");
        /* Initialize FBO texture. */
        glGenTextures(1, &iFBOTex);
        checkGlError("glGenTextures");
        glBindTexture(GL_TEXTURE_2D, iFBOTex);

        /* Bind our framebuffer for rendering. */
        glBindFramebuffer(GL_FRAMEBUFFER, iFBO);
        checkGlError("glBindFramebuffer");

        /* Attach texture to the framebuffer. */
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, iFBOTex, 0);
        checkGlError("glFramebufferTexture2D");
        /* Check FBO is OK. */
        GLenum iResult = glCheckFramebufferStatus(GL_FRAMEBUFFER);
        if (iResult != GL_FRAMEBUFFER_COMPLETE) {
            LOGE("Error (%#x): Framebuffer incomplete at %s:%i\n", iResult, __FILE__, __LINE__);
        }

        _glEGLImageTargetTexture2DOES(GL_TEXTURE_2D, pEGLImage);
        checkEGLError("glEGLImageTargetTexture2DOES");

        /* Render to framebuffer object. */
        /* Unbind framebuffer. */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);
        checkGlError("glBindFramebuffer");

        //step 5. When rendering every frame, bind the FBO, issue all GL commands, then unbind.
        glBindFramebuffer(GL_FRAMEBUFFER, iFBO);

        /* Set the viewport according to the FBO's texture. */
        glViewport(0, 0, width, height);

        /* Clear screen on FBO. */
        glClearColor(0.5f, 0.5f, 0.5f, 1.0);
        glClear(GL_COLOR_BUFFER_BIT | GL_DEPTH_BUFFER_BIT);

        glDrawArrays(GL_TRIANGLES, 0, 3);
        /* And unbind the FrameBuffer Object so subsequent drawing calls are to the EGL window surface. */
        glBindFramebuffer(GL_FRAMEBUFFER, 0);

        //step 6. Finally, read the buffer out from user space. Something like the following:
        // Just in case the buffer was not created yet
        if (buffer == NULL)
            return;

        void* vaddr;
        // Lock the buffer and retrieve a pointer where we are going to write the data
        buffer->lock(GRALLOC_USAGE_SW_WRITE_OFTEN, &vaddr);
        if (vaddr == NULL)
        {
            buffer->unlock();
            return;
        }

        unsigned char* ucVaddr = (unsigned char*)vaddr;
    }

    The error occurs when the following code fragment is executed:

    /* Attach texture to the framebuffer. */
    glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0, GL_TEXTURE_2D, iFBOTex, 0);
    checkGlError("glFramebufferTexture2D");
    /* Check FBO is OK. */
    GLenum iResult = glCheckFramebufferStatus(GL_FRAMEBUFFER);
    if (iResult != GL_FRAMEBUFFER_COMPLETE) {
        LOGE("Error (%#x): Framebuffer incomplete at %s:%i\n", iResult, __FILE__, __LINE__);
    }

    When execution reaches glCheckFramebufferStatus, errors are reported: "Error (0x3008): Creating EGLImageKHR at..." and "Error (0): Framebuffer incomplete at ....".



    Thanks for your help.