
glTexImage2D memory leak

Hello everyone! I have a question about the glTexImage2D function.

There is some code that causes a GL memory leak on devices with a Mali GPU (Mali-T760) and force-closes the Android application.

  ....

    struct Vertex
    {
        float x, y, z;   // position
        float u, v;      // texture coordinates
    };

  ...

    glBindFramebuffer(GL_FRAMEBUFFER, fbo);
    glViewport(0, 0, 1440, 2560);

    while (true)
    {
        glActiveTexture(GL_TEXTURE0);
        glBindTexture(GL_TEXTURE_2D, texID);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);

        // Respecify the full 1440x2560 RGBA texture on every iteration
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1440, 2560, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

        glUseProgram(shader);

        int position = glGetAttribLocation(shader, "aPosition");
        glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertexBuffer[0]);
        glEnableVertexAttribArray(position);

        int texCoord = glGetAttribLocation(shader, "aTexCoord");
        glVertexAttribPointer(texCoord, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertexBuffer[0].u);
        glEnableVertexAttribArray(texCoord);

        int textureLocation = glGetUniformLocation(shader, "texture0");
        glUniform1i(textureLocation, 0);

        // Draw the full-screen quad into the FBO
        glDrawArrays(GL_TRIANGLE_FAN, 0, 4);

        glDisableVertexAttribArray(position);
        glDisableVertexAttribArray(texCoord);
    }

The shader program is trivial: it just samples the texture and draws into the target FBO.
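For context, here is a sketch of what such a trivial shader pair looks like (not the exact sources, but an equivalent GLSL ES 1.00 pair; the attribute and uniform names match the ones queried in the code above):

    // Sketch of an equivalent trivial shader pair, embedded as C strings
    static const char* vertexSrc =
        "attribute vec4 aPosition;\n"
        "attribute vec2 aTexCoord;\n"
        "varying vec2 vTexCoord;\n"
        "void main() {\n"
        "    vTexCoord = aTexCoord;\n"
        "    gl_Position = aPosition;\n"
        "}\n";

    static const char* fragmentSrc =
        "precision mediump float;\n"
        "varying vec2 vTexCoord;\n"
        "uniform sampler2D texture0;\n"
        "void main() {\n"
        "    gl_FragColor = texture2D(texture0, vTexCoord);\n"
        "}\n";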

This code simply renders a texture into the FBO. On each step the texture is updated by a glTexImage2D call.

The question is: why does this code lead to a memory leak?

I suppose the reason is that each glTexImage2D call allocates a new chunk of memory for the texture's data.

The texture memory used in the previous rendering step is never released until we call glFinish or glBindFramebuffer(GL_FRAMEBUFFER, 0).

It looks like some optimization on the driver side, but is this standard behavior? Please correct me if I'm wrong.
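For comparison, the workaround I have in mind (just a sketch, not yet verified against the Mali driver) is to allocate the texture storage once, outside the loop, and only upload new contents each step with glTexSubImage2D, so the existing storage is reused instead of respecified:

    // Allocate the 1440x2560 RGBA storage once, outside the loop
    glBindTexture(GL_TEXTURE_2D, texID);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1440, 2560, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);

    while (true)
    {
        ....
        // Reuse the existing storage instead of respecifying the texture
        glBindTexture(GL_TEXTURE_2D, texID);
        glTexSubImage2D(GL_TEXTURE_2D, 0, 0, 0, 1440, 2560, GL_RGBA, GL_UNSIGNED_BYTE, buffer);
        ....
    }

The intent is that the driver then only ever has to keep one copy of the image alive.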

Thanks a lot!

P.S. The code above works fine on devices with an Adreno GPU.

Reply
  • Looking at the OP's code snippet, I do not see anywhere that the allocated texture is deleted. The driver, AFAIK, is doing exactly what it is asked to do: blindly allocating a texture on every iteration of the loop. If the texture is never deleted, how is the driver supposed to know that it will not be used in the future? The application is in a position to make those decisions, and it is incorrect to defer that responsibility to the driver. The code posted would be no different if the memory were being allocated via malloc/new with no associated free/delete. The OS will eventually exhaust all available memory and will most likely kill your application. If the code snippet posted is the actual code being used, then this is NOT a driver bug but a user error. It is not the driver's responsibility to fix programmer errors. A robust program will handle exceptional cases (check for exceptions and errors from API calls) and deal with them accordingly: if you allocate memory, check whether the allocation failed; if you delete something, make sure it is valid before you delete it; and the list goes on. A rough sketch follows at the end of this reply.

    I'm also very surprised that you mention this works on an Adreno GPU. Trust me, I have used PowerVR, Mali, and Adreno GPUs in Android development, and Adreno GPUs have always exhibited non-specification behavior (which probably explains why this worked). Not bashing the Adreno GPU (well, sort of; you can see my frustration in their forum over actual driver bugs that none of the other named GPUs exhibit), but if you can find a PowerVR GPU, I would test on that too and see whether the application works.
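    Something along these lines (a rough sketch with assumed names, not the OP's actual code): create the texture, check that the allocation succeeded, and explicitly delete the texture object when it is no longer needed.

        GLuint texID = 0;
        glGenTextures(1, &texID);
        glBindTexture(GL_TEXTURE_2D, texID);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1440, 2560, 0, GL_RGBA, GL_UNSIGNED_BYTE, NULL);
        if (glGetError() == GL_OUT_OF_MEMORY)
        {
            // Handle the failed allocation instead of assuming it succeeded
        }

        // ... use the texture ...

        glDeleteTextures(1, &texID);   // release the storage when it is no longer needed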
