Hello everyone! I have a question about the function glTexImage2D.
There is some code that causes a GL memory leak on devices with a Mali GPU (T760) and force-closes the Android application.
....
struct Vertex
{
    float x, y, z;  // position
    float u, v;     // texture coordinates
};
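(The contents of vertexBuffer are elided below; for illustration only, a sketch of plausible contents, assuming a standard full-screen quad in clip space, drawn as a GL_TRIANGLE_FAN. These values are assumed, not from the original post.)

Vertex vertexBuffer[4] =
{
    //   x      y     z     u     v
    { -1.0f, -1.0f, 0.0f, 0.0f, 0.0f },  // bottom-left
    {  1.0f, -1.0f, 0.0f, 1.0f, 0.0f },  // bottom-right
    {  1.0f,  1.0f, 0.0f, 1.0f, 1.0f },  // top-right
    { -1.0f,  1.0f, 0.0f, 0.0f, 1.0f },  // top-left
};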
...
glBindFramebuffer(GL_FRAMEBUFFER, fbo);
glViewport(0, 0, 1440, 2560);
while (true)
{
    glActiveTexture(GL_TEXTURE0);
    glBindTexture(GL_TEXTURE_2D, texID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    // Re-specify the full 1440x2560 RGBA texture on every iteration
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1440, 2560, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    glUseProgram(shader);
    int position = glGetAttribLocation(shader, "aPosition");
    glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertexBuffer[0]);
    glEnableVertexAttribArray(position);
    int texCoord = glGetAttribLocation(shader, "aTexCoord");
    glVertexAttribPointer(texCoord, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertexBuffer[0].u);
    glEnableVertexAttribArray(texCoord);
    int textureLocation = glGetUniformLocation(shader, "texture0");
    glUniform1i(textureLocation, 0);

    // Draw a full-screen quad sampling the texture into the bound FBO
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
    glDisableVertexAttribArray(position);
    glDisableVertexAttribArray(texCoord);
}
The shader program is trivial: it just takes a texture sampler and draws into the target FBO.
This code simply renders a texture into the FBO; on each iteration the texture is updated by a glTexImage2D call.
The question is: why does this code lead to a memory leak?
I suppose the reason is that each glTexImage2D call allocates a new chunk of memory for the texture's data.
Texture memory used in the previous rendering step is never released until we call glFinish or glBindFramebuffer(GL_FRAMEBUFFER, 0).
It looks like some driver-side optimization, but is this standard behavior? Please correct me if I'm wrong.
Thanks a lot!
P.S. The code above works fine on devices with an Adreno GPU.
Hi Kirrero,
Which platform are you using to test? Do you have a complete sample available which I can use to look into the issue for you?
In the meantime, I will see what I can find for you.
Kind Regards,
Rich
I have an answer for you. Your supposition about the behavior on Mali is quite correct:
"each glTexImage2D call allocates a new chunk of memory for the texture's data. Texture memory used in the previous rendering step is never released until we call glFinish or glBindFramebuffer(GL_FRAMEBUFFER, 0)."
The reason behind this is that Mali is a deferred renderer. What your sample code is doing is building an infinite queue of draw calls, which we batch up ready to process when a flush occurs, triggered by a call to either glFinish or glBindFramebuffer. The "old" texture data that was allocated, and that your texture was uploaded to, can't be freed, because the draw calls using it haven't actually happened yet! The data has to be kept so that the draw calls that sample from it can do so when they are finally executed.
Other immediate-mode renderers process each issued draw call immediately, so they don't need to hang onto these textures in memory.
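To make the flush points concrete, a minimal sketch (display and surface stand in for your own EGL handles; eglSwapBuffers applies when rendering to a window surface) — any one of these lets the driver drain the queued work and release the superseded texture copies:

glFinish();                           // block until all queued work has executed
glBindFramebuffer(GL_FRAMEBUFFER, 0); // switching framebuffers flushes work queued for the old one
eglSwapBuffers(display, surface);     // frame boundary on a window surface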
I hope this helps,
Hi Rich,
Thanks a lot for the answer, it really helps me understand how this works.
But why can't the driver flush the draw queue when glTexImage2D can't allocate the needed memory, or when a program has allocated too much GPU memory?
Unfortunately I'm not in a position where I can comment on the internal workings of our driver, however, I can tell you that the driver team are now aware that you have encountered this situation. This is not something that we have seen raised as an issue previously as far as I know.
The behavior in this situation falls outside the scope of the GLES specification and is left to the vendor, and so the behavior on Mali is still compliant with the spec.
The simple workaround for now would be to ensure that you are periodically flushing, as this will avoid the OOM situation.
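For example, a minimal sketch of that workaround (the flush interval of 8 is arbitrary; tune it to your memory budget):

unsigned frame = 0;
while (true)
{
    // ... upload the texture and issue the draw call as before ...

    // Periodically drain the deferred command queue so the driver can
    // release the superseded texture allocations.
    if ((++frame % 8) == 0)
        glFinish();
}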
Hope this helps,
Looking at the OP's code snippet, I do not see anywhere that the allocated texture is deleted. The driver, AFAIK, is doing exactly what is asked of it: blindly allocating a texture on every iteration of the loop. If the texture is never deleted, how is the driver supposed to know that the texture will not be used in the future? The application is in a position to make that decision, and it is incorrect to defer that responsibility to the driver.

The code posted would be no different if the memory were being allocated via malloc/new with no associated free/delete. The OS will eventually exhaust all available memory and will most likely kill your application. If the code snippet posted is the actual code being used, then this is NOT a driver bug, but a user error. It's not the driver's responsibility to fix programmer error. A robust program will handle exceptional cases (check for exceptions and errors from API calls) and deal with them accordingly: if you allocate memory, check whether the allocation failed; if you delete something, make sure it's valid before you delete it; and the list goes on.
I'm also very surprised that you mention this works on an Adreno GPU. Trust me, I have used PowerVR, Mali, and Adreno GPUs in Android development, and Adreno GPUs have always exhibited non-specification behavior (which probably explains why this worked). Not bashing Adreno (well, sort of; you can see my frustration in their forum with actual driver bugs that none of the other named GPUs exhibit), but if you can find a PowerVR GPU, I would test on that too and see if the application works.
Good news... sorry, I did miss that point about the TBDR. Yeah, without the frame terminator all those operations are going to keep queuing up. However, even though the behavior sounds right for such a GPU architecture and looks like it goes against the specification, I think the behavior is valid (though annoying): reading the documentation on glDeleteTextures, it only mentions that the texture name is freed for reuse, and says nothing about the texture data store. So all the driver has to do is make the name available for reuse, but it may still hold internal references to the texture data.
Btw, do you mind mentioning your use case? Just trying to see if I can suggest a possible workaround, since it's unlikely the issue will get resolved soon. (I say this because driver releases on mobile platforms are radically different from releases on PC, as may have been mentioned already; we are at the mercy of the device manufacturers and carriers.)
- Though it may not be much: the texture you are creating is roughly 14 MB in size (1440 x 2560 pixels x 4 bytes per RGBA pixel), so after 10 loop iterations that's 140 MB, after 100 iterations 1.4 GB... you get the idea. Is it possible for your use case to use a smaller texture?
- Periodically flush (this, I think, was mentioned before) so that the command queue gets drained.
So, thanks guys for your answers. They helped me understand how it works under the hood.
All the workarounds were mentioned before (I think the best one is to flush periodically, for example by calling glFinish).
I believe that someday we may see this issue resolved on the driver side.
Kind regards,
Igor
Some general observations for "real app" behavior.
HTH, Pete
cgrant78 wrote:
Looking at the OP's code snippet, I do not see anywhere that the allocated texture is deleted. The driver, AFAIK, is doing exactly what is asked of it: blindly allocating a texture on every iteration of the loop. If the texture is never deleted, how is the driver supposed to know that the texture will not be used in the future?
The code only deals with a single texture handle, texID. Each iteration the texture is completely replaced and a single draw call is performed referencing it, so from the API command stream we know that as soon as that draw is complete, we can discard that copy of the texture. There isn't an explicit flush in the command stream (glBindFramebuffer, eglSwapBuffers, glFinish, etc.), but after a while we should spot that we have a large amount of data in flight and flush some of the command buffer to keep things reasonable; that doesn't seem to be happening here.
Hi, cgrant78!
Thanks for your answer.
I've added generating and deleting the texture in the loop:
while (true)
{
    glActiveTexture(GL_TEXTURE0);
    // Create a fresh texture object each iteration
    glGenTextures(1, &texID);
    glBindTexture(GL_TEXTURE_2D, texID);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_S, GL_CLAMP_TO_EDGE);
    glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_WRAP_T, GL_CLAMP_TO_EDGE);
    glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA, 1440, 2560, 0, GL_RGBA, GL_UNSIGNED_BYTE, buffer);

    glUseProgram(shader);
    int position = glGetAttribLocation(shader, "aPosition");
    glVertexAttribPointer(position, 3, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertexBuffer[0]);
    glEnableVertexAttribArray(position);
    int texCoord = glGetAttribLocation(shader, "aTexCoord");
    glVertexAttribPointer(texCoord, 2, GL_FLOAT, GL_FALSE, sizeof(Vertex), &vertexBuffer[0].u);
    glEnableVertexAttribArray(texCoord);
    int textureLocation = glGetUniformLocation(shader, "texture0");
    glUniform1i(textureLocation, 0);
    glDrawArrays(GL_TRIANGLE_FAN, 0, 4);
    glDisableVertexAttribArray(position);
    glDisableVertexAttribArray(texCoord);

    // Delete the texture name each iteration; the backing data may still be
    // referenced by the queued draw call.
    glDeleteTextures(1, &texID);
}
This code works fine on a device with a PowerVR SGX 544MP (Galaxy S4), but on a device with a Mali-T760 (Galaxy S6, Android 5.0.2) a force close occurs (reason: GPU memory leak). Now the texture is explicitly deleted, but it changes nothing. As Rich said above, this happens due to deferred rendering, and a glDeleteTextures call just enqueues a message and doesn't release resources immediately (as I understand it). So now the driver knows that the texture isn't needed at all, but the memory leak still happens...
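So, as discussed above, the only thing that reliably bounds memory here is the periodic flush; a minimal sketch of that workaround applied to this loop (the interval of 16 is arbitrary):

unsigned iteration = 0;
while (true)
{
    // ... generate, fill, draw, and delete the texture as above ...

    // Drain the deferred command queue so the deleted textures can
    // actually be released by the driver.
    if ((++iteration % 16) == 0)
        glFinish();
}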