Dear All,
I am confused about the behaviour of the eglSwapBuffers() function on an ARM Mali-400.
First, here is the development environment:
Target chip: A53 + Mali-400 MP2
Target board OS: 32-bit Linux
Mali drivers: r6p1-01rel0
Target application: OpenGL ES 2.0, fbdev-based triple buffering
Host OS: Fedora 20 (x86_64)
Here is my main rendering loop.
The loop receives RenderOpenGLFrameEvent messages sent from another thread; each message contains a list of OpenGL commands to execute.
All of the OpenGL drawing calls are executed in the renderFrameEvent->commandList->flush() call.
    while ((message = thread->msgPort.waitPort())) {
        printf("Message Loop message=%s,pending=%d\n",
               message->getClass().c_str(), thread->msgPort.countPending());

        if (message->getClass() == "RenderOpenGLFrameEvent") {
            std::chrono::time_point<std::chrono::high_resolution_clock> startTime, renderTime;

            // Start timing
            startTime = std::chrono::high_resolution_clock::now();

            renderFrameEvent = std::static_pointer_cast<RenderOpenGLFrameEvent>(message);

            // Draw
            renderFrameEvent->commandList->flush();
            eglSwapBuffers(egl_display, egl_surface);

            // End timing
            renderTime = std::chrono::high_resolution_clock::now();
            std::chrono::duration<double> renderDuration = renderTime - startTime;
            printf("renderDuration=%f\n", renderDuration.count());
        }
    }
This is the output when a button is pressed to scroll:
Message Loop message=RenderOpenGLFrameEvent,pending=0
renderDuration=0.000656
renderDuration=0.000631
renderDuration=0.000601
renderDuration=0.000619
renderDuration=0.009073
Message Loop message=RenderOpenGLFrameEvent,pending=1
renderDuration=0.016451
Message Loop message=RenderOpenGLFrameEvent,pending=2
renderDuration=0.016368
renderDuration=0.016457
So there are no frames pending at the start of this log, which means two back buffers should be available. eglSwapBuffers() is then called once per frame.
With triple buffering, eglSwapBuffers() should not have to wait for the vertical sync for the first two frames; otherwise the driver could not pipeline the operation.
I don't know why four frames return immediately in this case, though. What do I need to do to accurately measure the frame rate?
Thanks!
> What do I need to do to accurately measure the frame rate?
In general, for most embedded systems you need to leave it running for a few hundred milliseconds to reach a steady state anyway; if the device has been mostly idle it will take a while for things like DVFS to ramp up the clock frequencies. So I would suggest running a test of at least 5 seconds and discarding the first 2.
In terms of measuring GPU time, the standard approaches are:
HTH,
Pete