Hi,

For the question about how to combine video and 3D graphics, one solution I have seen is to use two framebuffers (e.g. /dev/fb0 and /dev/fb1 on Linux).

The Mali-VE6 video decode hardware can decode video into one framebuffer (e.g. fb0). The Mali-400 can request a drawing surface that includes an alpha channel (e.g. RGBA8888) and draw its output into the second framebuffer (e.g. fb1).

A hardware overlay mixer can then read the color channel of the video (fb0) and the color channel of the 3D (fb1), use the 3D alpha channel to determine how transparent or opaque the 3D should be, and blend the two color channels together accordingly (there is a rough sketch of this blend at the bottom of the post).

In this way you can overlay a 3D user interface on top of the video, letting the 3D application choose how translucent it is over the video.

If you search for terms such as the ones below I think you should find more reading material on the subject:

hardware overlay mixer
raster operation
alpha blending

Hope this helps,
Pete
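
P.S. To make the blend step concrete, here is a small C model of the per-pixel maths the overlay mixer performs. The pixel formats (XRGB8888 for the video in fb0, ARGB8888 non-premultiplied for the 3D UI in fb1) are just assumptions for illustration; the real mixer does this in hardware on scan-out with whatever formats it supports, so you would never run this on the CPU in practice.

    /* Rough software model of a "source over" alpha blend, as an
     * overlay mixer might perform it per pixel.
     * Assumptions (for illustration only): fb0 (video) is XRGB8888,
     * fb1 (3D UI) is ARGB8888 with straight (non-premultiplied) alpha. */
    #include <stdint.h>

    static uint32_t blend_pixel(uint32_t video_xrgb, uint32_t ui_argb)
    {
        uint32_t a   = (ui_argb >> 24) & 0xFF;          /* 3D alpha channel */
        uint32_t out = 0;

        for (int shift = 0; shift <= 16; shift += 8) { /* B, G, R channels */
            uint32_t v = (video_xrgb >> shift) & 0xFF; /* video color      */
            uint32_t u = (ui_argb    >> shift) & 0xFF; /* 3D color         */
            /* out = ui * alpha + video * (1 - alpha), with rounding */
            uint32_t c = (u * a + v * (255 - a) + 127) / 255;
            out |= c << shift;
        }
        return out;
    }

    /* Blend one scanline of the 3D UI (fb1) over the video (fb0). */
    void blend_scanline(const uint32_t *fb0_video, const uint32_t *fb1_ui,
                        uint32_t *out, int width)
    {
        for (int x = 0; x < width; x++)
            out[x] = blend_pixel(fb0_video[x], fb1_ui[x]);
    }

The key point is the weighting: where the 3D alpha is 255 you see only the UI, where it is 0 you see only the video, and anything in between gives the translucent overlay effect described above.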