ARM Mali Graphics

360 degree video is changing not only the way we consume content, but the way we create it. We’re no longer restricted to sharing our experiences in selfies, single photos or even panoramas to capture more of a given scene. With 360 degree video we can now share the whole scene, and not just in static images, but in motion. Better still, gone are the days of retrospective slideshows of your favourite holiday pics, now you can share what’s happening right now, with the people you really wish could be there with you.


So how does 360 video actually work? Well, first of all we obviously have to capture the entire scene. This is made possible using an array of two or more cameras, as in the image below, each capturing a different field of view. In some cases many cameras are used, but more often now we’re seeing two cameras, each capturing a 180 degree view, configured to cover the entire circular scene. Image quality is really important, especially for use with a VR headset, as we know from previous experience that unrealistic focus or resolution can take an immersive experience from fantastic to failure really fast.


After we’ve captured high quality views of all the angles, we need to consolidate them to create one cohesive scene. We do this by ‘stitching’ together each of the individual views, as seamlessly as possible, to create a single panorama that covers the entire 360 degrees. This is, of course, where using only two cameras can make things easier. Having only two views to stitch together lessens the frequency of the joins and therefore makes it less likely they’ll be visible to the user.
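To give a feel for the maths involved, here’s a minimal Python sketch of the equirectangular projection that stitched 360 panoramas are commonly stored in: every 3D viewing direction maps to a longitude/latitude pair, and hence a pixel, in the final image. The function name and normalisation are illustrative rather than taken from any particular stitching tool.

```python
import math

def equirect_uv(x, y, z):
    """Map a 3D view direction to (u, v) in an equirectangular panorama.

    u spans the full 360 degrees of longitude, v the 180 degrees of
    latitude; both are normalised to [0, 1].
    """
    lon = math.atan2(x, z)                               # -pi..pi around the vertical axis
    lat = math.asin(y / math.sqrt(x*x + y*y + z*z))      # -pi/2..pi/2
    u = (lon / (2 * math.pi)) + 0.5
    v = (lat / math.pi) + 0.5
    return u, v

# Looking straight ahead (+z) lands in the centre of the panorama.
print(equirect_uv(0.0, 0.0, 1.0))  # → (0.5, 0.5)
```

Each camera’s stitched pixels simply get written to the (u, v) cells their view directions map to, which is why seams appear wherever two cameras’ mappings meet.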


Once we’ve created this circular environment we need to figure out how to use it. Viewing 360 content as a normal video, as you’ve almost certainly done on Facebook, is simple: you just scroll around the view as you wish to see the areas not immediately in front of you. Viewing it in a VR headset for a truly immersive experience requires a little more work. As we know from our previous forays into VR content creation, we need to create two marginally different views, one for each eye. This ensures the brain can interpret the images as it would in the real world; if we created the two views identically, the brain would intuitively sense that something was wrong and the immersion of the experience would be instantly compromised. To get this right we can use clever technologies like our Multiview extension to create the two views without doubling the rendering overhead. Barrel distortion then needs to be applied to counteract the pincushion effect caused by having the lens right next to the eye. This allows us to experience the 360 video as a fully immersive environment in the privacy of our own headset.
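For the curious, the barrel pre-distortion step is usually modelled as a simple radial polynomial. This Python sketch shows the idea; the coefficient values are purely illustrative and would need tuning for a real headset lens.

```python
def barrel_distort(x, y, k1=-0.15, k2=-0.05):
    """Radially pre-distort a point given in lens-centred coordinates.

    Negative coefficients pull points toward the centre (barrel
    distortion), which cancels the pincushion effect the headset lens
    introduces. Coefficients here are illustrative, not tuned for any
    real lens.
    """
    r2 = x * x + y * y
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return x * scale, y * scale

# The centre of the view is unchanged; points near the edge move inward.
print(barrel_distort(0.0, 0.0))  # → (0.0, 0.0)
```

In practice this warp is applied as a final full-screen pass (or folded into the lens-correction mesh) after both eye views have been rendered.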


This is still a pretty complex process and might seem beyond the capability of the average user, but it’s no longer the realm of specialist agencies or several-thousand-dollar custom cameras like the one President Obama used to promote the protection of US national parks. With the recent release of the Samsung Gear 360, amongst others, 360 video capture just went mainstream. This little device is small and light enough to take with you wherever you go, and high enough quality that the benefits are quickly apparent.


As Samsung’s (brilliant) advert shows, the world is no longer off limits just because you’re sick, or unable to travel, or even double booked for an event. With the easy capturing and immediate sharing of 360 content from a small, portable device, immersive environments and virtual spaces become the domain of the mainstream market.


In the interests of research (and not at all just a nice day out), a couple of colleagues and I took a field trip into the centre of Cambridge to see just how easy it was to produce a 360 video, in this case a walking tour experience. We wanted to see just how simple the Samsung Gear 360 was to use and how much of our local world we could take to our global colleagues.


In this age of unlimited digital images we’re used to taking hundreds of pictures and discarding all but the very best. The disconcerting aspect of 360 video is that, because the cameras face all the way around, there’s no screen and you of course cannot actually see what it is you’re filming. This brings back the retro feeling of waiting to get your prints back from the developer in the pre-digital age, and it was somehow all the more exciting for the wait. Staged as a romantic walking and punting tour of King’s College, my colleagues and I had a heap of fun playing with our new toy. It was actually very easy to use, with great battery life and super easy upload for editing when we were done. (A note to the user though: a sturdy tripod is a must. The little convex lenses don’t do too well when falling face first onto gravel from a couple of metres up... Oops.)


Intending to take our tour to our Chinese colleagues, we wanted to feature the memorial stone of Xu Zhimo, the famous Chinese poet who spent many years in Cambridge. Not only could we capture a great scene around the memorial stone itself, but we decided we could take it a step further. In implementing the video for use with a VR headset we were able to add graphics pointing the user to the most interesting areas of the scene. This also allowed us to overlay graphics showing the full poem, effectively taking a 360 video to both a VR and an AR application with amazing ease. Best of all, you don’t need a top-of-the-line smartphone to enjoy these kinds of Virtual Spaces applications. We tested this video on our brand new Mali-G51 mainstream GPU, and on its predecessor Mali-T830. As you can see from the video below, Mali-G51’s best ever energy efficiency means applications like this can run smoothly even on mainstream devices.

The speed with which these awesome technologies are reaching the hands of the average consumer goes to show just how fast the adoption of VR and related tech is taking off. With DIY virtual spaces on the rise it’s only a matter of time until distance really is no barrier to our professional and social interactions.

As the days get shorter and the cold weather begins to creep in, we know it’s that time of year when we can start to get excited about the brand new devices for the upcoming year. A major announcement for us in the ARM® Mali™ Multimedia team is the brand new Huawei Mate 9 smartphone, based on the Kirin 960 chipset. Featuring a dual 20MP/12MP Leica camera setup, 4K video capture and 64GB expandable storage, this is of course great news for both consumers and the smartphone market as a whole. Not only that, but it’s also especially exciting for us as one of the first devices to feature both of the premium ARM processors launched earlier this year at Computex, the Cortex®-A73 and Mali-G71.


The Mali-G71 GPU was the first graphics processor based on our new Bifrost graphics architecture and was designed to support high end use cases like immersive VR gaming, as well as brand new graphics APIs like Khronos’s Vulkan. Superior energy efficiency can be achieved through the smart combination of multiple ARM technologies, so as well as the Mali-G71, the Kirin 960 uses ARM big.LITTLE™ technology in an octa-core configuration. It features four high performance 'big' ARM Cortex-A73 cores and four high efficiency 'LITTLE' ARM Cortex-A53 cores. According to xda-developers, the Huawei Mate 9 outperforms its predecessor by around 10% in single-thread and 18% in multi-core performance. Combined with the other advantages of big.LITTLE – longer periods of sustained peak performance and a richer user experience – and Mali-G71, the Kirin 960 chipset in the Huawei Mate 9 will push the boundaries of mobile compute for use cases such as Augmented Reality and Virtual Reality, delivering a leading premium mobile experience.


Speed was of the essence in terms of handset performance, with Huawei boasting a clever machine learning algorithm that learns your habits as a user and prioritizes application performance accordingly. This allows the power to go where you need it most, ensuring the smoothest performance whilst protecting your privacy by running the algorithm directly on the handset rather than bouncing your data to the cloud.


This device hits the market just eight months after the Mali-G71 IP was first made available to HiSilicon’s Kirin team of engineers and represents an incredibly fast time to market, especially for a device capable of handling such complex content. With the inherent compatibility between the products, not to mention the ability to exploit the Mali-G71’s full coherency between CPU and GPU, it’s great to see that Mali-G71 is allowing our partners to speed up their time to market and deliver the newest devices to the consumer, faster than previous generations.


The decision to design Mali-G71 with Vulkan in mind seems to have provided additional benefits too. Huawei showcased side-by-side screenshots of the Vulkan demo “The Machines” from Directive Games, claiming between 40% and a massive 400% more efficiency compared to the previous API, OpenGL ES! Our own comparisons also showed a massive power saving on Vulkan compared to OpenGL ES; watch the video to see just how beneficial a new, dedicated API can be.

Talking about their decision to make the Vulkan demo for Mali-G71, Atli Már Sveinsson CEO and co-founder of Directive Games explained: ‘With the high-speed growth of VR and AR on mobile devices we knew we needed a GPU with enough performance to deliver a really high quality user experience and the Mali-G71 gave us all the power we needed while still reducing energy consumption.’


With bigger and better products appearing faster and cheaper we’re still seeing huge leaps forward in the smartphone industry. With demanding new use cases emerging every day the journey is far from over and there’s still much to be done, so watch this space to see what other exciting advancements Huawei and ARM can deliver for premium smartphones!


This week, Unreal Engine released its latest UE4.14 upgrade, and it includes several cool mobile features:


VR Multiview Support for Mobile

VR Multiview is an OpenGL ES extension available on Android devices. Multiview allows the developer to simultaneously render two viewpoints of the same scene, representing the left and right eye perspectives, with one single draw call. By effectively halving the number of draw calls, this extension reduces CPU and GPU load compared to traditional stereoscopic rendering. The blog “Understanding Multiview” by Thomas Poulet provides more detail on how to use this extension to exploit the maximum benefits.
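As a rough illustration of what a multiview draw indexes into (sketched in plain Python, not real GL code), the application binds one matrix per view and the shader selects the right one via gl_ViewID_OVR, instead of the application issuing two separate draws. The IPD value and function name here are illustrative.

```python
IPD = 0.064  # interpupillary distance in metres (a typical value)

def eye_view_offsets(ipd=IPD):
    """Build the per-eye horizontal offsets a multiview draw indexes into.

    With OVR_multiview the app issues ONE draw call and the vertex
    shader picks the view matrix for the current eye via gl_ViewID_OVR;
    here we just compute the two X translations those matrices differ by.
    """
    # Shift the world half the IPD in the opposite direction of each eye.
    return {"left": +0.5 * ipd, "right": -0.5 * ipd}

offsets = eye_view_offsets()
print(offsets)  # → {'left': 0.032, 'right': -0.032}
```

Everything else about the two views (projection, scene, draw state) is shared, which is exactly why submitting them once is such a saving.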


In the UE4.14 editor you can enable Mobile Multiview as per the picture below:


Enabling Multiview in UE4.14


The Multiview feature in UE4 is currently only compatible with Mali GPUs.


Improved Vulkan API Support on Android

UE 4.14 enhances Vulkan support for the Samsung Galaxy S7 device, as well as the latest Android devices supporting Android 7 (Nougat) OS.

Vulkan brings many benefits to mobile devices and graphics developers. For developers, the API works cross-platform, covering everything from desktop to consoles and mobile devices. For mobile devices, Vulkan has a much lower CPU overhead compared to previous graphics APIs thanks to the support of multithreading. Nowadays, mobile devices have, on average, between four and eight CPU cores, so having an API which is multithreading friendly is key.
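As a rough, language-agnostic illustration of that multithreading pattern (sketched here in Python rather than real Vulkan C code), each worker thread records its own command buffer independently and the main thread then submits them in order:

```python
from concurrent.futures import ThreadPoolExecutor

def record_commands(draws):
    """Pretend to record one command buffer for a batch of draws."""
    return [f"draw {d}" for d in draws]

def build_frame(all_draws, workers=4):
    """Record command buffers in parallel, then submit them in order.

    This mimics the pattern Vulkan enables: each thread fills its own
    command buffer with no shared driver lock (unlike OpenGL ES), and
    the main thread hands the results over in a single ordered
    submission.
    """
    chunk = (len(all_draws) + workers - 1) // workers
    batches = [all_draws[i:i + chunk] for i in range(0, len(all_draws), chunk)]
    with ThreadPoolExecutor(max_workers=workers) as pool:
        buffers = list(pool.map(record_commands, batches))
    return [cmd for buf in buffers for cmd in buf]   # ordered "submission"

frame = build_frame(list(range(8)), workers=4)
```

The work of describing the frame scales across however many CPU cores the device has, which is where the lower CPU overhead comes from.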


To learn more about Vulkan benefits and the comparison with OpenGL ES API, read this blog.


Forward Shading Render with MSAA

UE4.14 introduces a new forward shading renderer which combines high-quality lighting features with Multisample Anti-Aliasing (MSAA) support.


Anti-aliasing is a technique for reducing “jaggies”, the step-like edges on lines that should otherwise appear smooth. These steps appear because the display doesn’t have enough resolution to render a line that looks smooth to the human eye. Anti-aliasing tricks the eye into perceiving the jagged edge as smooth by placing blended pixels on either side of the hard line, weighted by pixel coverage.
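To make the coverage idea concrete, here’s a toy Python model of how a 4-sample MSAA pattern estimates how much of a pixel a triangle edge covers. The sample positions are loosely modelled on a rotated grid and are not any specific GPU’s pattern.

```python
def msaa_coverage(pixel_x, pixel_y, edge_x, samples=None):
    """Fraction of a pixel's sample points lying left of a vertical edge.

    Four sample positions per pixel, loosely based on a rotated-grid
    MSAA layout; the resulting coverage fraction is what gets used to
    blend the triangle's colour with the background along the edge.
    """
    if samples is None:
        samples = [(0.375, 0.125), (0.875, 0.375),
                   (0.125, 0.625), (0.625, 0.875)]
    inside = sum(1 for sx, sy in samples if pixel_x + sx < edge_x)
    return inside / len(samples)

# A pixel the edge cuts through is partially covered, so it blends.
print(msaa_coverage(10, 0, edge_x=10.5))  # → 0.5
```

A coverage of 0.5 means the pixel gets an even mix of the triangle’s colour and the background, which is exactly the blending that softens the jaggy.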

Anti-Aliasing (without and with)


Multiple levels of anti-aliasing are available on the ARM Mali GPU hardware. 4x MSAA is provided with ARM Mali GPUs with close to zero performance penalty due to the tile buffer supporting 4 samples per pixel by default. MSAA is well suited to VR applications as the eyes are much closer to the display.


More information about MSAA in ARM Mali GPUs is available as part of our developer resources.


Automatic LOD Generation

This new feature automatically generates several Levels Of Detail (LODs) for your static meshes. LOD reduces the polygon count in your graphics, as a different LOD is rendered depending on the distance of your mesh from the camera view point. Rendering large meshes consumes a lot of memory and battery power. It’s therefore very important to render the images at the most efficient LOD level, that is to say, the lowest LOD that does not show visual artefacts or compromise visual quality. By rendering content at this optimal point rather than always at the highest resolution, mobile devices can benefit from huge rendering time and energy savings.
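As an illustration of the selection logic (a sketch, not UE4’s actual implementation), picking an LOD from the fraction of the screen a mesh covers can be as simple as:

```python
def select_lod(screen_fraction, thresholds=(0.5, 0.25, 0.1)):
    """Pick an LOD index from the fraction of the screen a mesh covers.

    thresholds[i] is the minimum screen coverage at which LOD i is
    still used; anything smaller falls through to the next, coarser
    level. The threshold values here are illustrative, not UE4 defaults.
    """
    for lod, min_fraction in enumerate(thresholds):
        if screen_fraction >= min_fraction:
            return lod
    return len(thresholds)            # coarsest mesh for distant objects

print(select_lod(0.6))   # → 0 (close-up: full-detail mesh)
print(select_lod(0.05))  # → 3 (far away: coarsest mesh)
```

“Auto Compute LOD Distances” effectively fills in those screen-size thresholds for you, per generated LOD.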


UE4.14 can automatically calculate the screen size to use for each LOD level created by ticking “Auto Compute LOD Distances”.

LOD Settings in UE4.14


To find out more about LODs in UE4.14, or for more details of all the features released in 4.14, please read the UE4.14 release notes.

Welcome to sunny Shenzhen for the third and final day of the China leg of ARM Tech Symposia 2016. There’s a very different feel to this city from the moment you leave the airport. Just a few short decades ago this was a sleepy little fishing village, and the speed of growth to today’s sprawling metropolis of shining skyscrapers with LED displays emblazoned across their sides gives an impression of youthful urgency. The Ritz Carlton plays host to us today; with flashing lights and a Hollywood soundtrack, all the familiar faces were welcomed back to the stage to share their knowledge with a new set of partners and developers.


Asked for his impressions of this year’s ARM Tech Symposia events, VP Worldwide Marketing, Ian Ferguson (now famous for his keynote describing IoT applications for watering his walnuts), said ‘China is such an important market for ARM and its ecosystem. Our ongoing commitment to bringing experts from across our company and the ecosystem, helps equip local companies to develop compelling new products that benefit us all.’ It’s certainly true that the innovation and progression of the Chinese technology industry is pushing forward many of the solutions we’ll all come to rely on in the future. It seems every seat is filled with bright, talented people keen to take emerging technologies to a broader marketplace.


For those of us lucky enough to attend all three events it’s been a fantastic opportunity to see talks across all three technical tracks, broadening our own learning along with our partners and colleagues. This learning experience is one of the key aims of the event, with Noel Hurley, GM of Business Segments Group, saying: ‘these events are so well attended and organised (well done to the team!) it’s really encouraging to see all these engineers keen to understand what we are doing and how they can design with ARM technology’. The turnout has indeed been great, with a vast variety of attendees and speakers from all over the world, coming together to share their experience.


Having focused on graphics and VR at previous events, today I had the chance to join Judd Heape, from our newly created Imaging and Vision Group, for his talk on the Computer Vision products which joined ARM’s portfolio following the acquisition of Apical earlier this year. Many people are already feeling the benefits of Assertive Display technology without even being aware of it. It uses pixel-by-pixel tone mapping to adjust specific areas of an image, allowing you to see greater contrast and detail on the screen of your phone, even in bright conditions. Not only does this technology improve the viewing quality of your images, it can also reduce power consumption by between 20 and 50% depending on your settings. It is already working silently in millions of devices and we are now in a position to leverage its full potential for a greater range of consumers, as well as extending it with the latest versions of the product, which can automatically remaster High Dynamic Range (HDR) content for viewing on mobile displays. Also explained were the Assertive Camera product, which uses HDR to improve image capture quality, and a Geometric Distortion Engine which can effectively ‘unravel’ fish-eye style images into standard perspective.
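As a toy illustration of the idea (emphatically not Apical’s actual algorithm), per-pixel tone mapping can be thought of as a brightness-dependent curve applied to each pixel, with the curve and constants below being purely illustrative:

```python
def tone_map(pixel, ambient_lux, max_lux=10000.0):
    """Lift a pixel value (0..1) more aggressively in bright ambient light.

    A toy stand-in for local tone mapping: under strong sunlight, dark
    regions are boosted so detail stays visible, while already-bright
    pixels are left mostly alone. Curve shape and constants are
    illustrative only.
    """
    strength = min(ambient_lux / max_lux, 1.0)       # 0 indoors .. 1 in sunlight
    gamma = 1.0 - 0.5 * strength                     # lower gamma = brighter shadows
    return pixel ** gamma

# Indoors the midtone is untouched; in direct sunlight it is lifted.
print(round(tone_map(0.25, ambient_lux=0), 3))       # → 0.25
print(round(tone_map(0.25, ambient_lux=10000), 3))   # → 0.5
```

Because the display no longer needs maximum backlight to keep shadows legible, the same mechanism is also where the power saving comes from.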


Judd explained that the importance of imaging doesn't stop at capturing and displaying images but now extends to understanding the content of those images. Smart, low power technologies like ARM's Computer Vision engine, which is enabling instant facial recognition and behavior mapping across large groups of people, will also start to change the way we work. Security and safety can be much improved by utilizing this technology to assess overcrowding in transport, for example, and address it before it becomes a danger to commuters. It can also improve personal video content by allowing you to focus specifically on your friends or family members as they play football, or run a marathon for example.


Demos, too, are adding value to these events. With dedicated team members and partners on hand to help you try everything from VR to drones, even the coffee breaks can be hugely informative.


Having seen just a small proportion of the fantastic presentations across these events I’m in awe of the wealth of potential at our fingertips and am already looking forward to next year’s events to see what one more year of innovation will bring. For those of you lucky enough to join the team at the upcoming events in Taiwan, Korea, Japan and India, there’s a lot to look forward to!

Following a fantastic first event in Shanghai it was off to the airport for a short hop to a much chillier Beijing for round two of 2016’s Tech Symposia. On Monday we talked about the keynotes and major product announcements so today we’re taking you to the beautiful Beijing Sheraton ballroom where our technical experts took us through a deeper dive into a huge range of products and processes.


Split into three streams, the Tech Talks covered Next generation processing, Smart embedded & IoT and Intelligent implementation and infrastructure. Focussing on the first stream, first up were ARM Senior Product Managers Dan Wilson and Roger Barker for a closer look at their new products launched on Monday. Having covered the market drivers such as Virtual Spaces and the new Vulkan graphics API in the keynotes, Dan used his session to consider Mali-G51’s Bifrost GPU architecture in greater detail.


The first Bifrost based Mali GPU, Mali-G71 is a high performance product designed for premium devices. In order to facilitate quality graphics on mainstream devices based on Mali-G51, specific architectural optimizations were made to rebalance workloads and prioritise graphics processes. A new shader core design allows partners to choose a flexible implementation of single or dual pixel shader cores, with single pixel cores able to handle one texel per cycle and dual pixel cores handling two texels per cycle. Partners can also choose to implement an asymmetric combination of the two for an MP3 configuration. Not only are there changes to the available shader cores but specific optimizations were also made to the texture and varying units. The texture unit changes have been designed for increased tolerance to high memory system latency while effectively reducing the pipeline length to reduce silicon die area and power consumption. As Mali-G51 based devices start to appear in consumers’ hands in 2018 it will be great to see how our partners have leveraged these developments to provide a greater user experience for mainstream devices.
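To illustrate the arithmetic behind those core options (an illustrative sketch, not ARM documentation), peak pixel throughput per clock simply sums the contribution of each core type:

```python
def pixels_per_clock(single_cores=1, dual_cores=2):
    """Peak texel throughput of a mixed Mali-G51 core configuration.

    Single-pixel shader cores handle one texel per cycle and dual-pixel
    cores handle two; an asymmetric mix (e.g. one single plus two dual
    cores in an 'MP3' configuration) is also allowed. The core counts
    here are illustrative.
    """
    return single_cores * 1 + dual_cores * 2

print(pixels_per_clock(1, 2))  # → 5
```

Multiplying by the clock frequency then gives the headline fill rate, which is the trade-off partners tune against silicon area.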


Following Dan, Roger took to the stage to provide us with more detail about the new video processor, Mali-V61. He highlighted the evolution of video from live broadcast TV through on-demand streaming right up to today’s newest use cases, where an ever increasing number of users are communicating in live, real-time video. Featuring all-new, super high quality VP9 encode and decode and much improved HEVC encode, the Mali-V61 VPU again provides better than ever configurability and choice to partners. Roger explained that this high quality encode matters because of the emergence of time critical video applications that can’t support the additional processing time required to transcode content between codecs in the cloud before delivering, for example, a VP9 decode. With VP9 encode you can upload your video content in the same format in which it will be decoded, removing the transcoding lag and facilitating the fastest possible video experience. In terms of flexibility, we were given a more rounded view of the options available across multicore scaling. With an 8 core configuration partners can choose to support one 4K stream at 120 frames per second (FPS), or alternatively multiple streams at a variety of quality and FPS performance points from 720p right up to 4K. These are of course just a taste of the technical elements, so you can visit the Connected Community to read the launch blogs for Mali-V61 and Mali-G51 for all the details.


Another great multimedia session was hosted by our in-house graphics guru Sylwester Bala and boasted the rather poetic title of ‘Virtual Reality: Hundreds of millions of pixels in front of your eyes.’ Sylwester took us through the evolution of mobile gaming from the earliest Gameboy platform right up to the mobile virtual reality applications appearing today. The graphics complexity has increased throughout this timeline but virtual reality has accelerated this trend even further. For starters, we have to render a marginally different view for each eye in order to create a viewpoint that our brain understands, effectively doubling the graphics workload. The lenses within the head mounted display are needed to correct the distance perception in the field of view but have the additional effect of making the final image appear sunken in the middle, known as a pin cushion effect. Barrel distortion therefore has to be applied in post processing to correct this effect, adding another level of processing complexity. Sylwester discussed the factors which often limit the quality of a VR experience such as latency, double CPU and GPU processing (required by the two separate views) and resolution. One of the solutions to reducing graphics workload is the use of foveated rendering. This approach effectively splits the display into two concentric circles to mimic the fovea region of your eye. The central section is rendered in high resolution but the outer section in lower resolution. The reduced quality of the outer section isn’t perceptible to the viewer but greatly reduces the processing power required. It does however, mean that instead of rendering twice, for two eyes, we are rendering four times, for four different sections. Sylwester explained how our Multiview extension can render both views with the required variations, simultaneously. Read more about Multiview in this blog. The really exciting result of the use of Multiview is the impact it has on GPU performance. 
Graphs demonstrated that GPU utilization could be taken down from near 100% (for 1024 x 1024 resolution) to 40% for 512 x 512 inset resolution. Further savings can be achieved by reducing the size of the inset section depending on the specifics of the device and its optics. The data also demonstrated up to 50% bandwidth savings. Further efficiencies could be achieved through the use of ARM Frame Buffer Compression for bandwidth heavy processes like Timewarp, in some cases saving up to 50%.
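The savings quoted above can be sanity-checked with some simple pixel counting. This toy model assumes the periphery is rendered as a separate half-resolution pass covering the whole field of view; the real pipeline composites the regions rather differently, but the resolutions match the figures from the talk.

```python
def foveated_pixels(full_res=1024, inset_res=512, outer_res=512):
    """Pixels shaded per eye with and without a two-region foveated split.

    Simplified model: the high-detail inset is inset_res², plus an
    outer_res² low-resolution pass for the periphery. Returns the
    non-foveated count, the foveated count, and the fractional saving.
    """
    full = full_res * full_res
    foveated = inset_res * inset_res + outer_res * outer_res
    return full, foveated, 1 - foveated / full

full, fov, saving = foveated_pixels()
print(f"{saving:.0%} fewer pixels shaded")  # → 50% fewer pixels shaded
```

Halving the pixels shaded per eye lines up loosely with the observed drop in GPU utilization and bandwidth, which is why shrinking the inset further keeps paying off.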


There were so many fantastic sessions that I couldn’t possibly hope to cover them all but I hope this has given you at least a glimpse of the vast breadth of knowledge we’re so lucky to be able to share with our partners. From here the Tech Symposia tour moves to Shenzhen on 4th November then on to Taiwan, Korea, Japan and finally, India before finishing up for another year. Don’t hesitate to get in touch if there’s something in particular you’d like to hear more about!


The first of this year’s ARM® Tech Symposia kicked off this morning on a rather damp day in Shanghai. With the rain coming down outside it was a perfect opportunity to check out our latest demos before convening in the ballroom for the first of the day’s presentations. Allen Wu, EVP and President of ARM China welcomed us to the event and discussed ARM’s commitment to supporting the development of China’s technology ecosystem. He then handed over to Ian Ferguson, VP Worldwide Marketing, for a deeper look at what we can expect from this year’s events and the future of ARM and our partners under our new umbrella, the Softbank Group. Ian talked about the opportunities this collaboration has provided for the future of automation and IoT technology as well as stressing the importance of continuity under this new model and ARM’s commitment to ensuring business as usual for all our partners and colleagues.


In terms of the opportunities for automation and IoT technology, China is ahead of the game and taking the lead in accelerating ARM based server infrastructure. For example, parking spaces at the new Disneyland Shanghai have been enabled by Huawei for smart monitoring and reporting of capacity and usage patterns. This accelerated adoption will support the faster deployment of IoT based systems. The importance of this can be seen in the EIU IoT Business Index, a study which showed that in 2013 around 90% of businesses surveyed expected to be using IoT by 2016. Now, 75% of those are indeed seeing the impact of the IoT revolution on their business, with key focuses around security, cost and establishing a sufficient knowledge base to truly enable the industry’s growth.


Ian discussed security requirements in the context of recent incidents such as the covert deployment of thousands of DVRs to simultaneously attack DNS servers, bringing down huge sites like Twitter and Spotify whilst appearing to continue functioning as normal. Silent attacks such as this highlight the need for security technologies like ARM’s TrustZone® in protecting both content and devices. Security is not the only concern for a connected world, with a strong ecosystem required to facilitate sustainable growth. China’s ecosystem is not the same as that seen in the US, with initiatives like OPNFV providing a shift from proprietary hardware to open source software allowing our China partners to compete on a global scale. The distribution of the ARM powered BBC MicroBit to UK schoolchildren can be expanded to the Chinese education system to grow the next generation of programmers with open source platforms, software and specifications.


We’re seeing developments too in automated vehicles. Whilst widespread use may still be a way off, with safety critical implications under careful consideration, recently in the US a beer truck successfully travelled 120 miles to deliver its important cargo without a driver in sight. Drones too are becoming more valuable, with Amazon trialling them for deliveries in the air near our offices in Cambridge. These too require additional layers of technology to sustain their use. It’s not enough to be able to program their destination via GPS; they need computer vision combined with machine learning in order to assess and avoid hazards, connectivity for real time updates, and safety critical mechanisms to ensure security and protection for both content and consumer. Healthcare, too, is beginning to benefit from advances in IoT based applications and microprocessors, with innovative initiatives emerging that use smell sensors to detect cancer cells early enough to ensure prompt treatment. Elsewhere, sound sensors on streetlights in dangerous areas can immediately alert police to gunfire in the vicinity without the delay of waiting for an emergency call from a member of the public. With such a vast range of applications for connected devices and automation, it’s clear that the IoT revolution really is upon us and it’s great to see the huge leaps we and our partners are taking in making this happen.


Next up was the product keynote, with Noel Hurley, VP & GM of the Business Segments Group, discussing the rapid uptake of ARM’s latest Premium Mobile products, the Cortex®-A73 CPU and Mali™-G71 GPU, launched in May 2016. With these products starting to appear in devices it’s great to see the annual product launch cycle has been able to benefit our partners’ time to market. The product keynote was a key milestone for us in the Mali multimedia and graphics team, with Noel announcing the exciting launch of not one, but two new products into the ARM Mali Multimedia Suite. First up was the Mali-V61 video processor (VPU), which might be familiar to some of you as it was previewed earlier this year under the codename Egil. Now fully fledged, our brand new video processor boasts better than ever scalability and configurability as well as high quality VP9 encode and vast improvements to HEVC encode. Designed to support video across all device types and tiers, from smartphones and drones to cameras, Mali-V61 is looking to be the go-to IP for next gen video apps.


Hot on the heels of the Mali-V61 VPU was the Mali-G51 GPU, the first mainstream GPU to be built on our exciting new Bifrost architecture. Launched earlier this year with the Mali-G71 high performance GPU, Bifrost has undergone some specialized optimizations in order to perfectly balance quality graphics performance with area and energy efficiency to allow Mali-G51 to meet the needs of the mainstream device market. Not only is VR reaching the mainstream in areas like Virtual Spaces, but the development of new APIs like Khronos’ Vulkan, as well as ever-growing screen resolutions, have been instrumental in creating the need for high performance graphics capability within a mainstream silicon budget.


Not to be forgotten was the recent acquisition of Apical, which allowed us to add computer vision and assertive camera and display technologies to our Imaging and Vision portfolio. Read more about the importance of Computer Vision to the future of technology here. Back on to IoT, Noel filled us in on the recent launch of the IoT subsystem block enabling a fast, secure route all the way from chip to cloud. Nandan Nayampally’s blog explains why this was such a significant area of focus for us and what it brings to the IoT environment.


The next stop on the Tech Symposia tour takes us to Beijing where I’ll be bringing you all the highlights from the more technical presentations across the three streams of Next generation processing, Smart embedded & IoT and Intelligent implementation and infrastructure. See you there!

Earlier in 2016 we gave you a sneak preview of our brand new Mali video processor, then codenamed Egil. There was a great deal of interest in this exciting new product, not least because of some of the ground-breaking features included and the industry has been impatient to hear more. Well, the big day has finally arrived and we can now announce the official launch of the Mali-V61 video processor.


In developing Mali-V61 we’ve continued to take an alternative approach to the standard video processor, which tends to target a specific codec or a very limited selection. Instead, we’ve developed a single, unified video solution which controls all the necessary features of the relevant codecs through firmware, with all pixel processing handled by dedicated hardware blocks. Our firmware is controlled through a single API and we currently provide reference drivers based on the latest Android releases along with a host interface specification. This not only allows flexibility in SoC design but also provides a multi-standard solution to the industry.


Something we consider of high importance through all of our IP development is the need to not only support today’s high end content and devices, but also to be able to adapt to the challenges and additional complexity the future of the industry may bring. With this in mind, we have implemented significant enhancements to our HEVC encode capability as well as creating support for VP9 encode and decode, making Mali-V61 the first multi-standard video processor to be contained in a single IP block.


As well as advanced encode and decode options, we provide an Android reference software driver. It handles numerous tasks, including the setup of a video session as well as dynamic power gating and memory allocation. The built-in core scheduler manages multiple encode/decode streams and maps single or multiple video streams across multiple cores in order to provide maximum performance.
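The core scheduler itself lives in firmware, but the general idea of mapping several streams across a pool of cores can be sketched conceptually. The code below is purely illustrative (the function and stream names are made up for this sketch, not Mali-V61 interfaces): each stream is assigned to whichever core currently has the least work, so no core sits idle while another is overloaded.

```python
def map_streams_to_cores(streams, num_cores):
    """Toy scheduler: assign each (name, cost) stream to the least-loaded core."""
    loads = [0] * num_cores
    assignment = {}
    for name, cost in streams:
        core = loads.index(min(loads))  # pick the core with the lightest load
        assignment[name] = core
        loads[core] += cost
    return assignment

# Hypothetical mix of sessions, with rough relative costs
streams = [("decode-4K", 4), ("encode-1080p", 2),
           ("decode-720p", 1), ("encode-720p", 1)]
print(map_streams_to_cores(streams, 2))
# → {'decode-4K': 0, 'encode-1080p': 1, 'decode-720p': 1, 'encode-720p': 1}
```

In this toy run the heavy 4K decode gets a core to itself while the three lighter sessions share the other, which is the balancing behaviour the real scheduler aims for.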



Video conferencing and ‘chat’

The new Mali-V61 video processor’s flexibility in handling multiple encode/decode streams makes it the ideal solution for a range of important use cases. Two-way, real-time video communication is a rapidly increasing use case, whether in more formal video conferencing applications or the growing range of video chat applications that are now prevalent. The complexity required to simultaneously handle multiple video streams from different devices, locations and performance points often means that there are serious compromises in the quality of the final video output. Mali-V61 is able to efficiently handle all of these streams and allocate just the required amount of bandwidth in order to retain the maximum possible video quality and provide a superior visual experience to the end user. This avoids all the awkward delays and accidental interruptions we saw with early mobile video conferencing capabilities.


Video capture

With the rise of 4K displays has come a need for higher quality content to exploit them to their best advantage. Mali-V61 supports 4K video capture and streaming to a larger screen device, such as your home TV, as well as sharing your content directly with friends or on social media. This allows you to take, watch and share higher quality videos without the need for external hardware.


Configurability and scalability

We’ve designed Mali-V61 to be sufficiently configurable to enable multiple levels of use of the video IP whilst retaining the same high quality encoding and decoding performance. This enables our partners to differentiate based on the requirements of their target devices. They can take into account considerations such as the preferred resolution and frame rate to be supported and whether they want to enable encode or offer only decode capability, for example if they are producing a video player without a camera. Partners can also design their configuration based on whether they want to support both 10-bit and 8-bit video, or just 8-bit, as well as whether they want to support all video codecs, or just a subset. This range of options allows partners using Mali-V61 to deliver very specific points of differentiation for their products, providing them with far greater control.


Mali Multimedia Suite

Following on from the launch of the Mali-DP650 display processor in January 2016 and the Mali-G71 GPU in May 2016, Mali-V61 provides the third element to complete the latest high performance ARM® Mali Multimedia Suite configuration, designed for next generation premium devices. The entire Mali Multimedia Suite comes pre-optimized to work together to produce the highest quality user experience whilst exploiting the latest advances in energy efficiency and bandwidth saving.


A new version of one of ARM’s top bandwidth saving technologies, ARM Frame Buffer Compression (AFBC) has also been adopted for the Mali-V61 VPU as well as the newly released Mali-G51 GPU. This latest version of AFBC is fully backwards compatible while advances in this technology provide a new level of efficiency across the full Mali Multimedia Suite of products.

It’s not just high-end mobile devices that need to work like a dream; we expect a certain level of performance and a relatively advanced feature set even from a more modest, mainstream handset budget. Whilst it’s the top of the line products which often garner the most media attention, a large proportion of the global smartphone market is based on mainstream rather than premium devices. In manufacturing the high volume chips required for this market segment, the cost contributed to the system by silicon area has a big impact on the final cost. In order to retain quality performance points within a mainstream budget, silicon area is therefore one of the key areas of focus for cost reduction.


The second GPU to be built on our innovative new Bifrost architecture, Mali-G51 is the first Bifrost GPU in ARM®’s High Area Efficiency roadmap. Exploiting the very latest ARM advances in bandwidth and power efficiency, combined with all-important area reduction, Mali-G51 is our most cost efficient GPU to date with up to 60% more area efficiency than Mali-T830 and 60% more energy efficiency.


Designed to bring premium experience to the mainstream device, Mali-G51 supports all the key everyday use cases from augmented reality (AR) and virtual spaces to casual gaming and a smooth, fluid user interface.


Bringing Bifrost mainstream

In May 2016 you may have seen the launch of the first of our Bifrost based GPUs, Mali-G71. This propelled the new Bifrost architecture into the premium mobile space with the highest performance capabilities, designed to support VR gaming and other complex, power hungry content. This doesn’t mean, however, that Bifrost is all about the biggest and best of premium mobile capability. Designed from the outset to scale across all levels of device, Bifrost can be carefully deployed to achieve the perfect performance point for any level of product.


Targeting the mainstream smartphone market, Mali-G51 brings the Bifrost architecture to a different market tier with features and capability specifically tuned to the area and power limitations of mainstream mobile. Individual features of the underlying architecture have been analysed and assessed against real graphics applications in order to ensure mainstream graphics needs are prioritized for a well-balanced design.


Bifrost’s low level instruction set, which gives control to the compiler, has been further optimized for Mali-G51 and specifically rebalanced for power sensitive graphics workloads. Not only that, but a new dual-pixel shader core has been implemented to double texel and pixel rates and can be used asymmetrically with a uni-pixel shader core in order to access even further configurability and versatility.



A step change in efficiency

It’s no secret that there are challenges inherent in the mobile form factor that aren’t present in other types of device. Not only do we not have the PC’s lovely big fans cooling everything down, but we also don’t have a handy mains power connection running continuously. Every component in an SoC needs power and, in using it, creates heat that the device has to dissipate. This heat dissipation has actually become harder in newer mobile devices where the bezel is getting smaller and ever more of the surface area is taken up by the screen, which doesn’t have the same cooling capacity as the metal case. Reducing the power consumed by the GPU frees up this power to be used elsewhere in the SoC and decreases the thermal pressure the GPU adds to the device. It also means less power is consumed from the system’s total budget, a key requirement not only for a smooth experience but also for smartphone users to get the most from their device’s battery life.


AFBC 1.2

Another exciting feature of the Mali-G51 GPU is the addition of the newest version of our advanced bandwidth saving technology, ARM Frame Buffer Compression (AFBC) 1.2. Latest optimizations include improved GPU performance in bandwidth limited scenarios as well as improved display processor performance for rotation use cases.


AFBC 1.2 also improves compression for constant colour blocks, providing further significant savings for user interface and 2D graphics applications. Fully backwards compatible with former versions, AFBC is therefore available across the full Mali Multimedia Suite (MMS) of Graphics, Display and Video processors with the newly launched Mali-V61 video processor. System wide optimizations like AFBC 1.2, Adaptive Scalable Texture Compression, and ARM TrustZone® allow all parts of the MMS to work seamlessly together, optimizing performance and bandwidth reduction and reducing our partners’ time to market.
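AFBC itself is a proprietary format, so the snippet below is only a toy illustration of why constant-colour blocks are such an easy win for UI and 2D content, not the real scheme: when every pixel in a block is identical, a single stored value can stand in for the whole block.

```python
def compress_block(block):
    """Toy compressor: collapse a constant-colour pixel block to one value."""
    if all(px == block[0] for px in block):
        return ("solid", block[0])      # whole block described by one pixel value
    return ("raw", list(block))         # varied content stays uncompressed here

ui_block = [7] * 16                     # a flat 4x4 region, e.g. a UI background
photo_block = list(range(16))           # 16 distinct pixel values
print(compress_block(ui_block))         # → ('solid', 7): 16 pixels become 1 value
print(compress_block(photo_block)[0])   # → 'raw': no saving for varied content
```

Real frame buffer compression is far more sophisticated than this, but the sketch shows why interfaces full of flat colour compress so much better than photographic content.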


Virtual spaces & AR

Virtual reality (VR) is one of the more demanding of today’s use cases when it comes to the burden it places on a mobile system. To ensure a fully immersive experience in VR gaming requires extensive power and performance optimization. This however, doesn’t mean that you can only join the virtual world by purchasing top of the line devices. Low power VR is becoming a market segment all of its own and is facilitating some of the arguably more useful, every day virtual interactions.


‘Virtual Spaces’ are how we refer to virtual environments that don’t require the fully interactive, highly reactive elements of AAA VR gaming. Virtual spaces are finite environments that can support interactive elements, like the people within them, whilst keeping the surroundings static and therefore minimizing GPU workload. Virtual spaces represent the obvious business application for VR, where you can collaborate with teammates, colleagues and customers across the globe in a much more realistic manner in a virtual boardroom, conference suite or even breakout area.



Socially, virtual spaces allow you to meet up with friends in comfortable surroundings and talk face to face, without ever leaving your sofa. The ability to look at the person talking to you and respond in real time makes the distance between separated loved ones seem much easier to bridge.


The fact that these low power VR solutions can now be supported by mainstream, area and energy efficient GPUs like Mali-G51 means they are accessible to a much wider audience. Businesses no longer need to be constrained by a tighter tech budget and everyday consumers can experience the future of virtual communication without breaking the bank.


In designing the Mali-G51 GPU, the ARM Mali team is excited to have brought such significant savings to such an important area of the market, and we look forward to seeing it appear in next generation mobile devices in 2018.

It’s no secret that everything in China moves fast. At ‘China Speed’ in fact. From building a skyscraper in 19 days, to the fastest trains in the world, China is all about embracing change and getting things done in the shortest possible time. In the technology industry this trend continues with innovation and technology uptake at an all-time high.  Wherever you look, people are talking about it.


In the past, China’s tech industry has been compromised by perceptions that it was a follower in the market and primarily focused on replicating ideas from global leaders.  Although some of this might have been true in the past, it was parallel with a rapid technology learning curve.  More recently, it’s easy to see that China has learned quickly and now has an intensity of innovation and development that puts it on a par with other major players and has enabled a technology revolution of its own. These days, from CNN to Bloomberg, all the major influencers are acknowledging China’s leadership in tech and potential to change the world, with lists of ‘Top Chinese Tech Companies’ rife. China is not only keeping up with global tech trends, but is surging ahead to break new ground and it’s doing it at China speed.


Alibaba, a member of ARM’s new family, the Softbank Group, is possibly the best known name in Chinese tech and indeed, one of the most famous e-commerce giants in the world. Its share price is around 30 times where it was a little over a decade ago and its services have expanded to a point where they rival any standard retail infrastructure. Baidu, China’s answer to Google, has fast become the country’s favourite search engine and has already expanded into food services, online payment systems and much more. Tencent is another name you’ve likely heard. Launching WeChat, the second incarnation of their QQ instant messaging service in 2010, it today boasts over 700 million users and acts as a one stop social shop for chatting, image sharing, news, payment and so much more.  All of these companies boast a now-established trend in innovation with rich R&D activities certain to continue to impress.


ARM partner Huawei is another perfect example of the power of China speed. With sales jumping 40% in the first half of 2016 and smartphone shipments up 25% in the same period, the meteoric rise of China’s premium smartphone maker is nothing if not impressive. On top of that, with flagship devices like the Mali-T880 powered P9, Huawei has managed to break into the global high end premium mobile market. This ability to compete with the world’s leading technology brands has allowed them to become a major driving force, with sales outlets alone up 116%.


China’s smartphone uptake is still on the rise, with >62% of mobile users adopting them compared to ~55% in Europe


Not only is the smartphone market one to watch in China but mobile gaming is growing at breakneck speed too. In Q2 2016 the market reached an estimated RMB 24.4 billion (US$3.66 bn) which equates to a phenomenal 120% increase year on year. It’s also predicted that mobile gaming will continue to grow and take a larger share of the overall gaming market, from 33% in 2015 up to around 48% in 2019.


So why is China so much quicker off the mark with new and emerging tech and how have they turned around their image to become the ones to watch for new and exciting products? The answer is of course not a simple one, as multiple factors must come together to make it happen, but an example is the very different approach they take to projects when compared to the West.  While the West has a tradition and preference for exhaustive analysis and planning, China doesn’t wait. They see a great idea and an opportunity to develop it and they leap on it. They’re bold, brave and unafraid of risk and are therefore paving the way in new and emerging technology areas.


Huawei's P9 smartphone ranks amongst the top premium smartphones of 2016


This assertiveness and drive is key in the technology industry, companies need a new product on the shelves before anyone’s even realised they want it. Not only that, but whether they realize it or not, it’s also important to the consumer. China ships the most smartphones worldwide and consumers expect to upgrade relatively frequently, but don’t want to pay a fortune. The market is so big that there is extensive competition and end users are therefore able to demand more premium performance even from a mainstream priced device such as an internet TV box or a smartphone. This puts pressure on all levels of the supply chain to design, implement and release the next bigger and better offering faster and cheaper.


It’s this need for speed which makes the ARM Mali Multimedia Suite such a great fit for the Chinese tech industry. Not only have Mali products led a relentless march in device capability, but the flexibility and scalability of the Mali range means our partners can quickly achieve the right balance of performance and efficiency for their particular market needs. Indeed, records were broken in 2014 when Chinese semiconductor company Rockchip were able to produce the first Mali-T760 based silicon just a matter of months after the GPU launched. It’s not just about GPUs though, and the ability to address an increasing range of media capability and functionality can be key to whether a product speeds ahead or idles on the side lines. This is why our pre-optimized Mali Multimedia Suite of GPU, Video and Display processors work together seamlessly. This not only reduces risk and implementation time when designing a new product but also allows our partners to fully exploit unique features and significant bandwidth savings through technologies like ARM Frame Buffer Compression (AFBC). In addition to the technological benefits, ARM is also able to tap into the expertise of our rich ecosystem of software, middleware and application partners to help strengthen the offerings of a new or emerging licensee.


Given the rich cultural history, incredible rate of change and huge potential of the Chinese market, it’s no surprise that the Chinese government is keen to secure the country’s tech supply chain. It’s important for them to ensure it develops rapidly and is capable of capitalising on this opportunity, and we can see this in initiatives such as the government’s ‘Made in China 2025’ plan.

ARM and the Mali team are happy to be able to support the flourishing of such a strong new ecosystem by working closely to help provide the products and flexibility to make that happen. Our range of dedicated multimedia products, perfectly aligned to support a vast range of configurations and designs, is allowing us to help our partners develop the scalability and flexibility required to reduce time to market and continue leading the way in the future of tech.


Reducing power consumption and optimizing CPU utilization in a multi-core architecture are key to satisfying the increasing demand for sustained, high-quality graphics while maintaining lasting battery life. The new Vulkan API facilitates this, and this blog covers a real demo recording showing the improvements in power efficiency and CPU usage that Vulkan provides compared to OpenGL ES.


Vulkan unifies graphics and compute across multiple platforms in a single API. Until now, developers had the OpenGL graphics API for desktop environments and OpenGL ES for mobile platforms. The GL APIs were designed for previous generations of GPU hardware and, whilst the capabilities of hardware and technology evolved, the API evolution took a little longer. With Vulkan, the latest capabilities of modern GPUs can be exploited.


Vulkan gives developers far more control of the hardware resources than OpenGL ES. For instance, memory management in the Vulkan API is much more explicit than in previous APIs. Developers can allocate and deallocate memory in Vulkan, whereas in OpenGL the memory management is hidden from the programmer.


The Vulkan API has much lower CPU overhead than OpenGL ES thanks to its support for multithreading. Multithreading is a key feature for mobile, as mainstream mobile devices generally have between four and eight cores.
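The benefit of multithreaded command recording can be sketched conceptually. The snippet below is illustrative only: the names are made up and these are not Vulkan API calls. Each worker thread records its own chunk of draw calls into a separate per-thread buffer, and the buffers are then submitted together, so no single CPU core becomes the bottleneck the way it does when one thread must record everything.

```python
from concurrent.futures import ThreadPoolExecutor

def record_commands(draw_calls):
    # Stand-in for filling one per-thread command buffer
    return [("draw", d) for d in draw_calls]

def record_in_parallel(draw_calls, num_threads=4):
    # Split the frame's draw calls into one chunk per worker thread
    chunks = [draw_calls[i::num_threads] for i in range(num_threads)]
    with ThreadPoolExecutor(max_workers=num_threads) as pool:
        buffers = list(pool.map(record_commands, chunks))
    # A single submission of everything the workers recorded
    return [cmd for buf in buffers for cmd in buf]

commands = record_in_parallel(list(range(1000)))
print(len(commands))  # → 1000: every draw call recorded exactly once
```

In a real Vulkan application the equivalent structure is one command pool and command buffer per thread; the point of the sketch is simply that recording work parallelises cleanly once the API stops funnelling everything through a single context.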


On the left-hand side of the video image, you can see the OpenGL ES CPU utilization at the bottom. The OpenGL ES API makes a single CPU core work very hard. On the right-hand side, you can see the difference the Vulkan API brings with improved threading. The multithreading capability allows the system to balance the workload across multiple CPUs and to lower the voltage and frequency, as well as enabling the code to run on the LITTLE CPU cores.



Fig.1 Video screen capture, showcasing CPU utilisation


With regards to energy consumption, the video shows an energy dial on top which demonstrates the improved system efficiency that Vulkan brings. Running the sequence through to the end, measured on a real SoC, the multithreading benefits bring a considerable saving in energy consumption. Even at this very early stage of software development on Vulkan, we could see an overall system power saving of around 15%.


Fig.2 Video screen capture, showcasing overall system power saving




Fig.3 Huawei Mate 9 Vulkan vs OpenGL ES demo comparison

(Picture source: Huawei Device Official Weibo)

On November 3rd, soon after the making of the first video comparison between Vulkan and OpenGL ES, Huawei launched their new Mate 9 smartphone. This new generation smartphone is based on the latest SoC from HiSilicon, the Kirin 960, which has a 180% GPU performance boost compared to the Kirin 950. It is the very first mobile device using Mali-G71, our high performance GPU launched earlier in 2016, as well as ARM’s new Cortex-A73 CPU in a big.LITTLE® octacore configuration, delivering 18% faster performance. This cutting edge device also comes with the latest Android 7 Nougat, so it has native support for Vulkan.


The Huawei Mate 9 launch showcased a Vulkan demo, “The Machines”, from Directive Games. This demo features a considerable performance uplift from OpenGL ES to Vulkan, from 17 FPS to 35 FPS, and Huawei claims that Vulkan boosts the power efficiency of their new devices by up to 400%.


To get you started using the Vulkan API, there is a wealth of developer resources here, from an SDK with sample code, to tutorials and developer tools to profile and debug your Vulkan application.

On the 29th September, as promised at Google I/O, Unity released the first developer preview for their upcoming Vulkan renderer. Developers have been eagerly awaiting the release since Android Nougat was announced on the 22nd of August with Vulkan support as one of its key features.


Here at ARM we have been supporting graphics developers’ uptake of the Vulkan API since Khronos launched it publicly in February. ARM Mali Graphics Debugger and driver support were made available on release day and we’ve subsequently provided a set of educational developer blogs on using Vulkan, a Vulkan SDK and sample code. We also gave a series of talks and demonstrations on Vulkan at GDC, the world’s largest game developer conference, just a few weeks after the API was launched. All of our developer resources and content can be found here.

Fig 1. An example of a Vulkan demo developed by ARM


Developer resources and tools are not all we provide at ARM. Not only were we heavily involved in the development of Vulkan as part of Khronos’s Working Group, but we’ve also collaborated closely with Unity, the leading game engine platform downloaded by over 5 million game developers, to support this renderer release.

The results of this collaboration have been great news for mobile game developers, as the ARM Mali-based Samsung Galaxy S7 (European version) has been recommended (and tested) as the first Android developer platform to run Unity’s initial Vulkan Renderer Preview. Developers can download the first preview release by getting the experimental build from Unity’s beta page.


At this early stage of development, the main benefit Vulkan brings to the Unity engine is speed, thanks to its multithreading feature. Current mobile devices have multi-core CPUs, and the ability to carefully balance workloads across these cores is key to achieving these improvements. The increase in power efficiency is realized by balancing workloads across several CPUs to reduce voltage and frequency, while the increase in performance and speed is attributable to the ability to use the full compute resource of the CPU cores.

We in the ARM Mali team are pleased to be able to support such important industry advancement and look forward to seeing what our broad ecosystem of developers can do with the first Vulkan Renderer on Unity!


To learn more about Unity's Vulkan Renderer Preview:

In previous blogs we’ve looked at the scalability of the Mali™ family of GPUs which allows partners’ implementations to be tailored to fit all levels of device across multiple price, performance and area points. We’ve also taken a closer look at a high performance Mali implementation in Nibiru’s standalone VR headsets.


This time we’re exploring the other end of the Mali spectrum: Ultra low power. Today, the most shipped GPU in the world is still the Mali-400. Based on our original Utgard architecture, Mali-400 is the GPU of choice for devices where minimizing power consumption is key. Since the Mali-400 GPU was released, further optimizations have been applied in the design and implementation of subsequent Ultra-low power GPUs, Mali-450 and Mali-470.


As you’ll know if you’ve read my previous blogs, VR places a whole lot of pressure on the power and thermal limitations of the mobile form factor. To ensure a great, immersive experience you need a solid framerate, high resolution and super low latency, amongst other things. To achieve this for top end content like AAA gaming can often require the highest performance hardware and a greater power budget than can be supported by a mid-range SoC. That, however, doesn’t necessarily mean you need to queue up and pay out for the next big flagship smartphone just to get on board with mobile VR.


In the tech industry it can often take a long time for high end content, use cases, or applications to become sufficiently well understood and developed to trickle down to the more mainstream device. The beauty of mobile VR is that the flexibility of the medium means you’re not locked out altogether just because you don’t want to spend on a top of the line device. In spite of the comparatively recent take off of VR products, every day use cases are already starting to become available and accessible to all on mainstream hardware. Whilst you wouldn’t want to try high end gaming (you’d almost certainly feel sick, if your system handled it at all) there are other, arguably more useful, ways in which the virtual world can change our lives.


Virtual spaces are where VR can meet mainstream devices to support a vast majority of business, social and communications needs. Whether you want to collaborate with overseas colleagues or just catch up with friends, virtual spaces allow you to interact in a more lifelike manner and can be supported in a much lower power budget than more complex content. The beauty of this concept is that there’s no need to navigate around a fully interactive virtual environment as you need to for VR gaming. Users can be limited to a smaller setting such as a virtual boardroom, bar or café, which reduces the rendering complexity. This means you don’t need the highest performance SoC to support devices targeted at this type of content, as one of our innovative partners has recently shown.


Actions Semiconductor (Actions) is a leading Chinese fabless semiconductor company providing dedicated multimedia SoC solutions for mobile devices. Founded in 2001 and publicly listed in 2005, Actions now has ~700 employees and one of the most informed and influential engineering teams in the industry.


One of their most recent products, the V700, is an SoC expressly designed for the cost-efficient end of the virtual reality market. Based on a 64-bit Quad-core ARM® Cortex®-A53 processor with TrustZone® Security system, graphics are provided by the powerful but highly efficient Mali-450 MP6 GPU. This provides maximized 3D/2D graphics processing delivering excellent rendering within a very small power and bandwidth budget, making it ideal for mid-range standalone VR devices.


When asked why they chose the ARM Mali family of processors for this device Actions explained that it was very important to them to enable high quality VR content for the mainstream market. Not everyone is interested in spending vast sums of money on emerging technologies, particularly when there’s still some (in my opinion, misplaced) skepticism in the industry about the uptake of VR. Supporting VR content such as virtual spaces for social and business uses allows more people to access and utilize this exciting new technology. The superior power and bandwidth saving features of the products in the Mali Multimedia Suite make them the perfect choice for such a power hungry application as VR. In-built optimizations and synchronized technologies such as ARM Frame Buffer Compression and TrustZone allow our partners to achieve the high quality and security they need without limiting uptake to high-earning consumers.


It’s always great to see partners like Actions take such leaps in supporting exciting new Mali-based products and it will be interesting to watch the emergence of virtual spaces for the mainstream user in the coming months.

I lost a few days wondering why some textures were completely distorted when loaded in OpenGL.

The thing is, they were only distorted when the colour components were packed as GL_UNSIGNED_SHORT_5_5_5_1 or GL_UNSIGNED_SHORT_4_4_4_4. When packing colour components as GL_UNSIGNED_BYTE (RGBA8888), the textures were loaded correctly.


Why?


Since I'm using a small personal Ruby hack to generate raw textures from BMP files with the desired colour packing, I really thought the problem was in the Ruby code. After verifying that the generated 4444 and 5551 textures were the exact counterparts of the working 8888 textures, and tracing the OpenGL glTexImage2D calls to be sure that the data were sent correctly, I wondered if a special parameter had to be passed to glTexImage2D after all.


Ok, maybe I missed something in the glTexImage2D manual...


Sure did...


width × height texels are read from memory, starting at location data. By default, these texels are taken from adjacent memory locations, except that after all width texels are read, the read pointer is advanced to the next four-byte boundary. The four-byte row alignment is specified by glPixelStorei with argument GL_UNPACK_ALIGNMENT, and it can be set to one, two, four, or eight bytes.


The solution


Either:

  • use textures whose width is a multiple of 4, or
  • call glPixelStorei(GL_UNPACK_ALIGNMENT, 2); before calling glTexImage2D.
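The padding the manual describes is easy to see with a little standalone arithmetic (this is plain Python, not GL code, and gl_row_stride is a made-up helper name): GL rounds each row of source data up to the next multiple of GL_UNPACK_ALIGNMENT, which defaults to 4.

```python
def gl_row_stride(width, bytes_per_pixel, unpack_alignment):
    """Bytes GL expects per texture row, given GL_UNPACK_ALIGNMENT."""
    row = width * bytes_per_pixel
    # Round the row size up to the next multiple of the alignment
    return (row + unpack_alignment - 1) // unpack_alignment * unpack_alignment

# A 5-texel-wide GL_UNSIGNED_SHORT_5_5_5_1 row is 10 bytes of real data...
print(gl_row_stride(5, 2, 4))  # → 12: default alignment makes GL skip 2 bytes per row
print(gl_row_stride(5, 2, 2))  # → 10: matches tightly packed data after glPixelStorei
print(gl_row_stride(5, 4, 4))  # → 20: RGBA8888 rows happen to be 4-byte aligned already
```

That last line is exactly why the 8888 textures worked by accident: 4-byte texels keep every row 4-byte aligned, while odd-width 2-byte texels leave GL reading two garbage bytes at the start of every subsequent row, shearing the image.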


RTFM, as they always say!

In a previous blog we talked about running Mali Graphics Debugger on a non-rooted device. In this blog we will focus on how you can add support for Mali Graphics Debugger, on a non-rooted device, to your Unreal Engine application. The plan we are going to follow is very simple:

  1. Add the interceptor library to the build system
  2. Edit the activity to load the interceptor library
  3. Install the MGD Daemon application on the target device


For our first step, we will need to download a version of Unreal Engine from the sources available on Github. For more information on this step, please see Epic’s guide.


Once you have a working copy of the engine, we can focus on getting MGD working. You will first need to locate the android-non-root folder in your MGD installation directory, and your Unreal Engine installation folder (where you cloned the repository). Copy the android-non-root folder to Engine\Build\Android\Java\.


Next, we will need to change the Android makefile to ensure that the interceptor is properly packaged inside the engine build. For this, edit the makefile under “Engine/Build/Android/Java/jni/” and add the line include $(LOCAL_PATH)/../android-non-root/ at the end. It should look like this:

LOCAL_PATH := $(call my-dir)

include $(CLEAR_VARS)

include $(LOCAL_PATH)/../android-non-root/


We now need to tell the main game activity to load the MGD library. Locate it inside Engine\Build\Android\Java\src\com\epicgames\ue4\ and edit the onCreate function to look like so:

public void onCreate(Bundle savedInstanceState)
{
     try {
          System.loadLibrary("MGD");
     }
     catch( UnsatisfiedLinkError e ){
          Log.debug( "libMGD not loaded" );
     }

     // create splashscreen dialog (if launched by SplashActivity)
     Bundle intentBundle = getIntent().getExtras();
     // Unreal Engine code continues there


Engine-wise we are all set; we will now prepare our device. Install the MGD daemon on the target phone using the following command from within the android-non-root folder:

adb install -r MGDDaemon.apk


Now, before running your app, you will need to run this command from the host PC (please ensure that the device is visible by running adb devices first):

adb forward tcp:5002 tcp:5002


Run the MGD daemon application on the target phone and activate the daemon itself.



At that point you can connect it to MGD on the host PC, start your application and begin debugging it. Please refer to the MGD manual for more in-depth information on how to use it.

By following these steps you should be able to use MGD with Unreal applications on any Mali-based platform. If you have any issues please raise them on the community and someone will be more than happy to assist you through the process.
