It’s no secret that data is big business – and there’s no shortage of it. Google claims that, on average, it now processes over 40,000 search queries every second, which equates to over 3.5 billion searches per day, worldwide – or 1.2 trillion searches per year. (If you want to freak yourself out, you can look at the real-time count of searches performed each day.)
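If you'd like to check the arithmetic behind those headline figures, it only takes a couple of lines:

```python
# Scaling Google's stated 40,000 queries/second up to a day and a year.
queries_per_second = 40_000
per_day = queries_per_second * 60 * 60 * 24   # ~3.5 billion searches a day
per_year = per_day * 365                      # ~1.26 trillion searches a year
print(f"{per_day:,} per day; {per_year:,} per year")
```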
Increasingly, our lives are lived online: we bank online, pay our bills online, do our shopping online – even order our food online – and each of these actions generates data. Every day, we share our photos, memes and innermost thoughts on social media, generating ever more data in a personal data explosion that shows no sign of slowing down.
As far back as 2012, the IDC Digital Universe Study – a paper that’s still cited today – estimated that by 2020, about 1.7 megabytes of new information would be created every second for every human being on the planet. When you consider that some people – like my dad – are generating precisely zero digital data at any one time, that means that some of us are overstepping our allocation BIG TIME.
(We can debate the actual size of my dad’s digital data footprint; just know that he’s an ardently analogue man who regards even a digital clock with deep suspicion.)
Interestingly, when the IDC study was produced, it also predicted that “Cloud computing will increase in importance as the number of servers will grow 10x by 2020 and information managed by enterprise data centers 14x”. Today, however, while the number of servers may well continue to increase, anyone who’s anyone is looking to edge compute to define the future of huge swathes of data processing.
Recent advances in ML mean that more processing (and pre-processing) can now be done on-device than ever before, and there are significant benefits to doing so – not least the considerable cost and power savings. Transmitting the world’s data back and forth to the cloud doesn’t come cheap, and the more data you transmit, the more expensive it becomes – so the more processing that can be done on-device, the better.
What’s more, all that data that we’ve been busily producing has more impact while it’s fresh. Sure, my holiday photographs aren’t exactly time-critical fodder, but hot-off-the-press data is vital for any sort of real-time analytics. Take the Internet of Things (IoT), for example. IoT sensors are already creating massive amounts of data, and since we’re well on the path to a trillion connected devices, the volume of that data is set to increase exponentially. Much of that data is extremely high value in the seconds after it’s produced – particularly in emergency situations such as account hacking or cyberattacks – but would lose much of its worth in a round trip to the cloud. Edge processing can provide the near-instant intelligence necessary to make swift decisions and nip crisis situations in the bud.
For those of us who are simply concerned with a smoother experience on our personal devices, such as tablets and smartphones, on-device processing also minimizes latency, removing the lag so feared by time-critical applications that rely on a connection to the cloud. Furthermore, data that stays on the device maintains the level of privacy that we’re beginning to demand by default.
In fact, edge ML is already a common feature in many premium smartphones: if you’ve ever summoned the services of Alexa or Siri, the keyword spotting that wakes up your favourite assistant will have been done on the edge (although your subsequent requests will almost certainly have been processed in the cloud). Predictive text, personalised suggestions on your shopping or media apps, recommendations for films you might like or music you may want to hear … all these handy features rely on edge ML to create human-machine interaction that’s as effortless as possible.
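To give a flavour of what that on-device keyword spotting looks like, here’s a minimal sketch using the TensorFlow Lite interpreter. Everything specific to it – the model file kws.tflite, the input format and the label set – is an assumption for illustration, not the code any particular assistant actually ships:

```python
# Illustrative on-device keyword spotting with TensorFlow Lite.
# Assumes a hypothetical model 'kws.tflite' that scores one second of
# 16 kHz audio against a small label set (not any vendor's real model).
import numpy as np
import tflite_runtime.interpreter as tflite

LABELS = ["silence", "unknown", "wake_word"]  # assumed label set

interpreter = tflite.Interpreter(model_path="kws.tflite")
interpreter.allocate_tensors()
input_index = interpreter.get_input_details()[0]["index"]
output_index = interpreter.get_output_details()[0]["index"]

def is_wake_word(audio_frame: np.ndarray, threshold: float = 0.8) -> bool:
    """Run one inference locally; no audio leaves the device."""
    interpreter.set_tensor(input_index,
                           audio_frame.astype(np.float32)[np.newaxis, :])
    interpreter.invoke()
    scores = interpreter.get_tensor(output_index)[0]
    return scores[LABELS.index("wake_word")] > threshold

# Only after a local match would the device open a connection to the
# cloud to handle the actual spoken request.
```

The point is the architecture: the microphone loop runs entirely on the device, and the network is only touched once the wake word has already been detected.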
If you’ve bought a new phone in the last couple of years, it may be equipped with biometric security features utilizing edge ML – such as fingerprint identification, iris scanning or face unlock – that allow you to bypass the lock screen by analyzing over 100 identification points, shadows, bright areas, and reflections – all while checking for ‘spoofing’ to make sure there’s a real person present, rather than a photo or a video.
Computational photography is fast becoming the norm in mobile, too. By replacing optical processes with algorithms, and using computer vision to identify the contents of an image, computational photography makes studio effects achievable at the click of a button – or, as research announced last year by Google and MIT demonstrated, can retouch your photos before you even take them. How’s that for service?
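As a toy example of the idea – algorithms standing in for optics – here’s roughly how a synthetic ‘portrait mode’ background blur can be composed. Real pipelines derive the subject mask from depth sensors or a segmentation network; here the mask and file names are simply assumed:

```python
# Toy computational photography: synthetic background blur composited
# from an image and an assumed subject mask (255 = subject pixels).
import cv2
import numpy as np

image = cv2.imread("photo.jpg").astype(np.float32)           # assumed input file
mask = cv2.imread("subject_mask.png", cv2.IMREAD_GRAYSCALE)  # assumed mask file
mask = (mask.astype(np.float32) / 255.0)[..., np.newaxis]

blurred = cv2.GaussianBlur(image, (0, 0), sigmaX=15)  # heavy background blur
composite = mask * image + (1.0 - mask) * blurred     # keep the subject sharp

cv2.imwrite("portrait_mode.jpg", composite.astype(np.uint8))
```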
We’ve all taken advantage of on-device ML to add bunny ears to our selfie (haven’t we?), but the impact of edge compute goes far beyond creating novelty selfies or playing your favourite tune on demand: it’s already being deployed in homes and factories, on farms, and woven into the fabric of our public infrastructure.
Super-powered smartphone apps are now capable of detecting disease in plants with near-100 per cent accuracy; smart cameras monitor traffic flow and take action to minimize congestion in real time; smart modules in aquifers monitor water quality and issue near-immediate alerts if a problem is detected.
Of course, edge ML also means that real-time intelligence can now reach the parts cloud-based compute cannot reach: operations in remote locations, such as oil rigs, often have limited connectivity, but edge compute means that data collected from sensors can be analyzed and acted upon locally, speeding response times, avoiding damage to equipment and mitigating risk for personnel.
Edge analytics of this kind are also highly scalable – and by distributing ML capabilities at the edge, analysis can take place on significantly smaller data sets, calibrated to an individual device or group of devices, making processing highly efficient.
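A minimal sketch of what ‘calibrated to an individual device’ can mean in practice: each device learns a rolling baseline from its own recent readings and only flags – or transmits – the outliers. The window size and threshold here are illustrative assumptions:

```python
# Per-device edge analytics sketch: a rolling baseline learned from one
# device's own readings, flagging anomalies locally instead of streaming
# every sample to the cloud. Window and threshold are assumptions.
from collections import deque
import statistics

class EdgeAnomalyDetector:
    def __init__(self, window: int = 200, sigmas: float = 3.0):
        self.readings = deque(maxlen=window)  # small, per-device data set
        self.sigmas = sigmas

    def check(self, value: float) -> bool:
        """Return True if this reading is anomalous for *this* device."""
        anomalous = False
        if len(self.readings) >= 30:  # wait for a minimal baseline
            mean = statistics.fmean(self.readings)
            stdev = statistics.stdev(self.readings)
            anomalous = stdev > 0 and abs(value - mean) > self.sigmas * stdev
        self.readings.append(value)
        return anomalous

detector = EdgeAnomalyDetector()
# In use, only anomalous readings would ever be transmitted upstream,
# e.g.: if detector.check(sensor.read()): alert_cloud(...)
```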
Another area where the edge is a natural choice is self-driving cars, which can generate as much as 25 gigabytes of data per hour. Despite the heft of all that data, decision-making must be immediate; cloud-based processing and the potential latency that accompanies it can have no place in such a time-critical situation. Of course, not all workloads need to be at the edge, but where response time is of the essence, edge is where it’s at.
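It’s worth pausing on that 25 gigabytes an hour, because the bandwidth alone makes the case before latency even enters the picture:

```python
# 25 GB/hour expressed as a sustained upload rate.
gb_per_hour = 25
mb_per_second = gb_per_hour * 1000 / 3600  # ~6.9 MB every second, continuously
print(f"{mb_per_second:.1f} MB/s sustained")
```

Sustaining a roughly 7 MB/s uplink from every vehicle on the road is impractical even before you add the round-trip delay; processing at the edge sidesteps both problems.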
The edge is where we live, where we work, where we learn, and where we kick back and relax. As edge devices become smarter, more powerful and increasingly capable, real-time decision making will proliferate – unhindered by latency and, in many cases, connectivity.
This connected future is set to enhance our lives in countless – as yet unimaginable – ways, unleashing a new generation of intelligent, edge-based applications that drive efficiency, reduce cost and bandwidth demands, and safeguard our privacy.
To find out more about the shift of machine learning from cloud to the endpoint, read this research report on edge computing from GigaOm.
[CTAToken URL="https://pages.arm.com/ai-edge.html" target="_blank" text="Download AI at the Edge: A GigaOm Research Byte" class="green"]