Arm Research is responsible for delivering a clear vision of disruptive and emerging technologies and how they may affect our future. This disruptive technology landscape is used to develop our research strategy, which guides internal research, external engagements, and collaboration with the wider Arm ecosystem. We curate this disruptive technology landscape and build on its insights by combining the expertise of our own research teams with valued perspectives from academic and industrial partners.
Many aspects of future computing are multi-faceted and span multiple abstraction levels. Faced with such complex topics, it can be very productive to bring together technologists from a variety of disciplines, including architects, software developers, hardware designers, and system integrators, to highlight the best way forward. Earlier this year, a group of Arm researchers gathered in Cambridge with members of the wider Arm community, including academic researchers, to discuss the next generation of machine learning algorithms and optimisation techniques.
Arm Research arranged this ‘novel algorithms’ workshop because it is clear that such algorithms will have implications for everything from future architecture design and its hardware implementations, through to software construction. The workshop focused on exploratory algorithms, in which ‘good-enough’ solutions to certain optimisation problems can be found through metaheuristics, rather than the more traditional exploitative algorithms, which typically search for optimal results in a more bounded way.
Whilst Machine Learning has evolved quickly and is accelerating development across multiple computing disciplines, there is also justified cause for concern. Recent evidence suggests that the electronics industry will soon reach a point at which the generation of data far outstrips our ability to process or make sense of it. This data deluge will come not directly from people, but from the abundance of data generated by Internet of Things (IoT) sensors, smart devices, and virtual assistants or software agents acting on our behalf.
The graph below shows how the amount of compute used to train machines to perform various intelligent activities has been increasing exponentially over the last few years. In contrast to this rapid increase in the computational demands of intelligent systems, we have seen a slowing of the performance gains typically attributed to Moore’s Law. This grim reality is compounded by the fact that much of the data being processed by machine learning systems today is unstructured, and generally not imbued with adequate metadata. Labelled, bias-free datasets that are more easily digested are prohibitively expensive to create, and the labelling itself can be error-prone. Therefore, cross-disciplinary research discussions, which consider the challenges that future algorithms might present, are crucial to ensure the Arm ecosystem is suitably prepared to meet these increased demands.
Image source: https://blog.openai.com/ai-and-compute/
The workshop covered a wide range of topics relevant to emerging machine learning and optimisation techniques, such as reinforcement learning, neuromorphic computing, evolutionary algorithms, and swarm algorithms. Many of the algorithms discussed were biologically-inspired, and a realisation at the workshop was that an assemblage of locally self-organising entities could be one tractable path towards the emergence of intelligent global behaviour. In fact, one question posed by several attendees was whether the intelligent systems of the future will reside in the cloud, or end up as artefacts of an effective implementation of swarm intelligence on a trillion IoT devices.
The attractiveness of these biologically-inspired methods lies in their lack of centralised control structures or explicit global coordination: systems built on these precepts might scale far more easily than conventional systems, which suffer from synchronisation delays amongst other coordination problems. The efficient scaling of resources, and the feasibility of reducing the level of human engagement in the design of future machine learning systems, were common themes throughout the workshop.
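To make the idea of local rules producing global behaviour concrete, the sketch below shows a basic particle swarm optimisation loop, one classic swarm algorithm. Each particle follows only simple local rules, yet the population as a whole converges on good solutions. This is an illustrative sketch under our own assumptions (an arbitrary objective function, bounds, and coefficients), not code from the workshop.

```python
import random

def pso(f, dim, n_particles=30, iters=100, w=0.7, c1=1.5, c2=1.5):
    """Minimise f over [-5, 5]^dim using only simple local update rules."""
    pos = [[random.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]            # best position each particle has seen
    pbest_val = [f(p) for p in pos]
    g_idx = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g_idx][:], pbest_val[g_idx]   # best seen by the swarm
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # Local rules only: inertia, pull towards the particle's own
                # best, and pull towards the swarm's best. No central planner.
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest

# Usage: minimise a simple sphere function; the result approaches the origin.
print(pso(lambda x: sum(v * v for v in x), dim=3))
```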
Looking across the industry, we can see a surge in applications using Reinforcement Learning (RL), a trend primarily due to the ability of RL algorithms to solve problems with minimal human intervention. In applications using RL, a goal or reward function is defined, instead of painstakingly providing a labelled ‘correct’ answer for every input. This enables systems built upon such algorithms to react to changing circumstances, and to avoid failing dramatically when they encounter unstructured environments. In fact, reinforcement learning can itself be used in the design of neural networks. These so-called ‘automated machine learning’ systems are gaining in popularity and can already perform tasks such as neural architecture search and the tuning of hyperparameters on behalf of the user. An excellent example of this approach is the AlphaGo Zero program, which used reinforcement learning in conjunction with Monte Carlo tree search to reach unprecedented skill levels.
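As a small illustration of learning from a reward signal rather than labelled answers, the following sketch implements tabular Q-learning on a toy corridor problem. This is deliberately simple and is not the AlphaGo Zero approach; the environment and hyperparameters are illustrative assumptions.

```python
import random

N_STATES = 6            # states 0..5; reward is given only at state 5
ACTIONS = [-1, +1]      # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1

Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}

for episode in range(500):
    s = 0
    while s != N_STATES - 1:
        # Epsilon-greedy: mostly exploit the best known action, sometimes explore.
        if random.random() < EPSILON:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda act: Q[(s, act)])
        s_next = min(max(s + a, 0), N_STATES - 1)
        r = 1.0 if s_next == N_STATES - 1 else 0.0   # a reward, not a labelled answer
        # Move the value estimate towards reward plus discounted future value.
        Q[(s, a)] += ALPHA * (r + GAMMA * max(Q[(s_next, b)] for b in ACTIONS)
                              - Q[(s, a)])
        s = s_next

# After training, the greedy policy steps right (+1) from every state.
print({s: max(ACTIONS, key=lambda act: Q[(s, act)]) for s in range(N_STATES - 1)})
```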
Neuromorphic computing is another fascinating research topic. Here, intelligence arises from a large collection of processing elements with co-located memory, communicating using simple messages, or spikes, in a manner similar in principle to the biological brain. The workshop provided an overview of the neuromorphic landscape, and explored the possibility that neuromorphic architectures could take advantage of evolutionary approaches to improve training times and the efficiency of machine learning hardware. Such efficiency improvements could be of critical importance, giving machine learning system designers avenues for optimisation beyond the current costly strategy of building custom ASICs.
Some successful demonstrations of neuromorphic architectures, which are competitive with conventional neural networks, can be found in academic literature. In general, the claimed advantages of neuromorphic architectures are threefold. Firstly, there is the possibility of high energy efficiency due to their event-driven design. Secondly, it is claimed that they have the potential to achieve better scalability as datasets continue to grow. Lastly, they have the capacity to easily assimilate new information on-the-fly, through online training or the incorporation of context information during inference.
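The event-driven principle behind these claimed efficiency gains can be illustrated with a toy leaky integrate-and-fire neuron: the membrane potential integrates its input and leaks over time, and downstream communication happens only when a discrete spike is emitted. The model and parameters below are illustrative simplifications, not a description of any particular neuromorphic platform discussed at the workshop.

```python
def lif_neuron(input_current, tau=10.0, v_thresh=1.0, v_reset=0.0, dt=1.0):
    """Return the time steps at which a leaky integrate-and-fire neuron spikes."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(input_current):
        # The membrane potential leaks towards rest and integrates the input.
        v += dt * (-v / tau + i_in)
        if v >= v_thresh:          # threshold crossed: emit a discrete event
            spikes.append(t)       # downstream work happens only on spikes
            v = v_reset
    return spikes

# Constant drive produces a regular spike train; zero input costs nothing,
# which is the intuition behind the energy-efficiency claim.
print(lif_neuron([0.15] * 50))
```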
The workshop was also an opportunity to showcase some internal developments in the domain of evolutionary computing. Arm Research highlighted some exciting projects which used genetic algorithms to underpin new methodologies or to enable the creation of new intellectual property within a framework that could take the familiar constraints of electronic system design into account.
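For readers less familiar with genetic algorithms, the sketch below shows the basic selection, crossover, and mutation loop on a toy bit-string problem. It is purely illustrative: the fitness function and parameters are arbitrary assumptions, and the code does not represent the internal Arm projects mentioned above.

```python
import random

TARGET = [1] * 20   # a toy target genome; real fitness functions encode design goals

def fitness(genome):
    """Count how many genes match the target."""
    return sum(g == t for g, t in zip(genome, TARGET))

def evolve(pop_size=50, generations=100, mutation_rate=0.02):
    pop = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) == len(TARGET):
            break
        parents = pop[: pop_size // 2]                   # truncation selection
        children = []
        while len(children) < pop_size:
            a, b = random.sample(parents, 2)
            cut = random.randrange(1, len(TARGET))       # one-point crossover
            child = a[:cut] + b[cut:]
            child = [1 - g if random.random() < mutation_rate else g
                     for g in child]                     # random mutation
            children.append(child)
        pop = children
    return max(pop, key=fitness)

print(evolve())   # converges towards the all-ones target
```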
Whilst many of the talks at the workshop examined the upcoming algorithmic shifts, a few went beyond the algorithms themselves, highlighting application areas where exploratory algorithms are emerging as credible options for tackling difficult challenges. Applications discussed in this context included supply chain management, large-scale traffic control, cloud resource scheduling, financial systems, and the control of autonomous vehicles.
Contemplating the future sometimes takes us down unusual avenues, but there was certainly no shortage of curiosity in this particular gathering! As a result, we held a discussion about advanced concepts such as the overlap between quantum computing and genetic algorithms. The key question was whether hybrids of quantum and classical computers could be automatically programmed using forms of genetic programming, something akin to quantum evolutionary programming. It is still early days, and there are formidable challenges ahead in the field of quantum computing. That said, the existence of an efficient quantum algorithm for machine learning could be spectacularly disruptive, particularly if the quantum speedup attained justified the potentially very high cost of such devices.
We have started planning another Novel Algorithms Workshop, to be held sometime in 2019, which will include an in-depth analysis of topics such as Bayesian machine learning, neuroevolution, and federated machine learning. The landscape of future algorithms is complex, and will probably change in unpredictable ways, but our hope is that our Novel Algorithms Workshop series will provide much-needed clarity, enable us to assess the impact of emerging algorithms across various levels of the computing stack, and stimulate lively discussions about the future of machine intelligence. More details on the next Novel Algorithms Workshop will be provided in a subsequent blog post. In the meantime, if you would like any further information regarding the workshop, please feel free to contact me.