Over the last few years, there has been a tremendous amount of interest in artificial intelligence, and specifically in the branch of machine learning known as 'deep learning'. To help computer architects get “up to speed” on deep learning, I co-authored a book on the topic with long-time collaborators at Harvard University. The book is part of the Morgan & Claypool Synthesis Lectures on Computer Architecture series, and was written as a “deep learning survival guide” for computer architects new to the topic. In this blog post, I’ll give a little background on the origins of computer architecture and describe the impact that deep learning is having on modern hardware and software, before talking a little about the publication itself.
The electronic devices that litter our modern-day existence would be inconceivable without many decades of progress in architecture. No, not the kind of architecture that involves buildings and bridges; I’m talking about computer architecture. Computer architecture is all about the essential high-level design of the hardware and software components in those electronic gadgets. Probably the most obvious example of computer architecture is the Instruction Set Architecture (ISA). The ISA is the foundation that essentially describes every operation that the hardware is able to perform, and therefore represents a really important boundary between software and hardware. Of course, a great example of this is the Arm ISA, which is the architecture inside nearly all mobile devices.
Some other important topics in computer architecture include micro-architecture, which is the design of the hardware that implements the ISA, and memory systems, which are responsible for storing data and moving it between hardware components. Computer architecture has been around for a while: some of the earliest documented work on the topic comes from Charles Babbage and Ada Lovelace, with subsequent seminal work from pioneers such as John von Neumann and Alan Turing. Babbage’s computers were mechanical rather than electronic, but many of the same concepts hold true today. For a more up-to-date perspective, the classic text “Computer Architecture: A Quantitative Approach” by Turing Award winners John Hennessy and David Patterson is a great place to start.
The Analytical Engine by Charles Babbage. Credit: Science Museum London / Science and Society Picture Library, Babbage's Analytical Engine, 1834-1871 (9660574685), CC BY-SA 2.0
These days, it is clear that computer architecture is a well-established field, backed by a huge industry. However, I think it’s fair to say that the phenomenal success of deep learning is starting to seriously shake things up. So much so that many think we may in fact be entering a new golden age of computer architecture!
But what's the reason for this? The way we approach writing software is, in certain cases, fundamentally changing. We used to design algorithms, often based on well-established theory, and implement them in code. However, the success of a new approach known as 'deep learning' is starting to radically change this process. The approach taken in deep learning is quite different, and is fundamentally based around learning by example. This involves first collecting a dataset of many (many!) examples of the problem you would like to solve, in the form of pairs of *input* and (desired) *output* examples. This dataset is then used to train a very large neural network, which performs an elaborate 'curve fitting' exercise to 'learn' the data. The trained neural network can then be used to generate a predicted output when presented with a new input example.
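To make this concrete, here's a minimal sketch of the 'learning by example' loop in Python. Everything in it is an illustrative assumption of mine (a toy sine-wave dataset, a tiny one-hidden-layer network, plain gradient descent) rather than anything from the book, but it shows the three steps above: collect input/output pairs, fit a network to them, then predict on new inputs.

```python
# A minimal 'learning by example' sketch: the toy data and network sizes
# are illustrative assumptions, not a real deep learning workload.
import numpy as np

rng = np.random.default_rng(0)

# 1. Collect a dataset of (input, desired output) example pairs.
x = rng.uniform(-np.pi, np.pi, size=(256, 1))
y = np.sin(x)                          # the desired outputs

# 2. 'Curve fit' a small one-hidden-layer network to the data.
w1 = rng.normal(0.0, 0.5, (1, 16)); b1 = np.zeros(16)
w2 = rng.normal(0.0, 0.5, (16, 1)); b2 = np.zeros(1)
lr, n = 0.1, len(x)
for step in range(5000):
    h = np.tanh(x @ w1 + b1)           # forward pass
    pred = h @ w2 + b2
    err = pred - y                     # mismatch vs. desired outputs
    g_pred = 2.0 * err / n             # gradient of mean-squared error
    g_w2 = h.T @ g_pred;  g_b2 = g_pred.sum(axis=0)
    g_h = (g_pred @ w2.T) * (1.0 - h**2)
    g_w1 = x.T @ g_h;     g_b1 = g_h.sum(axis=0)
    w1 -= lr * g_w1; b1 -= lr * g_b1   # nudge weights downhill
    w2 -= lr * g_w2; b2 -= lr * g_b2

# 3. The trained network predicts outputs for new, unseen inputs.
x_new = np.array([[0.5]])
print(np.tanh(x_new @ w1 + b1) @ w2 + b2)  # should be close to sin(0.5) ≈ 0.479
```

Real deep learning workloads follow exactly this pattern, just with vastly larger networks and datasets, which is where the hardware story comes in.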
Although seemingly a little inelegant, this turns out to be a very powerful approach. In recent years, deep learning has proven very successful at challenging problems that were previously almost intractable, such as image classification and speech recognition. So much so that this new approach is becoming known as 'Software 2.0', and is rapidly replacing the traditional approach of algorithm design, as discussed by Andrej Karpathy and Pete Warden.
This trend is having two main effects on computing: first, it is plunging software engineering into the domain of statistics and optimization theory, and second, it is resurrecting a golden age in computer architecture.
Deep learning is characterized by the use of a large statistical model. Making predictions with this model requires manipulating a large amount of data and computing many large linear algebra operations, such as matrix multiplications. While traditional CPU and GPU architectures are more than capable of processing these workloads, there is a unique opportunity to optimize specifically for deep learning. The Arm ML Processor – part of the recently launched Project Trillium – is a great example of the kind of specialized processor that can really exploit the characteristics of deep learning workloads. However, to continue to develop this opportunity, we need to educate engineers across the stack in the nascent field of deep learning.
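To give a rough feel for why these workloads are so regular, here's a small Python sketch showing that a single fully-connected layer boils down to one large matrix multiplication. The layer sizes are illustrative assumptions of mine, not figures for any particular network or processor.

```python
# One fully-connected DNN layer expressed as a matrix multiplication.
# Sizes are illustrative assumptions, not taken from a real network.
import numpy as np

batch, n_in, n_out = 32, 4096, 4096
x = np.random.rand(batch, n_in).astype(np.float32)   # layer inputs
w = np.random.rand(n_in, n_out).astype(np.float32)   # learned weights
b = np.zeros(n_out, dtype=np.float32)                # learned biases

y = np.maximum(x @ w + b, 0.0)   # matmul + bias, then a cheap ReLU

# Each of the batch * n_out outputs needs n_in multiply-adds:
print(2 * batch * n_in * n_out)  # ≈ 1.07 billion floating-point ops for one layer
```

It is exactly this kind of dense, predictable arithmetic, repeated across many layers, that specialized designs can exploit far more efficiently than general-purpose cores.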
'Deep Learning for Computer Architects' is a response to this demand for a background in deep learning fundamentals and their impact on computer architecture. I co-authored the book with long-time collaborators at Harvard, Profs. David Brooks and Gu-Yeon Wei, and their PhD students Brandon Reagen and Robert Adolf. Below is a brief summary, describing the contents in more detail. If you’re working in this area, I encourage you to check this out!
This text serves as a primer for computer architects in a new and rapidly evolving field. We review how machine learning has evolved since its inception in the 1960s, and track the key developments leading up to the powerful deep learning techniques that emerged in the last decade. Next, we review representative workloads, including the most commonly used datasets and seminal networks across a variety of domains. In addition to discussing the workloads themselves, we also detail the most popular deep learning tools, and show how aspiring practitioners can use the tools with the workloads to characterize and optimize DNNs.
The remainder of the book is dedicated to the design and optimization of hardware and architectures for machine learning. As high-performance hardware was so instrumental in making machine learning a practical solution, these chapters recount a variety of recently proposed optimizations to further improve future designs. Finally, we present a review of recent research published in the area, as well as a taxonomy to help readers understand how the various contributions fit in context.
Deep Learning for Computer Architects
Moving forwards, low-power neural networks have taken center stage in mobile computing systems, and are rapidly becoming a cornerstone of a vast number of application domains, across product segments including IoT, automotive, and datacenter. The Arm Research ML Group is currently expanding to meet the broad ML needs of the Arm ecosystem. If you are a motivated machine learning or computer architecture researcher, and are interested in solving real problems, then please get in touch. For more details, and to apply, please see the link below, or contact me directly.
Explore Research Careers