This blog is a companion to my keynote at Arm TechCon on 10 October, 2019. I am publishing it in blog form because I cover a LOT of ground in that 20 minute talk, and there are invariably topics that various people would like to know more about. Also, since keynotes need to be crisp and polished, I couldn't cram my slides full of the many references and background tidbits that my inner engineer really would prefer! Add to that lots of fancy animation, and simply sharing the slides would mean key points would definitely be lost. So, here is an expanded discussion of the talk… it's a long discussion, so here's a table of contents to help you drive around:
Overview
Tenet #1: Know your history
Tenet #2: Extrapolate some data
Why is EUV so expensive?
Self-Assembly
Fractal tiling of DNA Origami
Tenet #3: Piggybacking
The 2030 Switch?
Trivia Break: What is a volt?
Superconducting Electronics (SCE)
Quantum Computers
Plasmonics
Metamaterials
Tenet #4: Virtuous Cycles
What ever happened to graphene?
Computational Materials
Examples from Arm Research
Cavendish Labs and Tenet #5
For my talk this year, I chose 'The ICs of 2030' as a title because I wanted to push out much farther than my 2017 keynote, which covered a more evolutionary set of topics such as 3DIC and novel non-volatile memories. (I put the 3DIC discussion into a companion 3DIC blog here, and the entire talk ended up on YouTube if you would like to watch it.) What I specifically wanted to do by picking the year 2030 was have a bit more fun and talk a bit more speculatively about larger challenges, and about potential solutions from the technology community that could have huge impacts for us by then. This is not going to be a discussion parsing the exact amount of slowing of Moore's Law, which is to say "how do we shrink MOSFETs?", but rather a look out past that discussion to what might come next. Can anything replace lithography, the key tool that has driven Moore's Law more than anything? What about the silicon MOSFET: can it get to 2030, and if not, what else is out there?
As part of the ‘fun’ aspect here, given the audacity of attempting to predict anything in 2030 in our industry, I sprinkle through this talk five ‘tenets’ of predicting the future. These are simply five tools I find helpful in my day job of trying to sift through what feels like a growing tsunami of publications promising some new groundbreaking technology. And my talk starts with a first tenet:
The first tenet is the most fundamental—all the smart people tell us this—to understand the future you must know your history. To wit, here’s a couple of quotes from smart people:
This tenet is self-evident to everyone, I’m sure, but I lead off with this not only because it is so important, but it lets me light-heartedly show why someone from Arm should be talking to you about underlying technology. As wonderfully illustrated in an accompanying video to start the talk, I have an unfair advantage working for Arm because several times per year I get to go walk around in one of the most historically important places for our industry: Cambridge (the original Cambridge in the UK, no offense to MIT or Harvard!). Outside of the office, I’ll typically be walking between my hotel and the pub… so dozens of times per year I pass this way:
And that innocuous looking plaque is actually hugely important to our industry:
Of course we all know Maxwell as the father of modern physics, but he was just the beginning. In fact, check out the first six Cavendish Lab professors:
Discovery of the electron, understanding atoms, and light. A lot of Nobel prize hardware—so much important history for us happened right on the other side of that wall…I feel smarter through osmosis every time I go for a beer in Cambridge! But beyond a good joke, I do come back to these folks throughout the talk, so stay tuned.
OK still having fun here—warming the crowd up because we’re deep into the last day of the conference at this point, so I remark that “in our industry we all know that to predict the future you must extrapolate some data on a log-linear plot” (this is the only crowd of people in the world who would chuckle at that).
But what I’m showing here is not Moore’s Law. Note the Y-axis is in dollars. And these data points, extrapolated out, tell us that by the year 2030 we will be spending one billion dollars for each one of these:
That is, each lithography tool going into a fab! I lifted this particular picture from an Extreme Ultra-Violet (EUV) lithography discussion at extremetech; EUV represents the last two data points on the chart. But across many years, manufacturers, and technologies, the data forms a pretty clear trend that doesn't take a lot of faith to extrapolate to 2030! A side note: if you would like to build this chart, I simply digitized data from an interesting presentation shared by Reinhard Voelkel as well as in this EETimes article.
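If you would like to reproduce the extrapolation itself, here is a minimal sketch of the log-linear (exponential) fit. The (year, price) points below are illustrative stand-ins I made up for this example, not the digitized values from the sources above; the actual digitized data is what produces the roughly one-billion-dollar 2030 point on the chart.

```python
import numpy as np

# Illustrative (year, list price in $M) stand-in points; the real chart uses data
# digitized from the Voelkel presentation and the EETimes article mentioned above.
years = np.array([1985, 1995, 2005, 2015, 2019, 2024])
price_musd = np.array([2, 8, 25, 70, 120, 300])

# A straight-line fit to log(price) vs. year is an exponential cost trend.
slope, intercept = np.polyfit(years, np.log(price_musd), 1)
print(f"Cost doubles roughly every {np.log(2) / slope:.1f} years")
print(f"Extrapolated 2030 tool cost: ~${np.exp(slope * 2030 + intercept):.0f}M")
```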
I live and breathe this stuff, but as the Arm ecosystem continually broadens, I encounter more and more people at events such as TechCon who have never heard of EUV but who, as engineers, are intellectually curious and always interested to learn. So to that end, I spent just a bit of time showing why EUV is so expensive and how we can see that yes, in fact, it will very probably get much more expensive in the future.
For EUV, we had to switch from lenses to mirrors because the 13nm wavelength of 'light' is so small it just can't go through a lens. Here's a mirror from a first-generation EUV tool:
Photo credit: Fraunhofer talk at euvlitho.
And in fact, the EUV light’s wavelength is so small that you can’t use a simple mirror, you have to make an incredibly precise arrangement of films on the surface of the mirror:
See ntt-att.com
Those are alternating layers of materials where the scale bar says 7nm! – That’s an actual 7nm, as opposed to what we call ‘7nm’ in the semiconductor industry! – and that is some precise engineering. This arrangement of alternating materials is required to increase the reflectivity of the mirrors to an acceptable number. The arrangement of alternating films to reflect the light is called a Bragg reflector, and if that name sounds familiar, you already saw it, second from the right here:
Bragg famously wrote that he was walking along the river Cam as a first year research student when he envisioned how x-rays bounce off crystal lattices:
Bragg's Law: nλ = 2d sin(θ)
And for this insight he was (and still is) the youngest ever Nobel laureate in physics. I like to think that it lends credence to the idea of insight through osmosis in Cambridge… I also try to walk along the river every time I visit!
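To connect Bragg's law back to those mirror coatings: treat the alternating-layer stack as a Bragg reflector with a period of roughly 7 nm (the scale bar in the image) and ask where first-order reflection of 13.5 nm light (the commonly quoted EUV wavelength) lands. The numbers below are back-of-envelope assumptions for illustration, not actual mirror specs.

```python
import math

wavelength_nm = 13.5   # commonly quoted EUV wavelength
period_nm = 7.0        # assumed multilayer period, roughly the scale bar in the image
n = 1                  # first-order reflection

# Bragg's law: n * lambda = 2 * d * sin(theta), with theta measured from the surface
sin_theta = n * wavelength_nm / (2 * period_nm)
theta_deg = math.degrees(math.asin(sin_theta))
print(f"Bragg angle ~{theta_deg:.0f} deg from the surface "
      f"(i.e. ~{90 - theta_deg:.0f} deg from normal incidence)")
```

In other words, a stack with a period of about 7 nm reflects EUV at close to normal incidence, which is exactly how those mirrors are arranged inside the tool.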
Back to these EUV mirrors: it's not just that we need these very precise coatings, the mirrors also have to be built to unheard-of tolerances. Picture a flatness of 75 microns, which is a bit smaller than the width of one of your hairs. While 75 microns across the mirror shown above would indeed be flat, to put it in proper scale you need to picture a flatness of 75 microns across the entire country of Germany. Now that is some serious engineering!
The surprisingly sad outcome is that even with these incredibly precise coatings, each mirror can only reflect about 70% of the EUV light. That isn't horrible for a single bounce, but if you scroll back up to the picture of the EUV stepper, you'll note that the light has to bounce off 8 or 9 mirrors. 0.7 to the 8th power is less than 6%. And because we need a decent amount of light to get to the wafer (to run wafers fast enough to get a return on this large investment, among other things), you end up needing a very bright source of light. How bright? Well, a light bulb won't do; you need a very strong laser. So strong that it doesn't even fit in the box shown as the EUV lithography tool. It's actually much larger than that and sits in the basement, shooting the light up through the floor and into the EUV exposure tool:
Image credit: Technical paper by Gigaphoton, Inc.
If I say this laser is about 20 kilowatts, that does sound big but probably doesn’t fully drive home how powerful this laser is, and how hard it has been to develop. To help you understand how powerful, consider this laser developed by the United States Navy:
Image credit: Lockheed Martin, as featured in IEEE Spectrum
This laser, aboard the U.S.S. Ponce, is a prototype being developed to shoot missiles out of the sky. The EUV laser above is more powerful than that (granted, the Navy system is just a prototype, and the final missile-defense laser systems will be more powerful than EUV lasers; but then again, in an EUV system even a 1 degree rise in temperature would result in zero yielding wafers, so the EUV laser system is still a feat of engineering). So, yes, this EUV stuff is in fact Extreme, and one of the above tools goes for around $110M (laser included; you just need three Boeing 747s to ship each system!). And the crazy thing is, we need something even bigger and better shortly, and that is the story of the final dot in my graph:
So by around 2024 we expect to need an upgrade called “High Numerical Aperture” (high NA) EUV, which crosses the $250M barrier! (you are right to think that this sounds insane and no one would ever pay that much money for a fancy light bulb, but alas you’d be wrong as three pre-orders have already been placed).
Why are we doing this, exactly? The answer to that question actually comes back to the second Cavendish professor, John Strutt:
As far as fundamental contributions to society go, he comes in pretty high (as any parent would attest) by answering the question "why is the sky blue?". That has to do with how light scatters off tiny particles. He also worked out how diffraction limits the resolution available in any light-based imaging system, captured in what is called Rayleigh's equation (taking his name post-Lordship); the same equation can be used to describe microscopes as well. To image smaller features, he tells us through his equation in 1896, you can use a smaller wavelength of light, or you can increase the numerical aperture, which is related to the angles of the light you collect. In the case of EUV, a higher NA means bigger mirrors. Here's what a test chamber for one of the new mirrors looks like:
You can find a ton of interesting photos, including this one, in a tutorial I gave at this year’s Custom Integrated Circuits Conference (CICC), if you have access to IEEE’s Xplore, but there is also another interesting photo in this presentation from Zeiss.
Bigger mirrors mean bigger boxes going into the fab. Below, I've taken a drawing of a High-NA EUV prototype and drawn a red box to approximate the size of the existing 1st generation EUV tools (the ones that need the three 747s to ship).
So all of this is following John Strutt’s (Lord Rayleigh’s) 1896 equation. All of the data points in my extrapolation from the 1980's onward are fundamentally about chasing after that equation.
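If you would like to see what chasing that equation buys, here is a hedged sketch using the lithography form of the Rayleigh criterion, R = k1·λ/NA (the k1 factor lumps together all the process tricks). The wavelengths and numerical apertures below are the commonly quoted values for each tool generation; k1 = 0.4 is just an illustrative assumption.

```python
def rayleigh_resolution_nm(wavelength_nm, na, k1=0.4):
    """Half-pitch resolution estimate from the Rayleigh criterion, R = k1 * lambda / NA."""
    return k1 * wavelength_nm / na

systems = {
    "ArF immersion (193 nm, NA 1.35)": (193.0, 1.35),
    "EUV (13.5 nm, NA 0.33)":          (13.5, 0.33),
    "High-NA EUV (13.5 nm, NA 0.55)":  (13.5, 0.55),
}
for name, (wl, na) in systems.items():
    print(f"{name}: ~{rayleigh_resolution_nm(wl, na):.0f} nm")
```

Smaller wavelength or bigger NA is the whole game, and by 2024 the wavelength lever has already been pulled about as hard as anyone knows how.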
Some Lord Rayleigh trivia (because where else are you going to get that):
Even these amazing systems, should they come to fruition, will not be enough to propel us through the year 2030. We’ll need something better, and perhaps given the truly fundamental challenges, something radically different. To give you perspective on how challenging semiconductor lithography is, the current generation of EUV tools push the equivalent of 30 million UHD TV screens’ worth of information on to each wafer! Extrapolate that out to 2030, and you’re asking a lot for any top-down patterning approach. (For this I modified a statistic also from the Zeiss presentation).
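As a sanity check on the order of magnitude (the 'pixel' size below is my own rough assumption, not the number Zeiss used), you can compare a 300 mm wafer carved into ~20 nm pixels against 8-megapixel UHD frames:

```python
import math

wafer_diameter_mm = 300
pixel_nm = 20                      # assumed addressable feature "pixel"
uhd_pixels = 3840 * 2160           # one UHD frame

wafer_area_nm2 = math.pi * (wafer_diameter_mm * 1e6 / 2) ** 2
wafer_pixels = wafer_area_nm2 / pixel_nm ** 2
print(f"~{wafer_pixels / uhd_pixels / 1e6:.0f} million UHD frames per wafer")
```

With those assumptions you land in the same tens-of-millions ballpark as the Zeiss-derived figure.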
There are options for extending top-down patterning.
However, none of these proposals circumvent the basic problem of top-down information density, and with only 20 minutes I want to move on to ideas that are potentially fundamental changes to the way we do things. And that gets us to the next topic.
The self-assembly topic also starts in Cambridge. And my walks to the pub. In fact, the most famous pub in Cambridge, The Eagle, and this plaque:
I’ve blurred out part of the plaque, because this is a story. 1953. Cavendish Labs… let’s go back to our Cavendish professors:
In 1953, it was Professor Bragg's era and his x-ray diffraction. And this particular x-ray diffraction image led to the discovery of the structure of DNA, and Nobel prizes for three of the four people working with the Cavendish at the time:
Why only three of four? Well, that’s a good discussion left for the pub. And, speaking of the pub, here’s the full plaque outside The Eagle:
And, you can go inside and find another plaque at the actual table where this famous discovery was announced:
So, again, I benefit from extreme osmosis just by getting a beer.
Again, with only 20 minutes to talk, I don't get to discuss the Watson-Crick base pair binding rules of DNA. I don't get to discuss the many orders of magnitude of progress in sequencing DNA that was enabled by semiconductor industry technology. I don't get to discuss the discovery of CRISPR and the creation of an entire industry in customized synthetic DNA. All of that would be its own 20-minute presentation. Instead, I have to skip ahead about 40 years, to the point where researchers started figuring out how to use the Watson-Crick base pair binding rules, together with easily obtainable custom DNA sequences, to manipulate the shape of DNA. Here are a couple of seminal paper references in this field:
And in particular the paper by Paul Rothemund in 2006, which I’ve illustrated in the cartoon at the top. Here, Rothemund showed how he could take loops of single stranded DNA, code in specific binding locations, throw in some smaller DNA strands called staples, and get his DNA to fold into specific shapes. This folding of DNA eventually became known as DNA origami. Here are some of the actual shapes he was able to engineer with DNA origami:
And to the right I've placed a rough example of what 3nm CMOS transistors will look like. Way back in 2006, Rothemund was making stars and smiley faces in the same area where maybe 2 gates of 3nm CMOS will fit. And the comparison is actually worse than that. Take, for instance, the triangle on the right. Not only could he make the nanometer-scale triangle, but he could also place binding locations more or less anywhere on that triangle, at about a 5nm precision:
Which is far better than what our semiconductor industry can hope to do. Of course, sprinkling tiny triangles or smiley faces all over our wafers isn't going to help us much, but that then gets us to a follow-on technology called fractal assembly of DNA origami.
What you see depicted here, in graphics much nicer than I can make, is how Lulu Qian's group at Caltech started off with Rothemund DNA tiles, which you yourself can order and receive through the mail in a test tube.
But, they specifically engineered the edges of these tiles, and ordered up three other test tubes containing DNA tiles that could then only bind to each other in one specific way.
They ended up with a tile 4x larger, in a test tube. They then repeated this process twice more, to end up with a tile 64x the size of the base unit (hence the term fractal. Fractals re-use constructs at different size scales). At the end of the day, they created the world’s smallest Mona Lisa, at about 700nm on a side:
You can see the picture is made up of an 8x8 array of unit cells. Those are the base tiles stemming from Rothemund’s engineering.
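The arithmetic behind that 8x8 array is simple enough to sketch. The base tile edge length below is a rough assumed value in the neighborhood of a Rothemund tile, used only to show how three doubling stages get you to the ~700 nm Mona Lisa.

```python
base_tile_nm = 90        # assumed edge length of a single Rothemund-style origami tile
stages = 3               # each stage joins a 2x2 group of the previous-stage tiles

tiles_per_side = 2 ** stages                 # 8
total_tiles = tiles_per_side ** 2            # 64 base tiles, hence the 8x8 array
final_edge_nm = base_tile_nm * tiles_per_side
print(f"{total_tiles} base tiles -> {tiles_per_side}x{tiles_per_side} array, "
      f"~{final_edge_nm} nm on a side")
```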
Our fancy $255 million patterning tools will never be able to make something like this.
But you know who did? Two post-docs in Qian’s lab, with an Excel spreadsheet to order up about $20 worth of DNA:
And they actually get about a billion copies per test tube, so you're looking at roughly a two-microcent Mona Lisa. And this wasn't a one-time wonder. They have an online compiler you can use to create your own picture, should you want to:
http://qianlab.caltech.edu/FracTileCompiler/
Here, I took a picture of Rosalind Franklin and uploaded it:
And the Qian Lab Fractal Compiler shows me what the individual tiles would look like, along with the intermediate sized tiles, and shows me the lists of DNA sequences I would need to order to do this in the convenience of my own home.
Of course there's a little more to it than pouring test tubes into each other, but yes, you really could do this at home. And there aren't any concrete ideas yet on how to go from a Mona Lisa to a Neoverse chip by 2030. But let me illustrate some of the 'Room at the bottom' that Richard Feynman famously posited in 1959. First, while a 700nm tile still doesn't get us to chip-level scale, that test tube with $20 of raw material in it contains a billion copies, which together actually sum up to an area larger than a 64-core Neoverse die manufactured in 7nm CMOS. And, given the roughly 100x "room" afforded by the DNA origami "resolution", you could go one of two ways with that 7nm core: you could shrink it to a 2mm x 2mm die, or you could keep the die area the same and pack about 6000 cores into it instead of 64.
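Here is the back-of-envelope math behind those claims; the die area and the ~100x density factor are rough assumptions on my part, not measured values.

```python
# A billion ~700 nm tiles from one $20 test tube...
tile_edge_m = 700e-9
copies = 1e9
total_area_mm2 = copies * (tile_edge_m ** 2) * 1e6   # m^2 -> mm^2
print(f"Total origami area: ~{total_area_mm2:.0f} mm^2")          # ~490 mm^2

# ...versus an assumed ~400 mm^2 64-core server die in '7nm' CMOS.
die_area_mm2 = 400.0
density_gain = 100   # rough 'room' from ~5 nm origami placement vs. '7nm' CMOS pitches
print(f"Shrunk die: ~{(die_area_mm2 / density_gain) ** 0.5:.0f} mm on a side")   # ~2 mm
print(f"Or same die area with ~{64 * density_gain} cores")                       # ~6400
```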
But of course DNA origami doesn’t have to lie on a flat wafer – we’re just used to two dimensions because we need extremely flat surfaces for our lithography to work. Here’s a couple of examples of 3D DNA origami:
Above left, researchers at TU Munich can make you dodecahedrons, and they can do it with reasonable yield (yield is one of the big challenges ahead if DNA origami wants to mature to semiconductor-grade usefulness). And on the right, Harvard researchers have made 3D pixels (voxels) at around 2.5nm resolution, from which they can make just about any 3D shape you want. At the Harvard resolution, that 2mm x 2mm 2D DNA origami die above could be folded into a cube about 130um on a side! So, even after 50+ years of amazing progress in shrinking CMOS circuits, Richard Feynman is still correct:
As I've tried to underscore above, it is of course highly speculative to consider DNA origami technology intersecting our semiconductor industry's requirements in 2030. But I want to bring up my third 'futurism' tenet now, one that tells me that we can expect a great deal of continued progress in this field.
We spend a LOT of money on R&D in our industry. But one industry outspends us:
And part of that industry’s R&D spend is looking into curing cancer using DNA origami. Here’s a couple of references:
The point is that if this technology is possibly curing cancer in mice, it has its reason for being. It doesn’t need to wait for the semiconductor industry to come along. This is the concept of ‘piggybacking’ – riding on someone else’s shoulders. We can see potential in this nascent technology but we benefit from someone else who is going to be highly motivated to spend the R&D to move it forward. So – stay tuned on the way to 2030 with self-assembly!
Let’s just assume for now that in 2030 some way to cost-effectively meet the ‘functions/$’ prescription of Moore’s Law will have surfaced. For the last 5 decades, that’s been the correct bet – even though all the experts, Gordon Moore included, have from time to time pronounced that this is the end for transistor shrinking. But if we make it to 2030 on the patterning side, what is it that we will be patterning? This industry has (literally) squeezed an impressive run out of the silicon MOSFET:
But most experts agree that the silicon MOSFET will run out of shrink benefit sometime this decade, i.e., before 2030. As I've shown beyond 2020 in the above graph, some think that 3D transistor stacking will save us, but there are a lot of cost issues in the way there, not to mention that, more fundamentally, the power/performance roadmap looks bleak for 2D and would probably be worse for 3D. That's a separate 20-minute discussion, and again it is something more incremental that we can work through. In the spirit of this talk, I want to think about what might be coming that doesn't look like more of the same. And in the area of logic switches, there is no lack of ideas. Most of these ideas are coming from new materials, and even new physics:
Here are some buzzwords for you to look up in your spare time, but the topics below are the little trick I put into the title of this talk. By 2030, it is quite likely that we'll be looking into new "-ics" beyond the electronics that we have relied on for the entirety of our industry to date:
As you can imagine, again in 20 minutes we don’t have time to go through all of these (and several that didn’t make the final cut here due to lack of physical space on the slide!), but I want to discuss a couple of these that will help me with the important broader points.
But to lead off that discussion, how about a trivia break?
We all know a volt from our schoolbooks (the potential difference across a conductor that dissipates one watt while carrying one ampere), but to measure it you'll need a voltmeter, and how do you know the voltmeter is correct? Well, according to the U.S. National Bureau of Standards, there is an accurate way to create a voltage standard… you just put this structure into a microwave:
Image credit: A Practical Josephson Voltage Standard at One Volt, paper
Which is to say, if the antenna on the left is subject to the right microwave energy, the dots on the right generate a precise voltage proportional to known physical constants, e and h:
Each one of those dots (here’s a close up) is a ‘Josephson Junction’.
These are named after Brian Josephson, a 22-year-old physicist who predicted that funny things would happen when you sandwich two superconductors together with a very thin barrier between them. This is an example of 'new physics' beyond the basic electronics that we rely on today.
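For the curious, the relation behind that voltage standard is easy to evaluate: each junction, driven at microwave frequency f and sitting on integer step n, contributes V = n·f·(h/2e). The drive frequency below is an illustrative assumption, not the actual NIST design value.

```python
h = 6.62607015e-34     # Planck constant, J*s
e = 1.602176634e-19    # elementary charge, C
phi0 = h / (2 * e)     # magnetic flux quantum, ~2.07e-15 Wb

f_drive = 70e9                     # assumed microwave drive frequency, Hz
v_per_junction = f_drive * phi0    # first-step (n = 1) voltage per junction
print(f"~{v_per_junction * 1e6:.0f} uV per junction")
print(f"~{1.0 / v_per_junction:.0f} junctions in series for 1 V (on the first step)")
```

Because f, h, and e are all known to extraordinary precision, the output voltage is too, which is exactly what you want from a standard.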
And yes, this insight came from Cavendish Labs, and 1962 would have been Nevill Mott's era. And yes, there was a Nobel prize. And yes, there is another plaque to walk by and 'osmose' (that's the actual verb form of 'osmosis'):
I hope you learned something in this break, but it's not solely for fun, because you can actually make some interesting computing hardware out of Josephson Junctions. The very property that makes them interesting as voltage sources, a quantum effect that gives rise to a discrete relationship between input energy and output energy (a so-called 'flux quantum'), can be harnessed to create digital logic that works much like normal CMOS logic. This field is often called 'Superconducting Electronics' or SCE.
Yes, you have to super-cool these things to get them to work, because, so far, we don't have good room-temperature superconductors. So this field also has the word 'cryogenic' attached to it, which is confusing to some degree because you can also cryogenically cool standard CMOS and get better power/performance out of that standard CMOS. But the exciting thing here is the 'single flux quantum' (SFQ) behavior, where the output signals are tiny mV burps generated from Josephson's Nobel-winning insight. So you can possibly end up with low power electronics, but on top of that these circuits can work at amazing speeds: at least tens of GHz and possibly higher. Part of the advantage is that below 4K temperatures, you can cheat on the resistance of the wiring by using superconductors as wires; niobium is commonly used as the superconducting wiring. And while SFQ circuits probably can't shrink to 7nm CMOS dimensions, they do benefit from an interesting compactness of layout relative to CMOS logic: state retention is more 'built-in' to the devices than it is for CMOS transistors, so the state-retention standard cell, a flip-flop, requires far fewer devices than when built out of CMOS. All of the above discussion is captured in my one presentation slide here:
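To put rough numbers on those 'tiny mV burps': the voltage pulse from a single SFQ switching event integrates to exactly one flux quantum, so its height and energy follow directly. The pulse width and junction critical current below are typical textbook-style assumptions rather than any specific process's values.

```python
phi0 = 2.067833848e-15   # magnetic flux quantum h/2e, in webers (= V*s)

pulse_width_s = 2e-12    # assumed ~2 ps SFQ pulse
ic_amps = 100e-6         # assumed ~100 uA junction critical current

pulse_height_v = phi0 / pulse_width_s   # pulse area (V*s) equals one flux quantum
switch_energy_j = phi0 * ic_amps        # rough energy per switching event
print(f"Pulse height ~{pulse_height_v * 1e3:.1f} mV")      # ~1 mV
print(f"Switching energy ~{switch_energy_j:.1e} J")        # ~2e-19 J
```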
You can't buy any SFQ-based CPUs today, and not for a lack of trying. That is because I'm glossing over a LOT of problems with making complex circuits out of this stuff. Entirely new design challenges require entirely new EDA tools, and the game of 'blink' between product revenue, design, and EDA is all too familiar to us, even for much simpler problems. However, in this case the U.S. Government, in the form of IARPA, is endeavoring to break this impasse with its SuperTools program. Investment like this is reason enough to keep SCE on our 2030 watch list.
You can’t buy an SCE computer today, but it turns out you can buy a different kind of Josephson Junction-based computer today… or at least rent or use one. The circuitry looks a bit different than the voltage reference circuit you saw earlier:
This circuitry uses the Josephson Junctions to create what are called Transmons, or Xmons, and if you are familiar with those terms then you know what kind of computer we are about to discuss.
Yes, the humble Josephson Junction from Cavendish Labs in 1962 is what powers this box, The IBM Q quantum computer.
It also powers other variants under development at places such as Google, Microsoft, Intel, and Rigetti. That's a lot of horsepower behind something that most experts say won't produce a usable general purpose computer for another decade, isn't it? Well, there are some specific intermediate opportunities between where quantum computing development is right now and the grand future of general purpose quantum computing. One oft-cited example is the Haber-Bosch process that is used to produce ammonia-based fertilizer (see for example here from Microsoft). The reason quantum computing proponents point to something like Haber-Bosch is that a class of chemistry problems fits nicely into the existing quantum computers: with only a small number of quantum bits (qubits) that can only stay stable (cohere) for a short amount of time, you can actually do some pretty serious chemistry calculations. And Haber-Bosch is a really important chemical reaction: it's how we feed the world. But it is very expensive. It requires very high pressure and temperature to crack the nitrogen triple bond, and that consumes about 2% of the world's energy! As well as 5% of the world's natural gas; and, more importantly, it has recently been estimated that the fertilizer industry might be creating methane byproduct at a rate that surpasses all other industries combined. To re-cast that into terms more familiar to a consumer, this means that making fertilizer accounts for 40% of the carbon footprint of a loaf of bread.
But as it turns out, there is a class of microbes that can break down nitrogen at room temperature and pressure, using nitrogenase enzymes, but scientists can't "crack" the underlying chemistry to explain how this is done.
The number of qubits you need to 'do chemistry' is proportional to the number of atoms and the complexity of the atoms (the number of electron states you need to model). So while the current largest quantum computers, in the 50-100 qubit range, can simulate simpler molecules, we need to wait for slightly larger quantum computers in order to attack Haber-Bosch. But very likely not that long. And then the list of potential advancements could open up well past fertilizer, as a quantum computer could quickly sift through candidate materials to identify better solar cell materials, better and/or personalized drug formulations, etc. So, basically, feeding the world, ending global warming, and curing disease.
The reason to bring this up, other than the obvious cool factor, is that we in the semiconductor industry should take note. It is quite possible – likely – that by 2030 quantum computers will have sifted through and identified improved materials to advance conventional computing. This is a reason to pay attention to quantum computing in the near (5-10 year) term.
One word of caution here – we are very early into this quantum stuff. It is quite likely that the transmon is like the point contact transistor in 1947 – a very important first step, but not where we end up.
Transmons happen to be the best option at the moment for advancing the field, but you can make qubits in many other ways, including good old silicon:
And photons:
Out of many possible papers in the field, I picked this one so you would know the current address of Cavendish Labs. Back in the 1970s, it outgrew its spot on Free School Lane and moved toward the outskirts of the city.
Photons as qubits are attractive because they can probably work without the cryogenic temperatures. A number of quantum computing startup companies are working in this space.
But photons are a holy grail of conventional computing as well, because they are fast and don't bump into each other the way electrons tend to do. We have a problem of our own creation with harnessing the photon, however: we've been so good at scaling transistors that they are now actually much smaller than the wavelength of the photons we would like to use. Here's some perspective:
But there is a promising method that could help circumvent this issue and allow us to better harness the photon for conventional computing:
Plasmons, or more properly, surface plasmon polaritons, are actually an interaction between light and matter. With proper structure in the matter, you can condense the photonic information into a much smaller space (I’m keeping that photon in the upper right corner for reference):
And specifically by ‘proper structure’, what you need are regular-ish arrays of metal/dielectric interfaces:
More broadly, what you have now entered is the exciting field of metamaterials.
A way to think about metamaterials is in analogy to the crystal lattices that Bragg worked his x-ray magic on. The atoms in the crystal had repetitive structure around, or smaller than, the wavelength of the light, and that made the light do weird things (i.e., Bragg reflection). And in the photo above you can see a structure that is similar to the Bragg reflectors on those EUV mirrors. If you generalize this beyond simple layers on a surface and add 2D structure to the surface, then you can make even weirder stuff happen. That is, you are no longer just relying on the fundamental materials properties; you are now engineering your own 'meta' material properties.
How weird? Well, Nathan Myhrvold, former CTO of Microsoft, described it as: "The closest thing to magic I have ever seen". He is now involved in a venture capital firm that dabbles in metamaterials... The weirdness includes the ability to make 'superlenses' that break the 140-year-old tyranny of Rayleigh's equation, antennas and filters that seem to defy what Maxwell's equations should allow, and it even extends to the realm of invisibility cloaks!
That’s a lot of opportunity. And hence, Tenet #3 definitely applies: Metamaterials progress is taking off in so many different ways.
But you can also turn Tenet #3 around here. Metamaterials rely on precise nanometer arrangement of metal-dielectric interfaces. Let me remind you which industry is best at that:
And this then brings up the fourth tenet:
In the metamaterials example, we have two fields who can help each other. There are a ton of promising technologies out there, but when you find a case where two of them can help each other make progress, then you’ve found a virtuous cycle that foretells a higher probability of success than other options. In this case, it’s a pretty simple proposition, and we can expect some new capabilities based on metamaterials by 2030.
“OK Greg, I get it—you are excited by a lot of new materials that are out there. But since you brought it up, I have a question for you:”
So, it turns out that they do give out Nobel prizes to other universities. In this case, we can have some fun by reducing the discovery of graphene (a 2D sheet of carbon atoms) to using a pencil and some Scotch tape.
Graphene will change our lives. I’d put it on my ‘Tenet #3’ short list, except for one pesky problem: graphene doesn’t have a band gap. And band gaps are what make semiconductors do work. So while there are many exciting applications of graphene, making better electronics is probably not one, unless possibly some fancy engineering along the lines of metamaterials pays off. But we might not have to go to those kinds of extremes to harness 2D materials as semiconductors. Graphene merely unlocked the Pandora’s box of 2D materials. There are many others, some of which do have band gaps. But there’s something funny going on here when I say ‘many’. Here’s a paper from last year that identified 51 new 2D semiconductor material candidates:
Here’s another one from last year describing 54 more:
There are currently over two thousand candidate 2D semiconductor materials, and lately, they are coming along in bunches. I know what you are thinking right now. You are thinking “should I be investing in Scotch Tape?” Well, that’s not what is going on here. What’s going on here gets back to another Nobel Prize for the University of Cambridge:
Here are the two papers I referenced above. And they were not using Scotch Tape. They were coding. They were coding based in part on breakthroughs that earned John Pople his Nobel prize (NOT from the Cavendish, this is chemistry, but we'll take it):
Basically, as computing power advanced (thanks to us in the semiconductor industry), it became feasible to fully simulate the chemistry ab initio, from first principles. Even today, any one point on one of the above figures (say, calculating the band gap of a hypothetical material) can take several days of computing on a server node, so this field is still getting started. But of course, as we produce even more powerful computers, we're going to get more ideas back from this community. Smells like Tenet #4 to me.
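To give a flavor of what 'coding the chemistry' looks like, here is a toy ab initio calculation using the open-source PySCF package (assuming you have it installed). A real 2D-material band-gap screen would run periodic DFT on far larger systems, which is where the days-per-data-point cost comes from; this is just the smallest possible first-principles example.

```python
from pyscf import gto, scf

# Define a hydrogen molecule at roughly its equilibrium bond length.
mol = gto.M(atom="H 0 0 0; H 0 0 0.74", basis="sto-3g", unit="Angstrom")

# Solve the electronic structure from first principles (Hartree-Fock here;
# production materials screens typically use DFT with periodic boundary conditions).
mf = scf.RHF(mol)
energy = mf.kernel()
print(f"Total electronic energy: {energy:.4f} Hartree")
```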
And just for fun, let me point out that the field of computational chemistry is also identifying potentially better qubit materials:
You would rightfully guess that computational chemistry methods can create a lot of data. And you know what you can do with a lot of data – machine learning. So, an interesting side-effect of this new wave of computational chemistry is to enable a wave of machine learning in this field:
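Here is a minimal sketch of that idea, with synthetic data standing in for real computed properties; the descriptors and the 'band gap' target below are made up purely for illustration.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Pretend each row is a candidate 2D material described by cheap descriptors
# (e.g. atomic numbers, electronegativity difference, lattice constant)...
X = rng.uniform(size=(500, 3))
# ...and the target is an expensively computed property such as a band gap (synthetic here).
y = 2.0 * X[:, 0] + 0.5 * np.sin(6 * X[:, 1]) + 0.2 * rng.normal(size=500)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X_train, y_train)
print(f"Held-out R^2: {model.score(X_test, y_test):.2f}")
```

A model like this can then screen thousands of hypothetical materials cheaply, reserving the days-long ab initio runs for the most promising candidates.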
Let's summarize all this new materials stuff. Today, we have silicon FinFETs. We'll get to gate-all-around nanowires/nanosheets, and possibly to stacking them, and we'll push device scaling out a bit more. But fundamentally new/different options are coming, and we'll be sorting through a lot of these by 2030. By then, quantum computers will be dreaming up new materials for us to consider in conventional computing. Computational materials and associated machine learning methods will be dumping thousands of new materials into our lap. We'll have metamaterials that give us entirely new physical properties from existing materials. And then there's self-assembly that will help make all of this. In fact, if you are looking to pass some time, just Google 'DNA origami metamaterials'. That will keep you busy for a while.
Think of where we were 15 years ago. We had one 2D material. DNA origami hadn’t been invented yet, and computational materials work was just getting off the ground. Trace to today, then project out to 2030. The possibilities should be mind-boggling to you. It’s the exact opposite of the worry that Moore’s Law is ending and we won’t have anything to do!
Of course, we at Arm Research monitor these big trends in materials, to stay up on potentially disruptive shifts in the industry. But we are already working on some nearer-term materials innovations as well. I’d like to share a couple of them with you.
Here’s an example where we are working with Applied Materials, using a material developed at The University of Colorado at Colorado Springs:
And this actually does go right back to Cavendish Labs as well.
As incredible as it sounds, our entire industry was built on an assumption that electrons never really bump into each other inside a transistor! It made the math easier.
You might have thought I forgot about the sixth Cavendish professor on my slides. Mott looked beyond this assumption and predicted interesting new devices, for which he won a Nobel prize. This correlated electron material is trying to make a memory element from a bidirectional Mott transition, and, if successful, we think it could provide improved neuromorphic computing as compared to existing memory devices.
The second materials-based example I want to show you involves two rather odd bedfellows: Arm and Unilever. Computer chips and deodorant (computer chip designers and deodorant: another discussion).
It turns out, there are people at Unilever who are essentially the master sommeliers of armpit sniffing. Which is important to developing those products. But you can’t take these armpit-sniffers with you when you are traveling, so how can you travel with confidence? Well that’s the problem this project is attempting to solve:
With a material advancement from The University of Manchester (sorry about that Scotch Tape comment earlier), and unique fabrication capabilities from PragmatIC, we are endeavoring to create literal ‘slim AI’ chips that can be woven into shirts and help you with this sensitive issue. You might be laughing a bit, but the above example of multi-level collaboration from materials to architecture to application is something that you can expect more of in the future.
I’ve highlighted a lot of interesting technology in this post, but I want to finish by going back to tenet #1, and in particular the success of Cavendish Lab. I’ve covered only 7 of the 29 Nobel prizes that have come out of that lab in Cambridge.
There’s a whole lot more to James Maxwell than equations in a book, of course. You’ll find that the Cavendish Labs were located at ‘the New Museums Site’. That is because at the time of Maxwell, physics was taught in the museum. Sounds pretty stuffy, and yes some people thought that all the good physics had been discovered at that point.
Maxwell saw that innovation in physics required a new way of doing things. He envisioned a full lab, which cost money of course, but the university wasn’t doing particularly well at that time. Keep in mind – he hadn’t published his famous thoughts on electromagnetism at this point. The appointment of an unproven youngster to lead this new lab was met with plenty of raised eyebrows. Furthermore, he was up against an establishment typified by this quote.
"Experimentation is unnecessary for the student. The student should be prepared to accept whatever the master told him.“
Dr. Isaac Todhunter
But Maxwell persevered, and within three years brought his vision of modern physics to fruition:
Cavendish Labs, circa 1910
As you can see, an open and collaborative environment. Not unlike this scene:
Arm TechCon, circa 2016
I believe TechCon in some way carries on Maxwell’s vision. Which takes us to the fifth and final tenet of futurism: The best way to predict the future, or better yet even create it, given what I believe will be a decade of new technology widgets, is to connect with other parts of the ecosystem. TechCon brings a diverse set of people together, and affords us this ability. And it is the way to 2030.