We have made so much progress over the course of Moore’s Law, especially in the 21st century as we have enabled a new data economy. We now face predictions that electronics may soon consume 20%, 25%, or perhaps up to a third of the global energy supply.
As recent progress from the semiconductor industry shows, Moore’s Law continues from 7nm to 5nm to 3nm. Historically, Moore’s Law has delivered a reduction in cost per transistor. We in the DAC community know that this is achieved through reduced feature size, which in turn provides the twin benefits of reduced power and reduced circuit delay.
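As a refresher on where those twin benefits come from, the classic constant-field (Dennard) scaling relations, with all linear dimensions and the supply voltage scaled by a factor of 1/κ (κ > 1), give:

```latex
% Constant-field (Dennard) scaling: dimensions and voltage scale by 1/kappa.
\begin{align*}
  C &\to C/\kappa                  && \text{gate capacitance}\\
  V &\to V/\kappa                  && \text{supply voltage}\\
  t_d \sim CV/I &\to t_d/\kappa    && \text{circuit delay (with } I \to I/\kappa)\\
  P \sim CV^2 f &\to P/\kappa^2    && \text{power per gate, even at } f \to \kappa f
\end{align*}
```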
Will this continue to 2nm? 1nm? Of course, the future of Moore’s Law does not have a binary answer; it will simply reach a point of diminishing returns. Against the sustained history of an exponential like Moore’s Law, it is difficult to notice shifts in progress, so I decided in this talk to project out to 2030. Extrapolating ten years out can perhaps put into better relief the problems (and opportunities) that are presented. Later in the talk I will show why I think the co-location of DAC and SEMICON West (taking place virtually this year, due to COVID-19) is emblematic of some of the key challenges and opportunities we will face in 2030.
Starting with cost, we find an amazing projection out to 2030:
The previous graph shows the price of one patterning tool over time. This sustained exponential increase in the cost of the tools required to continue Moore’s Law is a major contributor to “Moore’s Second Law”, although Moore himself urged the name Rock’s Law. Extrapolating this trend predicts that by the end of this decade, each patterning tool going into a fab will cost $1B. That may not be a sustainable path. The last dot, by the way, is a prediction that the industry will need to upgrade from EUV to High Numerical Aperture (High NA) EUV by mid-decade. It is amazing to think that a $255 million patterning tool actually comes in as a bargain, but you can see it sitting well below the trend line on this graph.
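To make the extrapolation concrete, here is a minimal sketch of how such a trend line is fit and projected; the anchor points are stand-in assumptions (roughly 10x per 15 years), not the actual data behind the graph:

```python
import numpy as np

# Illustrative extrapolation of patterning-tool cost. The anchor points
# below are assumptions for the sketch, not data from the graph.
years = np.array([1985, 2000, 2015])
cost_musd = np.array([1.0, 10.0, 100.0])   # tool cost in $M (assumed)

# Exponential growth is a straight line in log-dollars.
slope, intercept = np.polyfit(years, np.log(cost_musd), 1)
print(f"doubling time: {np.log(2) / slope:.1f} years")
print(f"extrapolated 2030 cost: ${np.exp(slope * 2030 + intercept):.0f}M")
# -> a ~4.5-year doubling time lands at roughly $1,000M per tool by 2030
```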
After High NA EUV, we do not have an identified path. There have been some impressive attempts at disruptive patterning techniques, such as direct write with electron beams and nano-imprint patterning. But to put the fundamental challenge into perspective, the EUV tools in use today have to push the equivalent of 30 million HDTV screens’ worth of data onto each wafer. Extrapolate out to 2030, and that is asking a lot of any top-down patterning technique.
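A back-of-envelope check (every number below is my own rough assumption, not a figure from the talk) shows why a claim of that magnitude is plausible:

```python
import math

# Rough sanity check on the "30 million HDTV screens per wafer" figure.
wafer_diameter_m = 0.30          # standard 300 mm wafer
pattern_pixel_m = 35e-9          # assumed ~35 nm addressable "pixel"
hdtv_pixels = 1920 * 1080        # ~2.07 Mpixel per screen

wafer_area = math.pi * (wafer_diameter_m / 2) ** 2
wafer_pixels = wafer_area / pattern_pixel_m ** 2
print(f"{wafer_pixels / hdtv_pixels / 1e6:.0f} million HDTV screens per wafer")
# -> ~28 million with these assumptions
```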
If we could define an ideal way around this problem, it would be to work from the bottom up, using self-assembly. I am not referring to the directed self-assembly (DSA) research that has been performed over the years, but to the fundamental assembly of the sorts of complex geometries that we need for microprocessors today. There have been some impressive results in this area, notably in the field of “DNA Origami”. It has shown the ability to create patterns that are fundamentally beyond the reach of our light-based top-down patterning, and some ability to do this across fields larger than just a few strands of DNA. I give more detail about DNA Origami in a previous blog, where I highlight the Qian Lab at Caltech. They have an online tool you can use to make a 700nm-wide self-portrait with DNA origami. Given the incredibly high bar set by the semiconductor industry for high-volume manufacturing of nanometer-scale features, it is unclear how any such technology can provide what we need in 2030, but assembly at the molecular level seems to be a field to watch.
Regarding energy and delay scaling, we have been battling diminishing returns from dimensional scaling for more than a decade. As I highlight in my SkyTalk, a clear demonstration of the challenge was given by this year’s DAC keynote speaker, Philip Wong, in a paper from 2009. In that paper, he and his students showed that while the intrinsic capacitance of a transistor does indeed shrink with each smaller process node, there are parasitic capacitances that do not scale, or even reverse-scale. As a result, in today’s advanced nodes, less than half of a MOSFET’s total capacitance is the scalable intrinsic gate capacitance. It is unavoidable math that electrostatically controlled devices can ameliorate, but not escape.
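A toy model makes the math plain (the numbers are illustrative, not taken from the Wong paper): if the intrinsic gate capacitance shrinks with each node while the parasitic component stays roughly flat, the scalable fraction of total capacitance quickly drops below half:

```python
# Toy model of capacitance scaling; all values are illustrative.
c_gate = 1.0        # intrinsic gate capacitance, arbitrary units
c_parasitic = 0.4   # fringe/contact parasitics, assumed not to scale

for node in ["16nm", "10nm", "7nm", "5nm", "3nm"]:
    frac = c_gate / (c_gate + c_parasitic)
    print(f"{node}: scalable fraction of total capacitance = {frac:.0%}")
    c_gate *= 0.7   # assume ~0.7x intrinsic scaling per node

# Since dynamic power ~ C*V^2*f, the growing parasitic share of C
# erodes the power and delay benefit of each new node.
```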
Today’s FinFETs and tomorrow’s gate-all-around nanosheets offer no help in this regard. There is a scaling path to 2030 through the stacking of nanosheets. However, by 2030 our computing systems will expect power and delay reductions that will not be achievable by devices one-sixth the size of today’s transistors, assuming we keep shrinking dimensions.
Again, if we take a large leap to 2030 and ask what the ideal case might be, 2-dimensional semiconductors look attractive. We could possibly hyper-shrink the vertical dimensions of the transistor and recoup some of the parasitic capacitance. We all know, of course, that the initial promise of graphene did not pan out for the semiconductor industry. But as I discussed in a prior blog post, new 2-dimensional semiconductor candidate materials are now showing up in bunches, thanks to computational methods. These simulations can require days of computing per data point, so in some ways the field is just taking off. As we continue to enable more powerful computing systems, we are helping this field find viable new materials to continue performance scaling beyond 2030. So I chose 2D semiconductors not because I think they are the best path forward beyond MOSFETs (they are still MOSFETs), but because they are an excellent example of a virtuous circle we are enabling through fundamental computer modeling of new materials. Interestingly for the DAC community, one of the underlying techniques (the subject of a Nobel prize) is “DFT”, but not as we know it in DAC: here it stands for Density Functional Theory.
I believe that by 2030 we will be presented with many potentially disruptive new devices, illustrated very roughly in the following graphic from my talk:
The 50+ year history of MOSFET dimension shrinking could be the most impressive technological run in history. But by 2030, fundamental limits such as parasitic capacitance will put that run in bold relief, along with the opportunity to replace these electrostatically controlled devices with something more energy-efficient. (Please do not read anything into the Y axis above, or the relative placements of the speculative technologies on it; this is just a cartoon illustration.)
The chicken-and-egg graphic references a key problem we face with many of these alternative switches: they break our design flows, making it very difficult to assess up front how valuable they could ultimately be in advancing the world of computing. We (designers, system architects) will need to understand how to use devices that are very different from MOSFETs, yet we cannot expect the DAC community to invest in speculative design flows for every proposed device on the growing list of disruptive candidates. Hence the chicken-and-egg problem.
One example I cite in my talk is the field of Superconducting Electronics (SCE), where single flux quantum logic can perform digital logic with much higher speed and lower energy. But its operation is so different from CMOS that it does indeed break any conventional design flow. If you are interested, there are some SCE introductions here and here. In this case, IARPA stepped in to help solve the chicken-and-egg problem. However, there will not be enough organized, funded projects like this to deal with the many examples I expect to start showing up by 2030.
I will also give two examples from Arm Research to help illustrate the environment I see us in by 2030. The first is shown in my talk with this slide:
Unfortunately, I recorded the talk well ahead of the conference and missed the opportunity to give you the link to the Nature Electronics article about this research project. The project actually uses good old MOSFETs, but “thin-film” versions that can be made on flexible substrates (by PragmatIC) and thereby woven into shirts. The end application is odor sniffing, which Arm implements as an efficient “univariate Bayes feature voting classifier”. There are of course many other potentially impactful applications, for instance smart bandages. This example does not require a new EDA paradigm, but it does illustrate the “atoms to applications” or “materials to systems” combined efforts required to bring some of these future technologies to fruition.
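For a sense of how such a classifier works, here is a minimal sketch of a univariate Bayes feature-voting classifier, assuming Gaussian per-feature likelihoods; this is my own illustration of the general idea, not the implementation from the paper:

```python
import numpy as np

# Minimal sketch of a univariate Bayes feature-voting classifier with
# Gaussian per-feature likelihoods. My own illustration of the idea,
# not the implementation from the Nature Electronics paper.

def fit(X, y):
    """Estimate a per-class mean and variance for every feature."""
    classes = np.unique(y)
    stats = [(X[y == c].mean(axis=0), X[y == c].var(axis=0) + 1e-9)
             for c in classes]
    return classes, stats

def predict(x, classes, stats):
    """Each feature votes for the class whose univariate Gaussian best
    explains that feature's value; the majority of votes wins."""
    loglik = np.stack([
        -0.5 * np.log(2 * np.pi * var) - (x - mu) ** 2 / (2 * var)
        for mu, var in stats])             # (n_classes, n_features)
    votes = loglik.argmax(axis=0)          # winning class per feature
    return classes[np.bincount(votes, minlength=len(classes)).argmax()]

# Tiny synthetic demo: two "odors", four sensor features.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (50, 4)), rng.normal(2.0, 1.0, (50, 4))])
y = np.array([0] * 50 + [1] * 50)
classes, stats = fit(X, y)
print(predict(np.array([1.8, 2.1, 0.2, 2.5]), classes, stats))   # -> 1
```

Because each feature votes independently, the per-feature decision can be precomputed as a few fixed thresholds on that sensor’s reading, which is what makes this style of classifier attractive for low-gate-count, thin-film logic.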
The second Arm Research project I will show is a novel memory device, shown in the following slide as a collaborative project funded by DARPA’s Electronics Resurgence Initiative (ERI), specifically the Foundations Required for Novel Compute (FRANC) program:
As a memory device alone, the technology is a potentially exciting addition to the available options, because Mott transitions can be femtosecond-fast and relatively temperature-independent, among a host of novel attributes. This would not break EDA flows, but correlated electron switches (based on the Mott effect, in which electron orbital interactions create metal-to-insulator transitions) are a fundamentally different type of device whose physics offers a wide variety of as-yet-untapped potential. The aim of the FRANC program is analog neuromorphic applications (enabled by multi-level cell capabilities), but there are many other possible new applications. I highlighted a nice review paper in my talk, in no small part because it has a good visual representation of the wide potential of correlated electron devices:
Source: Oxide Electronics Utilizing Ultrafast Metal-Insulator Transitions, Harvard, Annual Review of Materials Research (2011)
Back to our memory development program using one of these devices. I chose this example from Arm Research because it is a poster child for the conference environment I thought we would be immersed in this week: the first co-location of DAC and SEMICON West. Design and manufacturing. Finally, a reason for this talk’s title comes forth. I was excited to be invited to speak in this environment (we all know how that turned out), because I believe the combination of these two fields of expertise in the semiconductor industry becomes ever more important as we work through the opportunities presented by fundamentally new types of devices, like correlated electron switches. This was the case at the birth of MOSFET-based microprocessors, when engineers, such as those from Fairchild and Intel, combined new device technology and new design methods on the fly. But the Intel 4004’s 2,300 transistors were comprehensible by a human brain. Today, even Federico Faggin’s brain could not take a single flux quantum switch to the level required to compete with the several billion MOSFETs on a modern CPU.
We are somewhat victims of our own success here. The processors we make today are so complex, and run on such finely controlled devices, that it is virtually impossible to rationalize how a new type of device wiggling in a laboratory will compare to the CMOS chips that will be produced five years into the future. I use our correlated electron FRANC project to illustrate this. Analog neuromorphic capability depends on how many resistance levels you can cram into a cell, and success or failure in this application will vary greatly depending on whether that number is 2, 8, or 64. Determining the answer for an actual device will ultimately depend on commercial fab capabilities: what are the variation mechanisms in the device, and how well can each be controlled in a 120,000-wafer-per-month fabrication facility? This conundrum is the “lab to fab” gap, which has been discussed and addressed in many ways over the years, including in the figure below, which I took from a U.S. GAO publication.
Source: Nanomanufacturing, United States Government Accountability Office
Our correlated memory proof of concept is sitting at the left edge of that gap, having been government-funded research. We will not know how impactful it can ultimately be until we navigate this gap and understand the actual capability in a relevant production environment.
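To put rough numbers on the 2-versus-8-versus-64 question above (all assumptions below are mine, purely for illustration): the usable number of levels is set by the cell’s resistance window divided by how tightly the fab can control each level:

```python
import numpy as np

# Back-of-envelope: how many resistance levels fit in a cell's window,
# given fab-level variation? All numbers are illustrative assumptions.
r_low, r_high = 1e3, 1e6                # assumed resistance window (ohms)
window = np.log10(r_high / r_low)       # usable range, in decades

for sigma in (0.15, 0.05, 0.015):       # std-dev of log10(R) per level
    separation = 6 * sigma              # demand 6-sigma between levels
    levels = int(window / separation) + 1
    print(f"sigma={sigma}: {levels} levels -> {int(np.log2(levels))} bits/cell")
# Tightening control by 10x moves a cell from 2 bits toward 5 bits,
# which is exactly the kind of answer only a production fab can settle.
```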
Here in the U.S., we have some fantastic facilities that can address parts of this gap (some good examples here, for instance, from the Department of Energy), but for the most part they lean to the left side of it. We no longer have a SEMATECH, for instance. We do see some renewed interest in SEMATECH-like scale coming from various sources, including two bills recently proposed in the U.S. Congress: the American Foundries Act and the CHIPS for America Act. I personally would be excited to see one or more SEMATECH-like entities emerge from this activity, but SEMATECH operated squarely within the MOSFET era and could focus predominantly on improving the existing manufacturing paradigm. What I see coming in 2030 are technologies that demand concurrent progress in manufacturing and design. Hence the title of my talk, and my excitement about the co-location of the two conferences. If a SEMATECH-like “national nanoelectronics laboratory” ultimately emerges, it is my hope that the fields represented by both conferences will find a place there to co-develop the future. Hopefully in another year, we will be able to get together in San Francisco and discuss.
The full talk was shown at the virtual Design Automation Conference (DAC). Catch up on the talk, and download the slides.