Interview: A brief history of IP integration

It was the philosopher George Santayana who proclaimed, over a hundred years ago, that "Those who cannot remember the past are condemned to repeat it," and the warning remains highly relevant to those tackling the challenge of developing faster, smaller, cheaper SoCs. I recently sat down with Norman Walsh to discuss IP integration. As well as having 21 years in the IC design industry, Norman is a keen historian, so I figured he would be a fountain of knowledge when it came to piecing together how we have gotten to this point. As it turned out, he gave a very interesting overview of the challenges facing IP integration and how it all reached this point. I'm shortly due to sit down with David Murray to talk about the future possibilities and probabilities in the industry, so be sure to check that out as well. If you have any questions you would like David to answer, please PM me or ask in the comments below.

Hi Norman, can you give me a brief overview of how IP integration has gotten to where it is right now?


Well, to give a proper explanation we must go back to Moore's Law. I'm sure everybody knows this already, but Gordon Moore was an engineer at Fairchild Semiconductor who stated back in 1965 that the number of transistors in an integrated circuit would double approximately every two years. It's a cliché, but it's a testament to its accuracy that only in the last few years has there been a shift away from Moore's Law. The growth was exponential: from the very small systems of that era, designs quickly grew to millions of transistors, and nowadays we're looking at systems with anything from 7 to 10 billion transistors. The backstory to Moore's Law is that he was highlighting this phenomenon as a challenge for the semiconductor industry: could it keep up with the growth?

The way the IC design industry kept up was through periodic shifts in the level of abstraction in the designs. In the '60s and '70s a chip was first sketched out on paper, and each transistor was designed individually. The people doing this at the time soon realised that this was not a sustainable way to continue, so they developed in-house CAD (computer-aided design) tools to automate the process of putting these transistors together and to produce a schematic for doing so. In the early 1980s some of these design teams went off on their own and formed what is now a large and vibrant EDA industry.

As well as developing the tooling, the level of abstraction had to improve too. In the late '70s and early '80s the industry went from designs focused on individual transistors to what was called standard-cell-based design. This was a way of defining commonly used functions on an IC, like an inverter or an AND gate, so that they could be called out instead of having to be described from scratch every time.


(Source) How chip designers felt with the introduction of AND and OR gates



So that was the first level of abstraction then? The introduction of AND gates?

Yeah. It doesn't sound too complicated now, but at the time it was pretty revolutionary stuff. They created these libraries of logic gates that gave designers the ability to keep pace with the growth in transistors. That kept everything on an even keel until the late '80s and early '90s, when it became really difficult again; we're talking about hundreds of thousands of transistors on a chip, and even with the logic gates people were struggling to keep up. This is when hardware description languages came in and raised the abstraction another level, to what we now know as RTL. This time the design modelled the flow of digital signals between hardware registers and the logical operations performed on those signals. It was done through the emergence of two hardware description languages, Verilog and VHDL (which was actually developed for the US Department of Defense back in the 1980s). Using RTL along with the synthesis tools developed by some of the larger EDA companies gave system designers some breathing space and allowed them to keep ahead of the Moore's Law curve.

Predictably, within 15 years the problem was back on the table, and it had reached critical levels by the time IP-XACT came along. IP-XACT was developed by the now-defunct SPIRIT Consortium around five or six years ago, and it raised the level of abstraction to the point where designers were describing interfaces between individual blocks of IP. As with the earlier abstractions, it took a while to be adopted, but in the last two years it has definitely become the standard across the industry. The adoption of IP-XACT contributed to the rise of commercial IP: more and more teams have been incorporating third-party IP or reusing old IP in their designs, because they now have a way to standardise the interfaces and make the integration quicker and easier. A number of people don't see this growth of commercial IP as a shift in abstraction, but I believe it is.
It essentially allows SoC designers to leave behind a lot of the repetitive tasks in chip design that make no overall difference, and lets them focus on real design decisions that can differentiate their SoC. But if you really want to take advantage of all of this IP, there are still some nuts to tighten and things that are necessary to make it all work together really well. That's the real key point: the end user cares more about the performance of the SoC (and the new device) than about the performance specs of each individual piece of IP.
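As a rough illustration of the kind of interface description Norman is referring to, here is a heavily simplified, hypothetical IP-XACT-style fragment (the element names follow the general shape of the standard, but this is not a schema-complete component description, and the vendor/block names are invented):

```xml
<!-- Simplified, illustrative IP-XACT-style component description.
     Not schema-complete; names are hypothetical. -->
<ipxact:component>
  <ipxact:vendor>example.com</ipxact:vendor>
  <ipxact:library>peripherals</ipxact:library>
  <ipxact:name>uart_lite</ipxact:name>
  <ipxact:version>1.0</ipxact:version>
  <ipxact:busInterfaces>
    <ipxact:busInterface>
      <ipxact:name>s_apb</ipxact:name>
      <!-- Declaring a standard bus type (here AMBA APB) is what lets
           integration tools connect this block to others automatically. -->
      <ipxact:busType vendor="arm.com" library="AMBA4" name="APB4" version="r0p0"/>
    </ipxact:busInterface>
  </ipxact:busInterfaces>
</ipxact:component>
```

Because the interface is declared by naming a standard bus type, rather than described signal by signal in prose, an integration tool can match it against any other block that declares the same bus type.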


(Source) Successful SoC design requires the integration of various components

It sounds like the industry has changed a lot over the last five to ten years. What does the landscape look like now?


Newer companies don't have the collateral or the library of IP to build it all themselves. If a company started out now, it would take decades of man-hours to come up with the IP necessary to build a chip in-house. In that sense the growth of commercial IP has lowered the barriers to entry for this industry, and we are seeing new companies come into the market and become competitive by focusing on a particular niche. However, as with all the levels of abstraction and solutions I mentioned earlier, it has presented its own set of problems to be solved.

There is always a big question over the quality of third-party IP. In the past there were doubts about whether it had gone to silicon. It created a bit of a Catch-22, as nobody would trust IP that hadn't gone to silicon, and some of the newer designs struggled to get off the ground because of that. Once IP reaches the silicon-proven stage, people are more likely to use it, as they know there is less risk involved.

Interoperability between IP blocks is always an issue. It's becoming less of one over the years, as a lot of people have moved to the AMBA protocols like APB or AXI, but it still takes time for companies to move away from something they've developed internally. It also took a while for standard interfaces to be recognised and adopted by the majority of the industry. Five years ago a lot of IP was internal rather than commercially available, and interoperability was a problem: you had this thing that you grew internally that didn't connect to anything very easily, and then you had this other IP with a standard interface and you couldn't connect the two. Nowadays people still build internal IP, but they build it in a way that it can be connected easily. The ARM AMBA specifications help because they are a standard, but each one is more of a catalogue of things you could do with the interface, so there is still a lot of tweaking. You can see the trend, with chips becoming more complex and the levels of abstraction being raised. To be honest, we are probably overdue another level of abstraction at this stage.
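To make the "tweaking" point concrete, here is another heavily simplified, hypothetical IP-XACT-style sketch (not schema-complete; the internal port names are invented): connecting an internally grown block to a standard bus usually comes down to mapping the standard's logical signal names onto whatever the block's physical ports happen to be called.

```xml
<!-- Simplified, illustrative port map: the logical APB signal names
     are bound to this block's legacy internal port names. This is the
     kind of per-signal mapping a standard interface still needs when
     the IP predates it. Port names are hypothetical. -->
<ipxact:portMaps>
  <ipxact:portMap>
    <ipxact:logicalPort><ipxact:name>PSEL</ipxact:name></ipxact:logicalPort>
    <ipxact:physicalPort><ipxact:name>cfg_select</ipxact:name></ipxact:physicalPort>
  </ipxact:portMap>
  <ipxact:portMap>
    <ipxact:logicalPort><ipxact:name>PWDATA</ipxact:name></ipxact:logicalPort>
    <ipxact:physicalPort><ipxact:name>cfg_wr_data</ipxact:name></ipxact:physicalPort>
  </ipxact:portMap>
</ipxact:portMaps>
```

Once the mapping is captured in a machine-readable form like this, the stitching can be automated instead of being redone by hand for every integration.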

(Source) A simplified illustration of the interoperability issue

And what do you think that level of abstraction will look like?


You have to think outside the box on this one. There's no magic bullet here. ESL has been talked about for a while, but I'm not so sure; it just describes the chip in a different way. The way I see it is through IP-XACT and standardisation: we need to standardise formats. A shift in abstraction is all about improving productivity, and making sure that formats are consistent across different teams or different companies would absolutely make a difference.

One of the results of commercial IP and IP reuse is that we have seen a growth in subsystems as more and more parts become standardised. To a certain extent this makes the integration side of things easier to manage, as there are fewer custom interfaces to deal with. A subsystem is essentially an entire system delivered within an IP block: the processor clusters, the interconnects and everything.

There also needs to be a more standardised methodology. There's been a lot of talk about back-end methodology, and about stitching things together in your block diagram. One of the challenges people are going through right now is that the design on the front end takes weeks and weeks to get right, and then it is rushed through the back end in a matter of days. If we could find a way to manage the front end of design better, that would cut down the design cycle considerably.


(Source) The next level of abstraction?

Sounds like you have plenty on your plate going forward. Finally, how important is it to have internal IP that can interact with ‘foreign’ IP?


The use of standard and intelligent formats is very important for the continued use of third-party IP. It's one thing to have the IP-XACT language, but we also need a consistent way of describing IPs using IP-XACT. That way IPs will come together a lot quicker, and it needs to happen going forward. It's something we're working on in a big way at the moment: creating standardised formats for IPs to interact with each other. This needs to become standard across the industry, but it needs to happen internally first. The interfaces for all IPs need to be 100% compatible, making it simpler for people who are not experts in this area to bring in ARM IP or any other third-party IP. I think that once this happens we will see far more people open to the idea of truly vendor-neutral IP, and design times could be reduced dramatically.

I certainly found Norman's answers helpful and enlightening in uncovering some of the history of the IC design industry. If you have any questions for Norman, please enter them in the comments below and we'll get them answered as soon as possible.

Additional information can be found in these two white papers:
