
SoC Implementation


I'm looking forward to the Synopsys Users' Group (SNUG) Silicon Valley this week. I hope I'll get to see some of you there. Maybe 3GHz+ will get your attention? Read on...

 

It's always a great (and huge) gathering of leading-edge designers sharing their leading-edge designs -- challenges, solutions, warts and all. That's the hallmark of SNUG - deep technical content, and a lot of it. SNUG is the event where designers learn, share and engage.

 

Thank you, ARM, for being a global Platinum Sponsor of SNUG - you'll see ARM participation in SNUG events worldwide. In preparation for SNUG, I sat down with Dipesh Patel, ARM's EVP of Incubation Businesses, and he shared his thoughts in two short videos, one on collaboration with Synopsys and another on what SNUG means to him and ARM.

In the videos, Dipesh comments on the ARM-Synopsys collaboration and on what SNUG means to him and to ARM.

 

SNUG worldwide kicks off March 31, 2016 in Silicon Valley (Santa Clara Convention Center) for a two-day run. There's great content for everyone: implementation, verification, security, and IP, and many presentations are especially relevant to ARM-based designs.

 

Looking forward to seeing you there, brendaw, sandralarrabee, sandrachang!

 

Here are a few ARM-related presentations that I plan to attend:

  • Optimized Implementation of 3GHz+ ARM® CPU Cores in FinFET Technologies (Broadcom)
  • Best Practices for High-Performance, Energy Efficient Implementations of the Latest ARM Processors in 16-nanometer FinFET Plus (16FF+) Process Technology using Synopsys Galaxy Design Platform [Ok, a mouthful, but this session will cover lessons learned not only from Cortex-A72 implementation, but also the latest, not-yet-announced ARM application CPU]
  • Best Practices for a Performance and Area Focused Implementation of High-Performance GPUs Using Galaxy Design Platform
  • Virtual Platform Methodology for Large Scale Pre-Silicon SW Development (Broadcom)
  • High Level Performance Estimation on Virtual Prototypes Employing Timing Annotation (ARM)
  • Writing Efficient Timing Constraints and Accelerating Timing Closure with PrimeTime (ARM)

 

Visit our SNUG Silicon Valley website to see the full technical and social program.

 

Hope to see you there! If you can't make it, check back at the Synopsys SNUG website to see archived papers and presentations.

Understanding Your Chip’s Age

By Ramsay Allen

 

In this, the second instalment of the "Let's Talk PVT Monitoring" series, I chat with Oliver King about understanding your chip's age. As Moortec’s CTO, Oliver has been leading the development of compelling in-chip monitoring solutions to address problems associated with ever-shrinking System-on-Chip (SoC) process geometries. An analogue and mixed-signal design engineer with over a decade of experience in low power design, Oliver is now heading up the expansion of Moortec's IP portfolio into new products on advanced nodes.

 


 

Questions:

 

1. Why is understanding your chip's age important?

Semiconductor devices age over time. We all know that, but what is often not well understood are the mechanisms of ageing, or the limits that will cause a chip to fail. In addition, there is bound to be a requirement for a minimum lifetime of a device, which will depend on the application but could be two or three years for a consumer device and up to twenty-five years for a telecommunications device. Given that lifetime requirement, and often poorly understood ageing processes, many chips designed today are over-designed to ensure reliable operation. If you understand the ageing process, or better still can monitor it, then you can reduce the over-design and potentially even build chips that react and adjust for the ageing effect, or predict when a chip is going to fail.

 

Chips at the moment are not getting anywhere near their total lifespan, because in most cases there isn’t any in-chip monitoring taking place. I sometimes use the analogy of a rental car which you want to give back with an empty fuel tank. If your chip has a defined lifetime, then you want to run it as hard as you can so that it just performs within spec for that lifetime; looking at it the other way, you want to hand your rental car back just as you run out of fuel.

 

2. What are the effects and mechanisms of ageing?

There are a number of mechanisms which contribute; the most notable ones are electromigration, hot carrier effects, and bias temperature instability (BTI). Whilst some of this can be mitigated through design techniques, and CAD tools exist to help with that, they can only go so far. In the case of bias temperature instability, the mechanisms are not fully known. Whilst traditionally only negative bias temperature instability (NBTI) was considered an issue, with the introduction of high-k metal gates at 28nm, positive bias temperature instability (PBTI) is now a problem as well. The result of BTI is to raise threshold voltages, and the effect is very temperature dependent, so without a good model of device use it is hard to predict, and thus design for. In addition, ageing effects in general are, by nature, hard to measure, because it takes a long time to get a device to end of life, even with acceleration techniques such as HTOL (high-temperature operating life) testing.

 

3. How can we help predict device lifetime?

From Moortec’s perspective, we are working on monitors that can be used to measure the ageing process of a device in the field: by placing reference structures alongside live structures, we can compare the two over time. This is one application being used at the moment, alongside using the information to adjust the supply to bring the chip back to the performance level that you expect, or need. This is actually quite common, particularly in devices where there is a requirement for a particular throughput.

 

4. How does this help with choosing the lifetime of your chip?

The thing is that ageing is complex and very dependent on use case and environment. In most modern applications neither of those is well known, and both will often vary over time.

 

If we take the smartphone as an example, there will be modes where it is doing very little - the clock frequency is low and the voltage supply is low. At the other extreme it will be playing HD video - the clock will run at high rates and the supply will be correspondingly high. Obviously if you took that device and left it in the low power state it would age at a significantly lower rate than if you left it in the high power state. The trouble is that at design time you don’t know what that ratio is. Of course this example is already a simplified case, because more often than not there will be more than two states, so you have to make assumptions about the time spent in each state and build in margins to cope with the unknowns. By allowing the system to monitor that ageing, you can potentially optimise DVFS schemes, predict lifetime, or even rein in certain modes to ensure that a particular lifetime is met.

 

Another example is the bitcoin mining application. This is at the other end of the scale, where devices are manufactured to sit in large arrays. Each chip will vary with process and they will age differently partly as a result of process variation, and partly because their loads won’t always be equal. If you can monitor all those conditions, then you can optimise each of those chips to run at peak performance.

About the interviewee

Oliver King is the Chief Technology Officer of Moortec Semiconductor. Before joining Moortec in 2012, Oliver was part of the analogue design methodology team at Dialog Semiconductor and prior to that was a senior design engineer at Toumaz Technology. Oliver graduated from The University of Surrey in 2003 with a degree in Electrical and Electronic Engineering.

 

About Moortec Semiconductor

Moortec Semiconductor, established in 2005, provides high-quality analog and mixed-signal Intellectual Property (IP) solutions worldwide, specialising in die monitoring. With a track record of delivery to tier-1 semiconductor and product companies, Moortec provides a quick and efficient path to market for customer products and innovations.

 

For more information please visit www.moortec.com.

 

Contact: Ramsay Allen, +44 1752 875133, ramsay.allen@moortec.com

saiganeshk

SoC implementation

Posted by saiganeshk Feb 19, 2016

Good morning. I want a header file for the Cortex-M4 to cross-compile my C code in GCC, because the LR and PC are not getting updated when I cross-compile. Can anyone give me a link to download that header file? Thanks in advance

Why pinpoint accuracy is so important when monitoring conditions on chip

By Ramsay Allen

 

Last week I had the opportunity to speak with Oliver King about accurate PVT monitoring as part of Moortec’s new “Let’s Talk PVT Monitoring” series. As Moortec’s CTO, Oliver has been leading the development of compelling in-chip monitoring solutions to address problems associated with ever-shrinking System-on-Chip (SoC) process geometries. An analogue and mixed signal design engineer with over a decade of experience in low power design, Oliver is now heading up the expansion of Moortec's IP portfolio into new products on advanced nodes.

 


 

Questions:

 

1. Why is there an increasing requirement for monitoring on chip?

 

A. Since the beginning of the semiconductor industry, we have relied on a doubling of transistor count per unit area every 18 months as a way to increase the performance and functionality of devices. Since 28nm, this scaling has broken down. As such, designers now need to find new ways to continue increasing performance.

 

Using the analogy of the internal combustion engine: for decades it was fine to have the fuel consumption and emissions that cars had, and innovation there was limited. Improvements focused on adding features, things which made the car a nicer place to be. People bought new cars because they wanted the latest features. Then oil prices started going up, we became aware of the environmental impact, and with this came innovation aimed at improving the efficiency of the engine. The result is quite astounding: the modern car engine delivers more power whilst consuming less fuel and emitting less harmful gases.

 

The semiconductor industry is now in the position where it has to do the same. We can no longer rely on adding more transistors to make a better, faster chip. The customer still wants their new computer, phone, or tablet to be faster and have more storage than their old one.

 

One technique being deployed to provide the improvement is device optimisation. Being aware of a device's thermal and voltage environment, and understanding where a given device sits within the ever-increasing sphere of device variation, allows system architects and circuit designers to get more from a given piece of silicon. With the increasing cost of advanced nodes, this is becoming even more important, to ensure every last drop of performance is extracted from a die.

 

2. Is this just an issue for the advanced nodes?

 

A. In short, no. With the growth in the IoT market, we are going to see an explosion in the number of wireless devices. A majority of these will be on older process nodes; however, the same performance gains can be found on these nodes. These devices will typically be battery powered and sensitive to power consumption, so there will be a drive to improve efficiency in these products rather than raw performance, but those are different names for essentially the same problem. By understanding where a given die sits with respect to process, voltage, and temperature, a more optimal solution can be found; whether that optimum is measured by performance or efficiency doesn't matter.

 

Then we also have to consider automotive, which is a big growth area for the semiconductor industry as a whole. With the growth of the Advanced Driver Assistance Systems (ADAS) and infotainment areas, we will see more advanced nodes being used, certainly down to 28nm in the near future. Ideally some of these products would be on more advanced nodes, but those are not yet qualified for automotive. As a result, designers will have to squeeze the available technologies to get the extra level of performance. In addition, the automotive environment is harsh. So when you look at all of these together, it is clear there is definitely room for die optimisation, as well as a requirement for basic monitoring purely for safety and reliability reasons.

 

3. How important is accurate monitoring?

 

A. Accurate PVT monitors are key to implementing die optimisation. We all know the relationship between power consumption and supply voltage in CMOS logic. Being able to reduce the supply by even a few percent, based on that particular die’s process point and whatever the environmental conditions allow, will result in power savings worth having. The same is true for performance, if a given clock speed can be met with a lower supply. But none of this is possible if the monitors are not accurate.
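As a rough first-order reminder (the textbook CMOS relation, not a figure from the interview), dynamic power scales as

$$P_{\text{dyn}} \approx \alpha \, C \, V_{DD}^{2} \, f$$

so trimming the supply by 5% at a fixed clock saves roughly 1 - 0.95^2 ≈ 10% of dynamic power, which is why monitor accuracy of a percent or two translates directly into savings worth having.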

 

4. How critical has monitoring become?

 

A. PVT monitors are not anything new. They have been used in the industry for a long time, however not generally in what I call mission-critical roles. Once you start looking at optimisation and potentially putting them into dynamic control systems, which is where we are now seeing customers use our latest generation of die monitors, reliability and testability become absolutely critical.

 

Having features within the PVT monitors to ensure they can be tested easily in production is a minimum entry requirement, but in addition to that, being able to trust the data from these monitors is fundamental. As such, I believe having in-field fault detection and reporting built into the monitors is key. Consider the situation where a chip within a smartphone or tablet contains a temperature monitor which fails, and the failure effectively tells the system that the temperature is 0°C when it is actually 50°C; because of this the system decides it can run at higher clock rates, pushing the die temperature up even further. The result could be very serious indeed. Because of this, robust operation of PVT monitors is becoming a primary concern.

 

Moortec's latest generation of monitors features exactly this level of robust operation and fault detection, complete with all the usual production testability which should be expected of such devices.

 

5. Where do you see the future of on chip monitoring?

 

A. On chip PVT monitoring is here now and it is here to stay. The costs of advanced node technologies are continuing to increase, and I think we are already starting to see a fragmentation, with the really advanced nodes becoming more niche for those devices which really need the performance. For those nodes, optimisation will be part of the architecture to ensure the cost of those expensive technologies is minimised.

 

As the rest of the industry moves down to smaller nodes, they will look to differentiate their products from their competitors', and good die optimisation will play a part in that.

 

About the interviewee

 

Oliver King is the Chief Technology Officer of Moortec Semiconductor. Before joining Moortec in 2012, Oliver was part of the analogue design methodology team at Dialog Semiconductor and prior to that was a senior design engineer at Toumaz Technology. Oliver graduated from The University of Surrey in 2003 with a degree in Electrical and Electronic Engineering.

 

About Moortec Semiconductor

 

Moortec Semiconductor, established in 2005, provides high-quality analog and mixed-signal Intellectual Property (IP) solutions worldwide, specialising in die monitoring. With a track record of delivery to tier-1 semiconductor and product companies, Moortec provides a quick and efficient path to market for customer products and innovations. For more information please visit www.moortec.com.

 

Contact: Ramsay Allen, +44 1752 875133, ramsay.allen@moortec.com

Interesting article raising questions about how designers approach SoC architecture to achieve power and performance targets. Semiconductor Engineering .:. Do Circuits Whisper Or Shout?

 

If performance is your thing, see my earlier blogs:

Exploring the ARM CoreLink™ CCI-500 performance envelope – Part 1

Exploring the ARM CoreLink™ CCI-500 performance envelope – Part 2

Following on from Tuesday's announcement from ARM of the CoreLink CCI-550 interconnect and CoreLink DMC-500 memory controller products it is clear there are a lot of performance advantages to be had from a well matched interconnect and memory system. You can discover some of those benefits in eoin_mccann's blog The Foundation for Next Generation Heterogeneous Devices. For those in Silicon valley next Thursday, you can find out all about Exploring System Coherency from jdefilippi's talk on the new products at ARM TechCon.


Configuring your SoC infrastructure to match the number and type of processors and DDR channels present in your design is a vital step to ensuring your product is competitive. Measuring the resultant performance is the proof you need to know you have met your design objectives.


At ARM TechCon on Nov 12th I'll be teaming up with nickheaton from Cadence to discuss some of the key interconnect configuration options, demonstrating tools for automating the configuration process and verifying performance of the interconnect and memory system.

 

In our presentation on Architecting and Optimizing SoC Infrastructure, I'll be discussing ways to optimise cache coherency, avoid stalls while waiting for transactions to complete, set up QoS contracts and keep the memory controller fully utilized, and Nick will use performance verification tools to demonstrate the importance of correct buffer sizing and of having multiple outstanding transactions in flight.

 

William.

System optimization involves running Linux applications and understanding the impact on the hardware and other software in a system. It would be great if system optimization could be done by running benchmarks once and gathering all the information needed to fully understand the system, but anybody who has ever done it knows it takes numerous runs to understand system behavior and a fair amount of experimentation to identify corner cases and stress points.

 

Running any benchmark should be:

  • Repeatable for the person running it
  • Reproducible by others who want to run it
  • Reporting relevant data to be able to make decisions and improvements

 

Determinism is one of the challenges that must be understood in order to make reliable observations and guarantee system improvements have the intended impact. For this reason, it’s important to create an environment which is as repeatable as possible. Sometimes this is easy and sometimes it’s more difficult.

 

Traditionally, some areas of system behavior are not deterministic. For example, the networking traffic of a system connected to a network is hard to predict and control if there are uncontrolled machines connected to the network. Furthermore, even in a very controlled environment, the detailed timing of individual networking packets will always have some variance in when they arrive at the system under analysis.

 

Another source of nondeterministic behavior could be something as simple as entering Linux commands at the command prompt. The timing of how fast a user types will vary from person to person and from run to run when multiple test runs are required to compare performance. A solution is a script which automatically launches the benchmark upon Linux boot so that no human input is needed, as sketched below.
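As a minimal sketch of that idea, assuming a sysvinit-style image where /etc/rc.local runs at the end of boot (the paths and benchmark arguments are illustrative):

#!/bin/sh
# /etc/rc.local - launch the benchmark unattended at boot, with no typing involved
/usr/bin/sysbench --test=cpu run > /var/log/benchmark.log 2>&1
exit 0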

 

Understanding the variables which can be controlled, and countering any variables which cannot, is required to obtain consistent results. Sometimes new things occur which were not expected. Recently, I was made aware of a new source of non-determinism: ASLR.

 

Address Space Layout Randomization

 

Address Space Layout Randomization (ASLR) has nothing to do with system I/O, but with the internals of the Linux kernel itself. ASLR is a security feature which randomizes where various parts of a Linux application are loaded into memory. One of the things it can do is change the load address of the C library. When ASLR is enabled, the C library will be loaded at a different memory address each time the program is run. This is great for security, but it is a hindrance for somebody trying to perform system analysis by keeping track of the executed instructions for the purpose of making performance improvements.

 

The good news is ASLR can be disabled in Linux during benchmarking activities so that programs will generate the same address traces.

 

A simple command can be used to disable ASLR.

 

$ echo 0 > /proc/sys/kernel/randomize_va_space
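Note that the echo form only works in a root shell, because the redirection is performed by your (possibly unprivileged) shell. Two equivalent, standard alternatives are the sysctl front end for the same kernel setting, and setarch from util-linux, which disables ASLR for a single launched process rather than system-wide (the benchmark name below is illustrative):

$ sudo sysctl -w kernel.randomize_va_space=0

$ setarch $(uname -m) --addr-no-randomize ./my_benchmark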

 

The default value is 2. The Linux documentation on sysctl is a good place to find information on randomize_va_space:

This option can be used to select the type of process address space randomization that is used in the system, for architectures that support this feature.

0 - Turn the process address space randomization off. This is the default for architectures that do not support this feature anyways, and kernels that are booted with the "norandmaps" parameter.

1 - Make the addresses of mmap base, stack and VDSO page randomized. This, among other things, implies that shared libraries will be loaded to random addresses.  Also for PIE-linked binaries, the location of code start is randomized. This is the default if the CONFIG_COMPAT_BRK option is enabled.

2 - Additionally enable heap randomization. This is the default if CONFIG_COMPAT_BRK is disabled.


There is a file /proc/[pid]/maps for each process which lists the address ranges where the .so files are loaded.

 

Launching a program and printing the maps file shows the addresses where the libraries are loaded.

 

For example, if the benchmark being run is sysbench, run it like this:

 

$ sysbench & cat /proc/`echo $!`/maps
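As an aside, the command substitution around echo is not strictly needed: $! already expands to the PID of the background job, so the equivalent shorter form also works.

$ sysbench & cat /proc/$!/maps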

 

Without setting randomize_va_space to 0, different addresses will be printed each time the benchmark is run; after setting randomize_va_space to 0, the same addresses are used from run to run.


Below is example output from the maps file.

 

[Figure: example output from the maps file for sysbench, showing the address ranges where the shared libraries are loaded]

 

If you ever find that your benchmarking activity starts driving you crazy because the programs you are tracing keep moving around in memory, it might be worth looking into ASLR and turning it off!

We are so pleased to have Atrenta people and technology join Synopsys!

 

The former Atrenta products, widely used by designers of ARM-based SoCs, complement Synopsys' solutions for ARM-based design.

 

Verification requirements have exploded as designs have become increasingly complex. Atrenta's early design analysis tools enable efficient, early verification and optimization of SoC designs at the RTL level. Combined with Synopsys' industry-leading verification technologies, Atrenta's leading static and formal technology further strengthens Synopsys' Verification Continuum™ platform and enables customers with this unique verification environment to meet the demands of today's complex electronic designs. Atrenta's SoC design analysis technology also fortifies the Synopsys Galaxy™ platform with additional power, test and timing-related analysis technologies. By integrating Atrenta's complementary technology into Synopsys' platforms, Synopsys can offer designers a more comprehensive, robust portfolio of silicon-to-software solutions for complex electronic systems.

 

The Atrenta products include:

 

More information on the former Atrenta products is available at synopsys.com/Tools/Verification/Atrenta

SAN FRANCISCO—Design complexity is soaring. Node-to-node transitions now take a year to a year and a half, not several years. Market pressures mount.

This means third-party IP integration is crucial not only to managing system-on-chip (SoC) design complexity but getting to market in a reasonable amount of time. But IP integration has often been easier said than done. If done improperly, design teams can experience schedule slips, which means added cost and lost market opportunity. So is it worth the risk? Are there any real alternatives? These were fundamental questions a panel of experts addressed here at the 52nd Design Automation Conference in June.

 

[Photo: the IP integration panel at DAC 2015]

The complexity problem

Albert Li, director with Global Unichip Corp, said his company is “hugely dependent” on IP but “there are a lot of problems with HW IP” in terms of implementation and verification.

“Things are getting complicated,” said Navraj Nanda, senior director of marketing for the DesignWare Analog and mixed-signal IP products at Synopsys. “In terms of technology nodes on one end they’re getting smaller. On the other end, (more technologically mature) nodes are getting refreshed.”

He said a key challenge is how the industry can serve those markets “with the same types of IP.”

Thomas Wong, director within strategic business operations at Cadence's IP Group, said with the node-to-node transition shrinking from two to three years to sometimes 18 months, that pace is “outstripping the capacity of smart engineers” to keep up and exploit the node benefits.

While it’s always cathartic to talk about the shared challenges when it comes to the evolution of electronics design, the panelists quickly coalesced around the notion that IP—for all its challenges—is here to stay but that optimization and efficiencies must be found.

“I don't think there's any other way of designing chips with a very small number of exceptions,” said Leah Schuth, director of technical marketing with ARM.

Schuth suggested that the industry address IP and tools the same way it looks at consumer devices. “We need to make the complexity almost invisible to the user” through increased standardization or some kind of certification, she said.


Feeling Bloated

File sizes are part of the integration problem, and here experts on the panel—which was moderated by Semiconductor Engineering Executive Editor Ann Steffora Mutschler—offered some jarring challenges as well as potential solutions.

Cadence’s Wong said that a customer recently told him that downloading the libraries for a project was going to take seven days. Even delivering the terabytes of information on a physical hard drive took several days.

Schuth wondered how much data across IP is duplicated, bloating file sizes. Is there a way to not transmit “non-data” like header fields or duplicative data to cap file size, Schuth asked.

Nanda said he believes the file-size problem is actually worsening, even with EDA solutions to manage database sizes like OASIS (Open Artwork System Interchange Standard).

“You can be idealistic and say ‘hey let’s try to limit the data size because we understand the applications in the customer’s market,’” Nanda said, “but in reality our customers are in brainstorming mode so they want the whole enchilada.”

Wong noted that Cadence’s multi-protocol IP offering can be one way of getting around the file-size problem, because you load a database once and get to use various protocols with various different designs.

“It was invented for that, but it’s a bonus,” he said.

Schuth said another way to improve IP integration challenges is to work hard to ensure the IP works “right out of the box” for customers, along the lines of ARM’s Socrates Design Environment or IP-XACT.

Wong suggested thinking about an integrated approach, and he summoned the ghosts of PC design past as an example. Chips & Technologies soared to prominence in the 1990s as a chipset vendor because it delivered complete motherboard reference designs into the market to ease and speed design, he said. This model carries over today into smart phone design, he added.

At the end of the day, in design engineering there are always challenges and usually gradual improvement. As the IP market and methodology mature, the integration stress eases and becomes a “100-piece puzzle instead of a 1,000-piece puzzle,” said Schuth. That’s because IP vendors are learning more and more about customer needs and then applying those lessons to subsequent engagements.


Related stories:

DAC 2015: IoT’s Long Winding Road

Interview: A brief history of IP integration

Interview: How to solve the IP integration problem


Recently, Carbon released the first ARMv8 Linux CPAK utilizing the ARM CoreLink CCN-504 Cache Coherent Network on Carbon System Exchange. The CCN family of interconnect offers a wide range of high bandwidth, low latency options for networking and data center infrastructure.


The new CPAK uses an ARM Cortex-A57 octa-core configuration to run Linux on a system with AMBA 5 CHI. Switching the Cortex-A57 configuration from ACE to CHI on Carbon IP Exchange is as easy as changing a pull-down menu item on the model build page. After that, a number of configuration parameters must be set to enable the CHI protocol correctly; many of them were discussed in a previous article covering usage of the CCN-504. Using native AMBA 5 CHI for the CPU interface, coupled with the CCN-504 interconnect, provides high-frequency, non-blocking data transfers. Linux is commonly used in many infrastructure products such as set-top boxes, networking equipment, and servers, so the Linux CPAK is applicable to many of these system designs.

 

Selecting AMBA 5 CHI for the memory interface makes the system drastically different at the hardware level compared to a Linux CPAK using the ARM CoreLink CCI-400 Cache Coherent Interconnect, but the software stack is not significantly different.

 

From the software point of view, a change in interconnect usually requires some change in initial system configuration. It also impacts performance analysis as each interconnect technology has different solutions for monitoring performance metrics. An interconnect change can also impact other system construction issues such as interrupt configuration and connections.

 

Some of the details involved in migrating a multi-cluster Linux CPAK from CCI to CCN are covered below.

 

Software Configuration

Special configuration for the CCN-504 is done in the Linux boot wrapper, which runs immediately after reset. The CPAK doesn’t include the boot wrapper source code; instead it uses git to download it from kernel.org and then patches in the changes needed for CCN configuration. The added code performs the following tasks (a minimal sketch of the first task appears after the list):

  • Set the SMP enable bit in the A57 CPU Extended Control Register (CPUECTLR)
  • Terminate barriers at the HN-I
  • Enable multi-cluster snooping
  • Program HN-F SAM control registers
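As an illustration of the first task, here is a minimal C sketch (my own, not the CPAK's actual boot-wrapper patch; the S3_1_C15_C2_1 encoding and the SMPEN bit position are taken from the ARM Cortex-A57 TRM):

/* Set CPUECTLR_EL1.SMPEN (bit 6) so the core participates in coherency.
 * CPUECTLR_EL1 is IMPLEMENTATION DEFINED, encoded as S3_1_C15_C2_1,
 * and must be written before the caches and MMU are enabled. */
static inline void a57_enable_smp(void)
{
    unsigned long ectlr;

    __asm__ volatile("mrs %0, S3_1_C15_C2_1" : "=r"(ectlr));
    ectlr |= 1UL << 6;                               /* SMPEN */
    __asm__ volatile("msr S3_1_C15_C2_1, %0" : : "r"(ectlr));
    __asm__ volatile("isb");
}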

 

The most critical software task is to make sure multi-cluster snooping is operational. Without this, Linux will not run properly. If you are designing a new multi-cluster CCN-based system, it is worth running a bare metal software program to verify that snooping across clusters is working correctly. It’s much easier to debug the system with bare metal software, and there are a number of multi-cluster CCN CPAKs available with bare metal software which can be used.

 

I always recommend a similar approach for other hardware-specific programming. Many times users have hardware registers that need to be programmed before starting Linux, and it’s easy to put this code into the boot wrapper; it is also less error prone than using simulator scripts to force register values. The Linux device tree provided with the CPAK also contains a device tree entry for the CCN-504. The device tree entry has a base address which must match the PERIPHBASE parameter on the CCN-504 model. In this case PERIPHBASE is set to 0x30, which means the address in the device tree is 0x30000000.

[Figure: the CCN-504 device tree entry from the CPAK, a node at address 0x30000000]
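A minimal sketch of what such a node can look like, assuming the mainline Linux device tree binding for the CCN-504 (the exact entry shipped with the CPAK may differ in region size and extra properties):

ccn@30000000 {
        /* base address must match the model's PERIPHBASE (0x30 -> 0x30000000) */
        compatible = "arm,ccn-504";
        reg = <0x0 0x30000000 0x0 0x1000000>;
};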

All Linux CPAKs come with an application note which provides details on how to configure and compile Linux to generate a single .axf file.

 

GIC-400 Identification of CPU Accesses

One of the new things in the CPAK is the method used to get the CPU Core ID and Cluster ID information to the GIC-400.

 

The GIC-400 requires that the AWUSER and ARUSER bits on AXI be used to indicate which CPU is making an access to the GIC. A number between 0 and 7 must be driven on these signals so the GIC knows which CPU is reading or writing, but getting the proper CPU number onto the AxUSER bits can be a challenge.

 

In Linux CPAKs with CCI, this is done by the GIC automatically by inspecting the AXI transaction ID bits and then setting the AxUSER bits as input to the GIC-400. Each CPU indicates the CPU number within the cluster (0-3), and the CCI adds information about which slave port received the transaction to indicate the cluster.

 

Users don’t need to add any special components in the design because the mapping is done inside the Carbon model of the GIC-400 using a parameter called “AXI User Gen Rule”. This parameter has a default value which assumes a 2 cluster system in which each cluster has 4 cores. This is a standard 8 core configuration which uses all of the ports of the GIC-400. The parameter can be adjusted for other configurations as needed. 

 

The User Gen Rule does even more, because the ARM Fast Model of the GIC-400 uses the concept of a Cluster ID to indicate which CPU is accessing the GIC. The Cluster ID concept is familiar to software reading the MPIDR register, and it exists in hardware as a CPU configuration input, but it is not present in each bus transaction coming from a CPU and has no direct correlation to the CCI functionality of adding to the ID bits based on slave port.

 

To create systems which use cycle accurate models and can also be mapped to ARM Fast Models, the User Gen Rule includes all of the following information for each of the 8 CPUs supported by the GIC:

  • Cluster ID value which is used to create the Fast Model system
  • CCI Port which determines the originating cluster in the Cycle Accurate system
  • Core ID which determines the CPU within a cluster for both Fast Model and Cycle Accurate systems

 

With all of this information Linux can successfully run on multi-cluster systems with the GIC-400.

 

AMBA 5 CHI Systems

In a system with CHI, the Cluster ID and CPU ID values must be presented to the GIC in the same way as in ACE systems. For CHI systems, the CPU uses the RSVDC signals to indicate the Core ID. The new CCN-504 CPAK introduces a SoC Designer component to add the Cluster ID information. This component is a CHI-to-CHI pass-through which has a parameter for Cluster ID and inserts the given Cluster ID into the RSVDC bits.

 

For CCN configurations with AXI master ports to memory, the CCN will automatically drive the AxUSER bits correctly for the GIC-400. For systems which bridge CHI to AXI using the SoC Designer CHI-AXI converter, the converter takes care of driving the AxUSER bits based on the RSVDC inputs. In both cases, the AxUSER bits are driven to the GIC. The main difference for CHI systems is that the GIC User Gen Rule must be disabled by setting the “AXI4 Enable Change USER” parameter to false, so no additional modification is done by the Carbon model of the GIC-400.

 

Conclusion

All of this may be a bit confusing, but it demonstrates the value of Carbon CPAKs. All of the system requirements needed to put the various models together into a running Linux system have already been worked out, so users don’t need to know all the details if they are not interested. For engineers who are interested, CPAKs offer a way to confirm the expected behavior described in the documentation by using a live simulation and actual waveforms.


What do electronics design engineers and ancient people have in common? Not much, save for one significant passion: Tools.

That was hbullman's assessment during an hourlong presentation he gave July 9 at Cadence's San Jose headquarters.

Bullman, general manager of ARM's Development Solutions Group, spoke on “Dealing with a Complex World” and the ancient and constant human need to develop tools to solve problems.

Read Cadence Editor Christine Young's complete coverage of Bullman's presentation here.

 

Related stories:

ARM TechCon 2014 Tools Roundup

We had a very successful series of events at DAC in which we highlighted collaborative successes with our strategic partners.

 

The video is now available for the Samsung + ARM + Synopsys event, which included the following presentations:

  • ARM solutions for energy-efficient network computing
  • Samsung enabling network computing design with 14LPP FinFET technology
  • ARM Cortex-A53 network computing reference implementation (4 clusters of quad-core Cortex-A53 processors = 16 cores!) with IC Compiler II
  • Live panel Q&A with the audience

[Video: the Samsung + ARM + Synopsys breakfast event at DAC]

While we had incredible attendance and interactions at the DAC breakfast event, we're happy to now be able to make the video of this event available for those who were unable to attend live!

 

Would love to hear your feedback and/or questions.

Many of you no doubt have heard of the passing of longtime EDA industry analyst Gary Smith, who died after an illness July 3. Gary was a life-loving, spirited and brilliant man whose long passion for electronics design was reflected in everything from his work to his music-playing to his efforts with the ITRS semiconductor roadmap. Richard Goering, longtime EDA editor with EE Times and Cadence, penned an obituary, as did Dylan McGrath for EE Times.

A celebration of Gary's life, hosted by his widow, ARM's lorikate, is scheduled for Sunday July 12, 11 a.m. at the DoubleTree Hotel in San Jose. Additional information can be found here at Gary Smith EDA.

ARM @ DAC

This year at the 52nd Design Automation Conference (DAC) in San Francisco, ARM® and its ARM Connected Community® partners will showcase how we drive industry innovation through collaborative initiatives that shape the technology for tomorrow's leading-edge devices, such as the Internet of Things (IoT) and wearable electronics.

 

ARM Exhibiting (booth 2428)

Presentations and demonstrations of ARM's newest industry-leading technology.

ARM experts will also be participating in more than 30 technical sessions, panels and events during the conference. View the ARM speaking schedule.

 

The ARM Connected Community Pavilion

Join us to learn how ARM works with our community of EDA and embedded partners to enable leading semiconductor and consumer electronics companies with integrated and optimized ARM-based IP for advanced systems-on-chips (SoCs) targeting a wide range of end applications from sensors to servers.

 

Over a dozen ARM partners will be speaking and featuring their advanced tools, services and products in the ARM Connected Community Pavilion (2414), including ANSYS, Carbon, eSilicon, Imperas Software, Lauterbach, Magillem, Mentor Embedded, Mentor Verification, NetSpeed, Rambus, Sonics and Synopsys.

Click here to view the ARM Connected Community Theater schedule.

 

The ARM Scavenger Hunt

ARM is sponsoring a Scavenger Hunt to win cool ARM-based products in the daily raffle drawing. To participate:

  1. Pick up a playing card in the ARM booth (2428)
  2. Visit 5 partners in the ARM Connected Community Pavilion and collect an IoT sticker from each one
  3. Go visit 1 ARM partner on the exhibit floor to collect the last sticker
  4. Return the completed card to the ARM booth for entry in the daily drawing and receive an ARM lapel pin and raffle ticket

 

Drawings will be held at the ARM Connected Community Theater on Monday/Tuesday at 6:45 pm and on Wednesday at 4:45 pm.

YOU MUST BE PRESENT TO WIN.

I recently had the honor of presenting the Keynote to SPIE's EUV (Extreme Ultraviolet) conference. The SPIE conference is the premier gathering place of the lithography community.

I was delighted to receive a significant amount of enthusiastic feedback regarding my talk.  The general premise of my talk is seemingly obvious to those of you in this SoC Design Community, but not necessarily to many in the lithography community:    The papers at the lithography conferences focus on what line and space can be printed, but when you put an SoC together, there's a lot of other stuff going on. 

That other stuff was quite interesting to much of the audience. 

Below is a summary write-up of my talk that I'm working on for the SPIE newsroom.

 

EUV and SoC:  Does it Really Help?

 

Of course EUV, should it meet its production targets, would help. But as we get closer to an actual EUV insertion point, it is informative to move beyond the basic metrics of pitch and wafers per hour to consider the key SoC metrics: Power, Performance, Area, and Cost (PPAC), and how lithography mixes with other concerns to produce the final results. In so doing, we find pluses and minuses that might nudge the argument one way or the other.

The first section of my talk detailed that we are in an era where transistor performance is not helped by shrinking the gate pitch. This is primarily due to parasitic resistance and capacitance. This is not solved by switching to a high-mobility channel material in the fin, or even to a nanowire, but only mitigated. Contacting ever-shrinking transistor source/drains is becoming as limiting a problem as the transistor itself. Along these lines, I showed cases where a larger gate pitch can result in smaller chips. One of the morals of those examples is that the overall chip area is a combination of transistor characteristics and wiring capabilities, and the two should be considered together when defining a process technology. A related issue is the scaling of minimum-area metal shapes, which are now often not lithographically limited but limited by copper fill. Larger minimum-area metal shapes take up more routing resources and can result in larger chips.

 

Furthermore, the interconnect parasitics that the transistors must drive in order to operate a full circuit are also not helped by feature size scaling.  As circuits are scaled, the ratio of wire load to transistor drive can be expected to worsen, and then in order to hit frequency scaling targets the transistor drive strengths have to be increased—either by increasing the width of a given transistor or by increasing the number of repeater gates in a critical path.  Either way, area and power suffer due to poor interconnect scaling.

A further aspect challenged by feature size scaling is variability.

Designers must include margins for variability—if we are presented with larger variability, design margins increase, and those larger margins translate into larger, more power-hungry chips. Part of what we are fighting is Pelgrom’s Law, which reminds us that transistor variability increases as we decrease the area of the channel. The fabs have made remarkable progress in reducing this variability over the generations, including a big step with the second generation of FinFETs by including multiple gate work-function materials that allow us to reduce the doping in a wider range of transistor options. Historically, the two largest contributors to random variability in transistors have been dopant fluctuations (reduced as noted above) and line edge roughness (LER) of the gate. With the reduction of the dopant component of random fluctuations, and with FinFETs adding fin edge roughness to gate etch roughness, reducing LER in general will become more important. This will be a key issue to monitor with the development of the EUV ecosystem, as source power and resist improvements will be needed to improve LER.
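For reference, Pelgrom's relation in its standard form (a textbook statement, not a figure from the talk) makes that area dependence explicit:

$$\sigma_{\Delta V_T} = \frac{A_{V_T}}{\sqrt{W\,L}}$$

where A_VT is a process-dependent matching constant and W·L is the channel area, so halving the channel area increases random threshold-voltage mismatch by a factor of √2.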

 

The circuits that are most affected by local variability are the memories. Scaling the minimum-area SRAM bitcells (which are typically the ones bandied about in public marketing materials) from 28 to 20 to 16/14 to 10 can take them from relatively normal operation to non-functionality. This does not mean that we can’t make embedded memory anymore, just that designers need to choose one of three actions: use a bitcell that has larger transistors, use a bitcell that contains more transistors (8 transistors instead of 6 is a popular option), or wrap “assist” circuitry around the memory arrays to help them overcome their native variability limitations. Any of these options will add area and/or power, and might end up limiting the speed of the chip as well. I received a lot of feedback on this point—underscoring that many people outside the direct chip design ecosystem did not know that we were already in a regime where the inherent variability of the transistors has limited our ability to scale memory to the degree that pitch scaling would imply.

 

The key message of this opening part of the talk: there are many issues that will dilute the final product metrics, independent of the pitch scaling we enable. The good news side of this is somewhat tongue-in-cheek: not all of the semiconductor industry is hanging on the fate of EUV pitch scaling.

Transistors, their contacts, and the vias and wires will all need fundamental improvements in order to fully take advantage of the pitch scaling that EUV may offer.

 

Transitioning to more specific lithography topics, I spent some time comparing the “with-EUV” option to the “without EUV” option (multiple patterning).

In the case of the most critical logic layer, the first metal layer (M1), which necessarily contains a high degree of 2-dimensional shapes, we’ve taken on Litho-Etch-Litho-Etch style patterning, making do with this in the absence of EUV. Interestingly, with the great overlay capabilities of the state-of-the-art 193i steppers, many of our critical 2-dimensional shapes (such as a line end facing the side of a line) may not scale if switched to EUV at a next node, due to numerical aperture limitations. This disadvantage must be weighed against the obvious advantage of printing with fewer masks. One wild card here would be the ability to route in M1 with EUV. We removed the ability for the routers to manipulate M1 shapes several technology nodes ago, because the associated rules had become too complex for routers in the extreme low-k1 regimes.

 

EUV should help with some of the key constructs used to create low power (i.e., small) standard cell logic.   A key area is in the local interconnect that is used to wire transistors under the M1.   This leads to an interesting point, quantified in a later paper by Lars Liebmann of IBM, that low power designs will likely benefit more from EUV than higher performance designs, as the higher performance designs can use larger standard cells that would not put as much pressure on local patterning.

Another possibility to consider ties back to the interconnect parasitics that I discussed as key scaling limiters above. The only reason we have added the local interconnect is to make up for the patterning limitations of 193i in these small standard cells.

 

It’s possible that EUV may allow us to eliminate or at least simplify the local interconnect, saving wafer cost but perhaps also importantly reducing wiring parasitics—which would reduce the area and power of chip implementations, all other things being equal.

There are many other “second order” issues that EUV may help with—meaning chip design issues that extend beyond basic pitch scaling. One example I used was the need for multiple patterning in the signal routing layers. If, for instance, LELE-type multiple patterning is used, different wires on the same metal layer will move with respect to each other, due to misalignment between the different masks of the decomposed layout. This misalignment creates extra capacitance variation, which then increases the required design margins, and as discussed earlier will end up with larger, more power-hungry designs. Self-Aligned Double Patterning (SADP) is a possible alternative to LELE, and with SADP the line spacing won’t vary according to mask overlay, but it will vary according to “pitch walking” due to mandrel CD variation. Furthermore, most embodiments of SADP impose extra restrictions on the placement of line ends (where we via up/down to other metal layers), and any extra restrictions increase chip area. This brings up a point I emphasized during the talk: many designs, especially low power designs, are limited by the wiring, and much of the wiring limitation comes not from simple line/space but from the line ends and the rules associated with vias at the line ends.

Either way we choose to do multiple patterning, we will limit the ability to scale power, performance and/or area (PPA) of our designs, and this is another issue that should be comprehended when comparing EUV to non-EUV options.

Another potential EUV benefit relates to the routers that I mentioned earlier: merely enabling simpler design rules for the routers can result in improved design implementations, because optimizing the PPA of a design typically involves dozens and dozens of iterations of the floor-planning, placing, and routing of the design. The slower the router (due to more complex design rules), the fewer iterations will be possible prior to design tape-out, and the products taping out won’t be able to achieve their full PPA entitlement.

 

And that brings me to the message I wanted to leave the SPIE EUV conference attendees with:

No one would dispute the benefit that an ideal EUV capability would bring to the industry.

But as EUV closes in on its pitch and throughput targets and approaches viability, we must consider the practical design aspects in order to accurately quantify the potential benefit of EUV.    Unfortunately, these design questions are not easy to answer—they ideally require a full Process Design Kit (PDK), with transistor models, parasitic extraction models, and wiring design rules, and fully considered implementations of mock designs in order to benchmark the PPA results.

Furthermore, there won’t be one right answer—low power and high performance designs will likely arrive at different value assessments for EUV, which will add complexity to the choices the foundries will have to make regarding the timing and specific process layers for EUV insertion.
