Mobile devices and many other electronic systems are powered by batteries, which makes power, or more precisely total energy usage, a key design constraint. System-on-chip (SoC) designers spend significant amounts of time extending battery life in these systems.
Power usage can be divided into two components: dynamic and static. Both are important. In random-access memories (RAMs), static power is consumed through leakage whenever the RAM blocks are powered, even when no read or write operation is performed. The proportion of power consumed by leakage grows significantly at more advanced manufacturing process nodes, since smaller geometries are much leakier. Dynamic power is consumed by transistor switching and is a function of the clock frequency, the supply voltage, and the number of transistors that change state each cycle.
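As a rough illustration of these two components, the sketch below estimates dynamic power as activity × C × V² × f and static power as leakage current × V. The capacitance, leakage, and activity numbers are assumptions chosen only to make the example concrete, not characterized values for any real memory.

```python
# Hypothetical first-order power estimate for a RAM block. The formulas
# (P_dyn = alpha * C * V^2 * f, P_static = I_leak * V) and all numbers
# below are illustrative assumptions, not vendor data.

def dynamic_power(switched_cap_f, vdd_v, freq_hz, activity=0.1):
    """Dynamic power in watts: activity * C * V^2 * f."""
    return activity * switched_cap_f * vdd_v**2 * freq_hz

def static_power(leakage_a, vdd_v):
    """Static (leakage) power in watts: I_leak * V."""
    return leakage_a * vdd_v

if __name__ == "__main__":
    p_dyn = dynamic_power(switched_cap_f=50e-12, vdd_v=0.9, freq_hz=1e9)
    p_stat = static_power(leakage_a=2e-3, vdd_v=0.9)
    print(f"dynamic: {p_dyn*1e3:.2f} mW, static: {p_stat*1e3:.2f} mW")
```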
Now let’s look at key power-saving features in memories.
1.0 Memory Architecture
By choosing the right memory architecture, you can optimize both static and dynamic power in your SoC. Memories generally fall into two architectural categories: high speed and high density. High-speed memories are optimized for performance, while high-density memories are optimized for smaller area. High-density memories typically provide lower static and dynamic power because they are built with a smaller bitcell and smaller bus drivers than high-speed memories.
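To make the tradeoff concrete, here is a small, purely illustrative sketch that prefers a high-density configuration whenever it meets the frequency target. The architecture names, frequency limits, and relative power factors are assumptions, not data from any particular compiler.

```python
# Illustrative-only comparison of two hypothetical compiler configurations.
# The relative factors (max frequency, power scaling) are assumptions for
# the sake of the sketch, not characterized silicon data.

ARCHITECTURES = {
    # relative numbers, normalized to the high-speed option
    "high_speed":   {"max_freq_mhz": 1200, "rel_dynamic": 1.00, "rel_leakage": 1.00},
    "high_density": {"max_freq_mhz": 800,  "rel_dynamic": 0.70, "rel_leakage": 0.60},
}

def pick_architecture(target_freq_mhz):
    """Prefer the high-density option whenever it meets the frequency target."""
    if ARCHITECTURES["high_density"]["max_freq_mhz"] >= target_freq_mhz:
        return "high_density"
    return "high_speed"

print(pick_architecture(600))   # -> high_density
print(pick_architecture(1000))  # -> high_speed
```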
2.0 Periphery Device Selection Based on Threshold Voltage
Threshold voltage options in the periphery devices provide power and performance tradeoffs. The threshold voltage (Vt) is the voltage at which the device effectively turns on: the higher the Vt, the lower the leakage, but the slower the device. Flexible memory compilers support two to three different Vt devices for the memory periphery as a compile-time option. You can select the periphery devices based on your performance and power requirements (a simple selection sketch follows the list below). Typical devices supported for the periphery are:
· Standard Vt + Low Vt: Highest performance, highest leakage
· Standard Vt: Mid-range performance, mid-range leakage
· High Vt: Lowest performance, lowest leakage
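The sketch below mirrors the three options above and picks the lowest-leakage option that still meets a performance target. The relative performance and leakage numbers are illustrative assumptions only.

```python
# Sketch of a compile-time periphery Vt choice, mirroring the three options
# listed above. The option names follow the text; the selection logic and
# the numbers are assumptions for illustration only.

VT_OPTIONS = [
    # (name, relative performance, relative leakage) -- illustrative values
    ("standard_vt_plus_low_vt", 1.00, 1.00),
    ("standard_vt",             0.85, 0.50),
    ("high_vt",                 0.70, 0.20),
]

def pick_vt_option(required_rel_performance):
    """Pick the lowest-leakage option that still meets the performance target."""
    feasible = [opt for opt in VT_OPTIONS if opt[1] >= required_rel_performance]
    return min(feasible, key=lambda opt: opt[2])[0] if feasible else VT_OPTIONS[0][0]

print(pick_vt_option(0.75))  # -> standard_vt
```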
3.0 Dual Rail Memories
Separate power supplies can be provided for the memory core and periphery. Level shifters are implemented at the boundary of the core and periphery, allowing you to run memory core and periphery at different voltages. If performance is not a constraint, you can operate the memory periphery at a lower voltage than the core; this provides both lower static and dynamic power.
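A first-order way to see the benefit: dynamic power scales roughly with the square of the supply voltage and leakage roughly linearly with it, so lowering only the periphery rail already saves power. The sketch below uses assumed nominal power numbers purely for illustration.

```python
# Rough sketch of the dual-rail benefit: to first order, dynamic power
# scales with V^2 and leakage with V, so dropping only the periphery supply
# already saves power. All numbers are assumptions, not measured data.

def periphery_power(vdd_periphery, vdd_nominal=0.9,
                    p_dyn_nominal_mw=4.0, p_leak_nominal_mw=1.0):
    """First-order scaling of periphery power with its own supply rail."""
    scale = vdd_periphery / vdd_nominal
    return p_dyn_nominal_mw * scale**2 + p_leak_nominal_mw * scale

print(f"single rail @0.9 V: {periphery_power(0.9):.2f} mW")
print(f"dual rail   @0.7 V: {periphery_power(0.7):.2f} mW")
```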
4.0 Power Management Modes to Reduce Static Power
In addition to the power-saving techniques described above, static power can be lowered by switching the memory into low-leakage modes when no read or write operation is being performed. Figure 1 shows the various low-leakage modes; a simple mode-selection sketch follows the two lists below.
Memory instances can be generated with or without power gates. Without power gates (“Power Gating OFF” in Figure 1), the following low-leakage modes are supported:
1. Standby
2. Selective Precharge
3. Retention
4. Power Down
Figure 1: Low-leakage modes with power gating OFF and ON
With power gates (“Power Gating ON” in Figure 1), the following low-leakage modes are supported:
1. Retention 1
2. Retention 2
3. Power Down
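A simple way to think about these modes is as a latency-versus-leakage tradeoff: deeper modes cut more leakage but take longer to wake up, and power-down loses the memory contents. The sketch below captures that policy; the mode names follow Figure 1, while the savings, wake-up costs, and thresholds are assumed values, not compiler specifications.

```python
# Hedged sketch of a mode-selection policy: deeper modes save more leakage
# but cost more wake-up time, and power-down does not retain data. The
# numbers and thresholds below are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class Mode:
    name: str
    leakage_saving: float   # fraction of leakage removed (assumed)
    wakeup_cycles: int      # cycles needed to resume operation (assumed)
    retains_data: bool

MODES = [
    Mode("standby",    0.20,  2,    True),
    Mode("retention",  0.70,  50,   True),
    Mode("power_down", 0.95,  5000, False),
]

def pick_mode(idle_cycles, data_must_survive=True):
    """Deepest mode whose wake-up cost is small relative to the idle window."""
    candidates = [m for m in MODES
                  if (m.retains_data or not data_must_survive)
                  and m.wakeup_cycles * 10 <= idle_cycles]
    return max(candidates, key=lambda m: m.leakage_saving).name if candidates else "active"

print(pick_mode(1000))                             # -> retention
print(pick_mode(100000, data_must_survive=False))  # -> power_down
```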
5.0 Dynamic Voltage Frequency Scaling (DVFS)
DVFS is a power-management scheme that jointly optimizes performance and power consumption for energy-constrained applications. The main idea is to reduce the supply voltage (and operating frequency) when the design is not performing critical tasks, which yields significant savings in both dynamic and leakage power. Many memory compilers support under-drive and over-drive voltage domains, allowing the memories to operate across a large range of voltages, which can then be used to implement DVFS.
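The sketch below shows the basic DVFS bookkeeping: pick the lowest-voltage operating point that still meets the workload's frequency requirement, and estimate dynamic power as roughly V² × f relative to the nominal point. The voltage/frequency pairs are assumed operating points for illustration.

```python
# Back-of-the-envelope DVFS sketch: when the workload does not need full
# speed, lowering frequency and supply together cuts dynamic power roughly
# as V^2 * f. The voltage/frequency pairs below are assumed, not real.

OPERATING_POINTS = [
    # (vdd_v, freq_mhz) -- illustrative under-drive/nominal/over-drive points
    (0.6, 300),
    (0.8, 700),
    (0.9, 1000),
    (1.0, 1200),
]

def dynamic_power_rel(vdd_v, freq_mhz, vdd_ref=0.9, freq_ref=1000):
    """Dynamic power relative to the nominal point, ~ V^2 * f."""
    return (vdd_v / vdd_ref) ** 2 * (freq_mhz / freq_ref)

def pick_operating_point(required_mhz):
    """Slowest (lowest-voltage) point that still meets the workload's need."""
    feasible = [p for p in OPERATING_POINTS if p[1] >= required_mhz]
    return min(feasible, key=lambda p: p[0])

v, f = pick_operating_point(500)
print(f"{f} MHz @ {v} V, relative dynamic power "
      f"{dynamic_power_rel(v, f):.2f}x")   # -> 700 MHz @ 0.8 V, ~0.55x
```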
6.0 Write Assist
One of the key reasons to push the SRAM Vmin lower is to enable DVFS to save power and energy. Typically, however, the lower limit of the supply voltage is set by the SRAM arrays: the supply of the whole system cannot go below the SRAM Vmin, so pushing the SRAM Vmin lower makes the DVFS scheme more effective. With technology scaling, it is becoming difficult to write to SRAMs even at nominal supply voltage, and these challenges become more pronounced at lower supply voltages. Write assist techniques compensate for some of these low-voltage challenges and can significantly extend the range of applications that can use low-voltage operation.
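To see why Vmin matters so much for DVFS, note that energy per operation scales roughly with V², so the achievable savings are capped by the lowest voltage the SRAM allows. The sketch below compares two assumed Vmin values, with and without write assist; the specific voltages are illustrative only.

```python
# Illustrative calculation of why lowering SRAM Vmin matters for DVFS:
# if the system voltage floor is set by the SRAM, energy per operation
# (~ C * V^2) can only drop as far as that floor allows. The voltages
# below are assumed values, not characterized limits.

def relative_energy(vdd_v, vdd_nominal=0.9):
    """Energy per operation relative to nominal, ~ V^2."""
    return (vdd_v / vdd_nominal) ** 2

vmin_without_assist = 0.80  # assumed SRAM limit without write assist
vmin_with_assist = 0.60     # assumed SRAM limit with write-assist circuits

print(f"floor without assist: {relative_energy(vmin_without_assist):.2f}x energy")
print(f"floor with assist:    {relative_energy(vmin_with_assist):.2f}x energy")
```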
As memories occupy a larger and larger percentage of SoCs, it is imperative to select memories and memory compilers that provide flexible power management techniques. Power management within memories is key to continuing to extend the battery life, especially in all of the mobile devices we use every day.