The microprocessor power challenge and AMD's 25×20 plan

With the rapid growth of computing over the past 20 years and its benefits to business, education, research, medicine and other fields, computing's energy and environmental footprint has grown accordingly. The world's three billion PCs consume more than 1% of all electricity generated each year; the world's 30 million servers add another 1.5%, at a cost of roughly $14 billion to $18 billion annually.

Driven by the growing number of Internet users worldwide, the total footprint of the world's data centers is projected to grow from 1.5 billion square feet in 2013 to nearly 2 billion square feet by 2018. The servers in these data centers connect not only to PCs, phones and tablets, but also to a host of new networked devices and systems. Estimates vary, but a conservative projection puts nearly 26 billion devices, from wearable computers to industrial sensors, on the Internet by 2020. Internet traffic is therefore expected to grow dramatically, from 245 exabytes in 2010 to 1,000 exabytes by 2015.

Users, meanwhile, demand energy-efficient performance. Smartphones, tablets and game consoles are being used for computationally intensive tasks such as streaming media, visually richer games and augmented reality, while laptops and desktops face growing demands from video editing, voice and gesture recognition, and biometric data security. Together, these factors strongly drive technological innovations that raise processor performance while cutting energy consumption.

Current status of energy efficiency

Energy efficiency is one of the main drivers of the digital mobile revolution. Since the 1940s, the energy efficiency of computing has improved by several orders of magnitude, which is why laptops, tablets and mobile phones can run for hours on a full charge. Because battery technology advances more slowly than computing performance, mobile device manufacturers combine multiple techniques to extend battery life; for example, smartphones and laptops automatically sleep after a period of idle time.

These improvements have far-reaching implications: if all computers sold in the US were ENERGY STAR certified, they would save $1 billion in energy costs a year and cut greenhouse gas emissions by 15 billion pounds, equivalent to the annual emissions of 1.4 million cars.

Microprocessor power challenge

The 1980s and 1990s were a golden age of dramatic gains in microprocessor performance and computational efficiency. As transistors shrank, designers could integrate more of them on a single chip while also raising the clock frequency, improving performance for users. Crucially, as transistors got smaller, power density stayed essentially constant, a phenomenon known as Dennard scaling. With each new process generation, voltage and capacitance fell accordingly and the energy consumed per unit of computation dropped to roughly a quarter of the previous generation's.

By the early 2000s, however, although transistors kept shrinking and the number that could be integrated on a single chip kept growing, the rate of energy-efficiency improvement began to slow. The main reason is that transistor dimensions are approaching physical limits: the smaller the transistor, the more it leaks, because threshold voltages have fallen to the point where devices no longer turn fully off. This end of Dennard scaling raises the power consumption of the highly integrated, high-performance devices consumers expect, demanding more sophisticated cooling and innovative power-management techniques.

As a result, semiconductor manufacturers can no longer rely on process improvements alone to improve energy efficiency. Even if engineers keep Moore's Law on its historical trajectory, new techniques are needed to restore energy-efficiency growth to its earlier pace.

AMD's 25×20 plan

AMD engineers have studied these trends carefully, along with the market's need to reduce information technology's environmental impact and to extend battery life in ever thinner, lighter and smaller products. Over the past few years they have greatly improved the energy efficiency of AMD processors. Recognizing that it could not be satisfied with the status quo, AMD announced in June 2014 the goal of improving the energy efficiency of its accelerated processing units (APUs) 25-fold by 2020: the "25×20" plan.

AMD measures efficiency as platform performance divided by the energy consumed running typical applications, yielding a single figure of performance per unit of energy. Usage curves make clear that typical use is dominated by idle power rather than peak compute power, and many power-related innovations can maximize idle time and reduce idle power without compromising performance. Performance, of course, remains a key parameter: users want fast response, fast computation and seamless video playback. They also want longer battery life, thinner and lighter devices and a smaller environmental footprint. Optimizing energy efficiency for typical use addresses all of these demands at once.
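To make the metric concrete, here is a minimal sketch, with made-up wattages and a hypothetical `efficiency_index` helper, of performance per unit of typical-use energy. It shows why idle power dominates when a device is active only a small fraction of the time:

```python
# Illustrative sketch (hypothetical numbers, not AMD's actual metric):
# efficiency index = platform performance / energy consumed in typical use.

def efficiency_index(perf_score, duty_cycle, active_power_w, idle_power_w, hours):
    """Performance per watt-hour over a typical-use period."""
    avg_power = duty_cycle * active_power_w + (1 - duty_cycle) * idle_power_w
    energy_wh = avg_power * hours
    return perf_score / energy_wh

# A laptop that is active only 10% of the time: idle power dominates the average.
baseline     = efficiency_index(1000, 0.1, active_power_w=35,   idle_power_w=8, hours=1)
lower_idle   = efficiency_index(1000, 0.1, active_power_w=35,   idle_power_w=4, hours=1)
lower_active = efficiency_index(1000, 0.1, active_power_w=17.5, idle_power_w=8, hours=1)

print(round(baseline, 1), round(lower_idle, 1), round(lower_active, 1))
```

At this 10% duty cycle, halving idle power improves the index more than halving active power does, which is the point the paragraph above makes about idle-dominated typical use.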

Achieving the 25×20 goal requires dramatic improvements in typical-use efficiency through new technologies and methods. Meeting it means that from 2014 to 2020 the energy-efficiency gains of AMD products would outpace the historical trend predicted by Moore's Law by at least 70%. By 2020, a computer would complete a given task in one fifth the time of a 2014 PC while drawing less than one fifth of the average power. It is like turning a 100-horsepower car into a 500-horsepower car in just six years while raising its fuel economy from 30 miles per gallon to 150.
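The arithmetic behind the 25× figure follows directly from the two factors in the paragraph above; a minimal check:

```python
# The 25x efficiency goal decomposes into two factors: complete the task in
# 1/5 the time, at 1/5 the average power. Energy per task = power * time,
# so the two factors multiply.

speedup = 5            # task completes 5x faster
power_reduction = 5    # at 1/5 the average power
efficiency_gain = speedup * power_reduction

print(efficiency_gain)  # → 25
```

The car analogy works the same way: 5× the horsepower times 5× the miles per gallon is a 25-fold gain.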

Achieving the 25×20 goal

Architecture innovation

For decades, CPUs have run general-purpose programming tasks. They excel at executing instructions serially, using a variety of complex techniques such as branch prediction and out-of-order execution to increase speed. In contrast, the graphics processing unit (GPU) is a dedicated accelerator originally designed to drive millions of pixels on a display simultaneously, which it does by computing in parallel across much simpler execution pipelines. In recent years CPUs and GPUs have become more tightly integrated, but they have traditionally operated independently of each other.

The AMD accelerated processing unit (APU) integrates the CPU and GPU on the same silicon. This brings many advantages, such as efficiency gains from a shared memory interface and shared power and cooling infrastructure. The GPU's parallel execution raises processing efficiency on many workloads, such as natural user interfaces and pattern recognition, and when the GPU works in concert with the CPU, the efficiency of these workloads can be multiplied several times over. Optimizing CPU-GPU parallelism maximizes device performance, shortens task time and lets the device enter energy-saving modes more often.

A long-standing challenge is that software developers find it difficult to write applications that take full advantage of both CPU and GPU. Traditionally the two processors have separate memory systems, so whenever the CPU wants to hand work to the GPU, it must copy the data from its own memory into the GPU's memory. This makes applications both inefficient and hard to write, which is why GPUs have mainly been used for applications with large data sets. The separate memories also increase power consumption, because cached data is frequently shuttled between the CPU and the GPU.

With AMD's newly developed heterogeneous uniform memory access (hUMA), the CPU and GPU share the same memory. Both can access all platform memory, and data can reside anywhere in the system memory space. This shared-memory architecture greatly reduces programming complexity: developers no longer have to track where data is cached, an error-prone task that can lead to bugs that are difficult to detect and fix.
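As a rough illustration, and not AMD's actual API, the two memory models can be contrasted in a short sketch. `run_discrete` and `run_shared` are hypothetical stand-ins in which NumPy array copies play the role of host-to-device transfers:

```python
# Hedged sketch of the two memory models described above. The "GPU kernel"
# here is just an in-place multiply; the copies stand in for transfers
# between separate CPU and GPU memories.
import numpy as np

def run_discrete(data):
    """Discrete-memory model: every GPU use pays an explicit round trip."""
    gpu_buf = data.copy()   # stand-in for the host-to-device copy
    gpu_buf *= 2.0          # stand-in for the GPU kernel
    return gpu_buf.copy()   # stand-in for the device-to-host copy

def run_shared(data):
    """Shared-memory (hUMA-style) model: CPU and GPU see one allocation."""
    data *= 2.0             # same kernel, operating in place, zero-copy
    return data

a = np.ones(4, dtype=np.float32)
print(run_discrete(a), run_shared(a.copy()))
```

Both paths compute the same result, but the discrete model performs two extra full-buffer copies per kernel launch, which is exactly the traffic (and power) that a shared memory space eliminates.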

The benefits of a uniform memory architecture are clear: software developers can tap the GPU's parallel processing from high-level languages such as Java, C++ AMP and Python to improve performance and efficiency. Results from a recent mainstream video and photo editing program show that certain functions run up to 17 times faster when the GPU's parallel processing works alongside the CPU. And because the GPU and CPU share the same power and thermal infrastructure, the power requirement is the same as when the CPU runs alone.

hUMA is part of AMD's Heterogeneous System Architecture (HSA) implementation. When devices are designed and programmed to the HSA architecture, the same power and performance gains extend to other fixed-function engines such as digital signal processors (DSPs) or security processors.

The AMD processor codenamed "Carrizo" is the industry's first to comply with the HSA Foundation's HSA 1.0 specification. The architecture greatly reduces programming difficulty while improving application performance at low power.

High-efficiency silicon technology

Changes in computing workload affect a microprocessor's power consumption. The heavier the workload (a complex server transaction or video rendering, say), the more current the processor draws; the current then falls as demand subsides. Sudden swings in current cause severe fluctuations in the chip's supply voltage. To guard against these voltage droops, microprocessor designers typically add roughly 10% to 15% of extra voltage to ensure the processor always has enough. But this overvoltage costs energy, because the wasted power grows with the square of the voltage increase: a 10% overvoltage wastes roughly 21% more power.
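The quadratic relationship is easy to verify; a minimal sketch using the guard-band percentages from the paragraph above:

```python
# Dynamic power scales with V^2 at a fixed frequency, so the extra power
# from an overvoltage guard band is (1 + margin)^2 - 1.

def extra_power_pct(voltage_margin_pct):
    v = 1 + voltage_margin_pct / 100
    return (v ** 2 - 1) * 100

print(round(extra_power_pct(10), 1))  # 10% margin → ~21% extra power
print(round(extra_power_pct(15), 1))  # 15% margin → ~32% extra power
```

This is why even a modest-sounding voltage margin is worth engineering away: the energy penalty grows faster than the margin itself.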

AMD has developed several technologies to optimize voltage. Its latest processors carry a voltage-tracking circuit that compares the instantaneous supply voltage against the average with nanosecond accuracy. By running at the average voltage and briefly reducing the clock frequency to ride out sudden supply droops, the processor recovers most of the power that a fixed guard band would waste. Because these frequency adjustments happen on a nanosecond timescale, computational performance is barely affected, while power consumption falls by 10% to 20%. Starting with the "Carrizo" APU, both the CPU and the GPU use this adaptive voltage operation.
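A heavily simplified sketch of that idea, with illustrative thresholds and numbers rather than AMD's actual circuit behavior: when a sampled supply voltage droops below a safe limit, the clock is briefly scaled down instead of carrying a permanent voltage margin.

```python
# Toy model of droop-triggered adaptive clocking (illustrative values only).
AVG_VOLTAGE = 1.00   # volts: the efficient average operating point
DROOP_LIMIT = 0.95   # below this, full frequency is no longer safe

def next_frequency(measured_v, nominal_freq_mhz):
    """Briefly scale the clock in proportion to a detected voltage droop."""
    if measured_v < DROOP_LIMIT:
        return nominal_freq_mhz * measured_v / AVG_VOLTAGE
    return nominal_freq_mhz

# Simulated per-nanosecond voltage samples: one transient droop at 0.93 V.
samples = [1.00, 0.99, 0.93, 0.97, 1.00]
freqs = [round(next_frequency(v, 3000)) for v in samples]
print(freqs)
```

Only the single droop sample forces a (brief) frequency reduction; the rest of the time the part runs at full speed on the lean average voltage, which is where the 10% to 20% power saving comes from.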
