
EETE MAR 2014

POWER MANAGEMENT

SMPS: a crucial component of the "system Data Center"

By Francesco Di Domenico

Improving the standard of living drives an ever-increasing demand for energy, particularly in electrical form. People do not use electrical energy directly: IT and telecommunication equipment, transportation vehicles, white goods, lighting, mechanical work and media are the tangible effects of electrical energy. Power electronics is the science of converting electrical energy into the forms typically used in daily life.

A modern power conversion system consists of an energy source, an electrical load, a power electronic circuit, and control functions: the control circuits take information from source and load and determine how the switches must operate to achieve the desired conversion. This is exactly the principle of operation of the SMPS (Switch Mode Power Supply), which uses a high-frequency switch (in practice a transistor) with varying duty cycle to maintain a regulated output voltage.

An AC/DC SMPS is a system consisting of three main stages, as shown in figure 1 for a typical IT server application. In each of these three stages the role of semiconductor-based power or logic components is fundamental: high-voltage power MOSFETs, diodes and controllers in the PFC and PWM/resonant stages, and low-voltage power MOSFETs or diodes in the rectification stage.

Fig. 1: An AC/DC SMPS system consisting of three main stages.

More recently the focus has been moving from a "device-driven" to an "applications-driven" scenario, in a "system engineering" approach. This transition has been triggered mainly by the fact that advanced semiconductors with suitable power ratings already exist for almost every application of wide interest, so designers show an increasing interest in a more flexible, reliable and, of course, efficient way to use them.
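The duty-cycle regulation principle mentioned above can be sketched numerically. The snippet below uses the ideal buck converter relation (Vout = D x Vin) purely as an illustration; the converter topology and all voltage values are assumptions for the example, not taken from the article.

```python
def buck_output_voltage(v_in, duty_cycle):
    """Ideal buck converter: the output is the input scaled by the
    duty cycle D (0..1) of the high-frequency switch."""
    return v_in * duty_cycle

def duty_for_regulation(v_in, v_out_target):
    """Duty cycle the control loop must apply to hold the target output
    as the input voltage varies (ideal, lossless case)."""
    return v_out_target / v_in

# Hypothetical numbers: if the input sags from 48 V to 40 V, the
# controller raises D to keep the 12 V rail regulated.
print(duty_for_regulation(48.0, 12.0))   # 0.25
print(duty_for_regulation(40.0, 12.0))   # 0.3
print(buck_output_voltage(40.0, 0.3))    # 12.0 - output held constant
```

In a real SMPS the controller closes this loop continuously, adjusting D every switching cycle based on the sensed output voltage.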
According to the new "system" approach, efficiency and power density are increasingly the focus of SMPS design, especially in IT computing applications. Figure 2 shows the most popular efficiency standard followed in this environment, 80Plus, which fixes minimum efficiency requirements at typical operating conditions (20, 50 and 100% load). The most recent level, Titanium, imposes a requirement even at 10% load, i.e. at very light load. This is consistent with the operation of modern computing systems, where each power supply is typically used in a paralleled N+1 redundant configuration and will therefore work most of the time at a load much closer to 10% than to 100%.

In the past, the push for higher efficiency was driven mainly by the need to dissipate heat at full load without excessive fan acoustic noise; as a result, maximizing full-load efficiency was the priority. More recently, however, the explosive growth of consumer electronics and data processing equipment has pushed the introduction of various requirements aimed at optimizing light-load operation. For example, the workload of web services can vary significantly with diurnal cycles, application weights, external events, etc., and this holds even for High Performance Computing (HPC) and cloud servers.

Meeting these stringent light-load efficiency targets poses major design challenges to power supply manufacturers, and both power semiconductor and control IC providers have dedicated huge effort to developing technologies that comply with these specifications and make the SMPS efficiency plot as "flat" as possible across the entire load range.

A modern data center looks like an array of racks; in each "drawer" we find a server, and in each server an SMPS – see figure 3. A large number of power supplies are therefore expected inside such a structure.
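Why an N+1 redundant supply spends its life near light load can be seen with simple arithmetic. The sketch below assumes equal current sharing between paralleled supplies; the server power draw and supply rating are hypothetical numbers chosen for illustration.

```python
def per_supply_loading(system_load_w, supply_rating_w, num_supplies):
    """Fraction of its own rating that each supply carries when
    num_supplies paralleled units share the load equally."""
    share_w = system_load_w / num_supplies
    return share_w / supply_rating_w

# Hypothetical case: a server drawing 400 W, fed by two 1600 W supplies
# in a 1+1 redundant configuration. Each unit must be rated to carry the
# full load alone, so in normal operation each one idles near 12.5% load.
print(per_supply_loading(400.0, 1600.0, 2))   # 0.125
```

This is exactly why the Titanium level adds an efficiency requirement at the 10% load point: it is the operating region where such supplies actually spend most of their time.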
Looking at the diagram of the power delivery system of a typical large server farm, for each watt consumed by data processing, more than two watts are wasted in power conversion and cooling. An important parameter used to quantify server efficiency inside a data center is the so-called Total Cost of Ownership (TCO), defined as the cost to equip and run the servers. TCO consists of two main components: CAPEX (capital expenditure, i.e. equipment costs) and OPEX (operating expenditure, i.e. energy costs). With steadily decreasing prices of IT equipment, the cost of electricity over the equipment lifetime has become a significant fraction of the initial acquisition cost, especially for low-end equipment such as "blade", 1U and 2U servers (where "U" is a unit of height measuring 1.75 inches or 4.45 cm), where the cost of power and cooling

Francesco Di Domenico is Senior Staff Engineer Application and Systems at Infineon – www.infineon.com

Electronic Engineering Times Europe March 2014 – www.electronics-eetimes.com
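The TCO decomposition described above lends itself to a short worked example. All figures below (purchase price, average power draw, lifetime, electricity tariff) are hypothetical values for illustration only; OPEX is reduced here to the energy bill.

```python
def lifetime_energy_cost(avg_power_w, years, price_per_kwh):
    """Electricity cost over the equipment lifetime (the OPEX part)."""
    hours = years * 365 * 24
    return (avg_power_w / 1000.0) * hours * price_per_kwh

def tco(capex, avg_power_w, years, price_per_kwh):
    """TCO = CAPEX (equipment cost) + OPEX (here: lifetime energy cost)."""
    return capex + lifetime_energy_cost(avg_power_w, years, price_per_kwh)

# Hypothetical 1U server: $2000 purchase price, 300 W average draw
# (including its share of conversion and cooling overhead), 4-year
# service life, $0.12/kWh tariff.
energy = lifetime_energy_cost(300.0, 4, 0.12)
print(round(energy, 2))                    # electricity bill over 4 years
print(round(tco(2000.0, 300.0, 4, 0.12), 2))
```

Even with these modest assumptions, the lifetime electricity cost is a large fraction of the purchase price, which is why light-load efficiency directly moves the OPEX side of the TCO.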

