The Dynamic Data Center: Driving Lower PUE
February 06, 2019
- Server Technology
- Data Center
In support of the new Marc Cram white paper, “Flexibility and Sustainability for the Dynamic Data Center,” we are going to be rolling out a three-part blog series that supports some of the key data center themes covered in the document. The paper itself, a copy of which can be found here, is a mission-critical state of the union address that calls attention to the challenges facing today’s dynamic data center operators.
One of the factors in the larger equation of flexibility and sustainability is the role of PUE. Power Usage Effectiveness, a simple metric posited by The Green Grid more than a decade ago, initiated the energy efficiency movement in the industry. Since then, the drive to get to 1.0 has made PUE the one acronym to rule them all. Although many companies were posting numbers below 1.5 very early in the adoption cycle, the drive to lower PUE has continued to be a large part of the ongoing design of new and retrofit facility projects.
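The metric itself is straightforward: PUE is the ratio of total facility energy to the energy consumed by IT equipment alone, so a value of 1.0 would mean every watt drawn goes to IT. A minimal sketch, with hypothetical meter readings for illustration:

```python
def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: total facility energy / IT equipment energy.

    A PUE of 1.0 is the theoretical ideal (no overhead for cooling,
    lighting, power distribution, etc.).
    """
    if it_equipment_kwh <= 0:
        raise ValueError("IT equipment energy must be positive")
    return total_facility_kwh / it_equipment_kwh

# Hypothetical monthly meter readings (kWh)
print(pue(1_500_000, 1_000_000))  # 1.5: 0.5 kWh of overhead per IT kWh
```

Lower overhead (cooling, lighting, power conversion losses) pushes the ratio toward 1.0, which is exactly what the trends listed below target.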
There are several recent trends that contribute to lowering PUE numbers for data center operators. Among them are these eight:
- New LED lighting: continued advances in lighting technology have driven not only better visibility in the rack row but also allowed operators to eke out more energy savings.
- Higher operating temperatures: once controversial, the idea of running your cold aisles warmer has become practical thanks to broader operating ranges for IT equipment and advances in remote monitoring technologies.
- Free air cooling versus CRAC or CRAH: rethinking cooling has shifted the geographic position of data centers around the globe to more northern latitudes. Hot chocolate, anyone?
- Higher levels of automation: improvements in the quality and price points of remote sensors, better understanding of energy algorithms, and the coupling of server-level data with building controls systems have all led to more precise and efficient mechanical control systems.
- Deduplication of data: fewer computers and storage devices processing extraneous data means lower energy usage. It is a simple equation, but one that has proven effective for reducing energy consumption.
- Workload consolidation via virtualization and containerization: containers have allowed for computing in an even smaller footprint, reducing the need to provide cooling for large volumes of space, while virtualization has reduced the number of computing devices needed in the first place.
- Adoption of flash storage versus rotating media: modern storage devices are not only faster and higher in capacity, but they consume less energy thanks to fewer moving parts.
- New CPUs, GPUs, RAM, and connectivity: while some would say less is more, in this case the opposite is true: more means less. Faster processors can process data faster while using less energy.
To learn more about the role of PUE in the quest for the flexible, sustainable data center, follow this link to download the “Flexibility and Sustainability for the Dynamic Data Center” white paper. The next Server Technology blog will discuss the evolving demands on the data center and the challenges they present. See you next time.