August 11, 2021
New from Server Technology, this blog introduces a four-part series on High Performance Computing and explores the most commonly specified features for rack PDUs used to support Research I academic institutions.
- Industry Trends and Solutions
September 21, 2017
Yes, Microsoft, Amazon, Facebook, and Google – we’re talking to you and your kind. While most would consider it a good day to dominate the world of commerce, you and your hyperscale computing compatriots have completely changed the landscape of information technology and data centers as we used to know them. Your executive leadership is brilliant, your stocks are soaring, and you’ve got the world at your feet. Well, ever think about what life would be like without the right rack PDU? OK, you have to admit, we’ve got you there.
August 29, 2017
It’s no secret that data centers are massive energy hogs. As explained in a recent Server Technology white paper, “The Power of Hyperscale Compute,” a typical data center can be 10 to 100 times more energy-intensive than an office building. And altogether, data centers use about 3 percent of the U.S. electricity supply. Over the last two decades, as compute and storage densities have increased, rack power densities have also skyrocketed. In the past, a typical rack would consume an average of 1 to 2 kilowatts of power. Now, as we move deeper into the hyperscale era, loads are hovering around 20 to 40 kilowatts. More servers and hard drives are being put into single racks today than ever before, in a scale-out approach.
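To put those density figures in perspective, here is a quick back-of-the-envelope sketch using the ranges quoted above. The 1 MW facility size is a hypothetical chosen for illustration, not a figure from the white paper.

```python
# Rough illustration of the rack-density shift described above.
# The kW-per-rack figures come from the blog's own numbers; the
# 1 MW IT load is a hypothetical facility size for the example.

IT_LOAD_KW = 1000  # hypothetical facility with 1 MW of IT load

legacy_density_kw = 2       # upper end of the legacy "1 to 2 kW" range
hyperscale_density_kw = 40  # upper end of the "20 to 40 kW" range

legacy_racks = IT_LOAD_KW / legacy_density_kw
hyperscale_racks = IT_LOAD_KW / hyperscale_density_kw

print(f"Racks needed at {legacy_density_kw} kW/rack:  {legacy_racks:.0f}")
print(f"Racks needed at {hyperscale_density_kw} kW/rack: {hyperscale_racks:.0f}")
```

The same 1 MW load that once spread across 500 racks now fits in 25, which is exactly why per-rack power distribution has become the pressure point.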
September 19, 2017
Your hyperscale data center operates on a lean budget. You want to install your hardware and be done with it until you decommission it for the next efficiency-driven replacement cycle. But real-world hardware does fail, and when it does, you want your suppliers to be both knowledgeable and responsive. They need to be able to troubleshoot remotely or on site, and get you answers and replacement product quickly so that your application can be restored.
August 28, 2017
You have a deadline, and you have your goals. Your hyperscale data center design needs to maximize power efficiency. Use free air cooling or adiabatic cooling. And support ambient air operating temperatures of 25–35°C, with hot aisle exhaust temperatures approaching 60°C. You need lots of outlets – C13s, C19s, or even something that allows you to blind mate servers to the power strip. And it needs to meet regulatory requirements in most major geographical regions around the world.
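The requirements above can be collected into a simple checklist structure. This is purely an illustrative sketch: the dictionary keys, the helper function, and the list of regions are assumptions for the example, not a Server Technology product API.

```python
# Hypothetical requirements checklist for a hyperscale rack PDU,
# using the figures from the paragraph above. Field names and the
# region list are illustrative assumptions, not a vendor schema.

pdu_requirements = {
    "cooling": ["free air", "adiabatic"],
    "ambient_temp_c": (25, 35),          # supported intake range, deg C
    "exhaust_temp_c_max": 60,            # hot-aisle exhaust approaching 60 C
    "outlet_types": ["C13", "C19", "blind-mate"],
    "regulatory_regions": ["North America", "EU", "APAC"],  # assumed
}

def meets_ambient(temp_c, reqs=pdu_requirements):
    """Return True if an intake temperature falls in the supported range."""
    low, high = reqs["ambient_temp_c"]
    return low <= temp_c <= high

print(meets_ambient(30))  # inside the 25-35 C window
print(meets_ambient(40))  # outside it
```

A structure like this makes it easy to diff candidate PDUs against the design spec instead of re-reading datasheets line by line.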