High Performance Computing: Who needs it?

Posted by Isaiah LaJoie on March 30, 2020


We’ve been accused recently of spending a lot of blog time focused on our power distribution units, to which we can only offer the following reply: guilty. But hey, we’ve got server power distribution on the brain, and it comes through in the shape and form of our blogs. So today I propose we take a teeny, tiny step back and look at a trending compute strategy we’ve been seeing in the industry.

You don't have to be a power distribution unit manufacturer to see that the demand for increasingly powerful computing has cascaded across many data centers. As technology has advanced and allowed computers to do more, businesses and consumers always seem ready to fill the computing void. Even the average consumer is aware of the ongoing explosion of digital utilities of every type, and of the dizzying capabilities of digital technology in general.

A resurgence in HPC

This has given rise to more and more enterprises looking into high performance computing (HPC). In decades past, HPC was favored by mathematicians and research scientists who required computing power far beyond the capabilities of anything commercially available in order to execute complex mathematical calculations.

Today, the demand for HPC is far greater. In addition to the realm of research science (which continues to demand increasing computing power), there are financial institutions doing risk modeling, governments calculating the impact of changing demographics and aerospace companies analyzing the flight capabilities of aircraft and spacecraft. Imaging systems alone, which cover the demands of everyone from health care to the movie industry, can require mind-boggling levels of computing power.

Because of the high cost of securing, deploying and operating a supercomputer and its proprietary software, this technology is still out of reach for most businesses.

Integration Over Investment

While it may sound like something from a recent action movie or TV series, integrating multiple computers into interconnected clusters to deliver HPC capabilities actually is, as they say these days, "a thing." In this scenario, computers with compute cores (processors), graphics processing units (GPUs) and memory are organized into multiple nodes, forming a viable supercomputing system.
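As an illustration (not tied to any particular cluster stack), the divide-and-gather pattern behind such clusters can be sketched with Python's standard multiprocessing module, treating each worker process as a stand-in for a node; the function and chunk-splitting scheme here are hypothetical:

```python
from multiprocessing import Pool

def simulate_chunk(chunk):
    # Stand-in for a compute-heavy kernel: sum of squares over a range.
    start, end = chunk
    return sum(i * i for i in range(start, end))

def run_cluster_style(total, workers=4):
    # Split the problem into one chunk per "node" and fan the work out.
    step = total // workers
    chunks = [(i * step, (i + 1) * step if i < workers - 1 else total)
              for i in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(simulate_chunk, chunks)
    # Gather: combine the partial results, as a head node would.
    return sum(partials)

if __name__ == "__main__":
    print(run_cluster_style(1_000_000))
```

On a real cluster the workers would be separate machines coordinated by something like MPI or a batch scheduler rather than local processes, but the scatter/compute/gather shape is the same.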

These clusters, consisting of inexpensive computers running commercially-available software, are substantially more affordable than a supercomputer—but they still represent a significant investment for most organizations.

Turning to the Cloud

With the growing popularity of the cloud and the increasingly diverse utility it offers, some businesses would like to gain HPC access without breaking their IT budgets. They are turning to the public cloud services offered by tech giants like Google, Microsoft, Amazon and IBM. Engaging these services gives them access to HPC capabilities without all of the investment in hardware infrastructure.

These cloud-based utilities have also significantly leveled the playing field, giving smaller companies the ability to deploy HPC alongside large enterprises. In some cases, organizations that had successfully adopted integrated clusters later found that user demand was exceeding their compute resources and capacity. Many of these companies have been able to meet that demand through the cloud, which affords a flexible, on-demand HPC environment.

This approach has proven cost-effective for many organizations. The available platforms also offer user-friendly, efficient features for scheduling and streamlining workflows. More and more businesses now depend on the collection, analysis and distribution of data. As they see the need to move to the next level of computational power, they will likely turn to the cloud for their HPC needs.
