Powering the Future: Strategies for Data Center Expansion and AI-Ready Infrastructure

With rising power densities driven by high-performance computing and AI workloads, data center leaders face technical hurdles in power management. This article examines solutions such as liquid cooling, UPS upgrades, and modular designs that optimize energy use and maintain resilience.
Jan. 27, 2026
7 min read

Key Highlights

  • Design for higher rack density: Rack density is rising rapidly, so reassess power distribution, cooling, and space layouts now.
  • Treat power as a core IT constraint: AI/HPC load swings and energy-price volatility can derail capacity plans and uptime targets.
  • Modernize reliability and control layers: Upgrade to lithium-ion UPS systems, integrate IoT monitoring, and use UPS systems as grid-interactive assets where possible.
  • Reduce risk with architecture choices: Pair liquid cooling and modular buildouts with workload placement decisions (what stays on-prem vs. shifts to the cloud) to manage power limits and costs.

The trend toward data center expansion and AI-ready infrastructure continues to gain momentum. At the same time, executives and business leaders face unpredictable power demands as they drive these new initiatives. They’re tasked not only with managing wide load fluctuations, but also with meeting unprecedented energy requirements, from advanced cooling for high-performance GPUs to the mounting draw of AI and machine learning (ML) adoptions.

Resource demands at multi-megawatt scale require new levels of expertise for integrating power alternatives — and these present significant technical hurdles. Moreover, capacity limits and volatility in energy costs add to the complexity. We explore technical solutions to ensure power reliability in hyper-dense computational environments, steps to assess infrastructure limitations, and options for choosing the best path forward. 

Assessing current infrastructures

New technologies are redefining the digital infrastructure landscape, including AI, large language models (LLMs) and GPUs, as well as Internet of Things (IoT) initiatives, data analytics and the edge. The standard processing tasks that typify classical computing have been superseded by hyper-dense computational environments with unprecedented power demands. Facilities managers face greater power densities as high-performance computing (HPC) brings significantly higher core counts and increased reliance on GPUs.

And while the average rack density for most data centers is currently 12 kW, McKinsey projects that densities will reach 30 to 50 kW per rack by 2027. As a result, C-suite leaders are reconsidering infrastructure layouts, power distribution, and liquid cooling strategies from the ground up. AI-related projects are now the leading driver of data center construction, with $31.5 billion spent on new developments in 2024 alone.
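To see what that projection implies for planning, consider a back-of-the-envelope sketch: at a fixed IT power budget, higher rack density means far fewer, far hotter racks. The 6 MW budget below is a hypothetical figure; only the 12 kW average and the 30 to 50 kW projection come from the sources above.

```python
# Back-of-the-envelope capacity planning: rack count vs. density for
# a fixed IT power budget. The 6 MW budget is hypothetical; the 12,
# 30 and 50 kW densities reflect the figures cited above.

FACILITY_IT_BUDGET_KW = 6_000  # hypothetical 6 MW of critical IT load

for density_kw in (12, 30, 50):
    racks = FACILITY_IT_BUDGET_KW // density_kw
    print(f"{density_kw:>2} kW/rack -> {racks:>3} racks "
          f"on a {FACILITY_IT_BUDGET_KW // 1000} MW IT budget")
```

The same 6 MW that feeds 500 conventional racks supports only 120 at 50 kW each, which is why power distribution and cooling, not floor space, become the binding constraints.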

“Traditional data centers use more power, but they do it less efficiently. Of course, the cost-ratio relationship presents its own set of challenges,” says Peter Feldman, CEO of QTD Systems, a data center, colocation and fiber network provider. “Ultimately, for any data center operator, including hyperscale providers, being cognizant of your electric costs is one of your primary concerns,” he adds.

The surge in power densities is prompting a shift in traditional power management strategies, and C-suite leaders are noticing. To stay competitive, organizations must perform ever more sophisticated computations, such as big data analysis and LLM training for AI. As a result, GPUs are drawing ever more power, and data centers are seeing exponential growth in energy consumption.

These dynamic workloads require near-instantaneous responses, further increasing complexity and project risk. A number of power management tools are helping IT directors meet these infrastructure challenges. For example, in addition to redesigning power distribution systems, organizations are adopting liquid cooling, updating uninterruptible power supplies (UPS), and assessing renewable energy alternatives.

Compared with on-premises AI deployments, the total power consumption of quantum computing scales at a lower rate. Quantum uses less energy than comparable breakthrough technologies, including AI, ML and GPU-based processing. In fact, quantum may open possibilities for novel sources of power generation, as well as help optimize power grid operations and accelerate the development of new materials that expand the potential for renewable energy.

“Quantum computers require only kilowatts — a minuscule amount compared to GPUs,” according to Yuval Boger, chief commercial officer at QuEra Computing, a quantum processing technology provider. “Preparing infrastructures for adoptions of these new technologies has already become a major issue, and quantum computing may become a source in the future that will help to resolve these power issues,” he adds.

Image: Exterior overview of a new-build hyperscale data center in Eemshaven, Netherlands. (Hugo Kurk | Dreamstime.com)

Managing power consumption: From the grid to the IT rack

The emphasis on thermal management is driven by the rise of HPC, AI processing, and the growth of digitization across industries. Within the data center itself, specialized switchgear protects, controls and isolates electrical equipment to ensure safe and reliable power supplies. These components include medium-voltage (MV) and low-voltage (LV) switchboards that control energy flow to the UPS, cooling systems and other essential infrastructure.

UPS systems keep critical operations running when power quality issues occur and guarantee uptime during grid outages. Increasingly built around high-capacity lithium-ion batteries rather than valve-regulated lead-acid (VRLA) units, the UPS ensures business continuity and maintains the integrity of IT workloads. It also provides clean, uninterrupted power to IT equipment during the switch to on-site sources, allowing time for backup generators to start.

With that shift to lithium-ion, the UPS is evolving from an important backup into a dynamic energy hub. Technical advances and increased storage capacity are turning UPS systems into distributed energy resources (DERs) that interact with the grid in real time and respond proactively to frequency fluctuations in power delivery. Enterprises are also exploring and investing in new thermal management strategies and in on-site power generation and storage, such as wind and solar energy and battery energy storage systems (BESS).
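To make the grid-interactive idea concrete, here is a minimal sketch of droop-style frequency response, the mechanism by which a DER discharges when grid frequency sags and absorbs energy when it rises. The thresholds, capacity figures and function are illustrative assumptions, not a real UPS vendor API.

```python
# Minimal sketch of grid-interactive (frequency-response) UPS logic.
# All thresholds and capacities are hypothetical; real DER programs
# follow the local utility's interconnection rules.

NOMINAL_HZ = 60.0          # 50.0 on European grids
DEADBAND_HZ = 0.02         # ignore tiny fluctuations
MAX_RESPONSE_KW = 500      # contracted response capacity
FULL_RESPONSE_AT_HZ = 0.5  # deviation at which response saturates

def frequency_response_kw(grid_hz: float) -> float:
    """Power setpoint: positive discharges to the grid (under-frequency),
    negative absorbs from it (over-frequency), proportional to deviation."""
    deviation = NOMINAL_HZ - grid_hz
    if abs(deviation) <= DEADBAND_HZ:
        return 0.0
    setpoint = deviation * (MAX_RESPONSE_KW / FULL_RESPONSE_AT_HZ)
    return max(-MAX_RESPONSE_KW, min(MAX_RESPONSE_KW, setpoint))

print(frequency_response_kw(59.95))  # under-frequency: discharge 50 kW
print(frequency_response_kw(60.01))  # inside deadband: 0.0
```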

These on-site, behind-the-meter (BTM) energy sources include co-located gas power plants, hydrogen fuel cell farms and solar energy parks. While research into small modular nuclear reactors continues to grow, that technology may suit only newly designed super-scale data centers.

Meanwhile, one technical advance that’s becoming essential for high-density racks is liquid cooling, which is emerging as a standard in new construction and as an effective retrofit. Valued at $4.31 billion in 2023, the U.S. data center cooling market is projected to reach $8.34 billion by 2029, growth driven largely by liquid cooling adoption as air-cooled approaches prove inefficient at higher power densities. The process uses water or dielectric coolants to remove heat from IT equipment and can also significantly reduce data center power consumption.
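The underlying heat-transfer math shows why. Using the standard relation Q = ṁ · c_p · ΔT, the short sketch below estimates the water flow needed to carry away one rack’s heat; the 50 kW load and 10°C temperature rise are illustrative assumptions.

```python
# How much water flow does a high-density rack need? From the
# heat-transfer relation Q = m_dot * c_p * delta_T. The 50 kW load
# and 10 C supply-to-return rise are illustrative assumptions.

HEAT_LOAD_W = 50_000   # 50 kW rack, at the top of the projected range
CP_WATER = 4186        # J/(kg*K), specific heat of water
DELTA_T_K = 10         # coolant temperature rise across the rack

mass_flow_kg_s = HEAT_LOAD_W / (CP_WATER * DELTA_T_K)
liters_per_min = mass_flow_kg_s * 60  # ~1 kg of water per liter

print(f"{mass_flow_kg_s:.2f} kg/s, about {liters_per_min:.0f} L/min")
# Air at the same delta-T would need roughly 3,500x the volumetric
# flow, since water stores ~3,500x more heat per unit volume.
```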

Key next steps

As C-suite and IT leaders weigh strategies for overcoming power density challenges and the surge in demand for compute, the first step is to assess readiness and reduce complexity. As Boger explains, a data center manager should start by understanding who their users are and what workloads they manage.

“You assess what your users may want to run in the next two-to-three-year period,” Boger says. “Are they primarily training AI models? What are the applications they require?” 

Organizations can also partner with their local grid provider to improve demand management. This approach features “peak shaving,” which lowers grid consumption by switching to on-site generation or stored battery capacity when demand spikes; it also helps protect against grid outages caused by overconsumption. Further, integrating UPS technology with the IoT can offer significant gains: IoT-enabled UPS systems improve visibility through smart sensors and remote connectivity, delivering real-time data on performance, battery health and energy usage while flagging potential faults.
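As a rough illustration of the peak-shaving decision, consider the sketch below. The demand threshold and battery figures are hypothetical; an actual controller would also track battery state of charge and utility tariff windows.

```python
# Minimal peak-shaving sketch: when metered demand crosses a
# contracted threshold, cover the excess from battery (or on-site
# generation) instead of the grid. All figures are hypothetical.

PEAK_THRESHOLD_KW = 4_000  # demand-charge threshold agreed with utility

def grid_draw_kw(site_demand_kw: float, battery_available_kw: float) -> float:
    """Return how much to draw from the grid after shaving the peak."""
    excess = site_demand_kw - PEAK_THRESHOLD_KW
    if excess <= 0:
        return site_demand_kw          # under threshold: grid serves all
    shaved = min(excess, battery_available_kw)
    return site_demand_kw - shaved     # battery covers (part of) the excess

print(grid_draw_kw(4_500, battery_available_kw=800))  # -> 4000
print(grid_draw_kw(4_500, battery_available_kw=300))  # -> 4200
```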

Increasingly, IT leaders are recognizing the efficiency of modular data center designs. Flexibility and scalability are critical to application deployments and AI initiatives in general. As the industry moves toward microservices and modularity, enterprises can incrementally add infrastructure to meet evolving AI demands and to improve control of energy consumption. The trend represents a shift away from traditional data center designs and toward more customized, efficient and resilient infrastructure capable of supporting the computational power demanded by AI and HPC workloads.

Finally, moving specific workloads to the cloud shifts their power requirements onto the provider. Administrators can assess how critical, and how costly, it is to keep certain types of data on-site, then run only those strategic, sensitive workloads on-premises and relieve local power constraints by offloading the rest to the cloud.
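A simple triage along those lines might look like the following sketch; the sensitivity and power criteria are illustrative placeholders for whatever governance and capacity rules apply at a given site.

```python
# Illustrative workload-placement triage: keep sensitive or
# latency-critical workloads on-prem; offload heavy, non-sensitive
# loads to shift their power burden to a cloud provider.

def placement(sensitive: bool, latency_critical: bool, power_kw: float) -> str:
    if sensitive or latency_critical:
        return "on-prem"      # data governance or performance needs
    if power_kw > 20:         # hypothetical local power headroom per rack
        return "cloud"        # provider absorbs the power requirement
    return "either (decide on cost)"

print(placement(sensitive=True,  latency_critical=False, power_kw=40))  # on-prem
print(placement(sensitive=False, latency_critical=False, power_kw=40))  # cloud
```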

“IT leaders can better understand what’s required in terms of power capacity by asking which workloads can be handled at the current site. Or, would it offer improved performance if it were located in some other county or state?” says Boger. “Also, work with the utility company to find out what they recommend. It’s not just about whether the utility can supply enough power, but how much is this going to cost?” he adds.


About the Author

Kerry Doyle

Contributor

Kerry Doyle focuses primarily on issues relevant to both C-suite and enterprise leaders through technology articles, white papers and analyses. He covers a diverse range of topics, from nanotech to the cloud, open source to AI. Passionate about both the written word and communicating the value of technology, his experience stems from senior editorial positions at PCWeek, PCComputing, ZDNet, and CNet.com. He's a graduate of Boston University with a bachelor's degree in comparative literature.
