Hot and power-hungry: ‘Manhattan-sized’ data centers are just the beginning

The era of supersized data centers is upon us. As artificial intelligence dominates the agendas of the tech giants, the need for bigger and more powerful data centers is accelerating, and it’s leading to a building boom that could reshape the American landscape.
“We aren’t seeing gigawatt buildings yet, but it’s really only a matter of time,” says Dan Drennan, data centers sector leader at Corgan, the architecture firm that tops Building Design + Construction’s annual ranking of data center design revenue.
These rising demands are creating new challenges for data center design, from the power generation needed to the supporting infrastructure to the buildings that house the servers behind AI. Right now, and for the foreseeable future, everything is getting bigger.
Meta recently announced plans to build data centers that use up to 5 gigawatts of power. OpenAI, Oracle, and SoftBank announced plans earlier this year to invest up to $500 billion in a vast data center building spree. These and other so-called hyperscale data users like Google, Microsoft, and Amazon are expected to drive most of the growth in data centers in the U.S. and globally, according to an analysis by the Boston Consulting Group.
While the average data center building uses 40 megawatts of power today, it’s not uncommon for the biggest companies to rely on data centers that draw 300 to 400 megawatts per building. And that number is only going up.
More power, bigger buildings

“We’re actually building several multi-GW clusters.”
Mark Zuckerberg’s July 14 data center building announcement on Facebook put these plans into somewhat menacing perspective. He paired his post with a visualization of a massive rectilinear block smothering a large portion of New York City. “Just one of these covers a significant part of the footprint of Manhattan,” he wrote.
Meta’s largest announced project—the Louisiana-based Hyperion data center—is expected to use 2 gigawatts of power by 2030, with the potential to grow to 5 gigawatts of capacity. Now in its very early stages of construction, it sits on 2,250 acres of a former agricultural site. Manhattan’s total land area is more than 14,000 acres.
“From a logistical standpoint, it just makes sense to build these things under one roof,” says Gordon Dolven, director of CBRE Americas data center research. The dominant paradigm of AI today is the large language model, which pulls its intelligence out of deep pools of data and information stored in numerous servers stacked in long rows of 8-foot-tall cabinets, like the aisles of a grocery store filled with nothing but black boxes and blinking blue lights.
These servers connect and communicate with each other almost synaptically, so the closer they are to one another, the faster they can make those connections. The farther away they are, the slower the connections, and the more networking infrastructure and fiber optic cables required to keep them in communication.
That’s why the building size of data centers is increasing, and also why the companies pushing the development of AI are trying to have more of these large buildings constructed near each other.
For example, Meta’s Hyperion data center will be made up of 11 buildings covering more than 4 million square feet, according to a company spokesperson. Its Prometheus data center in Ohio is a vast campus that’s scaling up to run on 1 gigawatt of power by 2026, partly by gearing up servers in quickly built mega-tents.
A bigger load
More servers means more equipment to help them run efficiently, and that results in data center buildings surrounded by lots of large mechanical, cooling, and electrical equipment.
“The big thing for data centers is they always have to have backup power. Then you usually need an extra, so there’s a backup to the backup. And those take up a lot of space,” says Rob LoBuono, a critical facilities leader at Gensler, another of the top architecture firms designing data centers.
Backups are also being used for the data itself. “We’re seeing more of a trend toward multiple buildings, multiple points of redundancy, separated across the campus.”
And because the server equipment is getting heavier, the buildings need more robust structures at the foundation, with more material-intensive construction. “Where we were planning for 200 or 250 pounds per square foot previously, we’re now talking about 400, 500 pounds per square foot of loading on these floor plates,” Drennan says. “The loading that you’re planning for on the building goes up.”
All these factors are combining to make the buildings enormous. It’s not uncommon for construction on the larger AI-focused campuses to cover 500,000 square feet or more, usually across a single story. And technically they can keep growing.
“If you’re talking about a new building, assuming the land is such that we’re able to shape the building in a way that we can get all the gear around the building that’s needed to serve that compute in an efficient way, then there’s really no limit to how big these can go,” Drennan says.
More efficient infrastructure
Data center campuses don’t necessarily need to grow to Manhattan size, though, and almost certainly won’t. Experts say the equipment and infrastructure behind data centers, and AI data centers in particular, are getting denser, more efficient, and smaller. As a result, data center operators are packing more servers into these spaces, boosting their computing capacity as well as their electricity demands.
Just a few years ago, data centers could expect servers to use about 200 watts per square foot of space, Drennan says. A 10,000-square-foot building would pull about 2 megawatts of power in total. But server sizes have gone down and data centers can pack more of them into the same amount of space.
“Now you’ve got three, four, five times that density. So that same 10,000 square feet that used to be 2 megs is now 8 megs or 10 megs of power,” Drennan says. Scale the building to 100,000 square feet or 500,000 square feet, or even build multiple buildings at that size, and the capacity of the data center goes up significantly.
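Drennan’s arithmetic is easy to check. Here is a rough sketch; the density and floor-area figures are the approximate ones he cites, not specs for any particular building:

```python
# Back-of-envelope power-density math from the figures cited above.
# All numbers are illustrative estimates, not exact building specs.

def building_power_mw(area_sqft: float, watts_per_sqft: float) -> float:
    """Total server power draw in megawatts for a given floor area and density."""
    return area_sqft * watts_per_sqft / 1_000_000

# A few years ago: ~200 W per square foot across 10,000 square feet
legacy = building_power_mw(10_000, 200)     # 2.0 MW ("2 megs")

# Today: roughly five times that density in the same footprint
dense = building_power_mw(10_000, 1_000)    # 10.0 MW ("10 megs")

# Scale the same density to a 500,000-square-foot AI building
big = building_power_mw(500_000, 1_000)     # 500.0 MW

print(legacy, dense, big)
```

Run at campus scale, with several buildings at that size, the same density multiplies into the gigawatt range the hyperscalers are now announcing.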
The cooling question
A lot of that efficiency is driven by the support systems that keep data centers running, especially the all-important cooling equipment that allows servers to run 24 hours a day without overheating.
Dolven says data centers used to rely solely on air cooling (think dozens of giant air conditioners running nonstop). Now newer technologies like closed-loop coolant systems, direct liquid cooling, and even immersion systems that submerge servers in nonconductive dielectric fluid are lessening the power demands of the cooling side of data centers, allowing more of that power to flow to more servers.
These technologies may also help cut down some of the extreme resource use data centers require. One study, for example, found that a midsize data center used 300,000 gallons of water per day for cooling. That’s about the amount of water used daily by 1,000 homes.
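That comparison works out to roughly 300 gallons per home per day, which is in line with common ballpark figures for U.S. household water use. A quick sanity check, with the per-household figure as an assumption:

```python
# Sanity check on the water comparison above.
# The per-household figure is an assumed rough average, not from the study.

DATA_CENTER_GAL_PER_DAY = 300_000  # midsize data center, from the study cited
HOME_GAL_PER_DAY = 300             # assumed average U.S. household use

homes_equivalent = DATA_CENTER_GAL_PER_DAY / HOME_GAL_PER_DAY
print(homes_equivalent)  # 1000.0
```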
Drennan expects data centers to get more efficient over time, making even “older” ones built just five years ago able to see their power capacities increase. “What they do with that increment of power gets more productive,” he says. “The compute gets better, the algorithms get better, the systems get better, and so the output goes up, even though the required support for that density is the same.”
The limiting factors of data center size are the heat they produce and the power they require.
Expelling heat from data centers is a significant part of what makes their footprint so large. This requires giant air-conditioning units that can number in the hundreds, with refrigerator-size condensers lined up outside or on the roof, and boxy air chillers pumping cool water into a network of pipes in the building. Outside there are other cooling tower boxes, coolant processors, exhaust filtration units, power transformers, and backup power generators. This equipment ends up in long rows and stacks on the periphery of data center buildings, with room in between for natural airflow and human maintenance.
Though cooling technology is improving, the size of the equipment behind that cooling is getting bigger. According to Drennan, just a few years ago a data center building would need extra space equivalent to about half its footprint to house all the cooling equipment required. “Now it’s more like the yard is four times the size of the building footprint,” he says. “You’ve got three or four times the amount of compute inside the building, so you’ve got to have three or four times the amount of equipment to reject that heat and back up the power associated with that.”
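Those ratios imply a striking jump in total site area. A rough sketch, using an illustrative 500,000-square-foot building (the footprint is an assumption for scale, not a specific project):

```python
# Site-area sketch from Drennan's ratios: the equipment "yard" has grown
# from about half the building footprint to roughly four times it.
# The 500,000 sq ft footprint is illustrative; real sites vary widely.

SQFT_PER_ACRE = 43_560

def site_acres(building_sqft: float, yard_ratio: float) -> float:
    """Total site area (building plus equipment yard) in acres."""
    return building_sqft * (1 + yard_ratio) / SQFT_PER_ACRE

before = site_acres(500_000, 0.5)  # ~17 acres a few years ago
now = site_acres(500_000, 4.0)     # ~57 acres today
print(round(before, 1), round(now, 1))
```

In other words, the land devoted to cooling and backup power now dwarfs the building it serves.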
Hot and power-hungry
In the past, data center power demands were manageable. Dolven says a 5-megawatt project could pop up and simply request the power from a utility that was more than happy to sell it. “You could interconnect to the existing grid, you could tap into an adjacent substation that may have already been constructed,” he says. “But when you request 500 megawatts, the scenario shifts dramatically.”
New power generation has to be developed. Miles of high-voltage transmission lines have to be constructed, crossing existing communities and private land. That brings its own permitting and approval challenges, not to mention community opposition. A recent report found that Virginia, a data center hot spot, expects its energy demands to double as more data centers come online. This is leading to higher energy costs for ordinary consumers in the region.
Dolven says many hyperscale data center users are looking at building their own power generation facilities within their data center campuses, essentially making the power they need to operate without relying on, or impacting, the surrounding community’s infrastructure.
That’s the approach at Wonder Valley, a data center in development in rural Alberta, Canada, that bills itself as the largest AI data center industrial park in the world. Planned to have its own off-grid natural gas and geothermal plants on-site while pulling from existing “stranded” sources of natural gas, Wonder Valley aims to be a 7.5-gigawatt data center within the next 5 to 10 years.
Gensler is the design firm behind the project, and LoBuono says it’s being designed to be as sustainable as possible, using local timber and a style that reflects the natural surroundings. Wonder Valley’s developer, O’Leary Ventures, argues that by generating much of its own power, the center will be a net positive for the region through jobs, tax revenue, and a jolt to the local economy.
“The whole point of what we’re trying to pivot toward in this industry is making these buildings more of an asset,” LoBuono says. “Optics are huge in this industry. We shouldn’t be thinking about destroying Manhattan. The buildings get bigger, but the bigger has a benefit.”