Sponsor Content By HPE

How will history judge Facebook’s Arctic data center?

By HP Matter

Drunk on free web services, enamored of video apps, and living in the cloud, consumers are creating data at a breakneck speed—and they don’t care who’s paying. For major social networks like Facebook, Twitter, and YouTube, this is the future of media and, while increasingly profitable, it’s a high-overhead business.

For one, delivering content quickly to users all over the world means hosting data centers across continents and preventing localized meltdowns when demand soars. Cooling these enormous spaces is like trying to chill wine in your car cup holder with the A/C on high—not only is it inefficient, it doesn’t really work.

In 2011, Facebook tried a new approach by building a 30,000-square-meter data center in the small Swedish town of Luleå. The area is filling up quickly—a bitcoin mining group also has a server farm nearby, and Google’s data center in Finland is just across the border. All three data centers are there to take advantage of the cold air, water, and cheap real estate. Anticipating demand, a Swedish telecom company is building a new fiber cable called Skanova Backbone North, which will span almost 800 miles through northern Sweden.


Not everyone is keen to host their data and applications overseas, however. Concerns about physical security and government snooping have made offshoring data less attractive. And technological breakthroughs in computing are rapidly reducing the amount of heat and power these data centers will need to hum.

Encouragingly, the technological breakthrough that might save data centers from the snow isn’t some massive new discovery, but an evolutionary milestone in server design: efficient, highly customizable eight-core machines about as big as a medium-sized book.

At the National Renewable Energy Lab (NREL), a supercomputer called Peregrine, built by HP and Intel, runs 1,440 of these compact servers powered by eight-core CPUs—all sans air conditioning. Peregrine operates in a room-temperature building in Golden, Colorado, and is cooled by a warm-water system that exploits a simple fact: water transfers heat far more efficiently than air.
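Just how much more efficiently can be checked with a back-of-the-envelope calculation. The sketch below uses standard textbook values for density and specific heat (not NREL’s actual operating figures) to compare how much heat a cubic meter of each coolant can carry away per degree of temperature rise:

```python
# Volumetric heat capacity = density * specific heat:
# how many joules a cubic meter of coolant absorbs per 1 K rise.
# Values are standard textbook approximations at room conditions.

WATER_DENSITY = 997.0          # kg/m^3, ~25 C
WATER_SPECIFIC_HEAT = 4186.0   # J/(kg*K)
AIR_DENSITY = 1.2              # kg/m^3, sea level, ~20 C
AIR_SPECIFIC_HEAT = 1005.0     # J/(kg*K)

water_j_per_m3_per_k = WATER_DENSITY * WATER_SPECIFIC_HEAT
air_j_per_m3_per_k = AIR_DENSITY * AIR_SPECIFIC_HEAT

ratio = water_j_per_m3_per_k / air_j_per_m3_per_k
print(f"Water carries roughly {ratio:,.0f}x more heat per unit volume than air")
```

By this rough measure, a given volume of water soaks up a few thousand times more heat than the same volume of air, which is why pumping warm water past the chips beats blowing chilled air through the room.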

The NREL is the US Department of Energy’s national laboratory for renewable energy, so it was only appropriate that the on-premises Peregrine computer make a statement. The largest supercomputer on Earth dedicated to energy-related research, it also reuses its waste heat by piping it into adjacent buildings in the winter. Once that kind of energy recycling becomes possible, the most remote data centers suddenly become the most wasteful: their excess heat has nowhere useful to go.

As grids become smarter, energy will be treated more like an asset to be stored or traded, and less like a resource to be consumed and thrown away. That will encourage enterprises to hold their data centers as closely as they hold the data itself, creating a new market for customizable supercomputing solutions that can fit anywhere an enterprise goes—not just the Arctic.

Read more from HP Matter’s Energy issue here.

This article was produced on behalf of HP Matter by the Quartz marketing team and not by the Quartz editorial staff.
