Google and Microsoft have nothing on - drum roll - the SuperNAP

Saturday 24th May 2008
By Ashlee Vance in Las Vegas - The Register®

$500m. That's the going rate for a data smelter these days. You know, a facility run by a company such as Google or Microsoft that moves bits around and consumes more power than old-school metal processing plants.

Most companies in need of such horsepower go out of their way to build the computing centers in cities with cheap power. To date, that has meant tapping hydroelectric power in the Pacific Northwest or scheming tax breaks out of city officials in places like Oklahoma or South Carolina to get a "per watt" edge. So imagine our surprise upon learning that one of the world's most tightly packed and energy-demanding data centers will go up in Las Vegas - a place where desert-bound casinos suck up huge amounts of electricity to fuel their neon signs, slot machines and suites.

In the coming months, a little-known technology giant called Switch Communications will open the SuperNAP. This 407,000 square foot computing compound will house servers and storage systems owned by many of the world's most prominent companies. And, unlike most centers of its kind, the SuperNAP will not rely on raised floors or liquid cooling systems to keep the hardware humming. Instead, it will be fueled by custom designs that allow it to sustain an astonishing 1,500 watts per square foot - close to three times the industry standard.
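For a rough sense of scale, here's a back-of-the-envelope sketch in Python. These are not Switch's own figures: the 500 W/sq ft "industry standard" baseline is implied by the article's "three times" claim, and the assumption that a third of the footprint is rack space is invented purely for illustration.

```python
# Back-of-the-envelope scale of the SuperNAP's power density.
# Assumptions (not from Switch): industry baseline of ~500 W/sq ft,
# and roughly a third of the floor actually covered by racks.
FLOOR_SQFT = 407_000
SUPERNAP_W_PER_SQFT = 1_500
INDUSTRY_W_PER_SQFT = 500  # implied by "close to three times the industry standard"

density_ratio = SUPERNAP_W_PER_SQFT / INDUSTRY_W_PER_SQFT
print(f"Density advantage: {density_ratio:.0f}x")  # Density advantage: 3x

# If, say, a third of the floor is rack space (a guess for illustration):
rack_sqft = FLOOR_SQFT / 3
megawatts = rack_sqft * SUPERNAP_W_PER_SQFT / 1e6
print(f"IT load at that density: ~{megawatts:.0f} MW")
```

The point of the arithmetic: at triple the density, the same floor area carries triple the computing power, which is what lets Roy make the "four times as much computing" claim later in the piece without quadrupling the building.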

Switch has operated co-location facilities in Las Vegas for about eight years. The company lays claim to a unique set-up, as it owns a huge networking facility where more than 20 of the US's major carriers funnel their traffic and a number of data centers make use of all that bandwidth. These state-of-the-art computing centers have attracted a number of Fortune 100 companies, including technology and media heavies.

"In my opinion Switch has the finest data centers available anywhere," said David Matanane, the senior manager of hosted services at Cisco Systems.

The SuperNAP stands as the culmination of everything Switch has learned from these businesses to date.

The facility will make use of a custom Switch concept dubbed the T-SCIF or Thermal Separate Compartment in a Facility. The T-SCIF is sort of like a little shack for hardware. Customers slot their systems into the unit with the front half of the hardware sticking out into the main data center room and the back half sitting inside the T-SCIF. This approach makes sure that only cooled air reaches the front of servers and storage boxes, while all of the hot air is released into the sealed T-SCIF and then expelled through a series of ducts.

"We can do 500 or 600 per cent more cooling per cubic feet per minute than everyone else who designs their data centers with raised floors and cooling systems from Liebert," Switch CEO Rob Roy told us. "The raised floor kind of works against the laws of physics. Cold air does not want to fly up through a room. Everyone in the world knows that is probably not the right way to approach things."

With the T-SCIFs, Switch makes sure that cold air and hot air never intermingle. As a result, the hardware receives near-uniform cooling, with 68-degree Fahrenheit air rushing into the boxes.

Beyond these cooling systems, Switch has set up a sophisticated power system inside its data centers. The company promises 100 per cent uptime thanks to a power distribution arrangement divided into Red, Blue and Grey systems. Switch literally color codes all of its gear, making sure employees only fiddle with one color scheme per day. Each piece of hardware is then connected to at least two of the grids, and Switch says it can survive just about any type of outage - be it at the utility company or at Switch itself - because the company has access to a number of different suppliers.
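The resilience claim boils down to simple redundancy: if every machine draws from at least two independent grids, no single-grid failure takes anything down. A toy sketch of that invariant (the Red/Blue/Grey names come from the article; the rack-to-grid assignments are invented):

```python
# Toy model of Switch's color-coded power distribution: each rack is fed
# from at least two of the Red/Blue/Grey systems, so losing any single
# grid leaves every rack with at least one live feed.
# (Rack-to-grid assignments here are invented for illustration.)
GRIDS = {"Red", "Blue", "Grey"}

racks = {
    "rack-01": {"Red", "Blue"},
    "rack-02": {"Blue", "Grey"},
    "rack-03": {"Red", "Grey"},
}

def survives_single_outage(racks):
    """True if every rack keeps power when any one grid fails."""
    return all(
        feeds - {failed}  # the feeds still live after the outage; empty set is falsy
        for failed in GRIDS
        for feeds in racks.values()
    )

print(survives_single_outage(racks))                  # True
print(survives_single_outage({"rack-04": {"Red"}}))   # False - a single feed fails the check
```

The one-color-per-day maintenance rule serves the same invariant from the human side: technicians can only ever take down one grid's worth of gear at a time.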

To feed the SuperNAP, Switch will take this system to the next level, running what it calls a power spine down the center of the 407,000 square foot facility.

SuperNAP's Spine
Switch has very close ties to the local energy concerns, letting it get power at about 5 to 6 cents per kilowatt hour. (The likes of Google can enjoy about 3 cents per kilowatt hour in some places thanks to tax breaks. Here's to you, local taxpayer. Google needs the money.)

The SuperNAP will eat up more power than three mega-casinos put together, but Roy isn't worried about running out of juice anytime soon. Las Vegas has access to power generated by the Hoover Dam and by power plants being built to supply California.

SuperNAP - By The Numbers

To get all of the power and cooling into the SuperNAP, Switch has again turned to homegrown products.

On the outside of the SuperNAP, you'll find numerous cooling stations. Each one sits on a raised platform that allows the cooling systems to grab already shaded and semi-chilled air. This shading process is complemented by the desert itself, where air temperatures fall dramatically at night and remain low during the early morning.

The stations each have four methods for cooling air - you can expel the heat outside, run direct evaporation, run indirect evaporation or use a closed water loop, Roy said. A weather monitoring device helps the stations pick the most effective method or combination of methods. "Liebert makes indirect and direct systems, but you always have to pick one. With our system, as the temperature or humidity changes, you pick which is the best cooling method. That's usually one of the four systems or a combination of two," Roy said.
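Roy's description amounts to a control loop that maps weather readings to a cooling mode. A minimal sketch of such a selector - the four modes come from the article, but the temperature and humidity thresholds and the pairing rules are invented for illustration:

```python
# Sketch of a weather-driven cooling-mode selector along the lines Roy
# describes. The four modes are from the article; the thresholds and
# which modes get combined are assumptions for illustration only.
def pick_cooling_modes(temp_f: float, rel_humidity: float) -> list[str]:
    """Return the cooling mode(s) to run for the current weather."""
    if temp_f < 60:
        # Cool desert night: just reject server heat to the outside air.
        return ["heat_rejection"]
    if rel_humidity < 0.30:
        # Hot and dry: direct evaporative cooling is at its most effective.
        return ["direct_evaporation"]
    if rel_humidity < 0.60:
        # Moderate humidity: run a combination of two, as Roy suggests.
        return ["indirect_evaporation", "heat_rejection"]
    # Hot and humid: fall back to the closed water loop.
    return ["closed_water_loop"]

print(pick_cooling_modes(55, 0.20))  # ['heat_rejection']
print(pick_cooling_modes(95, 0.10))  # ['direct_evaporation']
print(pick_cooling_modes(85, 0.75))  # ['closed_water_loop']
```

In the real stations the weather monitor presumably drives this decision continuously; the sketch just shows why having all four modes behind one controller beats the pick-one-at-install-time approach Roy criticizes.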

The SuperNAP has been designed in a so-called "modular" fashion, so Switch can roll up as many of these cooling systems as needed over time and plug them into the main facility. (The cooling systems plug into the holes shown in the picture below.)

SuperNAP's Skeleton
The SuperNAP will cost about $350m and will be about the same size as Google's and Microsoft's $500m data centers. Roy, however, thinks Switch can pack about four times as much computing power into the SuperNAP as these rival centers, thanks to its cooling systems and energy supplies.

Inside the facility, Switch will set up a lab and also customer showcase centers where the likes of Sun can flaunt their gear.

Roy claims it will take up to three years to fill the facility even though "customers are fighting to get in." Switch also has room to build another three similar facilities and is looking to build other SuperNAP replicas around the globe. The company may even consider licensing the designs to other data center makers, capitalizing on the more than 30 patents it has applied for on the technology.

Even though the SuperNAP is just a few minutes' drive from Las Vegas's main airport, it's almost in the middle of nowhere. The road leading up to the facility ends abruptly right in front of it. All you get is a "Road Closed" barricade and then desert.

The land surrounding the SuperNAP is part of the Beltway Business Park - a project funded by the Thomas and Mack Development Group - aka old-money Vegas real estate magnates. Thomas and Mack are also investors in Switch Communications.

You can see why the group would like to put money into something like the SuperNAP. It's more or less a glorified real estate play. Why bother with tenants when you can rent space by the server rack at a higher price?

It takes all of a few seconds for the possibilities surrounding the SuperNAP and the Thomas and Mack land to become clear. These guys have near limitless power at their disposal and city officials who would love to up Vegas's position on the technology map.

Perhaps we'll soon be writing about Data Center Valley.
