IBM to build massive supercomputer for US government

New machine will run at a record 20 petaflops

The US government has hired IBM to build a supercomputer with more power than all the supercomputers on the Top 500 supercomputer list combined.

It's an ambitious claim by IBM in a business where jumbo-sized claims are the norm. The planned Sequoia system, capable of 20 petaflops, will be used by the US Department of Energy in its nuclear stockpile research. The fastest systems today top out at about one petaflop, a remarkable achievement in its own right that was reached only last year.

The new machine "is the biggest leap of computing capability ever delivered to the lab", says Mark Seager, assistant department head for advanced technology at the Lawrence Livermore National Laboratory in Livermore, California, where the system will be housed. It's expected to be up and running in 2012.

IBM is actually building two supercomputers under this contract. The first one, to be delivered by mid-year, is called Dawn and will operate at around 500 teraflops. It will be used by researchers to help prepare for the larger system.

Sequoia will use approximately 1.6 million processing cores, all IBM Power chips, and will run on Linux, which dominates high-performance computing at this scale. IBM is still developing a 45-nanometer chip for the system and may produce a part with eight, 16 or more cores for it. Although the final chip configuration has yet to be determined, the system will have 1.6 petabytes of memory and be housed in 96 "refrigerator-sized" racks.

The cost of the system wasn't disclosed.

The supercomputer is also helping to drive a massive power upgrade at Lawrence Livermore, which is increasing the amount of electricity available for all its computing systems from 12.5 megawatts to 30 megawatts. To achieve the upgrade, it will run more power lines to its facility. Sequoia alone is expected to use about six megawatts, according to Seager.

The world's first computer to break the teraflop barrier was built at Sandia National Laboratories in 1996. A teraflop equals a trillion floating-point operations per second; a petaflop is 1,000 trillion (one quadrillion) sustained floating-point operations per second.
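For scale, here is the arithmetic behind those definitions, restated using the figures quoted in this article:

1 teraflop = 10^12 (a trillion) floating-point operations per second
1 petaflop = 10^15 floating-point operations per second, or 1,000 teraflops
Sequoia, as planned: 20 petaflops = 20,000 teraflops
Dawn, the interim system: 500 teraflops = 0.5 petaflops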

It takes government funding to build systems of this scale, but that also means the US is paying for much of the problem-solving it takes to scale across more than a million cores. "This is what's so good about it," says Herb Schultz, manager of deep computing at IBM. "They (the national lab) end up proving that you can get codes to scale that high".

In effect, by solving those problems, the national lab will pave the way for broader adoption of massive systems that could improve weather research and forecasting, tornado tracking and a variety of other research efforts. Large systems such as Sequoia help researchers reduce uncertainty and improve precision in simulations that can, for instance, predict tornado paths. The more compute power available, the more fine-tuned and accurate the simulation.

The major problem in running a system of this scale is "the applications — porting the applications and scaling them up is a critical problem we are facing," says Seager.

There are two petaflop systems in the US: IBM's Roadrunner at Los Alamos National Laboratory, which broke the petaflop barrier last May, and Cray's XT Jaguar at Oak Ridge National Laboratory.

IBM plans to build Sequoia at its Rochester, Minnesota, plant.
