This supercomputer has achieved a computing speed of 2,750 trillion calculations per second in the TOP500 survey of supercomputers.
The Tianhe-1A uses 7,168 Nvidia Tesla M2050 GPUs and 14,336 CPUs to achieve this mind-blowing performance! It takes almost 4.04 megawatts to run this machine at full swing, which means it consumes a staggering amount of energy!... excuse me for that, but I just couldn't contain my excitement...
The US, however, ended up second with the Cray XT5 'Jaguar' system at the US Department of Energy's Oak Ridge Leadership Computing Facility in Tennessee, which clocks in at 1,750 trillion calculations per second.
NPR reports that five new supercomputers are being built that are supposed to be four times more powerful than China's new machine: three in the U.S. and two in Japan.
The real challenge in using a supercomputer is writing software that manages its thousands of individual processors so they work together efficiently. Supercomputers are used for hugely complicated problems like forecasting the weather or simulating our planet's future climate. Until the super machine is put into use and its effectiveness is measured, it will simply remain a source of national pride, that's all.
Now the problem here is sustaining that computing power. China's Tianhe-1A hits 4.7 petaflops, but only at peak performance.
The key word here is "peak performance": while the Linpack benchmark used to officially determine the speed of the world's fastest supercomputers measures their ability to do calculations in short bursts, in the real world of scientific computing, what often matters most is a machine's ability to sustain that performance.
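For context, the "short bursts" Linpack times are runs of a dense linear solve. The toy pure-Python sketch below mimics that workload with Gaussian elimination and partial pivoting; the flop-count formula (2/3)·n³ is the standard estimate for this factorization, and the problem size n = 100 is an illustrative assumption (real Linpack runs use matrices millions of rows wide).

```python
# Toy sketch of the Linpack workload: solve a dense system A x = b by
# Gaussian elimination with partial pivoting, then report a flop rate
# using the conventional (2/3)*n**3 operation count.
import random
import time

def solve(A, b):
    n = len(A)
    A = [row[:] for row in A]   # work on copies; leave inputs intact
    b = b[:]
    for k in range(n):
        # Partial pivoting: swap in the row with the largest pivot.
        p = max(range(k, n), key=lambda i: abs(A[i][k]))
        A[k], A[p] = A[p], A[k]
        b[k], b[p] = b[p], b[k]
        for i in range(k + 1, n):
            m = A[i][k] / A[k][k]
            for j in range(k, n):
                A[i][j] -= m * A[k][j]
            b[i] -= m * b[k]
    # Back-substitution on the now upper-triangular system.
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(A[i][j] * x[j] for j in range(i + 1, n))
        x[i] = (b[i] - s) / A[i][i]
    return x

n = 100  # tiny compared to a real Linpack run
A = [[random.random() for _ in range(n)] for _ in range(n)]
b = [random.random() for _ in range(n)]
t0 = time.perf_counter()
x = solve(A, b)
elapsed = time.perf_counter() - t0
print(f"{(2 / 3) * n**3 / elapsed / 1e6:.1f} Mflop/s (toy, single core)")
```

The benchmark's appeal and its limitation are the same thing: dense elimination reuses each matrix entry many times, so it flatters machines with huge raw arithmetic rates even when real applications can't feed them data that fast.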
In other words, the Tianhe-1A comes on strong, but American supercomputers can last all night - or sometimes many days, depending on the scale of the problem they're tackling.
"It's very difficult to achieve anywhere near peak performance on GPUs," says Thom Dunning, director of the National Center for Supercomputing Applications. GPUs are the NVIDIA-built graphics processing units that make up the bulk of the computing power in the Tianhe-1A supercomputer, which also includes traditional CPUs in its hybrid design.
The problem with GPUs, says Dunning, is that they are so "compute hungry" that they "tend to sit idle for a large percentage of the time." The bottleneck is the memory on board GPU processors: it's fast, but not fast enough.
"There's a significant mismatch between memory speed and GPU speed," he adds.
Even if China's supercomputing software engineers are able to create useful scientific software that gets close to the machine's peak performance by rarely accessing memory, it's not clear that the Linpack benchmark, which pegs the machine as the world's fastest, is a useful indicator of its performance in real-world applications.
"The Linpack benchmark is one of those interesting phenomena -- almost anyone who knows about it will deride its utility," says Dunning. "They understand its limitations but it has mindshare because it's the one number we've all bought into over the years."
The system's reliance on GPUs also means that the overwhelming majority of existing supercomputing software would have to be entirely rewritten to run on it. That's a programming challenge that has so far eluded engineers in the West - it's "more art than science" at this point, says Dunning. That doesn't mean it's impossible, or that the Chinese won't soon have a fleet of supercomputers in the TOP500 world ranking. Their real performance and utility, however, have yet to be determined.