When it comes to computing power, you can never have enough. In the last sixty years, processing power has increased more than a trillionfold.
The first machine to bear the label ‘supercomputer’ was the CDC 6600. Designed by Seymour Cray and released in 1964, it boasted a groundbreaking performance of three megaflops – three million floating point operations per second. That’s roughly the same amount of processing power found in the Atari 2600 games console released just 13 years later (1977). The CDC 6600 cost around $8 million; the Atari 2600 cost a mere $199.
The benchmark for processing power jumped to gigaflops in the 1980s and accelerated to teraflops in the 1990s. In 2008 the first petascale systems appeared, capable of processing in excess of one petaflop, or a quadrillion (10^15) floating point operations per second. And in June 2016, China unveiled the Sunway TaihuLight, a 41,000-processor (10.65 million-core) system capable of 93 petaflops – currently the fastest computer in the world.
From petaflops to exaflops
But now the race is on to reach the next level of supercomputing. Japan, France, China and the US are all pushing towards exascale systems, with a potential performance of one quintillion operations per second – a billion billion (10^18) FLOPS, if you prefer.
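To put those prefixes in perspective, here is a quick sketch (using the milestone figures quoted in this post) of how each generation compares to the CDC 6600:

```python
# Orders of magnitude in supercomputing, using the figures quoted above.
# All values are in FLOPS (floating point operations per second).
milestones = {
    "CDC 6600 (1964)":          3e6,       # 3 megaflops
    "Tianhe-2 (2013)":          33.86e15,  # 33.86 petaflops
    "Sunway TaihuLight (2016)": 93e15,     # 93 petaflops
    "Exascale target":          1e18,      # 1 exaflop
}

baseline = milestones["CDC 6600 (1964)"]
for name, flops in milestones.items():
    print(f"{name}: {flops:.2e} FLOPS ({flops / baseline:.1e}x the CDC 6600)")
```

An exascale machine would be over 300 billion times faster than Seymour Cray’s original supercomputer.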
Of course, there are multiple issues to overcome en route to this computing milestone: efficiently connecting the multitude of processors and memory chips to maintain adequate data rates; keeping it all cool enough to operate reliably; and managing power requirements so it’s financially viable to run.
Today’s fastest systems draw around 15 megawatts, which at $150 per megawatt-hour equates to running costs of around $20 million per year. And, on top of all that, you need an OS and software that can actually utilise a billion billion operations per second.
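That $20 million figure is easy to verify with back-of-the-envelope arithmetic, assuming the machine runs around the clock:

```python
# Rough annual electricity cost for a 15 MW system at $150 per
# megawatt-hour, assuming continuous 24/7 operation.
power_mw = 15            # system power draw in megawatts
price_per_mwh = 150      # electricity price in dollars per megawatt-hour
hours_per_year = 24 * 365

annual_cost = power_mw * price_per_mwh * hours_per_year
print(f"${annual_cost:,} per year")  # $19,710,000 per year
```

Roughly $19.7 million per year – and that is the power bill alone, before hardware, staff and cooling infrastructure.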
As key players in the race to exascale computing, the French and US will be using the latest Intel Xeon and Xeon Phi multi-core processors. The previous fastest supercomputer, China’s Tianhe-2, used Intel Xeon Phi coprocessors and Ivy Bridge-EP Xeon processors to achieve a performance rating of 33.86 petaflops. It currently sits at number two on the Top500, the definitive list of the world’s speediest supercomputers.
For its part, Intel is busy creating the glue that holds it all together in the shape of Omni-Path, a high-performance communication architecture that can scale to tens of thousands of nodes with low latency, low power consumption and high throughput. Furthermore, Intel’s work with Micron resulted in the development of 3D XPoint non-volatile memory (now available to consumers as ‘Optane’), claimed to be up to 1,000 times faster than NAND flash – another important piece of the exascale jigsaw puzzle.
Exascale computing and beyond
Why do we need this kind of power? Supercomputing systems have already enabled scientists to recreate conditions at the birth of the universe and assemble genomes from millions of chunks of DNA. But the human race faces many challenges: population growth, climate change, evolving viruses, pollution… Exascale computing’s ability to run massively complex simulations will aid research in all of these areas, helping us to predict weather patterns and earthquakes more accurately, design more efficient engines and turbines, develop new drugs and materials, accelerate research into cures for cancer, and manage urban infrastructure.
Exascale computing isn’t just useful, it’s potentially vital for our future. And who knows what discoveries this computing power will unlock that might benefit your everyday life.
And just in case you were wondering, the next benchmark to hit is the zettaflop computer, a machine that will operate at a sextillion (10^21) floating-point operations per second. But that’s for another blog post, probably some time around 2050…