Now in its 29th year, the 2016 International Conference for High Performance Computing, Networking, Storage and Analysis (aka Supercomputing 2016 or SC16) taught us five important things.
1. We’re entering an era of cognitive computing
So believes Katharine Frase, Vice President and Chief Technology Officer, IBM Public Sector. In her keynote speech at SC16, Frase explained how the High Performance Computing (HPC) industry is moving from programmable systems to cognitive systems. These are computers that can not only crunch data, but understand it, reason with it and learn from it.
“We have always built systems to expand human capacity,” said Frase, but she points out that cognitive systems won’t “take the place of our decision-making.” Instead, she hopes that machine learning will take us beyond using computers to find the right answers in big data, to using computers to ask the right questions.
2. Big data is far bigger than you realise
The world is increasingly digital and we need supercomputers to help us make sense of it. According to Frase, 90% of the world’s data was created in the last two years and we’re generating an extra 2.5 billion gigabytes of data every day.
3. There’s a new Top500 supercomputing list
SC16 saw the release of the November 2016 Top500 list of the world’s fastest supercomputers. Once again, China is top of the petaflops. The 10,649,600-core Sunway TaihuLight is rated at 93 petaflops (93,014 teraflops or 93 quadrillion calculations per second). The Intel Xeon E5- and Intel Xeon Phi-based Tianhe-2 holds onto second place with a Linpack performance of 34 petaflops.
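For readers juggling teraflops, petaflops and the exaflops that come up later, each prefix step is a factor of 1,000. A minimal sketch (the converter function and prefix table are purely illustrative, not from any benchmark toolkit):

```python
# Illustrative converter between FLOPS unit prefixes.
# Each prefix step (tera -> peta -> exa) is a factor of 1,000.
PREFIX_FLOPS = {
    "tera": 10**12,  # trillion calculations per second
    "peta": 10**15,  # quadrillion calculations per second
    "exa": 10**18,   # quintillion calculations per second
}

def convert_flops(value, from_prefix, to_prefix):
    """Convert a performance figure from one unit prefix to another."""
    return value * PREFIX_FLOPS[from_prefix] / PREFIX_FLOPS[to_prefix]

# TaihuLight's roughly 93 petaflops expressed in teraflops:
print(convert_flops(93, "peta", "tera"))  # 93000.0
```

This is why the article can state TaihuLight's rating as 93 petaflops or roughly 93,000 teraflops interchangeably.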
China and the US each have 171 supercomputers in the Top500 list. Germany is a distant third with 32 systems, followed by Japan with 27, France with 20, and the UK with 17.
4. There’s a new processor in town
There are two new entries in the latest supercomputing top 10 — the 14 petaflop Cori supercomputer at Berkeley (#5) and the 13.6 petaflop Oakforest-PACS in Japan (#6). Both are powered by the latest Intel Xeon Phi 7250, a 68-core processor that can deliver 3 teraflops of performance. Next-generation Xeon Phi chips are also destined for the forthcoming 100+ petaflop Aurora supercomputer.
5. Exascale computing is almost within reach
If you think 100+ petaflops is fast, plans are already underway to build exascale (1,000 petaflops or a billion billion calculations per second) computing systems. These will be able to tackle tasks 50 times faster than current machines. Not only will this exascale leap require next-generation processors, but also faster connections between those processors.
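As a rough sanity check on that "50 times faster" figure, here is the arithmetic, assuming the baseline is a roughly 20-petaflop machine (a Titan-class system — the baseline value is an assumption for illustration, not stated in the article):

```python
# Rough check of the "50 times faster" exascale claim.
# The 20-petaflop baseline is an assumption, roughly Titan-class.
exaflop_in_petaflops = 1_000   # 1 exaflop = 1,000 petaflops
baseline_petaflops = 20        # assumed "current machine"
speedup = exaflop_in_petaflops / baseline_petaflops
print(speedup)  # 50.0
```

Against the 93-petaflop TaihuLight the factor would be closer to 11, so the 50x figure only holds relative to the broader installed base, not the very top of the list.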
We need ever faster supercomputers to solve the world's biggest problems — modelling our climate, designing new drugs and developing the next generation of artificial intelligence systems. The Titan supercomputer at Oak Ridge (#3 on the Top500 list), for example, is used for nuclear fusion research, among other projects. Japan's K Computer (#7 on the Top500 list) is focused on disaster prevention and medical research.
SC16 shows that we’re well on the way to a cognitive, exascale future.