The world’s most powerful supercomputer, the Tianhe-2 – a Chinese machine, albeit built on American technology – has now held the top spot for 2.5 years running. The US supercomputer Titan, a Cray XK7 built three years ago, remains in second place today. Relative to June 2013, aggregate performance has not even doubled, whereas on the historical trendlines a doubling typically took just over a year. This is unprecedented, since Moore’s Law applies (applied?) to supercomputers just as much as it does to standard electronics.
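To put a rough number on the slowdown: under steady exponential growth, the implied doubling time from two snapshots is Δt × ln 2 / ln(P2/P1). A minimal Python sketch, using aggregate figures recalled from the June 2013 and November 2015 TOP500 lists (illustrative approximations, not exact data):

```python
import math

# Approximate aggregate TOP500 performance (illustrative, from memory):
# ~223 Pflops in June 2013 vs. ~420 Pflops in November 2015.
p1, p2 = 223.0, 420.0  # petaflops
dt = 2.4               # years between the two lists

# Implied doubling time under steady exponential growth:
doubling_time = dt * math.log(2) / math.log(p2 / p1)
print(f"Implied doubling time: {doubling_time:.1f} years")  # ~2.6 years
```

Against the historical norm of a bit over a year per doubling, that is more than a twofold slowdown.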
Supercomputers have obvious applications to the development of radical technological breakthroughs, from the extraordinarily complex protein folding simulations vital to medical advances, to the granddaddy of them all: computer superintelligence. The general “techno-optimist” consensus has long been that Moore’s Law will continue to hold, or even strengthen further – the Kurzweilian view being that the exponent itself was (slowly) exponentially increasing. This would give us an exaflop machine by 2018 and the capability to run full human brain neural simulations soon afterwards, in the early 2020s.
But on post-2012 trends, exponentially extrapolated, we will be lucky just to hit one exaflop across the aggregate of the world’s top 500 supercomputers by 2018. The prediction for the first exaflop supercomputer has now moved out to 2023. A “delay” of five years may not mean much in conventional life, but it is a huge deal for projections built on big exponents. For instance, assuming the trend isn’t reversed, the first supercomputer theoretically capable of full neural simulations moves out closer to 2030.
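To make that sensitivity concrete, here is a minimal sketch of the same extrapolation, taking Tianhe-2’s June 2013 debut at roughly 33.9 petaflops Rmax as the starting point; the two doubling times are illustrative stand-ins for the old and new trendlines, not fitted values:

```python
import math

def year_of_first_exaflop(start_year, start_pflops, doubling_time_years):
    """Project when a top machine reaches 1 exaflop (1000 Pflops),
    assuming steady exponential growth at the given doubling time."""
    doublings_needed = math.log2(1000.0 / start_pflops)
    return start_year + doublings_needed * doubling_time_years

# Tianhe-2 topped the June 2013 list at ~33.9 Pflops Rmax.
for dt in (1.1, 2.0):  # illustrative doubling times, in years
    eta = year_of_first_exaflop(2013, 33.9, dt)
    print(f"Doubling every {dt} yr -> first exaflop machine ~{eta:.0f}")
```

A roughly year-long doubling reproduces the old 2018 forecast; stretching it to two years lands on the revised 2023 date.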
In terms of developing superintelligence, raw computing power has always been viewed as the weakest constraint, and that remains a very reasonable view. However, the fact that even in this sphere there appear to be substantial unforeseen obstacles spells a lot of trouble for the traditional placement of superintelligence, and even the technological singularity, at around 2045 or 2050 (to say nothing of the 2020s, as per Vernor Vinge).
Supercomputers can also be viewed as an instrument of national power. Indeed, some of the most powerful supercomputers have been used to simulate nuclear weapons tests (in lieu of real-life ones). Other supercomputers are dedicated to modeling the global climate; doing that better than your competitors can enable you to make better investments, even predict uprisings and civil wars, etc. All very useful from a geopolitical perspective. And of course they are very useful for a range of purely scientific and technological applications.
From having zero or one supercomputers in the Top 500 during the 1990s and a couple dozen in the 2000s, China surged past a waning Japan in the early 2010s and now accounts for 109 of the world’s top supercomputers, second only to the USA with its 199. This just confirms (if any such confirmation is still needed) that the story of China as nothing more than a low-wage workshop is laughably wrong. An economy like that would not need 20%+ of the world’s top supercomputers.
(Table: TOP500 systems by country – columns: Countries, Count, System Share (%), Rmax (GFLOPS), Rpeak (GFLOPS), Cores.)
Otherwise the rankings are approximately as one might expect, with the Big 4 mid-sized developed Powers (Japan, Germany, UK, France) performing modestly well relative to the size of their populations, and the rest – including the non-China BRICS – being almost minnows in comparison.