The Unz Review - Mobile
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
Russian Reaction Blog



The heroes of Hikaru’s Go were off by 86 years.

As some of you might have heard, the world of go – or weiqi, as it is known in its homeland of China – is currently undergoing its Deep Blue moment, as one of the world’s strongest players, Lee Sedol, faces off against Google DeepMind’s AlphaGo. Deep Blue was the IBM supercomputer (built by a team whose project originated at Carnegie Mellon) that in 1997 beat the world’s top grandmaster Garry Kasparov in a six-game match. But the computer’s margin of victory, at 3.5 to 2.5, was modest, and the event was dogged by Kasparov’s allegations that the IBM team had underhandedly helped the computer. It would be an entire decade before the top computer chess programs decisively overtook the top human players. As of today, there is a 563 point difference between the Elo rating of Magnus Carlsen, the current highest rated human player in FIDE’s database, and the world’s most powerful chess program, the open source Stockfish 7. In practical terms, this means that Carlsen can expect to win fewer than one in a hundred games against Stockfish running on a contemporary 64-bit quad-core CPU.

In terms of game complexity, more orders of magnitude separate go from chess than separate chess from draughts, a game that has been weakly solved. The aim of go is to capture territory and enemy stones by encircling them while defending your own turf; both are tallied up at the end of the game, and the winner is the player with the most points. It is played on a 19×19 board, far larger than the 8×8 arrangement of chess, and you can place your pieces – or stones – on any empty intersection not occupied by or completely encircled by the enemy, whereas the range of possible moves in chess is strongly constrained. Chess is tactics, go is logistics; chess is combined arms, go is encirclements; chess draws strongly upon algorithmic and combinatorial thinking, whereas go is more about pattern matching and “intuition.” It is therefore not surprising that until recently the common wisdom was that it would be many decades before computers started beating the world’s top human players. The unimpressive performance of existing go programs, and the slowdown or end of Moore’s Law in the past few years, would only have given weight to that pessimistic assessment. (Or perhaps an optimistic one, if you’re with MIRI.) Lee Sedol himself thought the main question was whether he would beat AlphaGo 5-0 or 4-1.
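The “orders of magnitude” comparison can be made concrete with the state-space estimates commonly cited in the game-complexity literature (these figures are assumptions for illustration, not numbers from this post):

```python
import math

# Commonly cited state-space estimates; assumptions from the
# game-complexity literature, not figures from the post itself.
states = {
    "draughts (8x8 checkers)": 5e20,  # weakly solved by Chinook in 2007
    "chess": 1e47,
    "go (19x19)": 1e170,
}

gap_go_chess = math.log10(states["go (19x19)"]) - math.log10(states["chess"])
gap_chess_draughts = math.log10(states["chess"]) - math.log10(states["draughts (8x8 checkers)"])

print(f"go vs chess: ~{gap_go_chess:.0f} orders of magnitude")              # ~123
print(f"chess vs draughts: ~{gap_chess_draughts:.0f} orders of magnitude")  # ~26
```

On these estimates, roughly 123 orders of magnitude separate go from chess, against only about 26 separating chess from draughts.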

Which makes it all the more remarkable that Lee Sedol is not just behind but, having lost all three of his games so far, is getting positively rekt.

But apparently Lee’s confidence was grounded in reason rather than hubris. He had watched AlphaGo play against weaker players, games in which it made some apparent mistakes. But as a DeepMind research scientist noted, this was actually a feature, not a bug:

As Graepel explained, AlphaGo does not attempt to maximize its points or its margin of victory. It tries to maximize its probability of winning. So, Graepel said, if AlphaGo must choose between a scenario where it will win by 20 points with 80 percent probability and another where it will win by 1 and a half points with 99 percent probability, it will choose the latter. Thus, late in Game One, the system made some moves that Redmond considered mistakes—“slow” in his terminology. These moves seemed to give up points, but from where Graepel was sitting, AlphaGo was merely trying to maximize its chances.

In other words, while the projected points on the board – territory held plus stones captured – might for a long time appear to be roughly equal, at the same time the probability of ultimate victory would inexorably shift against Lee Sedol. And capped as our human IQs are, not only Lee but all the rest of us might be simply incapable of discerning the deeper strategies in play: “And so we boldly go – into the whirling knives” (to borrow from Nick Bostrom’s book on the risks of computer superintelligence).
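The decision rule Graepel describes can be sketched in a few lines. This is a toy illustration, not AlphaGo’s actual code: candidate moves are ranked purely by estimated win probability, with the expected margin of victory ignored entirely.

```python
# Toy illustration of the "maximize probability of winning" rule.
# The move names, probabilities, and margins are invented for the example.
candidate_moves = [
    {"name": "aggressive", "win_prob": 0.80, "margin": 20.0},  # win big, less surely
    {"name": "solid",      "win_prob": 0.99, "margin": 1.5},   # win small, near-surely
]

# The margin never enters the comparison -- only win probability does.
best = max(candidate_moves, key=lambda m: m["win_prob"])
print(best["name"])  # "solid": 99% to win by 1.5 beats 80% to win by 20
```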

Those are in fact the exact terms in which AI scientist/existential risks researcher Eliezer Yudkowsky analyzed this game in a lengthy Facebook post:

At this point it seems likely that Sedol is actually far outclassed by a superhuman player. The suspicion is that since AlphaGo plays purely for *probability of long-term victory* rather than playing for points, the fight against Sedol generates boards that can falsely appear to a human to be balanced even as Sedol’s probability of victory diminishes. The 8p and 9p pros who analyzed games 1 and 2 and thought the flow of a seemingly Sedol-favoring game ‘eventually’ shifted to AlphaGo later, may simply have failed to read the board’s true state. The reality may be a slow, steady diminishment of Sedol’s win probability as the game goes on and Sedol makes subtly imperfect moves that *humans* think result in even-looking boards.

For all we know from what we’ve seen, AlphaGo could win even if Sedol were allowed a one-stone handicap. But AlphaGo’s strength isn’t visible to us – because human pros don’t understand the meaning of AlphaGo’s moves; and because AlphaGo doesn’t care how many points it wins by, it just wants to be utterly certain of winning by at least 0.5 points.

In the third game, which finished just a few hours ago – by the way, you can watch the remaining two games live at the DeepMind YouTube channel, though make sure to learn the rules beforehand or it will be very boring – Lee Sedol, by then far behind on points, made a desperate ploy to salvage the game (or, more likely, just used the opportunity to test AlphaGo’s capabilities) by initiating a ko fight. A ko is a special situation in go in which a local altercation suddenly becomes the fulcrum around which the outcome of the entire game might be decided. Making the winning moves requires perfect, precise play, as opposed to AlphaGo’s key method of playing out huge numbers of semi-random games and favoring the moves that most often lead to victory.

But AlphaGo handled the ko situation with aplomb, and Lee had to resign.

The Korean Lee Sedol is the fourth highest rated go player on the planet. But even as of March 9, were it a person, AlphaGo would already have displaced him. The top player in the world is the Chinese Ke Jie, who is currently rated 100 Elo points higher than Lee. According to my calculations, this implies that Lee should win slightly more than a third of his games against Ke Jie. His actual record is 2/8, or 25%. Not only is his current tally against AlphaGo 0/3, but he has been beaten by a considerable number of points by an entity that is perfectly content to minimize its lead in order to maximize its winning probability.
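The “slightly more than a third” figure follows from the standard Elo expected-score formula (a quick sketch; the 100-point gap is the one cited above):

```python
def elo_expected(diff: float) -> float:
    """Expected score of a player rated `diff` points above the opponent."""
    return 1.0 / (1.0 + 10.0 ** (-diff / 400.0))

# Ke Jie is rated ~100 Elo above Lee Sedol, so Lee's expected score is:
lee_expected = 1.0 - elo_expected(100)
print(f"{lee_expected:.2f}")  # ~0.36, i.e. slightly more than a third
```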

Finally, a live prediction market on whether Lee Sedol would defeat AlphaGo in any of the three remaining games (that is, as it stood before the third game) varied between 20%-25%, implying that the probability of him winning any one game against the DeepMind monster was less than 10%. (If anything, those probabilities would be even lower now that AlphaGo has demonstrated that ko isn’t its Achilles heel, but let us set that aside.)

According to my calculations, IF this prediction market is accurate, it implies that AlphaGo has a ~400-450 Elo point superiority over Lee Sedol, based on its performance up to and including the first two games against him.
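As a sanity check on that figure, one can invert the Elo formula: assuming independent games, a 20%-25% chance of at least one win in three games implies a per-game win probability of roughly 7%-9%, which translates into roughly a 400-445 point gap.

```python
import math

def per_game_prob(p_at_least_one_in_3: float) -> float:
    # Assuming independent games with per-game win probability p:
    # P(at least one win in 3) = 1 - (1 - p)^3, solved for p.
    return 1.0 - (1.0 - p_at_least_one_in_3) ** (1.0 / 3.0)

def elo_gap(p_win: float) -> float:
    # Invert the Elo expected-score formula to get the rating gap.
    return 400.0 * math.log10((1.0 - p_win) / p_win)

for market in (0.20, 0.25):
    p = per_game_prob(market)
    print(f"market {market:.2f} -> per-game {p:.3f} -> Elo gap ~{elo_gap(p):.0f}")
# market 0.20 -> per-game 0.072 -> Elo gap ~445
# market 0.25 -> per-game 0.091 -> Elo gap ~399
```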

It would also mean it is far ahead of Ke Jie, who is the highest rated human player ever and is currently virtually at his peak. Whereas Lee could be expected to win only 7%-9% of his games against AlphaGo, for Ke Jie this figure would be only modestly higher, at 12%-15%. But in principle I see no reason why AlphaGo’s capabilities couldn’t be even higher than that. It’s a long tail – and we can’t see all that far ahead!

But really the most astounding element of all this is that what took chess computing a decade to accomplish increasingly appears to have occurred in the space of a few days with AlphaGo – despite the slowdown of Moore’s Law in recent years, and despite go posing far more challenging problems than chess for traditional AI approaches.

For all intents and purposes, AI has entered the superhuman realm in a problem space where merely human intelligence had hitherto reigned supreme. And even though we are as far away as ever from discovering the “Hand of God” – the metaphorical perfect game, which would take longer than the lifetime of the universe to compute even if all of the universe were to become computronium – we might well be starting the construction of a Sliver of Him.

Update -

Lee won the fourth game!

A win rate of 25% means that AlphaGo’s likely Elo superiority over Lee’s current 3519 points has just plummeted from 400-450 (based on the prediction market) to 191 – i.e., a rating of ~3710. Still higher than top player Ke Jie at 3621.
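The updated figures follow from treating the match record as a performance rating, via the same inverted expected-score formula (a sketch; 3519 is Lee’s rating as cited above):

```python
import math

def perf_gap(wins: int, games: int) -> float:
    """Elo gap implied by a match score, via the inverted expected-score formula."""
    score = wins / games
    return 400.0 * math.log10((1.0 - score) / score)

lee_rating = 3519  # Lee Sedol's rating as cited in the post

print(round(perf_gap(1, 4)), lee_rating + round(perf_gap(1, 4)))  # 191 3710
print(round(perf_gap(1, 5)), lee_rating + round(perf_gap(1, 5)))  # 241 3760
```

A 1/4 record implies a 191-point gap (AlphaGo ~3710); if Lee loses the fifth game, the 1/5 record implies a 241-point gap (~3760).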

If Lee loses the next game, that Elo difference goes up to 241; if he wins, it falls to 120. Regardless, we can now say with considerable confidence that AlphaGo is at peak human level, but decidedly not at a superhuman one.

Update 2 -

Final remarks:

I was writing this article instead of watching the final Lee-AlphaGo game, but the final score was 4:1. The reverse of what Lee had originally predicted! ;)

Anyhow, the 4:1 score (without looking into the details) implies AlphaGo has a *probabilistic* ~240 point higher Elo rating than Lee Sedol, i.e. ~3760.

That means it’s likely ~140 points higher than the first-ranked human, Ke Jie, and should beat him about 70% of the time.

I had a look at go bots’ historical performance the other day. They seem to move up by about 1 S.D. every two years. Treating AlphaGo as the new baseline, humans should be *completely* outclassed by computers at go by around 2020.

• Category: Science • Tags: Game, Supercomputers 

The latest data from Top 500, a website that tracks the world’s most powerful supercomputers, has pretty much confirmed this with the release of their November 2015 list.

The world’s most powerful supercomputer, the Tianhe-2 – a Chinese machine, though built with American technology – has now held first place for 2.5 years running. The US supercomputer Titan, a Cray XK7 built three years ago, retains second place today. Relative to June 2013, aggregate performance has not even doubled, whereas according to the historical trendlines, doublings have typically taken just a bit over a single year. This is unprecedented, since Moore’s Law applies (applied?) to supercomputers just as much as it did to standard electronics.


Apart from serving as a convenient bellwether for general trends, supercomputers are worth following by futurists for two reasons.

Technological Projections

The first is their obvious application to the development of radical technological breakthroughs, from the extraordinarily complex protein folding simulations vital to uncovering medical breakthroughs to the granddaddy of them all, computer superintelligence. The general “techno-optimist” consensus has long been that Moore’s Law would continue to hold, or even strengthen further – the Kurzweilian view being that the exponent itself was also (slowly) increasing. This would bring us an exaflop machine by 2018 and the capability to do full human brain neural simulations soon afterwards, by the early 2020s.


But on post-2012 trends, exponentially extrapolated, we will actually be lucky just to hit one exaflop by 2018 in terms of the aggregate performance of the world’s top 500 supercomputers. Predictions of the first single exaflop supercomputer have now moved out to 2023. Though perhaps not much in conventional life, a “delay” of five years is a huge deal so far as projections built on big exponents are concerned. For instance, assuming the trend isn’t reversed, the first supercomputer theoretically capable of full neural simulations moves out closer to 2030.
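The difference the doubling time makes can be shown with a back-of-the-envelope extrapolation. Both the ~0.42 exaflop aggregate figure and the two doubling times below are illustrative assumptions, not numbers taken from the Top 500 list itself:

```python
import math

# Back-of-the-envelope sketch; all input figures are assumptions.
aggregate_eflops = 0.42  # assumed aggregate Top 500 performance, late 2015
target_eflops = 1.0      # one exaflop

def years_to_exaflop(doubling_years: float) -> float:
    # Years needed = (number of doublings to reach target) * (time per doubling).
    doublings_needed = math.log2(target_eflops / aggregate_eflops)
    return doublings_needed * doubling_years

print(f"historical ~1.1-year doubling: +{years_to_exaflop(1.1):.1f} years")  # +1.4
print(f"post-2012  ~2.5-year doubling: +{years_to_exaflop(2.5):.1f} years")  # +3.1
```

On these assumptions, the historical pace would deliver an aggregate exaflop within about a year and a half, while the slower post-2012 pace pushes it out past 2018.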

In terms of developing superintelligence, raw computing power has always been viewed as the weakest limit, and that remains a very reasonable view. However, the fact that even in this sphere there appear to be substantial unforeseen obstacles means a lot of trouble for the traditional placement of superintelligence and even the technological singularity at around 2045 or 2050 (not to even mention the 2020s as per Vernor Vinge).

National Power

Supercomputers can also be viewed as an instrument of national power. Indeed, some of the most powerful supercomputers are used to simulate nuclear tests (in lieu of real-life testing). Other supercomputers are dedicated to modeling the global climate: doing this better than your competitors can enable you to make better investments, even predict uprisings and civil wars, etc. All very useful from a geopolitical perspective. And of course supercomputers are very useful for a range of purely scientific and technological applications.


As in so many spheres in the international arena, the overwhelming story here is of the Rise of China.

From having 0-1 supercomputers in the Top 500 during the 1990s and a couple of dozen in the 2000s, China surged past a waning Japan in the early 2010s and now accounts for 109 of the world’s top supercomputers, second only to the USA with its 199. This just confirms (if any such confirmation were still needed) that the story of China as nothing more than a low-wage workshop is laughably wrong. An economy like that would not need 20%+ of the world’s top supercomputers.

Country          Count  Share (%)  Rmax (GFlop/s)  Rpeak (GFlop/s)       Cores
United States      199       39.8     172,582,178     246,058,722  10,733,270
China              109       21.8      88,711,111     189,895,013   9,046,772
Japan               37        7.4      38,438,914      49,400,668   3,487,404
Germany             32        6.4      29,663,941      37,844,201   1,476,524
United Kingdom      18        3.6      11,601,324      14,230,096     724,184
France              18        3.6      12,252,180      14,699,173     766,540
India               11        2.2       4,933,698       6,662,387     236,692
South Korea         10        2.0       7,186,952       9,689,205     283,568
Russia               7        1.4       4,736,512       6,951,848     208,844
Brazil               6        1.2       2,012,268       2,722,150     119,280

Otherwise the rankings are approximately as one might expect, with the four mid-sized developed Powers (Japan, Germany, the UK, France) performing modestly well relative to the size of their populations, and the rest – including the non-China BRICS – being almost minnows in comparison.

About Anatoly Karlin

I am a blogger, thinker, and businessman in the SF Bay Area. I’m originally from Russia, spent many years in Britain, and studied at U.C. Berkeley.

One of my tenets is that ideologies tend to suck. As such, I hesitate about attaching labels to myself. That said, if it’s really necessary, I suppose “liberal-conservative neoreactionary” would be close enough.

Though I consider myself part of the Orthodox Church, my philosophy and spiritual views are more influenced by digital physics, Gnosticism, and Russian cosmism than anything specifically Judeo-Christian.