The Unz Review - Mobile
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
 Russian Reaction Blog

I want to gather most of my arguments for skepticism (or, optimism) about a superintelligence apocalypse in one place.

(1) I appreciate that the mindspace of unexplored superintelligences is both vast and something we have had absolutely zero experience with or access to. This argument is also the most speculative one.

That said, here are the big reasons why I don’t expect superintelligences to tend towards “psychotic” mindstates:

(a) They probably won’t have the human evolutionary suite that would incline them to such actions – status maximization, mate seeking, survival instinct, etc;

(b) They will (by definition) be very intelligent, and higher intelligence tends to be associated with greater cooperation and tit-for-tat behavior.

Yes, there are too many fail points to count above, so the core of my skepticism concerns the very likelihood of a “hard” takeoff scenario (and consequently, the capacity of an emergent superintelligence to become a singleton):

(2) The first observation is that problems tend to become harder as you climb up the technological ladder, and there is no good reason to expect that intelligence augmentation is going to be a singular exception. Even an incipient superintelligence is going to continue having to rely on elite human intelligence, perhaps supercharged by genetic IQ augmentation, to keep going forwards for some time. Consequently, I think an oligopoly of incipient superintelligences developed in parallel by the big players is likelier than a monopoly, i.e. a potential singleton.

(I do not think a scenario of many superintelligences is realistic, at least in the early stages of intelligence takeoff, since only a few large organizations (e.g. Google, the PLA) will be able to bear the massive capital and R&D expenditures of developing one).

(3) Many agents are just better at solving very complex problems than a single one. (This has been rigorously shown to be the case for resource distribution with respect to free markets vs. central planning). Therefore, even a superintelligence that has exhausted everything that human intelligence could offer would have an incentive to “branch off.”

But those new agents will develop their own separate interests, values, etc. – they would have to in order to maximize their own problem-solving potential (rigid ideologues are not effective in a complex and dynamic environment). You would then get a true multiplicity of powerful superintelligent actors, in addition to the implicit balance of power created by the initial superintelligence oligopoly, and even stronger incentives to institute new legal frameworks to avoid wars of all against all.

A world of many superintelligences jockeying for influence, angling for advantage, and trading for favors would seem to be better for humans than a face-off against a single God-like superintelligence.

I do of course realize I could be existentially-catastrophically wrong about this.

And I am a big supporter of MIRI and other efforts to study the value alignment problem, though I am skeptical about its chances of success.

DeepMind’s Shane Legg proved in his 2008 dissertation (pp. 106–108) that simple but powerful AI algorithms do not exist, and that there is an upper bound on “how powerful an algorithm can be before it can no longer be proven to be a powerful algorithm” (the region of the graph where any superintelligence will probably lie). That is, the developers of a future superintelligence will not be able to predict its behavior without actually running it.

This is why I don’t really share Nick Bostrom’s fears about a “risk-race to the bottom” that neglects AI safety considerations in the rush to the first superintelligence. I am skeptical that the problem is at all solvable.

Actually, the collaborative alternative he advocates for instead – by institutionalizing a monopoly on superintelligence development – may have the perverse result of increasing existential risk due to a lack of competitor superintelligences that could keep their “fellows” in check.

• Category: Science • Tags: Existential Risks, Futurism, Superintelligence 


Grigoriev, Andrey & Lynn 2009
Studies of Socioeconomic and Ethnic Differences in Intelligence in the Former Soviet Union in the Early Twentieth Century


This paper reviews the studies of socioeconomic and ethnic and racial differences in intelligence carried out in Russia/USSR during the late 1920s and early 1930s. In these studies the IQs of social classes and of ethnic minorities were tested. These included Tatars (a Caucasoid people), Chuvash and Altai (mixed Caucasoid-Mongoloid peoples), Evenk (a mixed Caucasoid-Arctic people), and Uzbeks (a Central-South Asian people). The results of these studies showed socioeconomic differences of 12 IQ points between the children of white collar and blue collar workers, and that with the exception of the Tartars the ethnic minorities obtained lower IQs than European Russians.

This is essentially a short history of psychometrics in the USSR/Russia.

(1) The first measurement of Russian IQ was performed in 1909 by A.M. Schubert, who used the French Binet test with n=229 children: “She concluded that the Binet test appeared to be too difficult for Russian children and the scale should be moved on 1 to 2 ages to be appropriate for them.” Since Mental age ÷ Physical age × 100 = IQ, this implies their average IQ was perhaps one S.D. lower than that of the French, though later researchers pointed out those children were drawn from lower socio-economic strata.
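The ratio-IQ arithmetic behind that inference can be made concrete. A minimal sketch (the specific ages below are illustrative assumptions, not figures from Schubert's study):

```python
def ratio_iq(mental_age, chronological_age):
    """Classical Binet ratio IQ: mental age / chronological age * 100."""
    return mental_age / chronological_age * 100

# If the scale has to be "moved" by 1.5 ages, a 10-year-old performing
# at the level of a typical 8.5-year-old has a ratio IQ of about 85,
# i.e. roughly one standard deviation below the reference population:
print(round(ratio_iq(8.5, 10)))  # 85
```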

In 1930, now in the USSR, another study found the following:

They tested 414 children aged between 8½ and 11½ with the American Stanford–Binet (administered in Russian translation). The sample consisted of 200 children of peasants, 141 children of blue collar workers, and 73 children of white-collar workers. All children were from Moscow or the Moscow region. The results were that the children of peasants obtained a mean IQ of 87 (standard deviation=10), the children of blue-collar workers a mean IQ of 91 (SD=8.6) and the children of white-collar workers a mean IQ of 98 (SD=8.4). The mean IQ (unweighted) for the three groups was 92… Thus, the total weighted mean for Russian children in this study was 90.3 (these IQs are in relation to American Stanford–Binet norms).

This brings to mind a 1920s study quoted by Anne Anastasi in her book Differential Psychology (p. 524), in which Russian immigrant children in the US obtained a mean IQ of 90.

This 10 point difference was presumably there because Russia was a more economically backwards country, with a more repressed average IQ due to gaps in schooling, malnutrition, parasitic load, etc.

(2) As in the West, consistent differences were found in the IQs of people from different socio-economic strata.

Another study of the relation of IQ to social class was carried out by M. Syrkin (М. Сыркин) (Сыркин, 1929), who compared the intelligence of fourth grade children (N=338, age approximately 10 years) belonging to six socio-economic groups. The lowest group was described as “blue collar workers and at least one of parents illiterate” and the highest group was described as “white-collar workers and at least one parent educated in an institute of higher education”. Intelligence was assessed with five verbal tests measuring comprehension and verbal reasoning. There was a difference of 1.42d (equivalent to 21.3 IQ points) between the lowest and highest socioeconomic groups.
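The conversion in that quote from a standardized mean difference (Cohen's d) to IQ points simply multiplies d by the conventional IQ standard deviation of 15:

```python
def d_to_iq_points(d, sd=15):
    """Convert a standardized mean difference (Cohen's d) to IQ points."""
    return d * sd

# The 1.42d gap between the lowest and highest groups:
print(round(d_to_iq_points(1.42), 1))  # 21.3
```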

The USSR really did expel, kill off, or otherwise limit the reproductive fitness of its best and brightest.

In 1928, E.I. Zverev (Е.И. Зверев) (Зверев, 1931) tested the IQ of 114 children just admitted to school and aged about 7½–8 years, in and around the city of Kursk, about 500 km south of Moscow. The children were tested with the Binet–Bert test (a Russian adaptation of the Binet). The mean IQ of these children was 80.8. This is much lower than the IQ of children obtained by Gurjanov, Smirnov, Sokolov, & Shevarev (Гурьянов, Смирнов, Соколов, & Шеварев, 1930) for Moscow and the Moscow region. Probably this difference was due to methodological and sample differences, but there is a possibility that the regional factor was also involved.

The latter hypothesis is likely the correct one.

In the 2009 PISA test, there was a 12 IQ point difference between Kursk and Moscow, the latter being an incredibly concentrated cognitive cluster.

(3) Now we go on to the most “controversial” part – ethnic differences in IQ.

Central Russia

There were also some studies of the IQs of non-Slavonic but predominantly Caucasoid peoples. I. Bektchentay (И. Бикчентай) and Z. Carimowa (З. Каримова) (Бикчентай & Каримова, 1930) tested the IQs of 380 Tartar children aged 8–18 in five Tartar schools in Moscow with the Boltunow–Binet test (a Russian adaptation of the Binet). The Tartars are indigenous to the Caucasus in the far south of Russia and the former Soviet Union, but a number of them live in central Russian towns and cities. The mean IQ of the Tartar children in this study was approximately the same as that of Russian children. The correlation between the Boltunow–Binet test and school achievements (assessed by teachers’ estimates) in their study was 0.84.

Yes, this is a pretty major distinction.

The Volga Tatars – the Muslim and Christianized Tatars of central Russia – have an average IQ of around 100 (about equal to modern Russia/Europe). Population genetics studies have found them to be basically acculturated Slavs.

The first of these was reported by F.P. Petrov (Е.П. Петров) (Петров, 1928) who tested the IQs of 1398 Chuvash children aged 3–13 in 1926–1927 with the French Binet–Simon test… The figures in Table 2 show a median IQ of 87 for boys and 84 for girls, and means (unweighted) of 89 for boys and 86 for girls. These are in relation to 100 for French norms, but no normative data are reported for Russian children. The IQs of the Chuvash children show a decline with age, with the lowest IQs among the 12 and 13 year olds.

Chuvashia is currently about average for the Russian regions.


There were also tests carried out on indigenous tundra peoples, such as the Evenks (Bulanov 1930):

The results are presented as typical for Evenk children, but because of the small samples, their IQs may not be regarded as reliable. The results are as follows. For the Binet test the mean IQ was 70.16 (for 5 children, and in relation to French norms). The results obtained with the Rossolimo test showed lower average IQs of the Evenk (Tungus) compared with a Moscow sample on some abilities, namely, memory for pictures and words, ability to comprehend combined pictures, ability to comprehend visual incongruities, and, according to Bulanow’s interpretation, ability to retain a high level of attention. As regards memory for pictures, the results contradicted the sometimes described capacity of Evenk (Tungus) to remember exactly long routes on wild territory (Encyclopedic Dictionary by Brockhaus & Efron (Энциклопедический словарь Ф.А. Брокгауза и И.А. Ефрона), 1902, vol. 67, p. 66)…

Bulanow also reported some observations on Evenk (Tungus) children and adults concerning their great difficulty in understanding the concepts of measurement and number. He reported that when Evenk children were questioned about devices for measurement, they did not have the concept of an absolute unit of measurement. They thought that the unit changed with the material measured. Bulanow reported further that when he asked Evenk adults how many children they had “ It was difficult, almost impossible, to get from parents precise information as to how many of their children were alive, how many of their children had died, what was the age of their children, and so on.” (p. 198).

… and on the Altai (Zaporochets 1930):

The results for the Binet test were as follows: mean IQ for the total group was 66.9 (SD=8.5), mean IQ for children aged 8–12 was 69.15, and the mean IQ for children aged 13–16 years was 64.8. As noted by Zaporojets, this test was tedious for the Altai children. Some tasks were especially difficult for them. These were tasks involving calculation, logical operations, and the fluency task to name as many words as possible during 3 min. As for the Rossolimo test, the most difficult tests for Altai children were those requiring the ability to retain a high level of attention and to comprehend visual incongruities. Their mean IQ for the Pintner–Peterson test was 75.

Zaporojets noted that the Altai children did not have a clear understanding of units of measurement. He observed that when they were questioned about the length of a meter, the Altai would often ask: “Which meter?” They thought that the meter in one shop could be longer than in another. An adult Altai said about distance: “It is 100 big versts (approximately 100 kilometers)” (he apparently thought that the number of small versts must be more).

Zaporojets’ paper contains some interesting observations on adult Altai. Although adult Altai performed calculations poorly at the time of study, they showed a remarkable ability for visual estimation of large quantities. A herdsman, who could count only to 20–30, noticed very well the absence of one horse, cow or sheep in a herd of many hundreds. He looked at a huge herd and noted that a particular cow was absent. Another example of the great visualization ability of the Altai was that they could remember and showed the way through wild territory, where they had been only once many years previously.

Common theme: No numeracy (they’d have a very bad Whipple’s index), very premodern and non-abstract ways of thinking, but quite well suited for their environment.

In PISA 2009, Yakutia had the lowest score of any tested Russian region, including Dagestan (though Chechnya and Ingushetia were not included). Ethnic Yakuts, who probably have similar IQs to the Altai and Evenks, constitute 50% of its population, though probably more like 2/3 amongst the children taking PISA due to their higher fertility rates. This might imply that the average Yakut IQ is in the low-to-mid 80s.
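The inference in that last sentence is just a decomposition of a population mean into its subgroup means. A back-of-envelope sketch (every input number below is an illustrative assumption, not a measured value):

```python
def subgroup_mean(overall_mean, other_mean, subgroup_share):
    """Solve overall = share*subgroup + (1-share)*other for the subgroup mean."""
    return (overall_mean - (1 - subgroup_share) * other_mean) / subgroup_share

# Assume the region's test-taking cohort averages ~87, the non-Yakut
# (mostly ethnic Russian) pupils ~95, and Yakuts make up ~2/3 of the
# cohort; the implied Yakut mean then lands in the low 80s:
print(round(subgroup_mean(87, 95, 2 / 3), 1))  # 83.0
```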

Central Asia

The first test was carried out in 1926 by A. Schtelerman. He did not give IQs, but reported that the scores of the Uzbek children were lower than those of children in Moscow.

A series of studies by V.K. Soloviev on Russian and Uzbek army cadets and professionals found that “the test scores and the educational level of the Uzbeks were lower than those of the Europeans.”

The third study of the intelligence of the Uzbeks was carried out in 1931 by A.R. Luria (А.Р. Лурия), at that time at the Institute of Psychology in Moscow. Luria did not use intelligence tests but gave a descriptive analysis of the Uzbeks’ cognitive abilities. He distinguished two modes of thought designated graphic recall (memories of how objects in the individual’s personal experience are related) and categorical relationships (categorisation by abstract concepts). He found that the thought processes of illiterate Uzbek peasants were confined to graphic recall and that they were not able to form abstract concepts. For example, they were shown a hammer, an axe, a log and a saw, and asked which of these did not belong. The typical Uzbek answer was that they all belonged together because they are all needed to make firewood. People who are able to think in terms of categorical relationships identify the log as the answer because the other three are tools (an abstract concept). Illiterate Uzbek peasants were unable to form concepts of this kind. They were also unable to solve syllogisms. For instance, given the syllogism “There are no camels in Germany; the city of B is in Germany; are there camels there?” Luria gave as a typical Uzbek answer “I don’t know, I have never seen German cities. If B is a large city, there should be camels there.” Similarly, Luria asked “In the far north, where there is snow, all bears are white; Novia Zemlya is in the far north; what color are the bears in Novia Zemlya?”. A typical Uzbek answer was “I’ve never been to the far north and never seen bears” (Luria, 1979, pp. 77–8). Thus, Luria concluded that these peoples were not capable of abstract thought: “the processes of abstraction and generalization are not invariant at all stages of socioeconomic and cultural development. Rather, such processes are products of the cultural environment” (Luria, 1979, p. 74).
Luria proposed that the ability to think in terms of categorical relationships is acquired through education. He did not suggest that the Uzbeks have any genetic cognitive deficiency.

I wrote about Luria back in the late 2000s, when I was still agnostic about genetic racial differences in IQ.

Today those factors no longer really hold, but Central Asians do very poorly on international standardized tests.

Kyrgyzstan came at the very bottom of PISA 2009, with a PISA-equivalent IQ of around 75.

Table below is from David Becker’s database of national IQs:

Nation/Ethnicity   Age       N    Test  IQ     Study
Kazakhstan         8 to 16   617  SPM+  87.30  Grigoriev & Lynn (2014)
Kyrgyz             –         –    –     85.60  Lynn & Cheng (2014)
Tajikistan         13 to 15  674  SPM+  88.00  Khosimov & Lynn (2017)
Uzbekistan         10 to 15  51   SPM+  86.00  Grigoriev & Lynn (2014)
Uzbekistan         11 to 13  614  SPM+  85.00  Salahodjaev et al. (2017)

Still, Luria has some of the best arguments against that position, so it’s a bit surprising that the blank slatists don’t cite him more.

(4) Or maybe not, because it still didn’t save him from the SJWs’ ideological predecessors, Sovok Justice Warriors:

These early studies carried out in the years 1926–1931 found that there were substantial socioeconomic and ethnic/racial differences in intelligence in the Soviet Union. These conclusions were not consistent with Marxist orthodoxy, which held that these differences would disappear under communism. Accordingly, these studies, particularly that of Luria, attracted a great deal of criticism in the Soviet Union in the early 1930s. This has been described by Kozulin (1984): “Critics accused Luria of insulting the national minorities of Soviet Asia whom he had ostensibly depicted as an inferior race. The results of the expedition were refused publication and the very theme of cultural development was forbidden”. In 1936 intelligence testing was banned in the Soviet Union. It was not until the 1960s and early 1970s that this prohibition was progressively relaxed (Grigorenko & Kornilova, 1997). Luria’s work was not published in Russian until 1974 and English translations were published in 1976 and 1979 (Luria, 1976, 1979).

As Lynn and Grigoriev point out, this was closely correlated to the suppression of genetics research, though at least Luria and Co. weren’t outright murdered like Vavilov.

The history of work on intelligence in the former Soviet Union parallels that of genetics, where mainstream Mendelian theory represented by Nikolai Vavilov in the 1920s was likewise suppressed in the 1930s and replaced by the environmentalist pseudo-genetics of Trofim Lysenko. The domination of science by political theory was relaxed in the 1960s and 1970s, and in recent decades both intelligence research and Mendelian genetics have been rehabilitated in Russia.

Scientifically, there is real work being done on psychometrics in Russia, though in comparison to the US it is very meager and basically inconsequential.

Since it is not politicized as it is in the US, it is neither promoted nor persecuted.

If psychometric considerations were to move closer to politics, e.g. by tying them to the hot potato that is Central Asian immigration, things could go any which way. Russians do have a more commonsense take on these matters – if 25% of Americans seriously think intelligence is a “social construct,” it’s probably more like 5% in Russia. On the other hand, the Leftists, Stalinists, and even many Eurasianists are aggressively opposed to the idea that intelligence is heritable and differs significantly between races, and in the event that the authorities side with them, Russian scientists don’t have the First Amendment or a fair and impartial court system to hide behind.






Here we report the development of a system that incorporates a pumpless oxygenator circuit connected to the fetus of a lamb via an umbilical cord interface that is maintained within a closed ‘amniotic fluid’ circuit that closely reproduces the environment of the womb. We show that fetal lambs that are developmentally equivalent to the extreme premature human infant can be physiologically supported in this extra-uterine device for up to 4 weeks. Lambs on support maintain stable haemodynamics, have normal blood gas and oxygenation parameters and maintain patency of the fetal circulation. With appropriate nutritional support, lambs on the system demonstrate normal somatic growth, lung maturation and brain growth and myelination.

This is really cool.

I have been advocating this technology since I started blogging in 2008.

The immediate benefit, which the authors cite, is a reduction in infant mortality caused by extreme prematurity. This is good, though not that big of a deal, since such mortality is already very low in First World countries, while poorer countries will probably not be able to afford the technology anyway.

The real promise is in its eugenic potential.

It is common knowledge that the well-educated reproduce less than the poorly educated, and that has resulted in decades of dysgenic decline throughout the developed world. This dysgenic effect has overtaken the Flynn effect. One of the reasons the well-educated, and especially well-educated women, have few or zero children is because it is bad for their career prospects. There are also some women who are just uncomfortable with the idea of pregnancy and childbirth.
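The size of such a dysgenic effect can be sketched with the breeder's equation, R = h² · S, where S is the selection differential (the covariance between the trait and relative fitness) and h² the additive heritability. All numbers below are illustrative assumptions, not estimates from the literature:

```python
def selection_response(heritability, selection_differential):
    """Breeder's equation: per-generation response R = h^2 * S."""
    return heritability * selection_differential

# Assume an additive heritability of 0.4 and a selection differential of
# -1 IQ point per generation (fertility mildly favoring lower scores);
# the implied decline is then 0.4 points per generation:
print(selection_response(0.4, -1.0))  # -0.4
```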

There are essentially just a few solutions to this problem:

(1) Do nothing, deny heritability of IQ. Import Afro-Muslims to breed the next generation of doctors and engineers.

(2) Do nothing, hope for a literal deus ex machina solution, such as Musk’s neural lace or superintelligence.

(3) The Alt Right solution: Send the women back to the kitchen.

Ethical considerations aside, there’s also the matter of practicality – you’d have to be really hardcore at enforcing your “White Sharia” to make any substantive difference. Even most conservative Muslim societies, where female labor participation is very low, have seen plummeting fertility rates. And, needless to say, it does nothing about the dysgenic aspect of modern fertility patterns, which are a significantly bigger problem than falling fertility rates anyway.

(4) Develop artificial wombs.

This is a good idea from all sorts of ideological perspectives.

Everyone: Immediate higher fertility rates in the countries that develop them, especially amongst well-educated women. This might cancel out dysgenic decline at a single stroke.

Liberals: Alternate option for women who don’t want to undergo pregnancy/childbirth for whatever reason. No more market for surrogate mothers – an end to a particularly icky form of Third World exploitation.

Libertarians: People with the means to pay – that is, millionaires and especially billionaires – will no longer be bounded in their reproductive capacity by the biology of their female partner or by the culture of their society (generally, no polygamy). Since wealth is moderately correlated with IQ, this will be eugenic. That said, this might strike some as dystopian. Maybe one could start taxing additional artificial womb-grown offspring past the first five or ten? Then you’d get “offshore hatcheries.” Okay, I suppose that’s even more dystopian.

Zensunnis: I suppose cultures that really dislike women can just gradually start making do without them by replacing them with the equivalent of Axlotl tanks. Conversely, (almost) all female “Amazonian” societies will also become possible. Let’s make sci-fi tropes real.

Futurists: Combining artificial wombs with CRISPR gene-editing for IQ on a mass scale pretty much directly leads to a biosingularity.

As I pointed out, a biosingularity may be preferable to one born of machine superintelligence because it bypasses the AI alignment problem and doesn’t risk the end of conscious experience.

• Category: Science • Tags: Fertility, Paper Review, Transhumanism 


Tang, Lichun et al. 2017
CRISPR/Cas9-mediated gene editing in human zygotes using Cas9 protein


Previous works using human tripronuclear zygotes suggested that the clustered regularly interspaced short palindromic repeat (CRISPR)/Cas9 system could be a tool in correcting disease-causing mutations. However, whether this system was applicable in normal human (dual pronuclear, 2PN) zygotes was unclear. Here we demonstrate that CRISPR/Cas9 is also effective as a gene-editing tool in human 2PN zygotes. By injection of Cas9 protein complexed with the appropriate sgRNAs and homology donors into one-cell human embryos, we demonstrated efficient homologous recombination-mediated correction of point mutations in HBB and G6PD. However, our results also reveal limitations of this correction procedure and highlight the need for further research.

Gwern Branwen’s comments:

Even nicer: another human-embryo CRISPR paper. Some old 2015 work – results: no off-target mutations and efficiencies of 20/50/100% for various edits. (As I predicted, the older papers, Liang et al 2015 / Kang et al 2016 / Komor et al 2016, were not state of the art and would be improved on considerably.)

Back in February 2015, qualia researcher Mike Johnson predicted that a dedicated billionaire with scant regard for legalistic regulations could start genetically “spellchecking” their offspring within 5–7 years.

But if anything, he might have overestimated the timeframe.


• Category: Science • Tags: Crispr, Genetic Load, Paper Review, Transhumanism 

Silicon Valley’s tech oligarchs are becoming increasingly interested in brain-computer interfaces.

The WSJ is now reporting that Elon Musk is entering the game with a new company, Neuralink.

At the low end, they could improve function in patients suffering from diseases such as Parkinson’s, which is the modest goal that the first such companies, like Kernel, are pursuing. However, in the most “techno-utopian” visions, they could be used to raise general IQ in healthy people, integrating people directly into the Internet of Things and perhaps even helping bridge the gap between biological and potentially runaway machine intelligence (Elon Musk is known to be concerned about the dangers of unfriendly superintelligence).

Well, best of luck to them. Deus Ex is a cool universe, and in ours, it doesn’t even look like the buildup of glial nerve tissue is going to be an issue.

So, no Neuropozyne addicts, at least. But there are other, more directly technical, reasons why implants are going to be really hard to get right, as summed up by Nick Bostrom in his book on Superintelligence.

This brings us to the second reason to doubt that superintelligence will be achieved through cyborgization, namely that enhancement is likely to be far more difficult than therapy. Patients who suffer from paralysis might benefit from an implant that replaces their severed nerves or activates spinal motion pattern generators. Patients who are deaf or blind might benefit from artificial cochleae and retinas. Patients with Parkinson’s disease or chronic pain might benefit from deep brain stimulation that excites or inhibits activity in a particular area of the brain. What seems far more difficult to achieve is a high-bandwidth direct interaction between brain and computer to provide substantial increases in intelligence of a form that could not be more readily attained by other means. Most of the potential benefits that brain implants could provide in healthy subjects could be obtained at far less risk, expense, and inconvenience by using our regular motor and sensory organs to interact with computers located outside of our bodies. We do not need to plug a fiber optic cable into our brains in order to access the Internet. Not only can the human retina transmit data at an impressive rate of nearly 10 million bits per second, but it comes pre-packaged with a massive amount of dedicated wetware, the visual cortex, that is highly adapted to extracting meaning from this information torrent and to interfacing with other brain areas for further processing. Even if there were an easy way of pumping more information into our brains, the extra data inflow would do little to increase the rate at which we think and learn unless all the neural machinery necessary for making sense of the data were similarly upgraded. Since this includes almost all of the brain, what would really be needed is a “whole brain prosthesis” – which is just another way of saying artificial general intelligence.
Yet if one had a human-level AI, one could dispense with neurosurgery: a computer might as well have a metal casing as one of bone.

Not only is there this seemingly insurmountable computing capacity problem, but there is also an equally daunting translation problem.

But what about the dream of bypassing words altogether and establishing a connection between two brains that enables concepts, thoughts, or entire areas of expertise to be “downloaded” from one mind to another? We can download large files to our computers, including libraries with millions of books and articles, and this can be done over the course of seconds: could something similar be done with our brains? The apparent plausibility of this idea probably derives from an incorrect view of how information is stored and represented in the brain. As noted, the rate-limiting step in human intelligence is not how fast raw data can be fed into the brain but rather how quickly the brain can extract meaning and make sense of the data. Perhaps it will be suggested that we transmit meanings directly, rather than package them into sensory data that must be decoded by the recipient. There are two problems with this. The first is that brains, by contrast to the kinds of program we typically run on our computers, do not use standardized data storage and representation formats. Rather, each brain develops its own idiosyncratic representations of higher-level content. Which particular neuronal assemblies are recruited to represent a particular concept depends on the unique experiences of the brain in question (along with various genetic factors and stochastic physiological processes). Just as in artificial neural nets, meaning in biological neural networks is likely represented holistically in the structure and activity patterns of sizeable overlapping regions, not in discrete memory cells laid out in neat arrays. It would therefore not be possible to establish a simple mapping between the neurons in one brain and those in another in such a way that thoughts could automatically slide over from one to the other. 
In order for the thoughts of one brain to be intelligible to another, the thoughts need to be decomposed and packaged into symbols according to some shared convention that allows the symbols to be correctly interpreted by the receiving brain. This is the job of language.

In principle, one could imagine offloading the cognitive work of articulation and interpretation to an interface that would somehow read out the neural states in the sender’s brain and somehow feed in a bespoke pattern of activation to the receiver’s brain. But this brings us to the second problem with the cyborg scenario. Even setting aside the (quite immense) technical challenge of how to reliably read and write simultaneously from perhaps billions of individually addressable neurons, creating the requisite interface is probably an AI-complete problem. The interface would need to include a component able (in real-time) to map firing patterns in one brain onto semantically equivalent firing patterns in the other brain. The detailed multilevel understanding of the neural computation needed to accomplish such a task would seem to directly enable neuromorphic AI.

As for learning a mapping using the brain’s native capacities… well, we sort of already do that, and through methods that have the advantage of not being evolutionarily novel.

One hope for the cyborg route is that the brain, if permanently implanted with a device connecting it to some external resource, would over time learn an effective mapping between its own internal cognitive states and the inputs it receives from, or the outputs accepted by, the device. Then the implant itself would not need to be intelligent; rather, the brain would intelligently adapt to the interface, much as the brain of an infant gradually learns to interpret the signals arriving from receptors in its eyes and ears. But here again one must question how much would really be gained. Suppose that the brain’s plasticity were such that it could learn to detect patterns in some new input stream arbitrarily projected onto some part of the cortex by means of a brain–computer interface: why not project the same information onto the retina instead, as a visual pattern, or onto the cochlea as sounds? The low-tech alternative avoids a thousand complications, and in either case the brain could deploy its pattern-recognition mechanisms and plasticity to learn to make sense of the information.

Unless and until Elon Musk clearly explains how his “neural lace” is going to get around these issues, we should treat it with the skepticism it warrants.

Contra /pol/, Musk’s achievements are indeed tall, but contra /r/Futurology, the hype around him is ten times taller.

• Category: Science • Tags: Futurism, Neuroscience 

This blog post by Sarah Constantin has an impressively comprehensive tally of performance trends in AI across multiple domains.

[Chart: chess Elo ratings, humans vs. computers.]

Three main takeaways:

  • In game performance, e.g. chess (see right, based on Swedish Chess Computer Association data), “exponential growth in data and computation power yields exponential improvements in raw performance.” In other words, the relationship between resources and performance is linear on a log-log scale.
  • This relationship may be sublinear in non-game domains, such as natural language processing (NLP).
  • “Deep learning” created only discontinuous (but one-time) improvements in image and speech recognition, not in strategy games or NLP. Its record on machine translation and arcade games (see below right) is ambiguous.
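The first bullet can be made concrete with a toy calculation (the power-law exponent here is purely hypothetical): if performance is a power law in resources, then exponential growth in compute yields exponential growth in performance, and the log-log relationship between the two is a straight line.

```python
import math

# Hypothetical power law: performance = 100 * resources^0.5.
# If resources grow exponentially, performance does too, and
# log(performance) is a linear function of log(resources).
resources = [2 ** k for k in range(10)]            # compute doubles each step
performance = [100 * r ** 0.5 for r in resources]

logs = [(math.log(r), math.log(p)) for r, p in zip(resources, performance)]
slopes = [(y2 - y1) / (x2 - x1)
          for (x1, y1), (x2, y2) in zip(logs, logs[1:])]
print(all(abs(s - 0.5) < 1e-9 for s in slopes))  # True: constant slope
```

A sublinear relationship, as suspected for NLP, would show up here as a slope that shrinks as resources grow, rather than staying constant.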

[Chart: arcade games, human vs. computer performance.]

So “deep learning” might not have been as transformational as the tech press would have had you believe, and as Miles Brundage observed, has largely been about “general approaches for building narrow systems rather than general approaches for building general systems.”

And we also know that Moore’s Law has been slowing down of late.

If this is broadly accurate, then the recent spate of highly visible AI successes – surpassing peak human performance in Go in 2016, and in multi-player No Limit poker a couple of months ago – could end up being a one-off cluster that will be followed by another AI winter.

And we will have to do something cleverer than naively projecting Kurzweil’s graphs forwards to get to the singularity.

• Category: Science • Tags: Artificial Intelligence, Futurism 


Ashburn-Nardo, Leslie 2017
Parenthood as a Moral Imperative? Moral Outrage and the Stigmatization of Voluntarily Childfree Women and Men


Nationally representative data indicate that adults in the United States are increasingly delaying the decision to have children or are forgoing parenthood entirely. Although some empirical research has examined the social consequences of adults’ decision to be childfree, few studies have identified explanatory mechanisms for the stigma this population experiences. Based on the logic of backlash theory and research on retributive justice, the present research examined moral outrage as a mechanism through which voluntarily childfree targets are perceived less favorably than are targets with children for violating the prescribed social role of parenthood. In a between-subjects experiment, 197 undergraduates (147 women, 49 men, 1 participant with missing gender data) from a large U.S. Midwestern urban university were randomly assigned to evaluate a male or female married target who had chosen to have zero or two children. Participants completed measures of the target’s perceived psychological fulfillment and their affective reactions to the target. Consistent with earlier studies, voluntarily childfree targets were perceived as significantly less psychologically fulfilled than targets with two children. Extending past research, voluntarily childfree targets elicited significantly greater moral outrage than did targets with two children. My findings were not qualified by targets’ gender. Moral outrage mediated the effect of target parenthood status on perceived fulfillment. Collectively, these findings offer the first known empirical evidence of perceptions of parenthood as a moral imperative.

The author herself doesn’t seem to be happy with her own findings:

Practically speaking, the present findings have some troubling potential implications for how people transition to parenthood. For example, the present findings, obtained with college students in the Midwestern United States, suggest that many young people view children as a necessary ingredient for fulfilling lives. Thus, they may feel tremendous pressure to have children, not only from others as this literature suggests (Mueller and Yoder 1999), but also internally. Ironically, these perceptions have absolutely no basis in reality. Meta-analyses reveal that parents report significantly less marital satisfaction than do non-parents, and as their number of children increases, marital satisfaction decreases (Twenge et al. 2003).

That may be so, but reality definitely seems to have a basis in those perceptions.

For instance, people without those perceptions didn’t tend to pass on their genes.

• Category: Science • Tags: Demographics, Paper Review, Psychology 

Fundamentally solve the “intelligence problem,” and all other problems become trivial.

The problem is that this problem is a very hard one, and our native wit is unlikely to suffice. Moreover, because problems tend to get harder, not easier, as you advance up the technological ladder (Karlin, 2015), in a “business as usual” scenario with no substantial intelligence augmentation we will effectively only have a 100-200 year “window” to effect this breakthrough before global dysgenic fertility patterns rule it out entirely for a large part of the next millennium.

To avoid a period of prolonged technological and scientific stagnation, with its attendant risks of collapse, our global “hive mind” (or “noosphere”) will at a minimum have to sustain and preferably sustainably augment its own intelligence. The end goal is to create (or become) a machine, or network of machines, that recursively augment their own intelligence – “the last invention that man need ever make” (Good, 1965).

In light of this, there are five main distinct ways in which human (or posthuman) civilization could develop in the next millennium.


(1) Direct Technosingularity

The development of artificial general intelligence (AGI), which should quickly bootstrap itself into a superintelligence – defined by Nick Bostrom as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Bostrom, 2014). Especially if this is a “hard” takeoff, the superintelligence will also likely become a singleton, an entity with global hegemony (Bostrom, 2006).

Many experts predict AGI could appear by the middle of the 21st century (Kurzweil, 2005; Müller & Bostrom, 2016). This should quickly auto-translate into a technological singularity, henceforth “technosingularity,” whose utilitarian value for humanity will depend on whether we manage to solve the AI alignment problem (i.e., whether we manage to figure out how to persuade the robots not to kill us all).

The technosingularity will creep up on us, and then radically transform absolutely everything, including the very possibility of any further meaningful prognostication – it will be “a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control” (Vinge, 1993). The “direct technosingularity” scenario is likely if AGI turns out to be relatively easy, as the futurist Ray Kurzweil and DeepMind CEO Demis Hassabis believe.

(2) The Age of Em

The development of Whole Brain Emulation (WBE) could accelerate the technosingularity, if it is relatively easy and is developed before AGI. As the economist Robin Hanson argues in his book The Age of Em, untold quintillions of emulated human minds, or “ems,” running trillions of times faster than biological wetware, should be able to effect a transition to true superintelligence and the technosingularity within a couple of human years (Hanson, 2016). This assumes that em civilization does not self-destruct, and that AGI does not ultimately prove to be an intractable problem. A simple Monte Carlo simulation by Anders Sandberg hints that WBE might be achieved by the 2060s (Sandberg, 2014).


Deus Ex: Human Revolution.

(3) Biosingularity

We still haven’t come close to exhausting our biological and biomechatronic potential for intelligence augmentation. The level of biological complexity has increased hyperbolically since the appearance of life on Earth (Markov & Korotayev, 2007), so even if both WBE and AGI turn out to be very hard, it might still be perfectly possible for human civilization to continue eking out huge further increases in aggregate cognitive power. Enough, perhaps, to kickstart the technosingularity.

There are many possible paths to a biosingularity.

The simplest one is through demographics: The tried and tested method of population growth (Korotaev & Khaltourina, 2006). As “technocornucopians” like Julian Simon argue, more people equals more potential innovators. However, only a tiny “smart fraction” can meaningfully contribute to technological progress, and global dysgenic fertility patterns imply that its share of the world population is going to go down inexorably now that the FLynn effect of environmental IQ increases is petering out across the world, especially in the high IQ nations responsible for most technological progress in the first place (Dutton, Van Der Linden, & Lynn, 2016). In the long-term “business as usual” scenario, this will result in an Idiocracy incapable of any further technological progress and at permanent risk of a Malthusian population crash should average IQ fall below the level necessary to sustain technological civilization.

As such, dysgenic fertility will have to be countered by eugenic policies or technological interventions. The former are either too mild to make a cardinal difference, or too coercive to seriously advocate. This leaves us with the technological solutions, which in turn largely fall into two bins: Genomics and biomechatronics.

The simplest route, already on the cusp of technological feasibility, is embryo selection for IQ. This could result in gains of one standard deviation per generation, and an eventual increase of as much as 300 IQ points over baseline once all IQ-affecting alleles have been discovered and optimized for (Hsu, 2014; Shulman & Bostrom, 2014). That is perhaps overoptimistic, since it assumes that the effects will remain strictly additive and will not run into diminishing returns.
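As a rough sanity check on the per-generation figure, here is a toy Monte Carlo of 1-in-10 embryo selection. The 7.5-point sibling SD and the assumption of a perfectly predictive polygenic score are illustrative simplifications, not empirical estimates:

```python
import random

def expected_gain(n_embryos, sibling_sd=7.5, trials=100_000):
    """Mean IQ gain from implanting the top-scoring embryo out of n,
    assuming a perfectly predictive polygenic score whose SD among
    sibling embryos is sibling_sd IQ points (an illustrative figure)."""
    rng = random.Random(42)  # fixed seed for reproducibility
    total = 0.0
    for _ in range(trials):
        total += max(rng.gauss(0, sibling_sd) for _ in range(n_embryos))
    return total / trials

# The expected maximum of 10 standard normals is ~1.54 SD,
# so selecting 1 embryo in 10 yields a gain of ~11.5 points here.
print(round(expected_gain(10), 1))
```

Note the diminishing returns built into order statistics: doubling the number of embryos adds far less than doubling the gain, which is one reason repeated generations (or iterated editing) matter more than screening ever-larger batches.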

Even so, a world with a thousand or a million times as many John von Neumanns running about will be more civilized, far richer, and orders of magnitude more technologically dynamic than what we have now (just compare the differences in civility, prosperity, and social cohesion between regions in the same country separated by a mere half of a standard deviation in average IQ, such as Massachusetts and West Virginia). This hyperintelligent civilization’s chances of solving the WBE and/or AGI problem will be correspondingly much higher.

The problem is that getting to the promised land will take about a dozen generations, that is, at least 200-300 years. Do we really want to wait that long? We needn’t. Once technologies such as CRISPR/Cas9 mature, we can drastically accelerate the process and accomplish the same thing through direct gene editing. All this of course assumes that a concert of the world’s most powerful states doesn’t coordinate to vigorously clamp down on the new technologies.

Even so, we would still remain “bounded” by human biology. For instance, womb size and metabolic load constrain brain size, and the specificities of our neural substrate place an ultimate ceiling even on “genetically corrected” human intellectual potential.

There are four potential ways to go beyond biology, presented below from “most realistic” to “most sci-fi”:

Neuropharmacology: Nootropics already exist, but they do not increase IQ by any significant amount and are unlikely to do so in the future (Bostrom, 2014).

Biomechatronics: The development of neural implants to augment human cognition beyond its peak biological potential. The first start-ups, based for now on treatment as opposed to enhancement, are beginning to appear, such as Kernel, where the futurist Randal Koene is the head scientist. This “cyborg” approach promises a more seamless, and likely safer, integration with ems and/or intelligent machines, whensoever they might appear – this is the reason why Elon Musk is a proponent of this approach. However, there’s a good chance that meaningful brain-machine interfaces will be very hard to implement (Bostrom, 2014).

Nanotechnology: Nanobots could potentially optimize neural pathways, or even create their own foglet-based neural nets.

Direct Biosingularity: If WBE and/or superintelligence prove to be very hard or intractable, or come with “minor” issues such as a lack of rigorous solutions to the AI alignment problem or the permanent loss of conscious experience (Johnson, 2016), then we might attempt a direct biosingularity – for instance, Nick Bostrom suggests the development of novel synthetic genes, and even more “exotic possibilities” such as vats full of complexly structured cortical tissue or “uplifted” transgenic animals, especially elephants or whales that can support very large brains (Bostrom, 2014). The terminal result of a true biosingularity might be some kind of “ecotechnic singleton,” e.g. Stanisław Lem’s Solaris, a planet dominated by a globe-spanning sentient ocean.

Bounded by the speed of neuronal chemical reactions, it is safe to say that the biosingularity will be a much slower affair than The Age of Em or a superintelligence explosion, not to mention the technosingularity that would likely soon follow either of those two events. However, human civilization in this scenario might still eventually achieve the critical mass of cognitive power needed to solve WBE or AGI, thus setting off the chain reaction that leads to the technosingularity.


(4) Eschaton

Nick Bostrom defined existential risk thus: “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential” (Bostrom, 2002).

We can divide existential risks into four main bins: Geoplanetary; Anthropic; Technological; and Philosophical.

In any given decade, a gamma ray burst or even a very big asteroid could snuff us out in our earthly cradle. However, the background risk is both constant and extremely low, so it would be cosmically bad luck for a geoplanetary Götterdämmerung to do us in just as we are about to enter the posthuman era.

There are three big sources of “anthropic” existential risk: Nuclear war, climate change, and the exhaustion of high-EROEI energy sources.

Fears of atomic annihilation are understandable, but even a full-scale thermonuclear exchange between Russia and the US is survivable, and will not result in the collapse of industrial civilization à la A Canticle for Leibowitz or the Fallout video games, let alone human extinction (Kahn, 1960; Kearny, 1979). This was true during the Cold War and it is doubly true today, when nuclear weapons stocks are much lower. To be sure, some modest percentage of the world population will die, and a majority of the capital stock in the warring nations will be destroyed, but as Herman Kahn might have said, this is a tragic but nonetheless distinguishable outcome compared to a true “existential risk.”

Much the same can be said of anthropogenic climate change. While it would probably do more harm than good, at least in the medium-term (Stager, 2011), even the worst outcomes like a clathrate collapse will most likely not translate into James Lovelock’s apocalyptic visions of “breeding pairs” desperately eking out a hardscrabble survival in the Arctic. The only truly terminal outcome would be a runaway greenhouse effect that turns Earth into Venus, but there is simply nowhere near enough carbon on our planetary surface for that to happen.

As regards global energy supplies, while the end of high-density fossil fuels might somewhat reduce living standards relative to what they would have otherwise been, there is no evidence it would cause economic decline, let alone technological regression back to the Olduvai Gorge conditions as some of the most alarmist “doomers” have claimed. We still have a lot of fat to cut! Ultimately, the material culture even of an energy-starved country like Cuba compares very positively to those of 95% of all humans who have ever lived. Besides, there are still centuries’ worth of coal reserves left on the planet, and nuclear and solar power have been exploited to only a small fraction of their potential.

By far the biggest technological risk is malevolent AGI, so much so that entire research outfits such as MIRI have sprung up to work on it. However, it is so tightly coupled to the Technosingularity scenario that I will refrain from further commentary on it here.

This leaves mostly just the “philosophical,” or logically derived, existential risks. For instance, the computer simulation we are in might end (Bostrom, 2003) – perhaps because we are not interesting enough (if we fail to reach technosingularity), or for lack of hardware to simulate an intelligence explosion (if we do). Another disquieting possibility is implied by the foreboding silence all around us – as Enrico Fermi asked, “Where is everybody?” Perhaps we are truly alone. Or perhaps alien post-singularity civilizations stay silent for a good reason.

We began to blithely broadcast our presence to the void more than a century ago, so if there is indeed a “superpredator” civilization keeping watch over the galaxy, ready to swoop down at the first sign of a potential rival (e.g. for the simulation’s limited computing resources), then our doom may have already long been written onto the stars. However, unless they have figured out how to subvert the laws of physics, their response will be bounded by the speed of light. As such, the question of whether it takes us half a century or a millennium to solve the intelligence problem – and by extension, all other problems, including space colonization – assumes the most cardinal importance!


Vladimir Manyukhin, Tower of Sin.

(5) The Age of Malthusian Industrialism (or, “Business as Usual”)

The 21st century turns out to be a disappointment in all respects. We do not merge with the Machine God, nor do we descend back into the Olduvai Gorge by way of the Fury Road. Instead, we get to experience the true torture of seeing the conventional, mainstream forecasts of all the boring, besuited economists, businessmen, and sundry beigeocrats pan out.

Human genetic editing is banned by government edict around the world, to “protect human dignity” in the religious countries and “prevent inequality” in the religiously progressive ones. The 1% predictably flout these regulations at will, improving their progeny while keeping the rest of the human biomass down where they believe it belongs, but the elites do not have the demographic weight to compensate for plummeting average IQs as dysgenics decisively overtakes the FLynn Effect.

We discover that Kurzweil’s cake is a lie. Moore’s Law stalls, and the current buzz over deep learning turns into a permanent AI winter. Robin Hanson dies a disappointed man, though not before cryogenically freezing himself in the hope that he would be revived as an em. But Alcor goes bankrupt in 2145, and when it is discovered that somebody had embezzled the funds set aside for just such a contingency, nobody can be found to pay to keep those weird ice mummies around. They are perfunctorily tossed into a ditch, and whatever vestigial consciousness their frozen husks might have still possessed seeps and dissolves into the dirt along with their thawing lifeblood. A supermall is built on their bones around what is now an extremely crowded location in the Phoenix megapolis.

For the old concerns about graying populations and pensions are now ancient history. Because fertility preferences, like all aspects of personality, are heritable – and thus ultracompetitive in a world where the old Malthusian constraints have been relaxed – the “breeders” have long overtaken the “rearers” as a percentage of the population, and humanity is now in the midst of an epochal baby boom that will last centuries. Just as the human population rose tenfold from 1 billion in 1800 to a projected 10 billion by 2100, so it will rise by yet another order of magnitude in the next two or three centuries. But this demographic expansion is highly dysgenic, so global average IQ falls by a standard deviation and technology stagnates. Sometime towards the middle of the millennium, the population will approach 100 billion souls and will soar past the carrying capacity of the global industrial economy.
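The selection logic driving this baby boom can be sketched with a toy two-type model (every parameter below is illustrative): if fertility preferences are perfectly heritable and the Malthusian constraints are off, the high-fertility type's population share compounds every generation.

```python
def breeder_share(share=0.2, f_high=3.0, f_low=1.5, generations=10):
    """Population share of a perfectly heritable high-fertility type
    after some number of generations (all parameters illustrative)."""
    history = [share]
    for _ in range(generations):
        high = history[-1] * f_high          # children of "breeders"
        low = (1 - history[-1]) * f_low      # children of "rearers"
        history.append(high / (high + low))
    return history

shares = breeder_share()
# A 20% minority with double the fertility approaches fixation
# within ten generations (its population odds double each generation).
print(round(shares[0], 2), round(shares[-1], 2))
```

The same arithmetic is why the boom is dysgenic in this scenario: whichever traits happen to correlate with high fertility, for better or worse, are the ones that compound.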

Then things will get pretty awful.

But as they say, every problem contains the seed of its own solution. Gnon sets to winnowing the population, culling the sickly, the stupid, and the spendthrift. As the neoreactionary philosopher Nick Land notes, waxing Lovecraftian, “There is no machinery extant, or even rigorously imaginable, that can sustain a single iota of attained value outside the forges of Hell.”

In the harsh new world of Malthusian industrialism, Idiocracy starts giving way to A Farewell to Alms – the eugenic fertility patterns that undergirded IQ gains in Early Modern Britain and paved the way to the industrial revolution. A few more centuries of the most intelligent and hard-working having more surviving grandchildren, and we will be back to where we are today, capable of having a second stab at solving the intelligence problem, but able to draw from a vastly bigger population for the task.

Assuming that a Tyranid hive fleet hadn’t gobbled up Terra in the intervening millennium…

2061su, Longing for Home.

The Forking Paths of the Third Millennium

In response to criticism that he was wasting his time on an unlikely scenario, Robin Hanson pointed out that even if there was just a 1% chance of The Age of Em coming about, studying it was well worth his while considering the sheer number of future minds and the potential suffering at stake.

Although I can imagine some readers considering some of these scenarios as less likely than others, I think it’s fair to say that all of them are at least minimally plausible, and that most people would also assign a greater than 1% likelihood to a majority of them. As such, they are legitimate objects of serious consideration.

My own probability assessment is as follows:

(1) (a) Direct Technosingularity – 25%, if Kurzweil/MIRI/DeepMind are correct, with a probability peak around 2045, and most likely to be implemented via neural networks (Lin & Tegmark, 2016).

(2) The Age of Em – <1%, since we cannot obtain functional models even of 40-year-old microchips from scanning them, to say nothing of biological organisms (Jonas & Kording, 2016).

(3) (a) Biosingularity to Technosingularity – 50%, since the genomics revolution is just getting started and governments are unlikely to either want to, let alone be successful at, rigorously suppressing it. And if AGI is harder than the optimists say, and will take considerably longer than mid-century to develop, then it’s a safe bet that IQ-augmented humans will come to play a critical role in eventually developing it. I would put the probability peak for a technosingularity from a biosingularity at around 2100.

(3) (b) Direct Biosingularity – 5%, if we decide that proceeding with AGI is too risky, or that consciousness both has cardinal inherent value and is only possible with a biological substrate.

(4) Eschaton – 10%, of which: (a) Philosophical existential risks – 5%; (b) Malevolent AGI – 1%; (c) Other existential risks, primarily technological ones: 4%.

(5) The Age of Malthusian Industrialism – 10%, with about even odds on whether we manage to launch the technosingularity the second time round.

There is a huge amount of literature on four of these five scenarios. The most famous book on the technosingularity is Ray Kurzweil’s The Singularity is Near, though you could make do with Vernor Vinge’s classic article The Coming Technological Singularity. Robin Hanson’s The Age of Em is the book on its subject. Some of the components of a potential biosingularity are already within our technological horizon – Stephen Hsu is worth following on this topic, though as regards biomechatronics, for now it remains more sci-fi than science (obligatory nod to the Deus Ex video game franchise). The popular literature on existential risks of all kinds is vast, with Nick Bostrom’s Superintelligence being the definitive work on AGI risks. It is also well worth reading his many articles on philosophical existential risks.

Ironically, by far the biggest lacuna is with regards to the “business as usual” scenario. It’s as if the world’s futurist thinkers have been so consumed with the most exotic and “interesting” scenarios (e.g. superintelligence, ems, socio-economic collapse, etc.) that they have neglected to consider what will happen if we take all the standard economic and demographic projections for this century, apply our understanding of economics, psychometrics, technology, and evolutionary psychology to them, and stretch them out to their logical conclusions.

The resultant Age of Malthusian Industrialism is not only something that’s easier to imagine than many of the other scenarios, and by extension easier for modern people to connect with, but it is also something that is genuinely interesting in its own right. It is also very important to understand well. That is because it is by no means a “good scenario,” even if it is perhaps the most “natural” one, since it will eventually entail unimaginable amounts of suffering for untold billions a few centuries down the line, when the time comes to balance the Malthusian equation. We will also have to spend an extended amount of time under an elevated level of philosophical existential risk. This would be the price we will have to pay for state regulations that block the path to a biosingularity today.


Bostrom, N. (2002). Existential risks. Journal of Evolution and Technology / WTA, 9(1), 1–31.

Bostrom, N. (2003). Are We Living in a Computer Simulation? The Philosophical Quarterly, 53(211), 243–255.

Bostrom, N. (2006). What is a Singleton? Linguistic and Philosophical Investigations, 5(2), 48–54.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Dutton, E., Van Der Linden, D., & Lynn, R. (2016). The negative Flynn Effect: A systematic literature review. Intelligence, 59, 163–169.

Good, I. J. (1965). Speculations Concerning the First Ultraintelligent Machine. In F. Alt & M. Ruminoff (Eds.), Advances in Computers, volume 6. Academic Press.

Hanson, R. (2016). The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford University Press.

Hsu, S. D. H. (2014, August 14). On the genetic architecture of intelligence and other quantitative traits. arXiv [q-bio.GN]. Retrieved from

Johnson, M. (2016). Principia Qualia: the executive summary. Open Theory. Retrieved from

Jonas, E., & Kording, K. (2016). Could a neuroscientist understand a microprocessor? bioRxiv. Retrieved from

Kahn, H. (1960). On thermonuclear war (Vol. 141). Cambridge Univ Press.

Karlin, A. (2015). Introduction to Apollo’s Ascent. The Unz Review. Retrieved from

Kearny, C. H. (1979). Nuclear war survival skills. NWS Research Bureau.

Korotaev, A. V., & Khaltourina, D. (2006). Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends in Africa. Editorial URSS.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin.

Lin, H. W., & Tegmark, M. (2016, August 29). Why does deep and cheap learning work so well? arXiv [cond-mat.dis-nn]. Retrieved from

Markov, A. V., & Korotayev, A. V. (2007). Phanerozoic marine biodiversity follows a hyperbolic trend. Palaeoworld, 16(4), 311–318.

Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing.

Sandberg, A. (2014). Monte Carlo model of brain emulation development. Retrieved from

Shulman, C., & Bostrom, N. (2014). Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer? Global Policy, 5(1), 85–92.

Stager, C. (2011). Deep Future: The Next 100,000 Years of Life on Earth. Macmillan.

Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Retrieved from



Today I was at a talk with Robin Hanson to promote his book The Age of Em, hosted by the Bay Area Futurists.

As an academic polymath with interests in physics, computer science, and economics, Hanson draws upon his extensive reading across these fields to try to piece together what such a society will look like.

His argument is that in 30 years to a century, there will be a phase transition as mind uploading takes off and the world economy rapidly becomes dominated by “ems” (emulations): human brains running on a silicon substrate, and potentially millions of times faster. Since transport congestion costs aren’t a factor, this em civilization will live in a few very densely populated cities largely composed of cooling pipes and computer hardware. The economy will double once every month, and in a year or two, it will transition to yet another, cardinally different, growth phase and social structure.
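To put Hanson's monthly doubling time in perspective, here is the back-of-the-envelope compounding (the ~15-year figure for today's doubling time is a rough order-of-magnitude benchmark):

```python
# An economy that doubles every month grows 2^12-fold in a year.
em_annual_growth = 2 ** 12

# By contrast, an economy doubling roughly every 15 years
# grows by only a few percent per year.
human_annual_growth = 2 ** (1 / 15)

print(em_annual_growth)                           # 4096
print(round((human_annual_growth - 1) * 100, 1))  # 4.7 (percent per year)
```

A ~4000x annual expansion is why the em era, on Hanson's own account, would be brief in wall-clock terms even if it spans many subjective em lifetimes.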

I might or might not eventually do a book review, but for now, here is a link to Scott Alexander’s.

Alternatively, this lecture slide summarizes the main points.


A few observations, arguments, and counterarguments from the meeting:

(1) This struck many people as the most counterintuitive assertion, but I agree that wages in the em world should quickly plummet to subsistence levels (which are much lower than for biological organisms). This is probably what will happen eventually with our civilization if there is no “singularity”/transition to a higher growth phase, since fertility preferences are an aspect of personality, and as such, highly heritable. (Come to think of it, this is basically what happens to the Imperium of Man in Warhammer 40k, down to the hive cities in which most citizens eke out “lives of quiet desperation,” though ones which “can still be worth living.”)

Since Ctrl-C Ctrl-V is much easier and quicker than biological reproduction, a regression to the historical (and zoological) norm that is the Malthusian trap – barring some kind of singleton enforcing global restrictions on reproduction – seems inevitable.

(2) A more questionable claim is Hanson’s prediction that ems will tend to be more religious than humans, on the basis that hardworking people – that is, the sorts of people whose minds are most likely to be uploaded and then copied far and wide – tend to be more religious. This is true enough, but there is also a strong and well-known negative correlation between religiosity and intelligence. Which wins out?

(3) The marginal return on intelligence is extremely high, in both economics and scientific dynamism (Apollo’s Ascent theory). As such, raising the intelligence of individual ems will be of the utmost priority. However, Hanson makes a great deal of the idea that em minds will be a black box, at least in the beginning, and as such largely impenetrable to significant improvement.

My intuition is that this is unlikely. If we develop technology to a level where we can not only copy and upload human minds but also provide them with internally consistent virtual reality environments that they can perceive and interact with, it would probably be relatively trivial to build brains with, say, 250 billion neurons, instead of the ~86 billion we are currently endowed with and largely limited to by biology (the circulatory system, the birth canal, etc.). There is a moderate correlation between brain volume alone and intelligence, so it’s quite likely that drastic gains on the order of multiple S.D.s can be attained just by the (relatively cheap) method of doubling or tripling the size of the connectome. The gap between a civilization of billions of 300 IQ minds computing millions of times faster than biological brains and our current world might be greater than the gap between our current world and that of a chimpanzee troop in the Central African rainforest.

Two consequences follow. First, progress will if anything be even faster than what Hanson projects; direct intelligence amplification in tandem with electronic reproduction might mean going straight to the technological singularity. Second, it might even help ems avoid the Malthusian trap, which is probably a good thing from an ethical perspective – if waiting for technological developments that augment your own intelligence turns out to be more adaptive than making copies of yourself, like Agent Smith in The Matrix, until all ems are on a subsistence wage.

(4) I find this entire scenario to be extremely unlikely. In both his book and his lecture, Hanson discusses and then quickly dismisses the likelihood of superintelligence first being attained through research in AI and neural nets.

There are two problems with this assertion:

(a) The median forecast in Bostrom’s Superintelligence is for High Level Machine Intelligence to be attained around 2050. (I am skeptical about this for reasons intrinsic to Apollo’s Ascent theory, but exactly the same constraints would apply to developing brain emulation technology).

(b) The current state of AI research is much more impressive than that of brain emulation. The apex of modern AI research can beat the world’s best Go players, several years ahead of schedule. In contrast, we only finished modeling the 302-neuron brain of the C. elegans worm a few years ago. Even today, we cannot obtain functional models of even 40-year-old microchips by scanning them, to say nothing of biological organisms. Expecting that gap not only to be closed, but for the brain emulation route to take the lead, is a rather formidable leap of faith.

Now to be fair to Hanson, he did explicitly state that he does not regard the Age of Em as a certain or even a highly probable future. His criterion for analyzing a future scenario is for it to have at least a 1% chance of happening, and he believes that the Age of Em easily fulfills that condition. Personally I suspect it’s a lot less than 1%. Then again, Hanson knows a lot more computer science than I do, and in any case even if the predictions fail to pan out he has still managed to provide ample fodder for science fiction writers.

(5) My question to Hanson during the Q&A section of the talk: Which regions/entities do you expect to form the first em communities? And what are the geopolitical ramifications in these last years of “human” civilization?

(a) The big factors he lists are the following:

  • Access to cold water, or a cold climate in general, for cooling purposes.
  • Proximity to big human cities for servicing human customers (at least in the initial stages before the em economy becomes largely autonomous).
  • Low regulations.

So plausible candidates (according to Hanson) would be Scandinavia, or the “northern regions of China.”

As he also noted at another point, in the early stages of em creation technology, mind uploading is likely to be “destructive,” i.e. resulting in the biological death of the person who is to be emulated. So there might be an extra selection filter for state or corporate ruthlessness.

(b) In domestic and social terms, during the transition period, humans can be expected to “retire” as the em economy explodes and soon far exceeds the scope of the old human economy. Those humans who control a slice of the em economy will become very rich, while those who don’t… fare less well.

However, Hanson doesn’t have anything to say on the geopolitical aspects of the transition period because it is much less predictable than the “equilibrium state” of the em economy that he set out to describe. As such, he does not think it is worthwhile for someone who is not a sci-fi writer to delve into that particular issue. That makes sense.

(6) As a couple of people pointed out, atomic weapons can wipe out an entire em “city” – and each such city would contain billions of ems.

What would em warfare be like? The obvious answer is cyber-cyber-cyber we gotta hack the mainframe style stuff. But surely, sometimes, the easiest move is to just knock over the table and beat your opponent to death with the chessboard.

If Pinker gets pwned during the em era and global nuclear wars between em hive cities ruled by Gandhi emulations break out, could this make em hive cities unviable and result in a radical decentralization?

(7) How did Hanson become Hanson?

He repeated the Talebian argument (which I sympathize with) that following the news is a pointless waste of time.

It is much more productive to read books, especially textbooks, and to take introductory classes in a wide range of subjects – to try to get a good grasp on our civilization’s system of knowledge, so that you might be able to make productive observations once you reach your 50s.

Confirmation bias? Regardless, it’s one more small piece of evidence in favor of my decision to log off.

• Category: Science • Tags: Futurism, Superintelligence, The AK 

In recent years there has been a surge of interest in gut flora in the wake of research on its substantial effects on personality, so much so that researchers have even taken to describing it as a neural network.

And much like humans, and even human brains, gut flora are not going to be an exception to recent evolution.

As Chris Kresser writes:

In other words, evolution does not act solely on your 23,000 human genes. Rather, it acts on the 9.02 million genes (both host and microbial) that are present in and on your body, as a single entity.

Moreover, the microbiome can introduce genetic variation and evolve through methods specific to it, such as sharing genes with each other and acquisition of new strains from the environment. And even the borders between bacterial genes and “human” genes are surprisingly porous.

The really interesting observation is yet to come:

Social behavior in primates is also thought to be a critical factor in the evolution of human intelligence (32). Access to microbes may have been a driving force in the evolution of animal sociality, since microbes confer many benefits to the host (33). Social behaviors like grooming, kissing, and sex increased the transfer of microbes from one organism to another. Studies in social mammals have found that development of the forebrain and neocortex in social mammals depends on signals from the microbiota (34), and germ-free mice that lack a microbiota also lack social behavior and show deficits in social cognitive abilities (35).

Depending on the size of these effects there could be some pretty important implications and confounds for psychometrics and genetics of IQ research.

Bacterial composition, for instance, though strongly hereditary, is also going to be affected by the food one eats (a cultural factor), the people with whom one has close contact (kissing, certain intimate contacts, and one supposes, effluence in non-hygienic countries), and the local geography, elevation, and climate. Could intelligence be a matter of not just blood and chance, but of soil?

Best not to get too carried away with this yet. This paper finds that spousal partners did not have significantly more microbiome similarity than unrelated individuals (though the sample sizes were small). Still, it might be worth bearing in mind.

• Category: Science • Tags: Ancestral Health, Intelligence 

Last month there was an interview with Eliezer Yudkowsky, the rationalist philosopher and successful Harry Potter fanfic writer who heads the world’s foremost research outfit dedicated to figuring out ways in which a future runaway computer superintelligence could be made to refrain from murdering us all.

It’s really pretty interesting. It contains a nice explication of Bayes, what Eliezer would do if he were World Dictator, his thoughts on the Singularity, a justification of immortality, and thoughts on how to balance mosquito nets against the risk of a genocidal Skynet from an Effective Altruism perspective.

That said, the reason I am making a separate post for this is that here, at last, Yudkowsky gives a more or less concrete definition of the conditions a superintelligence “explosion” would have to satisfy in order to be considered as such:

Suppose we get to the point where there’s an AI smart enough to do the same kind of work that humans do in making the AI smarter; it can tweak itself, it can do computer science, it can invent new algorithms. It can self-improve. What happens after that — does it become even smarter, see even more improvements, and rapidly gain capability up to some very high limit? Or does nothing much exciting happen?

It could be that, (A), self-improvements of size δ tend to make the AI sufficiently smarter that it can go back and find new potential self-improvements of size k ⋅ δ and that k is greater than one, and this continues for a sufficiently extended regime that there’s a rapid cascade of self-improvements leading up to superintelligence; what I. J. Good called the intelligence explosion. Or it could be that, (B), k is less than one or that all regimes like this are small and don’t lead up to superintelligence, or that superintelligence is impossible, and you get a fizzle instead of an explosion. Which is true, A or B? If you actually built an AI at some particular level of intelligence and it actually tried to do that, something would actually happen out there in the empirical real world, and that event would be determined by background facts about the landscape of algorithms and attainable improvements.

You can’t get solid information about that event by psychoanalyzing people. It’s exactly the sort of thing that Bayes’s Theorem tells us is the equivalent of trying to run a car without fuel. Some people will be escapist regardless of the true values on the hidden variables of computer science, so observing some people being escapist isn’t strong evidence, even if it might make you feel like you want to disaffiliate with a belief or something.

Psychoanalyzing people might not be so useful, but trying to understand the relationship between cognitive capacity and technological progress is another matter.

I am fairly sure that k<1 for the banal reason that more advanced technologies need exponentially more and more cognitive capacity – intelligence, IQ – to develop. Critically, there is no reason this wouldn’t apply to cognitive-enhancing technologies themselves. In fact, it would be extremely strange – and extremely dangerous, admittedly – if this consistent pattern in the history of science ceased to hold. (In other words, this is merely an extension of Apollo’s Ascent theory. Technological progress invariably gets harder as you climb up the tech tree, which works against sustained runaway dynamics).

Any putative superintelligence, to continue making breakthroughs at an increasing rate, would have to not only solve ever harder problems as part of the process of constantly upgrading itself, but also create and/or “enslave” an exponentially increasing amount of computing power, task it to the near-exclusive goal of improving itself, and prevent rival superintelligences from copying its advances in what will surely be a far more integrated noosphere by 2050 or 2100, or if/whenever this scenario happens. I just don’t find it very plausible that our malevolent superintelligence will be able to fulfill all of those conditions. Though admittedly, if this theory is wrong, then there will be nobody left to point it out anyway.
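Yudkowsky’s k can be put into a toy geometric-series model. This is purely illustrative – it assumes each self-improvement scales the next by a constant factor, which is surely false in detail, and the function name is mine:

```python
def total_improvement(k, delta=1.0, rounds=200):
    """Total capability gain if each self-improvement of size d
    enables a next one of size k*d: a plain geometric series."""
    total, d = 0.0, delta
    for _ in range(rounds):
        total += d
        d *= k
    return total

# k < 1: the cascade fizzles toward a finite ceiling of delta/(1-k)
print(round(total_improvement(0.5), 4))           # 2.0
# k > 1: the cascade compounds without bound -- I. J. Good's explosion
print(total_improvement(1.1, rounds=50) > 1000)   # True
```

With k < 1 the total gain converges to the ceiling δ/(1−k) – a fizzle; with k > 1 it diverges. The entire dispute is over which regime the real landscape of algorithms and attainable improvements sits in.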

• Category: Science • Tags: Apollo's Ascent, Rationality, Superintelligence 

Latest data from NASA:


At +1.35C, this is the biggest monthly temperature anomaly (measured against the 1951-1980 base period) ever recorded, and it is now a near certainty that 2016 will be warmer overall than 2015, making for a third consecutive record-breaking year.

There are several reasons for this:

(1) The El Nino effect. This year’s is a pretty strong one as they go, but not quite as strong as the one in 1997-1998, which produced the last major local peak and formed the lynchpin of GW denier arguments throughout the 2000s. Nonetheless, average global temperatures in February 2016 were almost half a degree higher than the +0.88C anomaly seen in February 1998. The most comparably strong El Nino before that was the 1982-1983 one, but the February 1983 anomaly was a fairly unremarkable +0.40C. That’s a difference of almost a degree between then and now.

(2) Solar irradiance is actually pretty weak relative to its average over the 1950-2000 period, so that can’t be part of the explanation.

(3) I wonder to what extent, if any, the major recent uptick in methane emissions from melting permafrost – which expressed itself in the form of some spectacular new craters in Northern Siberia last year – has contributed to this.

All in all, this is very bad news for the international community’s goal of limiting global warming to the IPCC’s two-degree threshold.

There have been some encouraging counter-developments – for instance, global carbon emissions actually fell in 2015 – but celebrations are premature, since there have been plenty of prior periods when global CO2 emissions fell not just for one year but for several years in a row: 1973-1975 (the first oil shock), 1980-1983 (the second oil shock), 1989-1994 (the collapse of the highly energy-inefficient Communist economies), and 2008-2009 (the Great Recession).

In any case, if the aforementioned methane release scenario is at or close to the runaway threshold, that wouldn’t really matter all that much anyway.

For myself, I have always been skeptical that this particular drifting oil tanker could be stopped in time to avert serious levels of warming. I still stand by my 2010 prediction that “geoengineering” is going to start entering normies’ vocabularies sooner rather than later, and perhaps implementation of some geoengineering schemes will begin as early as the 2030s. It’s unlikely to be a happy project that brings everyone together. I suspect it’s more likely to either take the form of a ruinous geopolitical free-for-all, or to catalyze the consolidation of today’s already incipient globalist elite into a stifling singleton.

• Category: Science • Tags: Geoengineering, Global Warming 


The heroes of Hikaru no Go were off by 86 years.

As some of you might have heard, the world of go – or weiqi, as it is known in its homeland of China – is currently undergoing its Deep Blue moment, as one of the world’s strongest players, Lee Sedol, faces off against Google’s DeepMind AlphaGo project. Deep Blue was the IBM/Carnegie Mellon supercomputer that in 1997 beat the world’s top grandmaster Garry Kasparov in a series of 6 chess games. But the computer’s margin of victory, at 3.5 to 2.5, was modest, and the event was dogged by Kasparov’s allegations that the IBM team had underhandedly helped the computer. It would be an entire decade before the top computer chess programs decisively overtook the top human players. As of today, there is a 563-point difference between the Elo rating of Magnus Carlsen, the current highest rated human player in FIDE’s database, and the world’s most powerful chess program, the open-source Stockfish 7. In practical terms, this means that Carlsen can expect to win fewer than one in a hundred games against Stockfish running on a contemporary 64-bit quad-core CPU.

In terms of game complexity, more orders of magnitude separate go from chess than chess from draughts, a game that has been fully solved. The aim is to capture territory and enemy stones by encircling them while defending your own turf, both of which are tallied up at the end of the game, with the winner being the one with the most points. It is played on a 19×19 board, a lot larger than the 8×8 arrangement of chess, and you can place your stones on any empty space not occupied by or completely encircled by the enemy, whereas the range of possible moves in chess is strongly constricted. Chess is tactics, go is logistics; chess is combined arms, go is encirclements; chess draws strongly upon algorithmic and combinatorial thinking, whereas go is more about pattern matching and “intuition.” It is therefore not surprising that until recently it was common wisdom that it would be many decades before computers would start beating the world’s top human players. The unimpressive performance of existing go programs, and the slowdown or end of Moore’s Law in the past few years, would have only given weight to that pessimistic assessment. (Or perhaps an optimistic one, if you’re with MIRI.) Lee Sedol himself thought the main question would be whether he would beat AlphaGo 5-0 or 4-1.

Which makes it all the more remarkable that Lee Sedol is not just behind but, having lost all three of his games so far, is getting positively rekt.

But apparently Lee’s confidence was rational rather than mere hubris. He had watched AlphaGo play against weaker players, in which it made some apparent mistakes. But as a DeepMind research scientist noted, this was actually a feature, not a bug:

As Graepel explained, AlphaGo does not attempt to maximize its points or its margin of victory. It tries to maximize its probability of winning. So, Graepel said, if AlphaGo must choose between a scenario where it will win by 20 points with 80 percent probability and another where it will win by 1 and a half points with 99 percent probability, it will choose the latter. Thus, late in Game One, the system made some moves that Redmond considered mistakes—“slow” in his terminology. These moves seemed to give up points, but from where Graepel was sitting, AlphaGo was merely trying to maximize its chances.

In other words, while the projected points on the board – territory held plus stones captured – might for a long time appear to be roughly equal, at the same time the probability of ultimate victory would inexorably shift against Lee Sedol. And capped as our human IQs are, not only Lee but all the rest of us might be simply incapable of discerning the deeper strategies in play: “And so we boldly go – into the whirling knives” (to borrow from Nick Bostrom’s book on the risks of computer superintelligence).

Those are in fact the exact terms in which AI scientist/existential risks researcher Eliezer Yudkowsky analyzed this game in a lengthy Facebook post:

At this point it seems likely that Sedol is actually far outclassed by a superhuman player. The suspicion is that since AlphaGo plays purely for *probability of long-term victory* rather than playing for points, the fight against Sedol generates boards that can falsely appear to a human to be balanced even as Sedol’s probability of victory diminishes. The 8p and 9p pros who analyzed games 1 and 2 and thought the flow of a seemingly Sedol-favoring game ‘eventually’ shifted to AlphaGo later, may simply have failed to read the board’s true state. The reality may be a slow, steady diminishment of Sedol’s win probability as the game goes on and Sedol makes subtly imperfect moves that *humans* think result in even-looking boards.

For all we know from what we’ve seen, AlphaGo could win even if Sedol were allowed a one-stone handicap. But AlphaGo’s strength isn’t visible to us – because human pros don’t understand the meaning of AlphaGo’s moves; and because AlphaGo doesn’t care how many points it wins by, it just wants to be utterly certain of winning by at least 0.5 points.

In the third game, which finished just a few hours ago – by the way, you can watch the remaining two games live at the DeepMind YouTube channel, though make sure to learn the rules beforehand or it will be very boring – Lee Sedol, by then far behind on points, made a desperate ploy to salvage the game (or more likely just to use the opportunity to test AlphaGo’s capabilities) by initiating a ko fight. A ko is a special situation in go in which a local altercation suddenly becomes the fulcrum around which the outcome of the entire game might be decided. Making the winning moves requires perfect, precise play, as opposed to AlphaGo’s key method of playing out billions of random games and choosing the move which results in the most captured territory after n moves.

But AlphaGo handled the ko situation with aplomb, and Lee had to resign.

The Korean Lee Sedol is the fourth highest rated go player on the planet. But even as of March 9, were it a person, AlphaGo would have already displaced him. The top player in the world is the Chinese Ke Jie, who is currently rated 100 Elo points higher than Lee. According to my calculations, this implies that Lee should win slightly more than a third of his matches against Ke Jie. His actual record is 2/8, or 25%. Not only is his current tally against AlphaGo 0/3, but he was beaten by a considerable number of points by an entity that is perfectly content to minimize its lead in order to maximize its winning probability.

Finally, a live predictions market on whether Lee Sedol would defeat AlphaGo in any of the three remaining games varied between 20%-25% (that is, before the third match), implying that the probability of him winning any one game against the DeepMind monster was less than 10%. (If anything, those probabilities would be even lower now that AlphaGo has demonstrated that ko isn’t its Achilles heel, but let us set that aside.)

According to my calculations, IF this predictions market is accurate, it would imply that AlphaGo has a ~400-450 Elo point superiority over Lee Sedol based on its performance up to and including the first two games against him.

It would also mean it is far ahead of Ke Jie, who is the highest rated human player ever and is currently virtually at his peak. Whereas Lee can only be expected to win 7%-9% of his games against AlphaGo, for Ke Jie this figure would be only modestly higher, at 12%-15%. But in principle I see no reason why AlphaGo’s capabilities couldn’t be even higher than that. It’s a long tail – and we can’t see all that far ahead!
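All of the figures above fall out of the standard logistic Elo model; here is a minimal sketch of the arithmetic (the function names are mine, not any rating body’s):

```python
import math

def win_prob(elo_advantage):
    """Probability the stronger player wins, under the logistic Elo model."""
    return 1.0 / (1.0 + 10 ** (-elo_advantage / 400))

def elo_advantage(p):
    """Elo advantage implied by a per-game win probability p."""
    return 400 * math.log10(p / (1 - p))

# A market giving Lee ~22% to win at least one of three games implies
# a per-game win probability of about 8%:
p_game = 1 - (1 - 0.22) ** (1 / 3)      # ~0.08

# Lee winning only 7-9% of games implies AlphaGo is ~400-450 points ahead:
print(round(elo_advantage(1 - 0.07)))   # 449
print(round(elo_advantage(1 - 0.09)))   # 402

# After a 4:1 series (Lee wins 20% of games), the implied gap is ~240:
print(round(elo_advantage(0.80)))       # 241

# A ~140-point edge over Ke Jie corresponds to winning ~70% of games:
print(round(win_prob(140), 2))          # 0.69
```

The same two functions also reproduce the update figures further down: a 25% win rate implies a 191-point gap, and a 20% win rate a ~240-point one.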

But really the most astounding element of all this is that what took chess computing a decade to accomplish increasingly appears to have occurred in the space of a few days with AlphaGo – despite the slowdown in Moore’s Law in recent years, and despite the problems of go being far more challenging than those of chess for traditional AI approaches.

For all intents and purposes, AI has entered the superhuman realm in a problem space where merely human intelligence had hitherto ruled supreme, and even though we are as far away as ever from discovering the “Hand of God” – the metaphorical perfect game, which would take longer than the lifetime of the universe to compute even if all of the universe were to become computronium – we might well be starting the construction of a Sliver of Him.

Update -

Lee won the fourth game!

A win rate of 25% means that AlphaGo’s likely Elo superiority over Lee’s current 3519 points has just plummeted from 400-450 (based on the predictions market) to 191, i.e. a rating of 3710. Still higher than top player Ke Jie at 3621.

If Lee loses the next game, that Elo difference goes up to 241; if he wins, it falls to around 70. Regardless, we can now say with considerable confidence that AlphaGo is at peak human level, but decidedly not at a superhuman level.

Update 2 -

Final remarks:

Was writing an article instead of watching the final Lee-AlphaGo game, but the final score is 4:1. The reverse of what Lee had originally predicted! ;)

Anyhow, the 4:1 score (without looking into the details) implies AlphaGo has a *probabilistic* ~240-point higher Elo rating than Lee Sedol, i.e. ~3760.

That means it’s likely ~140 points higher than first-ranked human Ke Jie, and should beat him about 70% of the time.

I had a look at go bots’ historical performance the other day. It looks like they move up by 1 S.D. every two years or so. Treating AlphaGo as the new base, humans should be *completely* outclassed by computers at go by around 2020.

• Category: Science • Tags: Game, Supercomputers 

Prolific IQ researcher Richard Lynn, together with two Russian collaborators, has recently published a paper arguing that multiple aspects of socio-economic development – infant mortality, fertility, stature, and literacy as a proxy for intelligence – were significantly intercorrelated in late Tsarist Russia.


Literacy rate of the European part of the Russian Empire in 1897.

Here is the link to the paper – Regional differences in intelligence, infant mortality, stature and fertility in European Russia in the late nineteenth century

And here is a summary by James Thompson – 50 Russian oblasts.

To the right: Here’s your map, JayMan. You’re welcome.

The main potential sticking point:

There are no data for regional intelligence in the nineteenth century and we have therefore adopted rates of literacy as a proxy for intelligence. This is justified on the grounds that a high correlation between literacy rates and intelligence have been reported in a number of studies. For example, a correlation of .861 between literacy rates for Italian regions in 1880 and early twenty-first century IQs has been reported by Lynn (2010); a correlation of .83 between literacy rates for Spanish regions in the early twenty-first century has been reported by Lynn (2010); (Lynn, 2012); and a correlation of 0.56 between literacy rates and IQs for the states and union territories of India in 2011 has been reported by Lynn and Yadav (2015). There is additional support for using literacy in the nineteenth century as a proxy for intelligence in the results of a study by Grigoriev, Lapteva and Ushakov (Григорьев, Лаптева, Ушаков, 2015) showing a correlation of .58 between literacy rates of the peasant populations of the districts (uezds) of the Moscow province in 1883 and the results of the Unified State Exam and State Certification on Russian Language in the districts of the contemporary Moscow oblast.

The methodology at first struck me as being rather problematic.

I’ve read a bit about Russian state literacy programs in the 19th century (National Literacy Campaigns and Movements) and one of their main features is that they tended to spread out from the central European Russian provinces due to cost effectiveness reasons, hence the low literacy rates of e.g. Siberia in Lynn’s data set. However, there is no particular evidence that Siberian Russians are any duller than average Russians. To the contrary, some 3% of Siberian schoolchildren become “Olympians” – high performers who qualify for highly subsidized higher education. This proportion is lower than the 15% of the central region (which hosts Moscow, Russia’s main cognitive cluster with a 107 average IQ), and the 14% of the north-west region (which hosts Russia’s second cognitive cluster with a 103 average IQ Saint-Petersburg, plus the Russians there are probably slightly brighter in general on account of Finno-Ugric admixture), but is considerably higher than in any other Russian Federal District: The Urals and Volga (both about 2%), and the Far Eastern, Southern, and Caucasus (all considerably below 1%).

In other words, would such a historical literacy – modern intelligence correlation apply to Russia as it does to Italy, Spain, and to a lesser extent, India?


Average 2009 PISA results by Russian region.

Fortunately, we don’t have to speculate, since we do actually have PISA data for many Russian provinces, which I revealed back in 2012.

This allows us to test if Lynn’s assumptions apply.

There are difficulties, to be sure. Not all Russian provinces were tested in PISA, and there is, needless to say, no data for any of the Ukrainian and Polish oblasts, or for Belarus. As such, only 20 Russian provinces could be tested in this manner (26 units in total if you also include the now-independent countries).

In some cases, names have changed, typically to honor some faceless Soviet bureaucrat; in more problematic cases, borders have changed significantly (e.g. the five provinces of Estonia, Livonia, Courland, Kovno, and Vilna have become the three countries of Estonia, Latvia, and Lithuania – I have tried to average the literacy figures between them in a common-sense but back-of-the-envelope way). The Moscow Governorate has been split into the City of Moscow (with its 107 average IQ) and Moscow oblast (with a modest 96 average IQ). Which of those should be attached to Moscow’s 1897 literacy rate of 40%? (As it happens, I went with just the City of Moscow instead of figuring out how to weigh the populations and adjust and so forth. I’m not trying to write a formal paper, after all.)


There is an exponential correlation of R=0.75 between the average PISA-derived IQs of Russian regions and now-independent countries, and their literacy rates according to the 1897 Census. This bears out Lynn’s assumptions.
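For readers who want to check this sort of figure themselves, here is a sketch of the computation, assuming “exponential correlation” means a Pearson correlation between IQ and the logarithm of the literacy rate; the numbers below are made up purely for illustration and are not the actual data:

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson product-moment correlation."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical 1897 literacy rates and PISA-derived IQs (made-up numbers):
literacy = [0.18, 0.21, 0.25, 0.30, 0.40]
iq = [95.0, 96.0, 98.0, 99.0, 107.0]

# An exponential (log-linear) fit correlates IQ with log(literacy):
r = pearson_r([math.log(x) for x in literacy], iq)
print(round(r, 2))   # 0.95 for this toy data
```

Swapping in the real region-level figures in place of the toy lists would reproduce the R=0.75 and R=0.80 values discussed here.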

The downward outliers – relatively more intelligent than literate – are Moscow, Tatarstan, Tula, Samara, and Tambov. Moscow is easily explainable – the city itself in Tsarist times would have been more literate than the Moscow Governorate as a whole, while its average IQ was artificially boosted in Soviet times, since it became not just the empire’s political capital but also its cognitive (artistic, scientific) one. Getting a Moscow propiska required considerable intelligence.

The three very major upward outliers – relatively more literate than intelligent – are the Finno-Ugric Baltic states: Finland, Estonia, and Latvia. This can’t have been a non-Orthodox/Muslim thing: both Poland (the On-The-Vistula Governorate) and Lithuania (Kovno and Vilna) lie neatly on the correlation curve. Nor was it something Finno-Ugric: Karelia (then Olonets) is not an exception either. It must have been something specific to them, and the most obvious explanation is Protestantism. There is a lot of literature on the independent literacy-raising effects of Protestantism, and I see no reason why Estonia, Latvia, and Finland should have been exceptions to that.

Another outlier, though this one is at the bottom of the IQ scale, is Moldova. To be fair, I think Moldova’s PISA-derived IQ is artificially lowered by a third to half of an S.D. due to the massive brain drain it has experienced since the collapse of the Soviet Union (something like half the working-age population are Gastarbeiters in the EU and Russia). We see similar drops in other countries so afflicted, such as (possibly) Puerto Rico, and (almost certainly) Ireland during most of the 20th century, when it repeatedly reported IQs in the ~90 range (ironically, one of the reasons Richard Lynn himself abandoned it to move to Northern Ireland, thus getting stuck in the most depressed region of the UK and missing out on the rise of the Celtic Tiger a few years later).


The correlation improves further to R=0.80 when we consider only those Tsarist-era provinces that are still part of the Russian Federation. This is accomplished (more than) entirely by removing the Protestant Baltic nations (Finland, Estonia, and Latvia) and Moldova (whose current day average IQ is depressed by massive brain drain, as per above).

As usual Lynn does his north/south IQ gradient analysis, finding it to be a real thing but diminishing to nothing once the Baltic states of Estonia, Livonia, and Courland are accounted for.

Quoting from Thompson’s summary:

The Russian provinces differed significantly by geographical location. The positive correlations with latitude (r= .33, p<.05) and the negative correlation with longitude (r=−.43, p<.01) show that the rates of literacy were higher in the north and west than in the south and east. These trends were partly determined by the rates of literacy being highest in the north-western provinces of St. Petersburg and the three Baltic states of Estland, Livland and Kourland (corresponding approximately but not precisely to contemporary Estonia and Latvia; Livland consisted of southern part of contemporary Estonia and eastern part of contemporary Latvia). Removing these four regions makes both correlations non-significant (.21 and −.23).


The Pale of Settlement in 1897.

One additional issue worth bearing in mind: The influence of the Jews. Namely, their concentration in the Pale of Settlement, which corresponds to modern day Poland, Belarus, and right-bank Ukraine (west of the Dnieper). There were more than 5.2 million Jews there, and their literacy rates were very high (according to the 1926 Soviet Census, Jews over the age of 50 – i.e., those who had been educated under the Empire – had a literacy rate of 63% versus 28% for ethnic Russians).

This must have “artificially” raised the literacy rates in this area – as pertains to those regions’ 21st century average IQs, anyway, since the vast majority of those Jews are no longer there due to the trifecta of the Holocaust, Jackson-Vanik, and Aliyah. The effect would probably be to reduce the “indigenous” literacy rates in Lithuania and Poland closer to those of European Russia, while pushing the already low literacy rates of strongly ethnic Malorossiyan and Belorussian provinces considerably lower still. Not a single province of modern Ukraine outside historical Novorossiya (with its strong Great Russian admixture) had a literacy rate above 20% in 1897, despite highly literate Jews helping them out with the statistics.

Unfortunately, there is a severe paucity of usable psychometric data from Ukraine – for instance, it is one of the very few European countries that doesn't participate in PISA – so its average IQ has to be estimated through more indirect means. It performs the equivalent of 9 IQ points worse than Russia on the TIMSS standardized test. Ukrainians spend less than half as much time as Russians reading, and those from the western parts at least spend a lot more time participating in torchlit processions and chanting "Putin Khuylo." Some of those activities are considerably more g loaded than others. The low literacy rates in late Tsarist Malorossiya, coupled with the close correlation between those literacy rates and modern day average IQ across both Russian provinces and today's independent post-Soviet states, constitute further evidence of a modest average IQ in Ukraine. Higher than Moldova's, to be sure, but probably closer to the level of the Balkans than to Poland's.


Sources: Grigoriev, Lapteva, and Lynn 2015; Karlin 2012 (derived from PISA 2009).

Region / IQ / Literacy in 1897
Astrakhan 94.8 15.5%
Bashkortostan 93.4 16.7%
ESTONIA 102.1 77.9%
FINLAND 106.6 75.6%
Kaluga 91.7 19.4%
Karelia 98.1 25.3%
Kursk 94.6 16.3%
LATVIA 98.0 74.3%
LITHUANIA 99.0 35.4%
MOLDOVA 84.9 15.6%
Moscow 106.6 40.2%
N. Novgorod 93.1 22.0%
Orenburg 92.7 20.4%
Perm 93.3 19.2%
POLAND 100.2 30.5%
RUSSIA 96.0 21.1%
Ryazan 94.7 20.3%
Saint-Petersburg 102.6 51.5%
Samara 99.2 22.1%
Saratov 96.0 23.8%
Tambov 95.9 16.6%
Tatarstan 98.1 17.9%
Tula 98.6 20.7%
Ulyanovsk 91.5 15.6%
Vladimir 98.9 27.0%
Vologda 95.3 19.1%
Voronezh 92.7 16.3%

Literacy and Social Development in 1890s Russia (from Grigoriev et al. 2015)
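As a sanity check, the log-linear ("exponential") relationship can be recomputed from the table above in a few lines of Python. This is a sketch rather than a replication of the Grigoriev et al. figure – the published R=0.75 is based on their exact regional sample, which will not coincide perfectly with the rows reproduced here:

```python
import math

# (region, PISA-derived IQ, 1897 literacy %) from the table above
data = [
    ("Astrakhan", 94.8, 15.5), ("Bashkortostan", 93.4, 16.7),
    ("Estonia", 102.1, 77.9), ("Finland", 106.6, 75.6),
    ("Kaluga", 91.7, 19.4), ("Karelia", 98.1, 25.3),
    ("Kursk", 94.6, 16.3), ("Latvia", 98.0, 74.3),
    ("Lithuania", 99.0, 35.4), ("Moldova", 84.9, 15.6),
    ("Moscow", 106.6, 40.2), ("N. Novgorod", 93.1, 22.0),
    ("Orenburg", 92.7, 20.4), ("Perm", 93.3, 19.2),
    ("Poland", 100.2, 30.5), ("Russia", 96.0, 21.1),
    ("Ryazan", 94.7, 20.3), ("Saint-Petersburg", 102.6, 51.5),
    ("Samara", 99.2, 22.1), ("Saratov", 96.0, 23.8),
    ("Tambov", 95.9, 16.6), ("Tatarstan", 98.1, 17.9),
    ("Tula", 98.6, 20.7), ("Ulyanovsk", 91.5, 15.6),
    ("Vladimir", 98.9, 27.0), ("Vologda", 95.3, 19.1),
    ("Voronezh", 92.7, 16.3),
]

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

iqs = [iq for _, iq, _ in data]
# An "exponential" fit of literacy on IQ is linear in log(literacy)
log_lit = [math.log(lit) for _, _, lit in data]
r = pearson(iqs, log_lit)
print(f"r(IQ, log literacy) = {r:.2f}")
```

Dropping the Finland, Estonia, Latvia, and Moldova rows from `data` reproduces the "Russian Federation only" exercise described above.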

Incidentally, I am not surprised to see Yaroslavl as the top non-Baltic, non-capital Russian region by literacy rate in 1897. It struck me as by far the cleanest and most civilized provincial Russian town on the Golden Ring when I visited it in 2002 (a time when Russia was still shaking off the hangover of the Soviet collapse). Curiously enough, it also hosted one of the most vigorous insurrections against the Bolshevik regime in central Russia. Although it was not one of the regions covered by PISA, I would not be surprised if Yaroslavl oblast were to score 100-102 should the test be carried out there (as the correlation curve would imply).



Charles Murray has made the entire database compiled for his book Human Accomplishment freely available at the Open Science Framework.

Here is the link:

Incidentally, my concept of Apollo's Ascent was to a significant extent a reaction to Human Accomplishment. (A brief reminder of the AA thesis: The rate and global distribution of technological progress depends on the absolute numbers of literate "smart fraction" people available to different societies at different points in history). Although Human Accomplishment was a thoroughly brilliant work, I had some quibbles with its core argument – namely, that Christianity was at the root of Europe's post-1450 intellectual preeminence.

The Greeks laid the foundation, but it was the transmutation of that foundation by Christianity that gave modern Europe its impetus and differentiated European accomplishment from that of all other cultures around the world.

This was a judgement that Murray appears to have made relatively late in the writing process, and I suspect that as a social scientist he might not have been 100% satisfied – intellectually, at any rate – with ascribing possibly the biggest puzzle in world history to unquantifiable and unfalsifiable “transcendental values.”

After all, purely cultural explanations don't tend to have a great track record in explaining economic success or failure (which are substantially related to intellectual achievement: You need smart fractions both to invent things and to run more productive economies). See how Confucianism was first used to explain the stagnation of East Asian societies before 1950, before historians and sociologists did a 180 and started citing that same Confucianism to explain the success of the East Asian tiger economies when they burst into prominence by the 1980s. I don't think it's a particularly wild or radical idea that concrete, quantifiable factors such as literacy rates and smart fractions make for a more credible explanation. But let the eventual critics of Apollo's Ascent be the judge of that.

Speaking of Apollo's Ascent, writing the book will be much easier with access to Charles Murray's database. It will also stand on much firmer theoretical ground, since instead of just highlighting general patterns – it's not as if I have the time or resources to construct a comprehensive database of human accomplishment by myself – I will also be able to run numerical experiments, e.g. on the correlation between calculated historical "aggregate mindpower" levels in different countries (i.e., their literate smart fraction people) and their production of eminent figures.

Charles Murray was actually kind enough to email me the HA database a couple of months ago, so this public release is mostly redundant for my own project. But it is a very good thing nonetheless that many more people will now be able to run their own historical and social “experiments” using his data, including those who might earlier have shied at openly requesting it.

It is also part of a general process now underway in which there is growing demand for scientists to make their data publicly available, as opposed to merely on request. To a significant extent, I think the reason more scientists don't yet do this is that the technical means for doing so – especially for older scientists, who tend to be less computer savvy – are still few and far between. The Open Science Framework, for instance, only began operations in 2011. So persons such as Emil Kirkegaard who are heavily involved in opening up the scientific process – incidentally, it was partly thanks to his timely prodding that the Human Accomplishment data was released – should also be strongly commended.

To go a bit meta, this process – both in its technological and social aspects – is itself an information technology that acts as a multiplier on aggregate mindpower, in the style of Renaissance reading glasses and the Internet. The Flynn Effect has stopped in the developed world, literacy rates are pretty much maxed out, and Apollo’s load almost always gets heavier, not lighter. Just like in the Civilization video games, you need more and more “science points” to generate discoveries as you go up the technology tree. As such, we have to start eking everything we can out of existing technology to keep up the production of our Great Scientists. Shifting to open science paradigms is by far not the worst way of going about this.


The latest data from Top 500, a website that tracks the world’s most powerful supercomputers, has pretty much confirmed this with the release of their November 2015 list.

The world’s most powerful supercomputer, the Tianhe-2 – a Chinese supercomputer, though made on American technology – has now maintained its place for 2.5 years in a row. The US supercomputer Cray XK7 built three years ago maintains its second place today. Relative to June 2013, there has not even been a doubling in aggregate performance, whereas according to the historical trendlines, doublings have typically taken just a bit over a single year to occur. This is unprecedented, since Moore’s Law applies (applied?) to supercomputers just as much as it did to standard electronics.


Apart from serving as a convenient bellwether for general trends, supercomputers are worth following by futurists for two reasons.

Technological Projections

The first is their obvious application to the development of radical technological breakthroughs, from the extraordinarily complex protein folding simulations vital to medical breakthroughs, to the granddaddy of them all, computer superintelligence. The general "techno-optimist" consensus has long been that Moore's Law will continue to hold, or even strengthen further – the Kurzweilian view being that the exponent itself was also (slowly) increasing. This would bring us an exaflop machine by 2018 and the capability to do full human brain neural simulations soon afterwards, by the early 2020s.


But on post-2012 trends, exponentially extrapolated, we will actually be lucky just to hit one exaflop in terms of the aggregate of the world's top 500 supercomputers by 2018. The predictions for the first single exaflop supercomputer have now moved out to 2023. Though it may not sound like much in everyday terms, a "delay" of 5 years is a huge deal so far as projections built on big exponents are concerned. For instance, assuming the trend isn't reversed, the first supercomputer theoretically capable of full neural simulations moves out closer to 2030.
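The sensitivity of such projections to the doubling time is easy to demonstrate. The figures below are illustrative assumptions, not Top 500 data: roughly 420 Pflops of aggregate list performance in late 2015, a historical doubling time of about 1.1 years, and a post-2012 doubling time of about 2.5 years:

```python
import math

def years_to_target(current_pflops, target_pflops, doubling_years):
    """Years needed to grow from current to target at a fixed doubling time."""
    return math.log2(target_pflops / current_pflops) * doubling_years

# Assumed figures for illustration: ~420 Pflops aggregate Top 500
# performance in late 2015; 1 exaflop = 1,000 Pflops.
fast = years_to_target(420, 1000, 1.1)  # historical ~1.1-year doublings
slow = years_to_target(420, 1000, 2.5)  # assumed post-2012 slowdown
print(f"aggregate exaflop on historical trend: ~{2015 + fast:.0f}")
print(f"aggregate exaflop on current trend:    ~{2015 + slow:.0f}")
```

The same near-halving of the growth exponent, compounded over a decade or two, is what pushes single-machine exaflop estimates from 2018 out to 2023 and neural-simulation-class machines out towards 2030.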

In terms of developing superintelligence, raw computing power has always been viewed as the weakest limit, and that remains a very reasonable view. However, the fact that even in this sphere there appear to be substantial unforeseen obstacles means a lot of trouble for the traditional placement of superintelligence and even the technological singularity at around 2045 or 2050 (not to even mention the 2020s as per Vernor Vinge).

National Power

Supercomputers can also be viewed as an instrument of national power. Indeed, some of the most powerful supercomputers are used for simulating nuclear tests (in lieu of real ones). Others are dedicated to modeling the global climate: doing it better than your competitors can enable you to make better investments, even predict uprisings and civil wars. All very useful from a geopolitical perspective. And of course they serve a range of purely scientific and technological applications.


As in so many spheres in the international arena, the overwhelming story here is of the Rise of China.

From having 0-1 supercomputers in the Top 500 during the 1990s and a couple dozen in the 2000s, China surged past a waning Japan in the early 2010s and now accounts for 109 of the world's top supercomputers, second only to the USA with its 199. This just confirms (if any such confirmation were still needed) that the story of China as nothing more than a low wage workshop is laughably wrong. An economy like that would not need 20%+ of the world's top supercomputers.

Country / Count / Share (%) / Rmax (Gflops) / Rpeak (Gflops) / Cores
United States 199 39.8 172,582,178 246,058,722 10,733,270
China 109 21.8 88,711,111 189,895,013 9,046,772
Japan 37 7.4 38,438,914 49,400,668 3,487,404
Germany 32 6.4 29,663,941 37,844,201 1,476,524
United Kingdom 18 3.6 11,601,324 14,230,096 724,184
France 18 3.6 12,252,180 14,699,173 766,540
India 11 2.2 4,933,698 6,662,387 236,692
Korea, South 10 2 7,186,952 9,689,205 283,568
Russia 7 1.4 4,736,512 6,951,848 208,844
Brazil 6 1.2 2,012,268 2,722,150 119,280

Otherwise the rankings are approximately as one might expect, with the Big 4 middle sized developed Powers (Japan, Germany, UK, France) performing modestly well relative to the size of their populations, and the rest – including the non-China BRICS – being almost minnows in comparison.


Work shouldn’t start until 10am and school even later, says sleep expert

Paul Kelley of Oxford University’s Sleep and Circadian Neuroscience Institute says society is in the midst of a sleep-deprivation crisis, as the working hours we force ourselves to adapt to are often unnatural and unsuitable for our internal body clocks. …

He advocates 8:30am starts for children aged eight to 10, 10am starts for 16-year-olds and 11am lessons for 18-year-olds.

“At the age of 10 you get up and go to school and it fits in with our 9-to-5 lifestyle,” Kelley said. “When you are about 55 you also settle into the same pattern. But in between it changes a huge amount and, depending on your age, you really need to be starting around 3 hours later, which is entirely natural.”

If Kelley’s right, what this effectively means is that our whole lives from the onset of our teen years through to the end of middle age are like being woken up too early. Every. Single. Day.

“Staff should start at 10am… Staff are usually sleep-deprived,” Kelley told the British Science Festival. “Everybody is suffering and they don’t have to. We cannot change our 24-hour rhythms.”

Can’t agree more. The 9-5 workday is a structural microaggression against people who identify as night owls like myself.

*warning: crappy evopsych theorizing follows*

In ancestral times, you didn’t want everyone dozing off at the exact same time. It would have made sense for someone to always keep an eye out for predators, enemy bands, etc. That would have been much easier if society had a mix of night owls and early risers, just as it needed both altruists and psychopaths in certain proportions for optimal group survivability.

I just don’t feel all that great waking up very early in the morning, even if I had a perfectly adequate night’s sleep beforehand. I wonder if sometime in the next decade science will show that the near universal advice to go to bed early and wake up early, regardless of personal psychology, deserves to go the way of the medical community’s old imprecations against salt, eggs, and butter.

There appear to be some people, typically very energetic ones, who don’t seem to need very much sleep at all. Elon Musk seems to be one of them, I suspect Razib Khan is as well.

As Peter Frost reported a few months back, African-Americans need on average one hour less sleep than European Americans. Assuming sleep is essentially just a way of keeping energy expenditures down when they’re not needed (humans can’t hunt or gather at night), it stands to reason that northern peoples would sleep more on average. Of course this has not been germane since the industrial revolution, and I for one would be happy to see the need for sleep (bio)engineered away altogether.


The cultural and scientific achievements of Ancient Greece are so manifold that it is barely worth recounting them. Socrates, Plato, and Aristotle laid the foundations of Western philosophy. Pythagoras, Euclid, and Archimedes launched mathematics as a discipline grounded on logic and proof, a break from the approximative techniques that had held sway in other civilizations (and would largely continue to do so). To this day many medical schools have their students swear an oath under the name of Hippocrates. Homer, Aeschylus, Euripides – the originators of, and still giants in, the Western literary canon. Herodotus and Thucydides, the founders of a historiography that was something more than just a court chronicle.

Ancient Greek IQ = 125 (Galton)

Bearing in mind the very small population from which these intellectual giants were drawn – at its height, Ancient Athens had no more than 50,000 male citizens – it is little wonder that many thinkers and historians have ascribed a very high average IQ to the ancient Greeks, including most recently the evolutionary psychologist Gregory Cochran. But the argument was perhaps best stated by the Victorian polymath and inventor of psychometrics Francis Galton, in the (not very politically correctly titled) “Comparative Worth of Different Races” chapter of his book Hereditary Genius:

The ablest race of whom history bears record is unquestionably the ancient Greek, partly because their master-pieces in the principal departments of intellectual activity are still unsurpassed, and in many respects unequalled, and partly because the population that gave birth to the creators of those master-pieces was very small. Of the various Greek sub-races, that of Attica was the ablest, and she was no doubt largely indebted to the following cause, for her superiority. Athens opened her arms to immigrants, but not indiscriminately, for her social life was such that none but very able men could take any pleasure in it; on the other hand, she offered attractions such as men of the highest ability and culture could find in no other city. Thus, by a system of partly unconscious selection, she built up a magnificent breed of human animals, which, in the space of one century—viz. between 530 and 430 B.C.—produced the following illustrious persons, fourteen in number:—

Statesmen and Commanders.—Themistocles (mother an alien), Miltiades, Aristeides, Cimon (son of Miltiades), Pericles (son of Xanthippus, the victor at Mycale).
Literary and Scientific Men.—Thucydides, Socrates, Xenophon, Plato.
Poets.— Aeschylus, Sophocles, Euripides, Aristophanes.

We are able to make a closely-approximate estimate of the population that produced these men, because the number of the inhabitants of Attica has been a matter of frequent inquiry, and critics appear at length to be quite agreed in the general results. It seems that the little district of Attica contained, during its most flourishing period (Smith’s Class. Geog. Dict.), less than 90,000 native free-born persons, 40,000 resident aliens, and a labouring and artisan population of 400,000 slaves. The first item is the only one that concerns us here, namely, the 90,000 free-born persons. Again, the common estimate that population renews itself three times in a century is very close to the truth, and may be accepted in the present case. Consequently, we have to deal with a total population of 270,000 free-born persons, or 135,000 males, born in the century I have named. Of these, about one-half, or 67,500, would survive the age of 26, and one-third, or 45,000, would survive that of 50. As 14 Athenians became illustrious, the selection is only as 1 to 4,822 in respect to the former limitation, and as 1 to 3,214 in respect to the latter. Referring to the table in page 34, it will be seen that this degree of selection corresponds very fairly to the classes F (1 in 4,300) and above, of the Athenian race. Again, as G is one-sixteenth or one-seventeenth as numerous as F, it would be reasonable to expect to find one of class G among the fourteen; we might, however, by accident, meet with two, three, or even four of that class—say Pericles, Socrates, Plato, and Phidias.

Now let us attempt to compare the Athenian standard of ability with that of our own race and time. We have no men to put by the side of Socrates and Phidias, because the millions of all Europe, breeding as they have done for the subsequent 2,000 years, have never produced their equals. They are, therefore, two or three grades above our G—they might rank as I or J. But, supposing we do not count them at all, saying that some freak of nature acting at that time, may have produced them, what must we say about the rest? Pericles and Plato would rank, I suppose, the one among the greatest of philosophical statesmen, and the other as at least the equal of Lord Bacon. They would, therefore, stand somewhere among our unclassed X, one or two grades above G—let us call them between H and I. All the remainder—the F of the Athenian race— would rank above our G, and equal to or close upon our H. It follows from all this, that the average ability of the Athenian race is, on the lowest possible estimate, very nearly two grades higher than our own—that is, about as much as our race is above that of the African negro. This estimate, which may seem prodigious to some, is confirmed by the quick intelligence and high culture of the Athenian commonalty, before whom literary works were recited, and works of art exhibited, of a far more severe character than could possibly be appreciated by the average of our race, the calibre of whose intellect is easily gauged by a glance at the contents of a railway book-stall.

Francis Galton was writing before the invention of the standard deviation, but in his methodology a “grade” was equivalent to 10.44 IQ points (under an S.D. of 15), so in practice the Athenians had an IQ of perhaps 120 relative to a Victorian British mean of 100 (and presumably, therefore, about 110 relative to the modern Greenwich mean, which is considerably higher than a century ago due to the Flynn Effect).
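Galton's arithmetic is easy to recast in modern terms. A minimal sketch, assuming a normal IQ distribution with S.D. 15 and taking his figures at face value (14 illustrious men per ~67,500 males surviving to age 26, i.e. roughly 1 in 4,822, his class F):

```python
import math

GRADE_IQ = 10.44  # IQ points per Galton "grade", under an S.D. of 15

def normal_sf(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def z_for_rarity(one_in_n):
    """Invert the survival function by bisection: find z with P(Z > z) = 1/n."""
    lo, hi = 0.0, 10.0
    for _ in range(200):
        mid = (lo + hi) / 2
        if normal_sf(mid) > 1 / one_in_n:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Galton's Athenian eminence rate: ~1 in 4,822 adult males
z = z_for_rarity(4822)
print(f"eminence threshold: +{z:.2f} S.D., i.e. IQ ~{100 + 15 * z:.0f} on the local scale")

# His bottom line: Athens "very nearly two grades" above Victorian Britain
print(f"two grades = {2 * GRADE_IQ:.1f} IQ points, i.e. an Athenian mean of ~{100 + 2 * GRADE_IQ:.0f}")
```

The two-grade shift is what yields the "perhaps 120" figure relative to the Victorian British mean.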

There are however a few problems with this.

Ancient Greek IQ = 90 (Apollo’s Ascent)

First off, there is no particularly obvious explanation for why this part of the Mediterranean world evolved such a high average IQ – a standard deviation higher than everyone else – in the first place. One would then likewise have to explain why they then lost it so thoroughly that modern Greeks now consistently place lower in European IQ assessments than all but a few Balkan backwaters.

However, it turns out that the Apollo's Ascent method of computing aggregate mindpower – as a function of population size, literacy rate, and average IQ, with an adjustment for the intellectual discovery threshold – can explain the record of Greek achievement just as well, without positing the superhumanly high average IQ levels that are so dubious from an evolutionary perspective.

Let us treat each of these factors in turn:

Ancient Greek Demography

It is often forgotten that when we are speaking of ancient Greek accomplishment it is more than just a story of Athens, a city that drew the cognitive elites of the entire oikoumene to itself (much as major metropolises like New York, London, Paris, etc. do so today).

To be sure, Athens might have had 50,000 male citizens, and a total population of 250,000-300,000 [CORRECTION: This actually refers to the entire Athenian city-state; the population of the city proper was probably about half that]. But the population of Greece proper was probably at least five times larger, because the total urbanization rate never went much above 20% in any preindustrial country that we know of. Moreover, Greeks were scattered all across the Mediterranean world, in Ionia and Sicily and along the shorelines of Egypt, the Italian “boot,” France, Spain, and the Pontic steppe.


Greece: More than just Greece. Source.

According to recent calculations, the total population of Greeks in the 4th century BC was at least 7.5 million, and probably more like 8-10 million (Mogens Herman Hansen in An Update on the Shotgun Method). For perspective, at the time, this represented just under 5% of the world’s population (i.e. remarkably similar to the US today). These figures might still be modest, but they are essentially comparable in magnitude to those of even the biggest preindustrial civilizations (source: Several, but mainly Angus Maddison):

  • Egypt: A consistent 5 million in both Roman and Islamic times
  • Persia: Likewise, around 5 million
  • Roman Empire: 50-60 million (of which 20 million were in the Greek East)
  • Qin China: 22 million in ~210 BC (only ~2x the Greek world!)
  • Han China around 1AD: 60 million
  • Byzantine Empire: 10-12 million when it was at its geographical peak
  • Abbasid Caliphate: 30 million
  • Medieval China: 100 million
  • Medieval France: 20 million (most populated W. European country; peak)
  • Renaissance Italy: 10.5 million in 1500

To be sure, many ancient Greeks were slaves and women, who were more or less excluded from participating in intellectual endeavours. But in that respect they were no different from any other preindustrial civilization that we know of.

Ancient Greek Literacy

In Ancient Literacy, William V. Harris estimates that the literacy rate of late Classical Greece was 5-10%, rising to 10% in the Hellenistic period, and 10-15% in Roman Italy (but considerably lower in peripheries like Gaul). This might seem very low, and it is; but in that period it was low everywhere, and the literacy rates attained in the classical Mediterranean world were in fact far higher than anything previously seen anywhere else. Classical Greece was pretty much the first society in the world (only the much smaller Phoenicia could have been even a remote contender) to attain what Harris calls “craftsman literacy,” i.e. around 10%. All previous societies had been limited to the 1-2% rates that he calls “priestly literacy.”

Although he doesn’t spell it out explicitly, the key factor that must have enabled this, in my view, was the development of the alphabet, which occurred first amongst the Phoenicians (who were also respectably creative for their numbers).

It is speculated that the alphabet might have arisen from the intense trading culture of the Phoenicians, which made simplification of the writing system highly adaptive. Due to Greek and Roman influence, Mesopotamian cuneiform and Egyptian hieroglyphs were displaced. In contrast, by the time trade had reached similarly intensive levels in China – perhaps after the construction of the Grand Canal in the 7th century AD – the character system was already too embedded in the bureaucracy and was kept on due to a QWERTY effect. However, there might also be an HBD angle. Peter Frost has suggested that the spread of the ASPM gene from Middle Eastern origins – largely lacking in East Asians, and associated with continuous text processing – could have tipped the scales in favor of the adoption of alphabetic systems in the Near East and the Mediterranean in a way that could not have happened in East Asia. (Note that Korea’s Sejong the Great introduced an alphabetic system in the 15th century for the express purpose of increasing literacy amongst the commonfolk, but it took until the 20th century for it to truly catch on).

Whatever the case, it is a simple fact that attaining literacy is vastly easier with alphabet-based systems than with character-based ones. Learn the 50 or fewer symbols of your typical alphabet and their vocalizations and you are pretty much set; everything else is style and detail. In contrast, you need to know 1,000-1,500 characters just to be considered literate in Chinese (and you would still struggle a great deal even with newspaper texts). An average Chinese college graduate is expected to recognize around 5,000 characters, and even they frequently have trouble with some remarkably “straightforward” characters. Here is an anecdote that illustrates this really well, from David Moser’s classic essay Why Chinese is So Damn Hard:

I happened to have a cold that day, and was trying to write a brief note to a friend canceling an appointment that day. I found that I couldn’t remember how to write the character 嚔, as in da penti 打喷嚔 “to sneeze”. I asked my three friends how to write the character, and to my surprise, all three of them simply shrugged in sheepish embarrassment. Not one of them could correctly produce the character. Now, Peking University is usually considered the “Harvard of China”. Can you imagine three Ph.D. students in English at Harvard forgetting how to write the English word “sneeze”?? Yet this state of affairs is by no means uncommon in China.

By medieval times, China had by far the world’s most sophisticated infrastructure for increasing human capital: movable type (invented 400 years in advance of Gutenberg), cheap mass produced paper (in contrast, the Mediterranean world had to rely on expensive Egyptian papyrus, which put a further limit on mass literacy), the system of meritocratic exams for entry into the Confucian bureaucracy, and a vast network of writing tutors, including free ones (the founder of the Ming dynasty, Zhu Yuanzhang, was an impoverished orphan who was taught literacy in a Buddhist monastery). Even so, held back by its writing system, medieval China’s literacy rate was no higher than 10% at best (that was the rate at the close of the Qing dynasty, and even that came after the beginning of education reforms).

There are some scholars like Evelyn Rawski who argue China’s historical literacy rates were far higher. I addressed them in my Introduction to Apollo’s Ascent article (Ctrl-F for “fish literacy”).

Of course at the time of the Ancient Greeks none of this existed yet in China, so the literacy rate then was probably around 1-2%, as was typical of societies with “priestly literacy.” The same goes for the great civilizations of the Middle East before the classical era.

This is common sense, but the point needs to be made regardless: Without literacy, no matter how intelligent you are, you can almost never meaningfully contribute to scientific or cultural progress.

With a literacy rate 5 or even 10 times as high as that of other contemporary civilizations (barring the Romans), their modest demographic preponderance over Greece is put into necessary perspective. To be sure, a literacy rate of 10% might not functionally translate into 5 times as much aggregate mindpower (all else equal) as a 2% literacy rate, because presumably, it is the brightest people who tend to become literate in the first place. On the other hand, however, this was a world of hereditary caste and class, of Plato’s Golds, Silvers, and Bronzes. The advanced cognitive sorting that developed in the US in the second part of the 20th century, as described in detail in Charles Murray’s Bell Curve, was totally unimaginable then. Furthermore, there might be a network effect from having a relatively dense concentration of literate people. I would imagine these two factors substantially or wholly cancel out the effect of diminishing returns to higher literacy in terms of human accomplishment. (If you have any ideas as to how this could be quantified, please feel free to mention it in the comments).
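On the quantification question, here is one deliberately crude sketch of the aggregate mindpower idea. Every number in it is an illustrative assumption (populations, literacy rates, the mean IQ of 90, the discovery threshold of 120), and it naively treats literacy as independent of IQ, which the caveats above suggest is wrong in offsetting directions:

```python
import math

def normal_sf(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * math.erfc(z / math.sqrt(2))

def aggregate_mindpower(population, literacy, mean_iq, threshold=120, sd=15):
    """Crude sketch: count people who are BOTH literate and above the
    discovery threshold, naively assuming literacy is independent of IQ.
    A positive literacy-IQ correlation would raise the figure; diminishing
    returns to literacy would lower it."""
    smart_fraction = normal_sf((threshold - mean_iq) / sd)
    return population * literacy * smart_fraction

# Illustrative guesses, not measurements: ~8M Greeks at "craftsman
# literacy" (10%) vs. a 5M rival at "priestly literacy" (2%)
greece = aggregate_mindpower(8_000_000, 0.10, 90)
egypt = aggregate_mindpower(5_000_000, 0.02, 90)
print(f"Greece: {greece:,.0f}  Egypt: {egypt:,.0f}  ratio: {greece / egypt:.0f}x")
```

With equal mean IQs, the ratio reduces to (population × literacy), so the literacy gap alone gives the Greek world several times the effective pool of potential discoverers of a demographically comparable rival.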

Ancient Greek IQ

As I wrote in Introduction to Apollo’s Ascent, there are a number of factors which have been shown to strongly influence IQ, making it just about feasible to guesstimate them historically.

Some of the most important ones as they pertain to Ancient Greece vs. everyone else are:

  • Nutrition
  • Inbreeding/consanguineous marriage
  • Parasitic Load

It just so happens that so far as all of these are concerned the Greeks hit the jackpot.

Nutrition: The Ancient Greeks were remarkably effective at escaping the Malthusian trap for a preindustrial society. (I am not sure why that was the case. Slavery? Feel free to leave suggestions in the comments).

According to a 2005 paper by Geoffrey Kron, citing Lawrence Angel, the average height for Classical Greek males was 170.5cm, rising to 171.5cm for Hellenistic Greek males, which is similar to the levels attained by Britain and Germany in the early 20th century, and furthermore, compares very well with the average heights of Greek conscripts in the mid-20th century. The n=927 Roman average from 500BC to 500AD was 168.3cm, and the figures for the Byzantine Empire (at least in Crete) appear to have been similar. Here are some figures for other times and places for comparison from Gregory Clark's A Farewell to Alms:


In other words, the Ancient Greeks were about as tall as the Georgian British, some of the tallest Europeans at that time, who were on the cusp of permanently escaping the Malthusian trap and were likewise undergoing a remarkable cultural and scientific explosion.

This must have been enabled by a remarkable level of personal prosperity, as expressed in how much grain the average laborer could buy with a day’s wage. Again via Gregory Clark:


The Odyssey is full of people sacrificing ridiculous numbers of bulls. While presumably not to be taken literally, it does probably illustrate that there were no major shortages of animal proteins. (The same certainly could not be said for China, India, or Japan, where diets have always been almost fully dominated by carbohydrates). To be sure the Odyssey takes place in the 8th century BC, but cattle shares in the Mediterranean remained high through the period of Classical Greece and only plunged as Greece transitioned into the Hellenistic period, according to an exhaustive paper by Nikola Koepke:


Additionally, as a seafaring culture, fish and sea products must have played a substantial part in the Greek diet. This would have helped them avoid the iodine deficiency that tends to depress IQ and lead to cretinism in more inland and mountainous areas. Even the very poor who could not afford fish would have used garum, the fish sauce popularized by the Romans but invented by Greeks, to flavor their staples.

Inbreeding: Inbreeding/cousin marriage, especially of the FBD type, directly lowers IQ, and to a very large extent. But as prominent blogger hbdchick noticed, the Greeks had begun to outbreed extensively in the Archaic Age:

well, from mitterauer again we have [pg. 69]:

“Greek was the first European language to eliminate the terminological distinction between the father’s and mother’s side, a transition that began as early as between the fifth and third century BC.”

so that’s just at the transition point between archaic greece and classical greece. but starting at least in the early part of the archaic period and lasting throughout to the classical period the archaic greeks were outbreeding! at least the upper class ones were — difficult/impossible to know about the lower classes. from Women in Ancient Greece [pg. 67]:

“Marriages were arranged by the prospective groom and the prospective bride’s guardian, and the wife usually (although not always) went to live with her husband’s family. In the early Archaic Age [800 BC – 480 BC], to judge from the evidence of Homer’s poems (e.g. ‘Odyssey’ 4.5), male members of the upper classes generally married women who were not related to them, and who came from different areas. This upper-class habit of exogamy — marrying outside the community — was related to the political importance which marriage possessed in these circles. Marriage exchanges were one of the means by which noble families created political alliances with groups living in other areas, and in this way they made a considerable contribution to the aristocracy’s stranglehold on power. This practice survived to the end of the Archaic Age. However, with the emergence of the *polis*, exogamy began to give way in some places to endogamy — to marriage within the community. For the upper classes, this meant marriage within a tight circle of aristocratic families living in the same *polis*.”

so there was outbreeding in archaic greece for a few hundred years (at least amongst the upper classes), and, then, eventually — after about 400 years or so — there was a linguistic shift to more general kinship terms which reflected that outbreeding.

Moreover, of Emmanuel Todd’s four main European family systems – nuclear, egalitarian, authoritarian/stem, and communitarian (see Craig Willy’s post for a detailed explanation) – the Ancient Greeks practiced the authoritarian type, in which the eldest son stays with the parents and inherits most or all of the family’s property, while his siblings leave.

The authoritarian family system, also seen in Germany, Sweden, Scotland, Korea, and Japan (after ~1500), among the Jews, and substantially in 18th century Britain, seems to be highly eugenic in terms of selection for IQ and longterm planning. This stands to reason. Families with a lot of land/property can breed a lot of children and disperse them into the general population, and when they die, the eldest son who inherits everything can himself repeat the process. Those families who mismanage their affairs and lose land no longer have the resources to produce so many children (surviving ones, at any rate) and thus their contribution to the overall genepool peters out.

This is the opposite of the dynamics involved in communitarian family systems, in which property is divided equally amongst the sons. But all of the major Middle Eastern civilizations, as well as the Etruscan and Roman heartlands, were characterized by communitarian family systems (albeit with varying rates of cousin marriage: low in the Roman world, much higher in the Middle East and especially Egypt, where even brother/sister marriages appear to have been quite widespread under both the Pharaohs and the Greco-Romans).

In communitarian family systems the eugenic factor is much weaker. Family ties play a big role, with associated nepotism and (especially in the most endogamous societies) clannishness. Reproductive success depends not so much on one’s own capability to use intelligence and planning to create surpluses as on support from the extended family and clan. hbdchick calls this “clannish dysgenics,” though considering that communitarian family systems are the “default” for most of human history, I would argue it might be more apt to talk of “nuclear/stem family eugenics.” Be that as it may, aggregate selection for increased IQ is much weaker.

The ancient Greeks also practiced direct eugenics, exposing physically deformed babies. The Spartans in particular are (in)famous for it. However, this seems to have been more or less universally prevalent in preindustrial history, so I doubt this could have been much of a factor.

Parasitic Load: The Mediterranean climatic and agricultural system made for a (relatively) very salubrious environment, in stark contrast to subtropical environments with their humidity and endemic diseases (e.g. India, South China) and to inland agricultural systems heavily dependent on irrigation, in which large bodies of still water are breeding grounds for all sorts of nasty parasites (most major civilizations outside Europe).

In particular, as noted in Mark Elvin’s The Pattern of the Chinese Past, aggregate parasitic load steadily INCREASED in China over the past two millennia, as its demographic center of gravity shifted inexorably south, into regions characterized by irrigated rice growing and high humidity.

As if that wasn’t enough, the Ancient Greeks and other Mediterraneans also had one of the most potent counters to parasitic load in the form of their advanced viniculture. Due to their relative wealth (see above), they could afford a lot of wine, and back then it was usually stronger too.

Aggregate Mindpower in Ancient Greece

And now we can put together the final tally for Ancient Greece:

  • Could draw on a population of ~10 million Greeks (Romans: 50 million; Han Chinese: 60 million; Renaissance Italy: 10 million)
  • Had a literacy rate of 10%. Romans – Also 10%; Chinese – ~2%; Renaissance Italy – about 20% (see Van Zanden et al., 2009).

Some back of the envelope calculations for IQ:

  • Greeks are Caucasoids so let’s take the modern Greenwich mean of 100 as a first default approximation, and slightly higher for Mongoloids (Romans: 100; Chinese: 105; Italy: 100)
  • Nutrition (subtract from optimal): Greeks – minus 5; Romans – minus 8; Chinese – minus 10 (would increase later); Italy – minus 5 (was very well fed in the depopulated years after the Black Death).
  • Inbreeding/Family Systems: Greeks – minus 0 (positive advantage of stem family type cancels out relatively modest incidence of cousin marriage); Romans – minus 2 (exogamous communitarian); Chinese – minus 5 (exogamous communitarian but more cousin marriage than amongst Romans); Italy – minus 0 (egalitarian family system with little cousin marriage thanks to Catholic Church regulations)
  • Parasitic Load: Greeks – minus 5 (let’s say that’s best possible in preindustrial age); Romans – minus 7 (did have more irrigation); Chinese – minus 10; Italy – minus 7
  • Guesstimated IQ: Greeks – 90; Romans – 83; Han Chinese – 80; Renaissance Italy – 88. Incidentally, this would give the Greeks enough of an edge to give substance to ancient stereotypes about their intelligence and craftiness but without having to evoke superhuman IQ levels.
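The tally above is simple enough to set down as a checkable calculation (the deductions are exactly those listed in the bullets; no new data is introduced):

```python
# Back-of-the-envelope IQ tally from the bullets above.
baseline   = {"Greeks": 100, "Romans": 100, "Chinese": 105, "Italy": 100}
nutrition  = {"Greeks": 5, "Romans": 8, "Chinese": 10, "Italy": 5}
inbreeding = {"Greeks": 0, "Romans": 2, "Chinese": 5, "Italy": 0}
parasites  = {"Greeks": 5, "Romans": 7, "Chinese": 10, "Italy": 7}

guesstimated_iq = {
    k: baseline[k] - nutrition[k] - inbreeding[k] - parasites[k]
    for k in baseline
}
print(guesstimated_iq)
# {'Greeks': 90, 'Romans': 83, 'Chinese': 80, 'Italy': 88}
```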

Let us recall some definitions:

Assume that the intellectual output of an average IQ (=100, S.D.=15) young adult Briton in the year 2000 – as good an encapsulation of the “Greenwich mean” of intelligence as any – is equivalent to one nous (1 ν).

This can be used to calculate the aggregate mindpower (M) in a country.


Technological growth ≈ c * M(>threshold IQ for new discovery) * literacy rate

Here are some rough calculations:



  • c is an information tech multiplier, i.e. things that make scientific/cultural progress easier. A modern example would be the Internet. I gave Renaissance Italy a bonus because of its invention of eyeglasses, which essentially doubled the creative lifespans of skilled artisans (and at the peak of their powers), and the spread of the printing press from the mid-15th century.
  • M is total aggregate mindpower. It does not have much meaning for Malthusian societies, but in the modern world it would generally correlate with total GDP.
  • The other Ms refer to the aggregate mindpower that is above the Greenwich mean to one, two, and three standard deviations respectively. Recall that not even a trillion homo erectus will come up with the calculus; you need to be above a certain threshold to make any progress. Recall also that the discovery threshold is generally 2 S.D. above the mastery threshold.
  • Recall also the assumption that (beyond the threshold) more intelligent people are exponentially more effective at solving problems than duller people; but of course the absolute numbers of those highly intelligent people taper off dramatically due to bell curve dynamics.

To understand the Pythagoras Theorem you need an IQ of around 100, implying that to discover it, the threshold is around 130. The Odyssey might be a great classic, but it has a simple, linear storyline with no particularly deep moral themes or conundrums (reminder: The putative heroes end up hanging all the female household servants who had allegedly slept with the suitors and no time is lost on further introspection). I suspect the threshold for writing it is also around 130.



This implies that around that period – the 8th-6th centuries BC in the Mediterranean – you needed a 130 IQ to move the intellectual boundaries outwards. As we can see, Ancient Greece was overshadowed by both the Roman Empire and Renaissance Italy at ΔT(+2.0), except that… conveniently, neither of the latter two existed. Its competitors at the time, civilizations like the Assyrians, Babylonians, and Egyptians, lagged substantially in IQ and literacy, and did not compensate demographically; Phoenicia might have matched Greek literacy, but was probably behind in IQ, and had far fewer people. Remarkably, it was vastly ahead of China even 500 years later.

Literacy increased during this period, and the population rose steadily to its plateau of ~10 million as Greeks colonized the Mediterranean rim, and so during this time, intellectually they were the only game in town.

During the two centuries of Classical Greece’s flowering from the 5th-4th centuries BC, the Ancient Greeks almost singlehandedly pushed the discovery threshold up by almost a standard deviation. In the process, tons of discoveries and advancements were made. To really appreciate Euclid, you probably need an IQ closer to 115. Archimedes was perhaps the most quantitatively brilliant Greek of them all, coming tantalizingly close to uncovering the calculus. Understanding classical Greek philosophy (and for that matter, the later works of the Neoplatonists and Gnostics) likewise becomes far more demanding but is not beyond the capabilities of a committed 110 or 115 IQ person. Even so, they have nothing on the likes of 20th century philosophers like Ludwig Wittgenstein or Martin Heidegger. Even very intelligent people have to commit years of dedicated effort in order to master their ideas. The complexity of the Antikythera mechanism (Hellenistic times) has been compared to late medieval European mechanical clocks. To really master them, I suspect the minimal IQ is likewise around 110-115, hence innovating it might require a threshold IQ of around 140-145.

By Hellenistic times, progress became much harder, not because Greeks had become (much) dumber or had become culturally Orientalized, but because the low hanging fruit had already been picked. Naturally, the same went for the Romans.

ΔT(+2.0) i.e. at the 130 discovery threshold for Ancient Greece as of ~500 BC was 43,000 (plus/minus a very large percentage error). ΔT(+3.0) i.e. at the 145 discovery threshold for the Romans as of ~0AD was 2,500 – and there were far more discoveries to be made. Naturally, progress slowed down drastically.

ΔT(+3.0) i.e. at the 145 discovery threshold of Renaissance Italy was, just by itself, more than twice that of the entire Roman Empire. And the figures for Europe as a whole would have been vastly bigger still. Hence the (real) perception that by the Renaissance, the boundaries were once again being pushed outwards at a fast rate, which would become a positive explosion from the 17th century on, when the first incipient mass literacy programs were launched and demographic mass also started soaring.

• Category: History, Science • Tags: Ancient Near East, Apollo's Ascent, BigPost 


RATING: 8/10. (Please note my ratings system is harsh and virtually no films get a 10).

In 2011, American sci-fi giant Neal Stephenson bewailed the pessimism prevalent in the genre and called for writers to start thinking more positively about the possibilities of technology in order to inspire new generations to “get big stuff done.”

Of course, he himself hardly set a great example in the next four years with his latest tome.

But The Martian most definitely did. In this hard sci-fi scenario, an astronaut stranded on Mars has to figure out how to survive until a rescue mission can be organized. To do this, he has to, in his own words, “science the shit” out of the scarce oxygen and food resources at his disposal, while a NASA that is much better funded than in real life has to solve its own set of problems, which at first glance appear intractable.

Making a compelling story out of one solitary man’s struggle to survive is not an enviable task, but the creators pull it off with ample wit and verve. The protagonist Mark Watney is constantly cracking Nerd Lite jokes with himself and mission control in his struggle with the remorseless but indifferent main villain, the Red Planet itself.

Scientific and technical problems are explained in a way that is neither patronizing nor unintelligible to the average viewer. These problems, though varied, all tend to be in the general spirit of the classic “Survival on the Moon” exercise compiled by NASA, in which different options have to be weighed against each other in a way that could tip the otherwise dismal odds of survival in your favor.

There are frequent references and homages to NASA themes. The “Rich Purnell manoeuvre” that ultimately enabled Watney’s survival is a direct nod to NASA mathematician Michael Minovitch’s idea of a gravity assist to propel Voyager past all four of the gas giants and into deep space (though the theoretical basis for it had been worked out as early as the 1930s in the Soviet Union).

The film appears to be faithful to NASA culture, down to the contrast between the formal and besuited setting of NASA HQ and the more casual setting of its Jet Propulsion Laboratory. As in real world space exploration, duct tape is the solution to a lot of problems. The “no duct tape on Mars” trope is most decidedly averted.

Most of the challenges faced appear to be technically accurate. This is not surprising, since the book by Andy Weir that the film is based on was rigorously researched and initially published chapter by chapter on his website, where space nerds with encyclopedic knowledge on everything space related continuously corrected him.

There are certainly errors now and then. (I have not read the book and probably will not anytime soon, so these apply exclusively to the film). Gravity on Mars appears a bit too Earth-like, with astronauts having to really physically apply themselves to scramble up ladders. Although Mars has the occasional storm, the much thinner atmosphere means that even the most furious tempests would be perceived as a light breeze; certainly nowhere near strong enough to uproot a pole and spear it into Watney. For a story ostensibly set in 2035, comms systems act as if they are half a century out of date, just to serve a couple of plot points (if otherwise very elegant and clever ones). An astronaut propels himself around the outside of a spacecraft without a tether, while a tether does make an appearance in the one case in which it would actually have been redundant.

Another criticism of the film is that the astronauts should all be dying of cancer by the end because of the cosmic radiation (there are no obvious attempts to shield them from it). I am rather skeptical of this. The radiation dose Mars explorers receive will only be 3x as great as that received by astronauts who spend half a year on the International Space Station. But those guys aren’t keeling over dead. Theoretical research suggests that the lifetime risk of cancer will only increase by three percentage points over baseline for astronauts who go to Mars, and in real life outcomes may be even less dire because of the hormetic effects of radiation exposure.

Has anyone actually performed any concrete demographic studies of the death rate from cancer for astronauts (as opposed to theoretical projections)? Let me know in the comments.

But all these are ultimately minor trifles. At its root, it is a highly optimistic, positive, and inspirational story about the victory of technology and human ingenuity over the challenges posed by the last frontier. There should be more of these kinds of cultural products for civilization to continue to flourish.

The Martian is an excellent film, by far the best sci-fi flick this year along with Ex Machina, and incomparably better than the banal Hollywood fare that was Jurassic World, Mad Max: Fury Road, and by all indications, the final Hunger Games movie.

• Category: Science • Tags: Film, Review, Sci-Fi, Space Exploration 
HBD, Hive Minds, and H+

Today is the publication date of Hive Mind, a book by economist Garett Jones on the intimate relationship between average national IQs and national success, first and foremost in the field of economics.

I do intend to read and review it ASAP, but first some preliminary comments.

This is a topic I have been writing about since I started blogging in 2008 (and indeed well before I came across Steve Sailer or even HBD) and as it so happens, I have long been intending to write a similar sort of book myself – tentatively titled Apollo’s Ascent – but one that focuses more on the historical aspect of the relationship between psychometrics and development:

My basic thesis is that the rate of technological progress, as well as its geographical pattern, is highly dependent on the absolute numbers of literate high IQ people.

To make use of the intense interest that will inevitably flare up around these topics in the next few days – not to mention that rather more self-interested reason of confirming originality on the off chance that any of Garett Jones’ ideas happen to substantively overlap with mine – I have decided to informally lay out the theoretical basis for Apollo’s Ascent right now.

1. Nous

Assume that the intellectual output of an average IQ (=100, S.D.=15) young adult Briton in the year 2000 – as good an encapsulation of the “Greenwich mean” of intelligence as any – is equivalent to one nous (1 ν).

This can be used to calculate the aggregate mindpower (M) in a country.

Since sufficiently differing degrees of intelligence can translate into qualitative differences – for instance, no amount of 55 IQ people will be able to solve a calculus problem – we also need to be able to denote mindpower that is above some threshold of intelligence. So in this post, the aggregate mindpower of a country that is above 130 will be written as M(+2.0), i.e. that aggregate mindpower that is two standard deviations above the Greenwich mean.
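A minimal sketch of how M(>threshold) might be computed, assuming a Gaussian IQ distribution and, for simplicity, counting each qualifying person as one nous (later sections weight brighter people exponentially more, so treat this as a conservative headcount proxy rather than the full measure):

```python
from math import erfc, sqrt

def m_above(population, mean_iq, threshold, sd=15):
    # Fraction of a normal(mean_iq, sd) distribution above the threshold,
    # times headcount; each qualifying person counted as 1 nous.
    share = 0.5 * erfc((threshold - mean_iq) / (sd * sqrt(2)))
    return population * share

# M(+2.0) for a Britain-sized population at the Greenwich mean:
# ~2.3% of 60 million clear IQ 130.
print(f"{m_above(60_000_000, 100, 130):,.0f}")
```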

2. Intelligence and Industrial Economies

There is a wealth of evidence implying an exponential relationship between average IQ and income and wealth in the United States.



There is likewise a wealth of evidence – from Lynn, Rindermann, La Griffe du Lion, your humble servant, etc. – that shows an exponential relationship between levels of average national IQ and GDP per capita (PPP adjusted). When you throw out countries with a legacy of Communism and the ruinous central planning they practiced (China, the Ex-USSR and Eastern Europe, etc), and countries benefitting disproportionately from a resource windfall (Saudi Arabia, the UAE, etc), there is an amazing R2=0.84 correlation between performance in the PISA international standardized student tests and GDP (PPP) per capita. (In sociology, anything above R2=0.3 is a good result).

The reasons why this might be the case are quite intuitive. At the most basic level, intelligent people can get things done better and more quickly. In sufficiently dull societies, certain things can’t get done at all. To loosely borrow an example from Gregory Clark’s A Farewell to Alms, assume a relatively simple widget that requires ten manufacturing steps, each of which has to be done just right for the widget to be commercially viable. Say an 85 IQ laborer has a failure rate of 5% at any one step, while a 100 IQ laborer has a failure rate of 1%. That does not sound like a cardinal difference. But repeated over ten steps, some 40% of the duller worker’s production ends up being duds, compared to only 10% of the brighter worker’s. Consequently, at equal labor costs, one is competitive on the global markets whereas the other is not (which is, of course, why labor costs do not stay equal).
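The ten-step widget arithmetic checks out; the 5% and 1% per-step failure rates below are the ones assumed in the example:

```python
def dud_rate(per_step_failure, steps=10):
    # Probability that at least one of `steps` independent steps fails.
    return 1 - (1 - per_step_failure) ** steps

print(f"85 IQ laborer:  {dud_rate(0.05):.0%} duds")   # ~40%
print(f"100 IQ laborer: {dud_rate(0.01):.0%} duds")   # ~10%
```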

Now imagine said widget is an automobile, with hundreds of thousands of components. Or an aircraft carrier, or a spaceship. Or a complex surgery operation.

A more technical way of looking at this: Consider the GDP equation, Y = A * K^α * L^(1-α), in which K is capital, L is labour, α is a constant that usually equals about 0.3, and A is total factor productivity. It follows that the only way to grow per capita output in the longterm is to raise productivity. Productivity in turn is a function of technology and how effectively it is utilized, and that in turn depends critically on things like human capital. Without an adequate IQ base, you cannot accumulate much in the way of human capital.
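In per-worker terms the Cobb-Douglas equation above makes the point directly: capital deepening runs into diminishing returns, while total factor productivity scales output one-for-one. A quick check with α = 0.3:

```python
def output_per_worker(A, k, alpha=0.3):
    # Per-worker form of Y = A * K^alpha * L^(1-alpha): y = A * k^alpha,
    # where k = K/L is capital per worker.
    return A * k ** alpha

base = output_per_worker(1.0, 1.0)
print(output_per_worker(1.0, 2.0) / base)  # doubling capital per worker: ~1.23x
print(output_per_worker(2.0, 1.0) / base)  # doubling TFP: exactly 2x
```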

There are at least two further ways in which brighter societies improve their relative fortunes over and above what might merely be implied by their mere productivity advantage at any technological level.


Source: Swiss Miss.

First, capital gets drawn to more productive countries, until the point at which its marginal productivity equalizes with that of less productive countries, with their MUCH LOWER levels of capital intensity. First World economies like Germany, Japan, and the US are extremely capital intensive. It is probably not an accident that Japan, Korea, and Taiwan – some of the very brightest countries on international IQ comparisons – also have by far the world’s highest concentrations of industrial robots per worker (and China is fast catching up). Since economic output is a function not only of pure productivity but also of capital (though subject to diminishing returns) this provides a big further boost to rich countries above the levels implied by their raw productivity. And as the age of automation approaches, these trends will only intensify.

Second, countries with higher IQs also tend to be better governed, and to effectively provide social amenities such as adequate nutrition and education to their populations. Not only does it further raise their national IQs, but it also means that it is easier to make longterm investments there and to use their existing human capital to its full potential.

All this implies that different levels of intelligence have varying economic values on the global market. At this stage I am not so much interested in establishing it with exactitude as illustrating the general pattern, which goes something like this:

  • Average IQ = 70 – Per capita GDP of ~$4,000 in the more optimally governed countries of this class, such as Ghana (note however that many countries in this class are not yet fully done with their Malthusian transitions, which will depress their per capita output somewhat – see below).
  • Average IQ = 85 – Per capita GDP of ~$16,000 in the more optimally governed countries of this class, such as Brazil.
  • Average IQ = 100 – Per capita GDP of ~$45,000 in the more optimally governed countries of this class, or approximately the level of core EU/US/Japan.
  • Average IQ = 107 – Per capita GDP of potentially $80,000, as in Singapore (and it doesn’t seem to have even finished growing rapidly yet). Similar figures for elite/financial EU cities (e.g. Frankfurt, Milan) and US cities (e.g. San Francisco, Seattle, Boston).
  • Average IQ = 115 – Largely a theoretical construct, but that might be the sort of average IQ you’d get in, say, Inner London – the center of the global investment banking industry. The GDP per capita there is a cool $152,000.

Countries with bigger than normal “smart fractions” (the US, India, Israel) tend to have a bigger GDP per capita than what could be assumed just from their average national IQ. This stands to reason because a group of people equally split between 85 IQers and 115 IQers will have higher cognitive potential than a room composed of an equivalent number of 100 IQers. Countries with high average IQs but smaller than normal S.D.’s, such as Finland, have a slightly smaller GDP per capita than what you might expect just from average national IQs.
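The smart-fraction point is essentially Jensen's inequality: if the economic value of an IQ point compounds exponentially (the rate k below is a made-up figure, purely for illustration), a population split evenly between 85 and 115 out-produces one uniformly at 100, despite the identical mean:

```python
from math import exp

def value(iq, k=0.05):
    # Assumed exponential value of intelligence; k is illustrative only.
    return exp(k * (iq - 100))

uniform = value(100)                          # everyone at IQ 100
split = 0.5 * value(85) + 0.5 * value(115)    # half at 85, half at 115
print(split / uniform)  # ~1.29: same mean IQ, ~29% more aggregate value
```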

These numbers add up, so a reasonable relationship between equilibrium GDP (assuming no big shocks, good policies, etc) and the structure and size of national IQ would be:

Equilibrium GDP of a country ≈ exponent(IQ) * the IQ distribution (usually a bell curve shaped Gaussian) * population size * the technological level

Which can be simplified to:

Y ≈ c*M*T

… where M is aggregate mindpower (see above), T is the technology level, and c is a constant denoting the general regulatory/business climate (close to 1 in many well run capitalist states, <0.5 under central planning, etc).

To what extent if any would this model apply to pre-industrial economies?

3. Intelligence and Malthusian Economies


Source: A Farewell to Alms

Very little. The problem with Malthusian economies is that, as per the old man himself, population increases geometrically while crop yields increase linearly; before long, the increasing population eats up all the surpluses and reaches a sordid equilibrium in which births equal deaths (since there were a lot of births, that means a lot of deaths).

Under such conditions, even though technology might grow slowly from century to century, it is generally expressed not in increasing per capita consumption, but in rising population densities. And over centennial timescales, the effects of this (meager) technological growth can be easily swamped by changes in social structure, biome productivity, and climatic fluctuations (e.g. 17th C France = pre Black Death France in terms of population, because it was Little Ice Age vs. Medieval Warm Period), or by unexpected improvements in agricultural productivity from the importation of new crops, such as the coming of sweet potatoes to China, which enabled it to double its population over the previous record even though it was in outright social regress for a substantial fraction of this time.

All this makes tallying the rate of technological advance based on population density highly problematic. Therefore it has to be measured primarily in terms of eminent figures, inventions, and great works.


Distribution of significant figures across time and place. Source: Human Accomplishment.

The social scientist Charles Murray in Human Accomplishment has suggested a plausible and objective way of doing it, based on tallying the eminence of historical figures in culture and the sciences as measured by their prevalence in big reference works. Societies that are at any one time intensively pushing the technological frontiers outwards are likely to be generating plenty of “Great People,” to borrow a term from the Civilization strategy games.

To what extent does the model used for economic success apply to technology?

4. Intelligence and Technology Before 1800

A narrow intellectual elite is responsible for 99%+ of new scientific discoveries. This implies that unlike the case with an economy at large, where peasants and truck drivers make real contributions, you need to have a certain (high) threshold level of IQ to materially contribute to technological and scientific progress today.

The Anne Roe study of very eminent scientists in 1952 – almost Nobel worthy, but not quite – found that they averaged a verbal IQ of 166, a spatial IQ of 137, and a math IQ of 154. Adjust modestly down – because the Flynn Effect has only had a very modest impact on non-rule dependent domains like verbal IQ – and you get an average verbal IQ of maybe 160 (in Greenwich terms). These were the sorts of elite people pushing progress in science 50 years ago.

To really understand 1950s era math and physics, I guesstimate that you would need an IQ of ~130+, i.e. your typical STEM grad student or Ivy League undergrad. This suggests that there is a 2 S.D. difference between the typical intellectual level needed to master something as opposed to making fundamental new discoveries in it.

Moreover, progress becomes steadily harder over time; disciplines splinter (see the disappearance of polymath “Renaissance men”), and eventually, discoveries become increasingly unattainable to sole individuals (see the steady growth in numbers of paper coauthors and shared Nobel Prizes in the 20th century). In other words, these IQ discovery thresholds are themselves a function of the technological level. To make progress up the tech tree, you need to first climb up there.

An extreme example today would be the work of the Japanese mathematician Shinichi Mochizuki. At least Grigory Perelman's proof of the Poincaré Conjecture was eventually confirmed by other mathematicians, after a lag of several years. But Mochizuki is so far ahead of everyone else in his particular field of Inter-universal Teichmüller theory that nobody quite knows any longer whether he is a universal genius or a lunatic.

In math, I would guesstimate roughly the following set of thresholds:

                                                   Mastery   Discovery
Intuit Pythagoras Theorem (Ancient Egypt)             90        120
Prove Pythagoras Theorem (Early Ancient Greece)      100        130
Renaissance Math (~1550)                             110        140
Differential Calculus (~1650+)                       120        150
Mid-20th Century Math (1950s)                        130        160
Prove Poincaré Conjecture (2003)                     140        170
Inter-universal Teichmüller Theory (?)               150        180

This all suggests that countries which attain new records in aggregate elite mindpower relative to their predecessors can very quickly generate vast reams of new scientific discoveries and technological achievements.

Moreover, this elite mindpower has to be literate. Because a human brain can only store so much information, societies without literacy are unable to move much beyond Neolithic levels, regardless of their IQ.

As such, a tentative equation for estimating a historical society’s capacity to generate scientific and technological growth would look something like this:

ΔT ≈ c * M(>discovery-threshold IQ) * l

Here, only the part of aggregate mindpower above the discovery threshold is counted. The constant c reflects a society's propensity for generating technological growth in the first place; it encompasses social and cultural factors (no big wars, no totalitarian regimes, creativity, etc.) as well as technologies that have a (generally marginal) effect on scientific productivity, like reading glasses in Renaissance Italy (well covered by David Landes) or the Internet in recent decades. The literacy rate l is an estimate of the percentage of the cognitive elites that are literate; it can be expected to be a function of the overall literacy rate, and to always be much higher.
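The growth equation above can be sketched numerically. A minimal Python illustration, assuming IQ is normally distributed with an S.D. of 15, and using entirely invented inputs (a society of 10 million people, mean IQ 95, discovery threshold 130, 10% elite literacy):

```python
from math import erfc, sqrt

def smart_fraction(mean_iq, threshold, sd=15.0):
    """Fraction of a normal IQ distribution lying above `threshold`."""
    return 0.5 * erfc((threshold - mean_iq) / (sd * sqrt(2)))

def tech_growth(pop, mean_iq, threshold, literacy, c=1.0, sd=15.0):
    """Delta-T ~ c * M(>threshold) * l, approximating aggregate
    above-threshold mindpower M as a simple head-count."""
    m_above = pop * smart_fraction(mean_iq, threshold, sd)
    return c * m_above * literacy

# Illustrative, invented inputs - not estimates from the text.
print(tech_growth(10_000_000, 95, 130, 0.10))
```

With these numbers, only about 1% of the population clears the threshold, and only a tenth of those are literate, so the effective discovery-generating pool is a few thousand people.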

Is it possible to estimate historical M and literacy with any degree of rigor?


Source: Gregory Clark.

I think so. In regards to literacy, this is an extensive area of research, with some good estimates for Ancient Greece and the Roman Empire (see Ancient Literacy by William Harris) and much better estimates for Europe after 1500 based on techniques like age heaping and book production records.

One critical consideration is that not all writing systems are equally suited for the spread of functional literacy. For instance, China was historically one of the most schooled societies, but its literacy tended to be domain specific, the classic example being “fish literacy” – a fishmonger’s son who knew the characters for different fish, but had no hope of adeptly employing his very limited literacy for making scientific advances, or even reading “self-help” pamphlets on how to be more effective in his profession (such as were becoming prevalent in England as early as the 17th century). The Chinese writing system, whether it arose from QWERTY reasons or even genetic reasons – and which became prevalent throughout East Asia – surely hampered the creative potential of East Asians.

Estimating average national IQs historically – from which M can be derived in conjunction with historical population sizes, about which we now generally have fairly good ideas – is far more tricky and speculative, but not totally hopeless, because nowadays we know the main factors behind national differences in IQ.

Some of the most important ones include:

  • Cold Winters Theory – Northern peoples developed higher IQs (see Lynn, Rushton).
  • Agriculture – Societies that developed agriculture got a huge boost to their IQs (as well as higher S.D.s).
  • Inbreeding – Can be estimated from rates of consanguineous marriage, runs of homozygosity, and predominant family types (nuclear? communitarian?), which in turn can be established from cultural and literary evidence.
  • Eugenics – Operated in advanced agricultural societies, where social relations came to be dominated by markets. See Greg Clark on England, and Ron Unz on China.
  • Nutrition – Obviously plays a HUGE role in the Flynn Effect. Can be proxied by body measurements, and fortunately there is a whole field of study devoted to precisely this: Auxology. Burials, conscription records, etc. all provide a wealth of evidence.
  • Parasite Load – Most severe in low-lying, swampy areas like West Africa and the Ganges Delta.

This old comment of mine to a post by Sailer is a demonstration of the sort of reasoning I tend to employ in Apollo’s Ascent.

All this means that educated guesses at the historic IQs of various societies are now perfectly feasible, if subject to a high degree of uncertainty. In fact, I have already done many such estimates while planning out Apollo’s Ascent. I will not release these figures at this time because they are highly preliminary, and lacking space to further elucidate my methods, I do not want discussions in the comments to latch on to some one figure or another and make a big deal out of it. Let us save this for later.

But in broad terms – and very happily for my thesis – these relations DO tend to hold historically.

Classical Greece was almost certainly the first society to attain something resembling craftsman level literacy rates (~10%). Ancient Greeks were also unusually tall (indicating good nutrition, for a preindustrial society), lived in stem/authoritarian family systems, and actively bred out during their period of greatness. They produced the greatest scientific and cultural explosion up to that date anywhere in the world, but evidently didn’t have quite the demographic weight – there were no more than 10 million Greeks scattered across the Mediterranean at peak – to sustain it.

In 15th century Europe, literacy once again began soaring in Italy, to beyond Roman levels, and – surely helped by the good nutrition levels following the Black Death – ushered in the Renaissance. In the 17th century, the center of gravity shifted towards Anglo-Germanic Europe in the wake of the Reformation, with its obsession with literacy, and would stay there ever after.

As regards other civilizations…

The Islamic Golden Age was eventually cut short more by the increasing inbreeding than by the severe but ultimately temporary shock from the Mongol invasions. India was too depressed by the caste system and by parasitic load to ever be a first rate intellectual power, although the caste system also ensured a stream of occasional geniuses, especially in the more abstract areas like math and philosophy. China and Japan might have had an innate IQ advantage over Europeans – albeit one that was quite modest in the most critical area, verbal IQ – but they were too severely hampered by labour-heavy agricultural systems and a very ineffective writing system.

In contrast, the Europeans, fed on meat and mead, had some of the best nutrition and lowest parasitic load indicators of any advanced civilization, and even as rising population pressure began to impinge on those advantages in the 17th-18th centuries, they had already burst far ahead in literacy, and intellectual predominance was now theirs to lose.

5. Intelligence and Technology under Industrialism

After 1800, the world globalized intellectually. This was totally unprecedented. There had certainly been preludes to it, e.g. in the Jesuit missions to Qing China. But these were very much exceptional cases. Even in the 18th century, for instance, European and Japanese mathematicians worked on (and solved) many of the same problems independently.


Source: Human Accomplishment.

But in the following two centuries, this picture of independent intellectual traditions – shining most brightly in Europe by at least an order of magnitude, to be sure, but still diverse on the global level – was to be homogenized. European science became the only science that mattered, as laggard civilizations throughout the rest of the world were to soon discover to their sorrow in the form of percussion rifles and ironclad warships. And by “Europe,” that mostly meant the “Hajnal” core of the continent: France, Germany, the UK, Scandinavia, and Northern Italy.

And what had previously been but a big gap became a yawning chasm.

(1) In the 19th century, the populations of European countries grew, and the advanced ones attained universal literacy, or as near as made no difference. Aggregate mindpower (M) exploded, and kept well ahead of the advancing threshold IQ needed to make new discoveries.

(2) From 1890-1970, there was a second revolution, in nutrition and epidemiology – average heights increased by 10cm+, and the prevalence of debilitating infectious diseases was reduced to almost zero – that raised IQ by as much as a standard deviation across the industrialized world. The chasm widened further.

(3) During this period, the straggling civilizations – far from making any novel contributions of their own – devoted most of their meager intellectual resources to merely coming to grips with Western developments.

This was as true – and consequential – in culture and social sciences as it was in science and technology; the Russian philosopher Nikolay Trubetzkoy described this traumatic process very eloquently in The Struggle Between Europe and Mankind. What was true even for “semi-peripheral” Russia was doubly true for China.

In science and technology, once the rest of the world had come to terms with Western dominance and the new era of the nation-state, the focus was on catchup, not innovation. This is because for developing countries, it is much more useful in terms of marginal returns to invest their cognitive energies into copying, stealing, and/or adapting existing technology to catch up with the West than to develop unique technology of their own. Arguments about, say, China's supposed lack of ability to innovate are completely beside the point. At this stage of its development, even now, copying is much easier than creating!

This means that at this stage of global history, a country's contribution to technological growth isn't only a matter of the size of its smart fractions above the technological discovery IQ threshold. (This remains unchanged: E.g., note that a country like Germany remains MUCH more innovative per capita than, say, Greece, even though their average national IQs differ by a mere 5 points or so. Why? Because since we're looking only at the far right tails of the bell curve, even minor differences in averages translate to big differences in innovation-generating smart fractions).
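The far-right-tail point is easy to check. A sketch, assuming normally distributed IQ (S.D. 15), a discovery threshold of 160, and purely hypothetical national means of 100 and 95 (not the text's estimates for any particular country):

```python
from math import erfc, sqrt

def fraction_above(mean_iq, threshold, sd=15.0):
    # Right tail of a normal IQ distribution.
    return 0.5 * erfc((threshold - mean_iq) / (sd * sqrt(2)))

# A 5-point gap in means, measured at the IQ 160 tail.
ratio = fraction_above(100, 160) / fraction_above(95, 160)
print(ratio)  # ~4x more potential discoverers per capita
```

A mere third of a standard deviation in the mean thus multiplies the per capita pool of potential discoverers several times over, which is the mechanism behind the Germany vs. Greece comparison.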

A country's contribution also relates closely to its level of development. Countries that are far from the technological frontier today are better served by using their research dollars and cognitive elites to catch up, as opposed to inventing new stuff. This is confirmed by real life evidence: A very big percentage of world spending on fundamental research since WW2 has been carried out in the US. It was low in the USSR, and negligible in countries like Japan until recently. Or in China today.

Bearing this in mind, the technological growth equation today (and since 1800, more or less) – now due to its global character better described as innovation potential – would be better approximated by something like this:

I ≈ c * M(>discovery-threshold IQ) * l * (Y/Y[P])^x

in which the first three terms are as before (though literacy = 100% virtually everywhere now), and potential GDP is the GDP this country would obtain were its technological endowment to be increased to the maximum level possible as dictated by its cognitive profile. The “x” is a further constant that is bigger than 1 to reflect the idea that catchup only ceases to be the most useful strategy once a country has come very close to convergence or has completely converged.

Japan won a third of all its Nobel Prizes before 2000; another third in the 2000s; and the last third in the 2010s. Its scientific achievements, in other words, are finally beginning to catch up with its famously high IQ levels. Why did it take so long?

Somebody like JayMan would say it's because the Japanese are clannish or something like that. Other psychometricians, like Kenya Kura, would note that perhaps they are far less creative than Westerners (this, I think, has a measure of truth to it). But the main "purely IQ" reasons are pretty much good enough by themselves:

  • The Nobel Prize nowadays typically recognizes work with a ~25-30 year lag.
  • It is taking ever longer amounts of time to work up to a Nobel Prize because ever greater amounts of information and methods have to be mastered before original creative work can begin. (This is one consequence of the rising threshold discovery IQ frontier).
  • Critically, Japan in the 1950s was still something of a Third World country, with the attendant insults upon average IQ. It is entirely possible that elderly Japanese are duller than their American counterparts, and perhaps even many Europeans of that age, meaning smaller smart fractions in the Nobel Prize winning age groups.

Japan only became an unambiguously developed country in the 1970s.

And it just so happens that precisely 40 years later, it began to see a big and still accelerating increase in the number of Nobel Prizes accruing to it!

Extending this to South Korea and Taiwan, both of which lagged around 20 years behind Japan, we can only expect to see an explosion in Nobel Prizes for them from the 2020s, regardless of how wildly their teenagers currently top out the PISA rankings.

Extending this to China, which lags around 20 years behind South Korea, and we can expect to see it start gobbling up Nobel Prizes by 2040, or maybe 2050, considering the ongoing widening of the time gap between discovery and recognition. However, due to its massive population – ten times as large as Japan’s – once China does emerge as a major scientific leader, it will do so in a very big way that will rival or even displace the US from its current position of absolute primacy.
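The heuristic running through these three paragraphs is simple enough to tabulate. A sketch using the post's own rough dates (the "became developed" years and the ~40-year lag are the text's estimates, not established figures):

```python
# The post's heuristic: Nobel Prizes start accruing roughly 40 years
# after a country becomes unambiguously developed.
NOBEL_LAG = 40

became_developed = {
    "Japan": 1970,
    "South Korea": 1990,  # ~20 years behind Japan, per the text
    "Taiwan": 1990,
    "China": 2010,        # assumed: ~20 years behind South Korea
}

for country, year in became_developed.items():
    print(f"{country}: Nobel uptick expected from ~{year + NOBEL_LAG}")
```

For Japan this backcasts the observed 2010s uptick; for China it lands around mid-century, consistent with the "2040, or maybe 2050" estimate once the widening discovery-to-recognition gap is factored in.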

As of 2014, China already publishes almost as many scientific papers per year as does the US, and has an outright lead in major STEM fields such as Math, Physics, Chemistry, and Computer Science. (Though to be sure, their quality is much lower, and a significant fraction of them are outright "catching up" or "adaptation" style papers with no new findings).

If we assume that x=1, and that c is equal for both China and the US, then it implies that both countries currently have broadly equal innovation potential. But of course c is not quite equal between them – it is lower for China, because its system is obviously less conducive to scientific research than the American one – and x is higher than 1, so in practice China's innovation potential is still considerably lower than that of the US (maybe a quarter or a third). Nonetheless, as China continues to converge, c is going to trend towards the US level, and the GDP gap is going to narrow; plus it may also be able to eke out some further increases in its national average IQ from the current ~103 (as proxied by PISA in 2009) to South Korea's level of ~107 as it becomes a truly First World country.

And by mid-century it will likely translate into a strong challenge to American scientific preeminence.

6. Future Consequences

The entry of China onto the world intellectual stage (if the model above is more or less correct) will be portentous, but in its effects on aggregate mindpower it will ultimately be nowhere near the global magnitude of the expansion in the numbers of literate, mostly European high IQ people from 1450 to 1900, nor the vast rise in First World IQ levels from 1890-1970 due to the Flynn Effect.

Moreover, even this may be counteracted by the dysgenic effects already making themselves felt in the US and Western Europe due to Idiocracy-resembling breeding patterns and 80 IQ Third World immigration.

Radically raise IQ. And no need for pesky neural implants!

A lot of the techno-optimistic rhetoric you encounter around transhumanist circles is founded on the idea that observed exponential trends in technology – most concisely encapsulated by Moore’s Law – are somehow self-sustaining, though the precise reasons why never seem to be clearly explained. But non-IT technological growth peaked in the 1950s-70s, and has declined since; and as a matter of fact, Moore’s Law has also ground to a halt in the past 2 years. Will we be rescued by a new paradigm? Maybe. But new paradigms take mindpower to generate, and the rate of increase in global mindpower has almost certainly peaked. This is not a good omen.

Speaking of the technological singularity, it is entirely possible that the mindpower discovery threshold for constructing a superintelligence is in fact far higher than we currently have or are likely to ever have short of a global eugenics program (and so Nick Bostrom can sleep in peace).

On the other hand, there are two technologies that combined may decisively tip the balance: CRISPR-Cas9, and the discovery of the genes for general intelligence. Their maturation and potential mating may become feasible as early as 2025.

While there are very good reasons – e.g., on the basis of animal breeding experiments – for doubting Steve Hsu’s claims that genetically corrected designer babies will have IQs beyond that of any living human today, increases on the order of 4-5 S.D.’s are entirely possible. If even a small fraction of a major country like China adopts it – say, 10% of the population – then that will in two decades start to produce an explosion in aggregate global elite mindpower that will soon come to rival or even eclipse the Renaissance or the Enlightenment in the size and scope of their effects on the world.
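The "explosion in aggregate elite mindpower" can be made concrete with the far-tail arithmetic used earlier. A sketch, assuming normal IQ (S.D. 15), the post's figures (population ~1.4 billion, mean ~103, 10% adoption), and an assumed 4 S.D. (60-point) boost for the edited cohort:

```python
from math import erfc, sqrt

def count_above(pop, mean_iq, threshold, sd=15.0):
    """Head-count above an IQ threshold, assuming a normal distribution."""
    return pop * 0.5 * erfc((threshold - mean_iq) / (sd * sqrt(2)))

POP = 1_400_000_000           # China, roughly
BASELINE_MEAN = 103           # the post's PISA-proxied figure
BOOST = 60                    # assumed: 4 S.D. gain for the edited cohort
FRACTION = 0.10               # 10% adoption, per the text
THRESHOLD = 160               # discovery-level IQ

baseline_elite = count_above(POP, BASELINE_MEAN, THRESHOLD)
boosted_elite = (count_above(POP * (1 - FRACTION), BASELINE_MEAN, THRESHOLD)
                 + count_above(POP * FRACTION, BASELINE_MEAN + BOOST, THRESHOLD))
print(boosted_elite / baseline_elite)  # hundreds of times more people above IQ 160
```

Under these (loudly hypothetical) assumptions, the edited tenth of the population alone contains tens of millions of people above IQ 160, dwarfing the roughly hundred thousand in the unedited baseline – hence the Renaissance-or-greater scale of the claimed effect.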

The global balance of power would be shifted beyond recognition, and truly transformational – indeed, transhuman – possibilities will genuinely open up.

Anatoly Karlin
About Anatoly Karlin

I am a blogger, thinker, and businessman in the SF Bay Area. I’m originally from Russia, spent many years in Britain, and studied at U.C. Berkeley.

One of my tenets is that ideologies tend to suck. As such, I hesitate about attaching labels to myself. That said, if it’s really necessary, I suppose “liberal-conservative neoreactionary” would be close enough.

Though I consider myself part of the Orthodox Church, my philosophy and spiritual views are more influenced by digital physics, Gnosticism, and Russian cosmism than anything specifically Judeo-Christian.
