The Unz Review
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media

 Russian Reaction Blog
Age of Malthusian Industrialism

The population of the world’s major regions according to the UN’s World Population Prospects 2017 report.

World Population Prospects (2017) 2015 2050 2100
WORLD 7,383,008,820 9,771,822,753 11,184,367,721
Sub-Saharan Africa 969,234,251 2,167,651,879 4,001,755,801
East Asia 1,635,150,365 1,586,491,284 1,198,264,520
South Asia 1,823,308,471 2,381,796,561 2,230,668,781
South-East Asia 634,609,846 797,648,622 771,527,666
MENA & C. Asia 551,964,576 850,895,914 1,045,856,658
Europe 740,813,959 715,721,014 653,261,252
Latin America 632,380,831 779,841,201 712,012,636
North America 356,003,541 434,654,823 499,197,606
Oceania 39,542,980 57,121,455 71,822,801

Assume the usual S.D.=15, and that their average IQs as of 2017 are as follows: Sub-Saharan Africa 70, East Asia 100, South Asia 80, South-East Asia 85, MENA & C. Asia 85, Europe 100, Latin America 85, North America 100, Oceania 90.

This should look plausible to people who’ve looked at the data. East Asian (Japanese, Korean, Chinese) IQ tends to be higher than 100, usually around 103-105, but I am giving it as 100 because in practice, for unclear reasons, East Asian IQs also tend to be “worth” 5 points less than Euro-American ones so far as economic performance and human accomplishment go.

Anyhow, if we also assume that regional IQs will remain “fixed” for the rest of the century, then the world average IQ will drop from 87 today to 82 by 2100, primarily on account of the massive demographic expansion of Sub-Saharan Africa.
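This is just a population-weighted average; for readers who want to check it, here is a minimal Python sketch using the UN population table and the regional IQ assumptions above:

```python
# Population-weighted world mean IQ for 2015 and 2100, holding regional IQs
# fixed at their assumed 2017 values (UN WPP 2017 medium-variant populations).
POPS_2015 = [969_234_251, 1_635_150_365, 1_823_308_471, 634_609_846,
             551_964_576, 740_813_959, 632_380_831, 356_003_541, 39_542_980]
POPS_2100 = [4_001_755_801, 1_198_264_520, 2_230_668_781, 771_527_666,
             1_045_856_658, 653_261_252, 712_012_636, 499_197_606, 71_822_801]
IQS = [70, 100, 80, 85, 85, 100, 85, 100, 90]  # same region order as the table

def weighted_mean_iq(pops, iqs):
    return sum(p * q for p, q in zip(pops, iqs)) / sum(pops)

print(round(weighted_mean_iq(POPS_2015, IQS), 1))  # -> 87.4
print(round(weighted_mean_iq(POPS_2100, IQS), 1))  # -> 81.8
```

With regional IQs frozen, the world mean falls from about 87.4 to 81.8, matching the 87 → 82 figures in the text.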

However, fortunately, the number of people belonging to “smart fractions” – which I will define as people with an IQ above 160 (approximately the level you have to be at to be capable of contributing to elite scientific progress today) – will remain similar to today, though it will be negatively impacted by demographic decline in Europe and East Asia.

Smart Fractions (No Flynn) 2015 2050 2100
WORLD 87,196 87,580 75,397
Sub-Saharan Africa 1 2 4
East Asia 51,787 50,246 37,951
South Asia 88 115 108
South-East Asia 182 229 221
MENA & C. Asia 158 244 300
Europe 23,462 22,668 20,690
Latin America 181 224 204
North America 11,275 13,766 15,810
Oceania 61 87 110
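These counts are just normal tail areas multiplied by population. A minimal sketch for checking them, assuming IQ is normally distributed with the regional means above and S.D. = 15:

```python
# Expected number of people above IQ 160 per region in 2015, assuming
# IQ ~ Normal(regional mean, SD = 15). Figures are from the tables above.
from math import erfc, sqrt

REGIONS_2015 = {  # name: (population, assumed mean IQ)
    "Sub-Saharan Africa": (969_234_251, 70),
    "East Asia":          (1_635_150_365, 100),
    "South Asia":         (1_823_308_471, 80),
    "South-East Asia":    (634_609_846, 85),
    "MENA & C. Asia":     (551_964_576, 85),
    "Europe":             (740_813_959, 100),
    "Latin America":      (632_380_831, 85),
    "North America":      (356_003_541, 100),
    "Oceania":            (39_542_980, 90),
}

def smart_fraction_count(pop, mean_iq, threshold=160, sd=15):
    """pop * P(Z > (threshold - mean)/sd), i.e. the upper normal tail."""
    z = (threshold - mean_iq) / sd
    return pop * 0.5 * erfc(z / sqrt(2))

for name, (pop, iq) in REGIONS_2015.items():
    print(f"{name:20s} {smart_fraction_count(pop, iq):12,.0f}")
# East Asia comes out to ~51,787 and Europe to ~23,462, matching the table.
```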

But what happens when we adjust for the FLynn effect? In their 2016 survey of intelligence researchers, Heiner Rindermann and colleagues compiled the following expert assessments.


This leads to a massive increase in the number of smart fractions, almost entirely on account of East Asia.

In this scenario, China, as a now fully developed country, drives global scientific progress pretty much single-handedly, much as Europe did in the 19th century.

IQ Flynn (Rindermann) 2015 2100
WORLD 87,196 294,485
Sub-Saharan Africa 1 63
East Asia 51,787 245,857
South Asia 88 1,266
South-East Asia 182 1,181
MENA & C. Asia 158 1,155
Europe 23,462 27,364
Latin America 181 1,504
North America 11,275 15,810
Oceania 61 285

That said, I don’t think those FLynn projections are realistic, in part because East Asia is projected to gain IQ incredibly fast even though it is already a reasonably well-developed place.

China itself can still probably eke out 3-5 IQ points, but Chinese fertility has been dysgenic since the 1960s, so this won’t last. I suspect East Asia – which in demographic terms is pretty much just China – will remain at a consistent level, with FLynn and dysgenics canceling each other out over the course of the century.

What if we use the following estimates for IQ changes during the 21st century (broadly justified here):

  • +10: Sub-Saharan Africa, South Asia
  • +5: South-East Asia
  • 0: East Asia, MENA & Central Asia, Latin America
  • -5: Europe, North America

Resulting table of smart fractions in 2100:

IQ Flynn (AK) 2015 2100
WORLD 87,196 51,726
Sub-Saharan Africa 1 193
East Asia 51,787 37,951
South Asia 88 3,414
South-East Asia 182 1,181
MENA & C. Asia 158 300
Europe 23,462 4,797
Latin America 181 204
North America 11,275 3,666
Oceania 61 21

So what has basically happened is that smart fractions plummet in the high-IQ world due to a combination of demographic decline, dysgenic fertility, and low-IQ mass immigration.

Meanwhile, the number of smart-fraction people from the Global South will rise, thanks to some FLynn catch-up, but their absolute numbers will remain modest.

Overall, this is a pretty catastrophic outcome.

Not only do we see a halving of 160+ IQ smart fractions, but it is also very likely that the threshold for new scientific discoveries will have risen in the meantime, since problems tend to get harder, not easier, as you climb up the technological tree.

For instance, if by 2100 the new “discovery threshold” is at an IQ of 175, the people still capable of driving global science forwards might number in the mere hundreds, in a world of more than ten billion.
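To check that order of magnitude, the same tail arithmetic can be run on the UN 2100 populations with my adjusted regional IQs (I assume Oceania shares the -5 shift of Europe and North America, which is consistent with the table above):

```python
# Expected counts above IQ 160 and 175 in 2100, under the adjusted regional
# IQs (baseline 2017 values plus the changes listed above), SD = 15.
from math import erfc, sqrt

REGIONS_2100 = {  # name: (population, adjusted mean IQ)
    "Sub-Saharan Africa": (4_001_755_801, 80),
    "East Asia":          (1_198_264_520, 100),
    "South Asia":         (2_230_668_781, 90),
    "South-East Asia":    (771_527_666, 90),
    "MENA & C. Asia":     (1_045_856_658, 85),
    "Europe":             (653_261_252, 95),
    "Latin America":      (712_012_636, 85),
    "North America":      (499_197_606, 95),
    "Oceania":            (71_822_801, 85),   # assumed -5, like Europe/N. America
}

def count_above(pop, mean_iq, threshold, sd=15):
    z = (threshold - mean_iq) / sd
    return pop * 0.5 * erfc(z / sqrt(2))

for threshold in (160, 175):
    total = sum(count_above(p, iq, threshold) for p, iq in REGIONS_2100.values())
    print(f"IQ > {threshold}: ~{total:,.0f} people worldwide")
# Raising the threshold from 160 to 175 collapses the count from ~52,000
# to a few hundred.
```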

The likely end result of this would be an end to scientific progress, and eventually, the Age of Malthusian Industrialism once a technologically stagnant and progressively more fecund world bumps up against the limits of the industrial economy.



Kong, A., et al. (2016). Selection against variants in the genome associated with educational attainment.


This paper makes the case that there was a decline in the prevalence of genetic variants associated with propensity for education (POLY EDU) in Iceland between 1910 and 1975.


Here are some of the key points:

  • The main mechanism was greater age at first child, not total number of children (i.e. the clever are breeding more slowly).
  • As in many such studies, the effect is stronger for women.
  • One allele associated with more children and having them earlier also tags a haplotype associated with “reduced intracranial volume” and neuroticism: “…thus a striking case where a variant associated with a phenotype typically regarded as unfavorable could nonetheless be also associated with increased ‘fitness’ in the evolutionary sense.”
  • The decrease in POLY EDU prevalence was faster earlier in the century, but this is an artifact of the higher survival rates of people with a higher propensity for education (tying in with the well-known finding that higher IQ is associated with higher life expectancy). The decline from 1940 onwards is linear, and is a better basis for estimating the change in the average polygenic score over time.
  • POLY EDU is estimated to be declining by 0.010 standard units (SUs) per decade, a figure that rises to 0.028 SUs per decade after correcting for the fact that the measure captures only a fraction of the full genetic component of educational attainment (POLY FULL).
  • The trends in POLY FULL are estimated to be causing a decline of 0.30 IQ points per decade.
  • The authors note that this has so far been entirely canceled out, and then some, by the Flynn effect, but that it could still have “a very substantial effect if the trend persists for centuries.”

Many other studies indicate that the FLynn effect ended or went into reverse across the developed world by the 2000s at the latest.

If it’s a permanent plateau, we could be seeing 3-IQ-point declines per century. Extend that out for two or three centuries, add some more Third World immigration, and you get the 1 S.D. IQ decline that I posited for the Age of Malthusian Industrialism, a.k.a. the “business as usual” scenario.
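Spelled out (the per-decade rate is the Kong et al. estimate quoted above; attributing the remaining gap to immigration is my assumption, not the paper’s):

```python
# Cumulative decline if the genotypic trend (~0.30 IQ points/decade) runs
# unmasked once the Flynn effect plateaus.
decline_per_decade = 0.30
for centuries in (1, 2, 3):
    print(f"after {centuries} century(ies): -{decline_per_decade * 10 * centuries:.0f} IQ points")
# Three centuries of the genotypic trend alone gives ~9 points; in this
# scenario, low-IQ immigration closes the remaining gap to a full 15-point
# (1 S.D.) decline.
```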


Fundamentally solve the “intelligence problem,” and all other problems become trivial.

The problem is that this problem is a very hard one, and our native wit is unlikely to suffice. Moreover, because problems tend to get harder, not easier, as you advance up the technological ladder (Karlin, 2015), in a “business as usual” scenario with no substantial intelligence augmentation we will effectively only have a 100-200 year “window” to effect this breakthrough before global dysgenic fertility patterns rule it out entirely for a large part of the next millennium.

To avoid a period of prolonged technological and scientific stagnation, with its attendant risks of collapse, our global “hive mind” (or “noosphere”) will at a minimum have to sustain and preferably sustainably augment its own intelligence. The end goal is to create (or become) a machine, or network of machines, that recursively augment their own intelligence – “the last invention that man need ever make” (Good, 1965).

In light of this, there are five main distinct ways in which human (or posthuman) civilization could develop in the next millennium.


(1) Direct Technosingularity

The development of artificial general intelligence (AGI), which should quickly bootstrap itself into a superintelligence – defined by Nick Bostrom as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Bostrom, 2014). Especially if this is a “hard” takeoff, the superintelligence will also likely become a singleton, an entity with global hegemony (Bostrom, 2006).

Many experts predict AGI could appear by the middle of the 21st century (Kurzweil, 2005; Müller & Bostrom, 2016). This should quickly auto-translate into a technological singularity, henceforth “technosingularity,” whose utilitarian value for humanity will depend on whether we manage to solve the AI alignment problem (i.e., whether we manage to figure out how to persuade the robots not to kill us all).

The technosingularity will creep up on us, and then radically transform absolutely everything, including the very possibility of any further meaningful prognostication – it will be “a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control” (Vinge, 1993). The “direct technosingularity” scenario is likely if AGI turns out to be relatively easy, as the futurist Ray Kurzweil and DeepMind CEO Demis Hassabis believe.

(2) The Age of Em

The development of Whole Brain Emulation (WBE) could accelerate the technosingularity, if it is relatively easy and is developed before AGI. As the economist Robin Hanson argues in his book The Age of Em, untold quintillions of emulated human minds, or “ems,” running trillions of times faster than biological wetware, should be able to effect a transition to true superintelligence and the technosingularity within a couple of human years (Hanson, 2016). This assumes that em civilization does not self-destruct, and that AGI does not ultimately prove to be an intractable problem. A simple Monte Carlo simulation by Anders Sandberg hints that WBE might be achieved by the 2060s (Sandberg, 2014).


Deus Ex: Human Revolution.

(3) Biosingularity

We still haven’t come close to exhausting our biological and biomechatronic potential for intelligence augmentation. The level of biological complexity has increased hyperbolically since the appearance of life on Earth (Markov & Korotayev, 2007), so even if both WBE and AGI turn out to be very hard, it might still be perfectly possible for human civilization to continue eking out huge further increases in aggregate cognitive power. Enough, perhaps, to kickstart the technosingularity.

There are many possible paths to a biosingularity.

The simplest one is through demographics: The tried and tested method of population growth (Korotaev & Khaltourina, 2006). As “technocornucopians” like Julian Simon argue, more people equals more potential innovators. However, only a tiny “smart fraction” can meaningfully contribute to technological progress, and global dysgenic fertility patterns imply that its share of the world population is going to go down inexorably now that the FLynn effect of environmental IQ increases is petering out across the world, especially in the high-IQ nations responsible for most technological progress in the first place (Dutton, Van Der Linden, & Lynn, 2016). In the long-term “business as usual” scenario, this will result in an Idiocracy incapable of any further technological progress and at permanent risk of a Malthusian population crash should average IQ fall below the level necessary to sustain technological civilization.

As such, dysgenic fertility will have to be countered by eugenic policies or technological interventions. The former are either too mild to make a cardinal difference, or too coercive to seriously advocate. This leaves us with the technological solutions, which in turn largely fall into two bins: Genomics and biomechatronics.

The simplest route, already on the cusp of technological feasibility, is embryo selection for IQ. This could result in gains of one standard deviation per generation, and an eventual increase of as much as 300 IQ points over baseline once all IQ-affecting alleles have been discovered and optimized for (Hsu, 2014; Shulman & Bostrom, 2014). That is perhaps overoptimistic, since it assumes that the effects will remain strictly additive and will not run into diminishing returns.
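As a sanity check on the one-standard-deviation-per-generation figure, here is a toy order-statistics simulation. Everything in it is an illustrative assumption rather than a number from Hsu or Shulman & Bostrom: ten embryos per cycle, a polygenic predictor capturing half the additive genetic variance, and a within-family genetic S.D. of 15/√2 points (embryos are siblings, so they vary less than the general population):

```python
# Toy Monte Carlo: expected IQ gain from implanting the top-scoring of n embryos.
import random
random.seed(0)

def selection_gain(n_embryos=10, r2=0.5, trials=50_000):
    wf_sd = 15.0 / 2 ** 0.5          # within-family genetic SD, in IQ points
    r = r2 ** 0.5                    # predictor-genotype correlation
    total = 0.0
    for _ in range(trials):
        g = [random.gauss(0, 1) for _ in range(n_embryos)]  # true genetic values
        scores = [r * gi + (1 - r2) ** 0.5 * random.gauss(0, 1) for gi in g]
        total += g[scores.index(max(scores))]  # true value of the chosen embryo
    return wf_sd * total / trials

print(f"expected gain: ~{selection_gain():.1f} IQ points per generation")
```

Under these assumptions the gain comes out to roughly 11-12 points per cycle, i.e. in the ballpark of one standard deviation per generation; weaker predictors or fewer embryos shrink it quickly.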

Even so, a world with a thousand or a million times as many John von Neumanns running about will be more civilized, far richer, and orders of magnitude more technologically dynamic than what we have now (just compare the differences in civility, prosperity, and social cohesion between regions in the same country separated by a mere half of a standard deviation in average IQ, such as Massachusetts and West Virginia). This hyperintelligent civilization’s chances of solving the WBE and/or AGI problem will be correspondingly much higher.

The problem is that getting to the promised land will take about a dozen generations, that is, at least 200-300 years. Do we really want to wait that long? We needn’t. Once technologies such as CRISPR/Cas9 mature, we can drastically accelerate the process and accomplish the same thing through direct gene editing. All this of course assumes that a concert of the world’s most powerful states doesn’t coordinate to vigorously clamp down on the new technologies.

Even so, we would still remain “bounded” by human biology. For instance, womb size and metabolic load constrain brain size, and the specificities of our neural substrate place an ultimate ceiling even on “genetically corrected” human intellectual potential.

There are four potential ways to go beyond biology, presented below from “most realistic” to “most sci-fi”:

Neuropharmacology: Nootropics already exist, but they do not increase IQ by any significant amount and are unlikely to do so in the future (Bostrom, 2014).

Biomechatronics: The development of neural implants to augment human cognition beyond its peak biological potential. The first start-ups, based for now on treatment as opposed to enhancement, are beginning to appear, such as Kernel, where the futurist Randal Koene is the head scientist. This “cyborg” approach promises a more seamless, and likely safer, integration with ems and/or intelligent machines, whensoever they might appear – this is the reason why Elon Musk is a proponent of this approach. However, there’s a good chance that meaningful brain-machine interfaces will be very hard to implement (Bostrom, 2014).

Nanotechnology: Nanobots could potentially optimize neural pathways, or even create their own foglet-based neural nets.

Direct Biosingularity: If WBE and/or superintelligence prove to be very hard or intractable, or come with “minor” issues such as a lack of rigorous solutions to the AI alignment problem or the permanent loss of conscious experience (Johnson, 2016), then we might attempt a direct biosingularity – for instance, Nick Bostrom suggests the development of novel synthetic genes, and even more “exotic possibilities” such as vats full of complexly structured cortical tissue or “uplifted” transgenic animals, especially elephants or whales that can support very large brains (Bostrom, 2014). The terminal result of a true biosingularity might be some kind of “ecotechnic singleton,” e.g. Stanisław Lem’s Solaris, a planet dominated by a globe-spanning sentient ocean.

Since it is bounded by the speed of neuronal chemical reactions, the biosingularity will be a much slower affair than the Age of Em or a superintelligence explosion, not to mention the technosingularity that would likely soon follow either of those two events. However, human civilization in this scenario might still eventually achieve the critical mass of cognitive power needed to solve WBE or AGI, thus setting off the chain reaction that leads to the technosingularity.


(4) Eschaton

Nick Bostrom defined existential risk thus: “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential” (Bostrom, 2002).

We can divide existential risks into four main bins: Geoplanetary; Anthropic; Technological; and Philosophical.

In any given decade, a gamma ray burst or even a very big asteroid could snuff us out in our earthly cradle. However, the background risk is both constant and extremely low, so it would be cosmically bad luck for a geoplanetary Götterdämmerung to do us in just as we are about to enter the posthuman era.

There are three big sources of “anthropic” existential risk: Nuclear war, climate change, and the exhaustion of high-EROEI energy sources.

Fears of atomic annihilation are understandable, but even a full-scale thermonuclear exchange between Russia and the US is survivable, and will not result in the collapse of industrial civilization à la A Canticle for Leibowitz or the Fallout video games, let alone human extinction (Kahn, 1960; Kearny, 1979). This was true during the Cold War and it is doubly true today, when nuclear weapons stocks are much lower. To be sure, some modest percentage of the world population will die, and a majority of the capital stock in the warring nations will be destroyed, but as Herman Kahn might have said, this is a tragic but nonetheless distinguishable outcome compared to a true “existential risk.”

Much the same can be said of anthropogenic climate change. While it would probably do more harm than good, at least in the medium-term (Stager, 2011), even the worst outcomes like a clathrate collapse will most likely not translate into James Lovelock’s apocalyptic visions of “breeding pairs” desperately eking out a hardscrabble survival in the Arctic. The only truly terminal outcome would be a runaway greenhouse effect that turns Earth into Venus, but there is simply nowhere near enough carbon on our planetary surface for that to happen.

As regards global energy supplies, while the end of high-density fossil fuels might somewhat reduce living standards relative to what they would have otherwise been, there is no evidence it would cause economic decline, let alone technological regression back to the Olduvai Gorge conditions as some of the most alarmist “doomers” have claimed. We still have a lot of fat to cut! Ultimately, the material culture even of an energy-starved country like Cuba compares very positively to those of 95% of all humans who have ever lived. Besides, there are still centuries’ worth of coal reserves left on the planet, and nuclear and solar power have been exploited to only a small fraction of their potential.

By far the biggest technological risk is malevolent AGI, so much so that entire research outfits such as MIRI have sprung up to work on it. However, it is so tightly coupled to the Technosingularity scenario that I will refrain from further commentary on it here.

This leaves mostly just the “philosophical,” or logically derived, existential risks. For instance, the computer simulation we are in might end (Bostrom, 2003) – perhaps because we are not interesting enough (if we fail to reach technosingularity), or for lack of hardware to simulate an intelligence explosion (if we do). Another disquieting possibility is implied by the foreboding silence all around us: as Enrico Fermi asked, “Where is everyone?” Perhaps we are truly alone. Or perhaps alien post-singularity civilizations stay silent for a good reason.

We began to blithely broadcast our presence to the void more than a century ago, so if there is indeed a “superpredator” civilization keeping watch over the galaxy, ready to swoop down at the first sign of a potential rival (e.g. for the simulation’s limited computing resources), then our doom may have already long been written onto the stars. However, unless they have figured out how to subvert the laws of physics, their response will be bounded by the speed of light. As such, the question of whether it takes us half a century or a millennium to solve the intelligence problem – and by extension, all other problems, including space colonization – assumes the most cardinal importance!


Vladimir Manyukhin, Tower of Sin.

(5) The Age of Malthusian Industrialism (or, “Business as Usual”)

The 21st century turns out to be a disappointment in all respects. We do not merge with the Machine God, nor do we descend back into the Olduvai Gorge by way of the Fury Road. Instead, we get to experience the true torture of seeing the conventional, mainstream forecasts of all the boring, besuited economists, businessmen, and sundry beigeocrats pan out.

Human genetic editing is banned by government edict around the world, to “protect human dignity” in the religious countries and “prevent inequality” in the religiously progressive ones. The 1% predictably flout these regulations at will, improving their progeny while keeping the rest of the human biomass down where they believe it belongs, but the elites do not have the demographic weight to compensate for plummeting average IQs as dysgenics decisively overtakes the FLynn Effect.

We discover that Kurzweil’s cake is a lie. Moore’s Law stalls, and the current buzz over deep learning turns into a permanent AI winter. Robin Hanson dies a disappointed man, though not before cryogenically freezing himself in the hope that he would be revived as an em. But Alcor goes bankrupt in 2145, and when it is discovered that somebody had embezzled the funds set aside for just such a contingency, nobody can be found to pay to keep those weird ice mummies around. They are perfunctorily tossed into a ditch, and whatever vestigial consciousness their frozen husks might have still possessed seeps and dissolves into the dirt along with their thawing lifeblood. A supermall is built on their bones around what is now an extremely crowded location in the Phoenix megapolis.

For the old concerns about graying populations and pensions are now ancient history. Because fertility preferences, like all aspects of personality, are heritable – and thus ultracompetitive in a world where the old Malthusian constraints have been relaxed – the “breeders” have long overtaken the “rearers” as a percentage of the population, and humanity is now in the midst of an epochal baby boom that will last centuries. Just as the human population rose tenfold from 1 billion in 1800 to 10 billion by 2100, so it will rise by yet another order of magnitude in the next two or three centuries. But this demographic expansion is highly dysgenic, so global average IQ falls by a standard deviation and technology stagnates. Sometime towards the middle of the millennium, the population will approach 100 billion souls and will soar past the carrying capacity of the global industrial economy.

Then things will get pretty awful.

But as they say, every problem contains the seed of its own solution. Gnon sets to winnowing the population, culling the sickly, the stupid, and the spendthrift. As the neoreactionary philosopher Nick Land notes, waxing Lovecraftian, “There is no machinery extant, or even rigorously imaginable, that can sustain a single iota of attained value outside the forges of Hell.”

In the harsh new world of Malthusian industrialism, Idiocracy starts giving way to A Farewell to Alms: the eugenic fertility patterns that undergirded IQ gains in Early Modern Britain and paved the way to the industrial revolution. A few more centuries of the most intelligent and hard-working having more surviving grandchildren, and we will be back to where we are today, capable of having a second stab at solving the intelligence problem, but able to draw from a vastly bigger population for the task.

Assuming that a Tyranid hive fleet hasn’t gobbled up Terra in the intervening millennium…

Longing for Home.

The Forking Paths of the Third Millennium

In response to criticism that he was wasting his time on an unlikely scenario, Robin Hanson pointed out that even if there was just a 1% chance of the Age of Em coming about, studying it was well worth his while, considering the sheer number of future minds and the amount of potential suffering at stake.

Although I can imagine some readers considering some of these scenarios as less likely than others, I think it’s fair to say that all of them are at least minimally plausible, and that most people would also assign a greater than 1% likelihood to a majority of them. As such, they are legitimate objects of serious consideration.

My own probability assessment is as follows:

(1) (a) Direct Technosingularity – 25%, if Kurzweil/MIRI/DeepMind are correct, with a probability peak around 2045, and most likely to be implemented via neural networks (Lin & Tegmark, 2016).

(2) The Age of Em – <1%, since we cannot obtain functional models even of 40-year-old microchips by scanning them, to say nothing of biological organisms (Jonas & Kording, 2016).

(3) (a) Biosingularity to Technosingularity – 50%, since the genomics revolution is just getting started and governments are unlikely to either want to, let alone be successful at, rigorously suppressing it. And if AGI is harder than the optimists say, and will take considerably longer than mid-century to develop, then it’s a safe bet that IQ-augmented humans will come to play a critical role in eventually developing it. I would put the probability peak for a technosingularity from a biosingularity at around 2100.

(3) (b) Direct Biosingularity – 5%, if we decide that proceeding with AGI is too risky, or that consciousness both has cardinal inherent value and is only possible with a biological substrate.

(4) Eschaton – 10%, of which: (a) Philosophical existential risks – 5%; (b) Malevolent AGI – 1%; (c) Other existential risks, primarily technological ones: 4%.

(5) The Age of Malthusian Industrialism – 10%, with about even odds on whether we manage to launch the technosingularity the second time round.

There is a huge amount of literature on four of these five scenarios. The most famous book on the technosingularity is Ray Kurzweil’s The Singularity is Near, though you could make do with Vernor Vinge’s classic article The Coming Technological Singularity. Robin Hanson’s The Age of Em is the book on its subject. Some of the components of a potential biosingularity are already within our technological horizon – Stephen Hsu is worth following on this topic, though as regards biomechatronics, for now it remains more sci-fi than science (obligatory nod to the Deus Ex video game franchise). The popular literature on existential risks of all kinds is vast, with Nick Bostrom’s Superintelligence being the definitive work on AGI risks. It is also well worth reading his many articles on philosophical existential risks.

Ironically, by far the biggest lacuna is with regards to the “business as usual” scenario. It’s as if the world’s futurist thinkers have been so consumed with the most exotic and “interesting” scenarios (e.g. superintelligence, ems, socio-economic collapse, etc.) that they have neglected to consider what will happen if we take all the standard economic and demographic projections for this century, apply our understanding of economics, psychometrics, technology, and evolutionary psychology to them, and stretch them out to their logical conclusions.

The resultant Age of Malthusian Industrialism is not only easier to imagine than many of the other scenarios, and by extension easier for modern people to connect with, but also genuinely interesting in its own right. It is also very important to understand well, because it is by no means a “good scenario,” even if it is perhaps the most “natural” one: it will eventually entail unimaginable amounts of suffering for untold billions a few centuries down the line, when the time comes to balance the Malthusian equation. We will also have to spend an extended amount of time under an elevated level of philosophical existential risk. This is the price we would pay for state regulations that block the path to a biosingularity today.


Bostrom, N. (2002). Existential risks. Journal of Evolution and Technology / WTA, 9(1), 1–31.

Bostrom, N. (2003). Are We Living in a Computer Simulation? The Philosophical Quarterly, 53(211), 243–255.

Bostrom, N. (2006). What is a Singleton? Linguistic and Philosophical Investigations, 5(2), 48–54.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Dutton, E., Van Der Linden, D., & Lynn, R. (2016). The negative Flynn Effect: A systematic literature review. Intelligence, 59, 163–169.

Good, I. J. (1965). Speculations Concerning the First Ultraintelligent Machine. In F. Alt & M. Ruminoff (Eds.), Advances in Computers, volume 6. Academic Press.

Hanson, R. (2016). The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford University Press.

Hsu, S. D. H. (2014, August 14). On the genetic architecture of intelligence and other quantitative traits. arXiv [q-bio.GN]. Retrieved from

Johnson, M. (2016). Principia Qualia: the executive summary. Open Theory. Retrieved from

Jonas, E., & Kording, K. (2016). Could a neuroscientist understand a microprocessor? bioRxiv. Retrieved from

Kahn, H. (1960). On Thermonuclear War. Princeton University Press.

Karlin, A. (2015). Introduction to Apollo’s Ascent. The Unz Review. Retrieved from

Kearny, C. H. (1979). Nuclear war survival skills. NWS Research Bureau.

Korotaev, A. V., & Khaltourina, D. (2006). Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends in Africa. Editorial URSS.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin.

Lin, H. W., & Tegmark, M. (2016, August 29). Why does deep and cheap learning work so well? arXiv [cond-mat.dis-nn]. Retrieved from

Markov, A. V., & Korotayev, A. V. (2007). Phanerozoic marine biodiversity follows a hyperbolic trend. Palaeoworld, 16(4), 311–318.

Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing.

Sandberg, A. (2014). Monte Carlo model of brain emulation development. Retrieved from

Shulman, C., & Bostrom, N. (2014). Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer? Global Policy, 5(1), 85–92.

Stager, C. (2011). Deep Future: The Next 100,000 Years of Life on Earth. Macmillan.

Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Retrieved from

About Anatoly Karlin

I am a blogger, thinker, and businessman in the SF Bay Area. I’m originally from Russia, spent many years in Britain, and studied at U.C. Berkeley.

One of my tenets is that ideologies tend to suck. As such, I hesitate about attaching labels to myself. That said, if it’s really necessary, I suppose “liberal-conservative neoreactionary” would be close enough.

Though I consider myself part of the Orthodox Church, my philosophy and spiritual views are more influenced by digital physics, Gnosticism, and Russian cosmism than anything specifically Judeo-Christian.
