
PAPER REVIEW

Kong, Augustine et al. – 2016 – Selection against variants in the genome associated with educational attainment



This paper makes the case that the prevalence of gene variants associated with a greater propensity for education (POLY EDU) declined among Icelanders born between 1910 and 1975.

[Figure: POLY EDU and fertility]

Here are some of the key points:

  • The main mechanism was greater age at first child, not total number of children (i.e. the clever are breeding more slowly).
  • As in many such studies, the effect is stronger for women.
  • One allele associated with more children and having them earlier also tags a haplotype associated with “reduced intracranial volume” and neuroticism: “… thus a striking case where a variant associated with a phenotype typically regarded as unfavorable could nonetheless be also associated with increased ‘fitness’ in the evolutionary sense.”
  • The decline in POLY EDU appears steeper in the earlier part of the century, but this is an artifact of the better survival of people with a higher propensity for education (tying in with the well-known finding that higher IQ is associated with higher life expectancy). The decline from 1940 onwards is linear, and provides a better estimate of the change in the average polygenic score over time.
  • POLY EDU is estimated to be declining by 0.010 standard units (SUs) per decade; the implied decline in the full genetic component of educational attainment (POLY FULL) is 0.028 SUs per decade, since the measured score captures only a fraction of it.
  • The trends in POLY FULL are estimated to be causing a decline of 0.30 IQ points per decade.
  • The authors note that this has so far been entirely canceled out, and then some, by the Flynn effect, but that it could still have “a very substantial effect if the trend persists for centuries.”

Many other studies indicate that the FLynn effect has ended or gone into reverse across the developed world, by the 2000s at the latest.

If it’s a permanent plateau, we could be looking at declines of 3 IQ points per century. Extend that out over two or three centuries, add some more Third World immigration, and you get the 1 S.D. IQ decline that I posited for the Age of Malthusian Industrialism, a.k.a. the business as usual scenario.
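For readers who want the arithmetic spelled out, here is a minimal back-of-the-envelope sketch, assuming the POLY FULL estimate of 0.30 IQ points per decade stays constant and that 1 S.D. equals 15 IQ points (both assumptions mine, for illustration):

```python
# Back-of-the-envelope projection of the selection-driven IQ decline.
# Assumptions (for illustration only): the 0.30 IQ points/decade POLY FULL
# estimate stays constant over time, and 1 standard deviation = 15 IQ points.
DECLINE_PER_DECADE = 0.30
SD_IQ = 15.0

for centuries in (1, 2, 3):
    total = DECLINE_PER_DECADE * centuries * 10
    print(f"{centuries} century(ies): -{total:.1f} IQ points ({total / SD_IQ:.2f} SD)")

# Prints -3.0 points after one century and -9.0 points (0.60 SD) after three;
# on these assumptions, selection alone gets most but not all of the way to a
# 1 S.D. decline, with the rest coming from compositional (migration) change.
```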

 

Fundamentally solve the “intelligence problem,” and all other problems become trivial.

The problem is that this problem is a very hard one, and our native wit is unlikely to suffice. Moreover, because problems tend to get harder, not easier, as you advance up the technological ladder (Karlin, 2015), in a “business as usual” scenario with no substantial intelligence augmentation we will effectively only have a 100-200 year “window” to effect this breakthrough before global dysgenic fertility patterns rule it out entirely for a large part of the next millennium.

To avoid a period of prolonged technological and scientific stagnation, with its attendant risks of collapse, our global “hive mind” (or “noosphere”) will at a minimum have to sustain and preferably sustainably augment its own intelligence. The end goal is to create (or become) a machine, or network of machines, that recursively augment their own intelligence – “the last invention that man need ever make” (Good, 1965).

In light of this, there are five distinct ways in which human (or posthuman) civilization could develop over the next millennium.


(1) Direct Technosingularity

The development of artificial general intelligence (AGI), which should quickly bootstrap itself into a superintelligence – defined by Nick Bostrom as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Bostrom, 2014). Especially if this is a “hard” takeoff, the superintelligence will also likely become a singleton, an entity with global hegemony (Bostrom, 2006).

Many experts predict AGI could appear by the middle of the 21st century (Kurzweil, 2005; Müller & Bostrom, 2016). This should quickly auto-translate into a technological singularity, henceforth “technosingularity,” whose utilitarian value for humanity will depend on whether we manage to solve the AI alignment problem (i.e., whether we manage to figure out how to persuade the robots not to kill us all).

The technosingularity will creep up on us, and then radically transform absolutely everything, including the very possibility of any further meaningful prognostication – it will be “a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control” (Vinge, 1993). The “direct technosingularity” scenario is likely if AGI turns out to be relatively easy, as the futurist Ray Kurzweil and DeepMind CEO Demis Hassabis believe.

(2) The Age of Em

The development of Whole Brain Emulation (WBE) could accelerate the technosingularity, if it is relatively easy and is developed before AGI. As the economist Robin Hanson argues in his book The Age of Em, untold quintillions of emulated human minds, or “ems,” running trillions of times faster than biological wetware, should be able to effect a transition to true superintelligence and the technosingularity within a couple of human years (Hanson, 2016). This assumes that em civilization does not self-destruct, and that AGI does not ultimately prove to be an intractable problem. A simple Monte Carlo simulation by Anders Sandberg hints that WBE might be achieved by the 2060s (Sandberg, 2014).
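Sandberg’s report builds a far more detailed model, but the core Monte Carlo idea is simple enough to sketch: treat WBE as arriving only when scanning, computing, and neuroscience have all matured, draw uncertain arrival dates for each, and take the latest. The distributions and parameter values below are illustrative assumptions of mine, not Sandberg’s numbers.

```python
import random

# Toy Monte Carlo in the spirit of Sandberg (2014): WBE requires scanning,
# computing, and neuroscience to all mature, and arrives with the last of the
# three. All means and spreads below are illustrative assumptions.
random.seed(0)

def wbe_arrival():
    scanning = random.gauss(2055, 15)
    computing = random.gauss(2045, 10)
    neuroscience = random.gauss(2060, 20)
    return max(scanning, computing, neuroscience)

samples = sorted(wbe_arrival() for _ in range(100_000))
print(f"Median arrival in this toy model: ~{samples[len(samples) // 2]:.0f}")
print(f"90th percentile: ~{samples[int(len(samples) * 0.9)]:.0f}")
```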

[Image: Deus Ex: Human Revolution]

(3) Biosingularity

We still haven’t come close to exhausting our biological and biomechatronic potential for intelligence augmentation. The level of biological complexity has increased hyperbolically since the appearance of life on Earth (Markov & Korotayev, 2007), so even if both WBE and AGI turn out to be very hard, it might still be perfectly possible for human civilization to continue eking out huge further increases in aggregate cognitive power. Enough, perhaps, to kickstart the technosingularity.

There are many possible paths to a biosingularity.

The simplest one is through demographics: the tried and tested method of population growth (Korotaev & Khaltourina, 2006). As “technocornucopians” like Julian Simon argue, more people means more potential innovators. However, only a tiny “smart fraction” can meaningfully contribute to technological progress, and global dysgenic fertility patterns imply that its share of the world population will decline inexorably now that the FLynn effect of environmental IQ increases is petering out across the world, especially in the high-IQ nations responsible for most technological progress in the first place (Dutton, Van Der Linden, & Lynn, 2016). In the long-term “business as usual” scenario, this will result in an Idiocracy incapable of any further technological progress and at permanent risk of a Malthusian population crash should average IQ fall below the level necessary to sustain technological civilization.
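The “smart fraction” point is essentially an observation about normal distribution tails: a modest fall in the mean produces a disproportionate collapse in the share of the population above any fixed high threshold. A minimal illustration, where the threshold of IQ 130 and the SD of 15 are my own assumed values rather than anything from the cited studies:

```python
from statistics import NormalDist

# Share of the population above a fixed "smart fraction" threshold as the
# mean falls. Threshold (130) and SD (15) are assumed for illustration.
THRESHOLD = 130
SD = 15

for mean in (100, 97, 94, 85):
    share = 1 - NormalDist(mean, SD).cdf(THRESHOLD)
    print(f"mean IQ {mean}: {share * 100:.2f}% above {THRESHOLD}")

# A 1 SD fall in the mean (100 -> 85) shrinks the >130 share from ~2.28%
# to ~0.13%, roughly a 17-fold contraction of the pool of potential innovators.
```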

As such, dysgenic fertility will have to be countered by eugenic policies or technological interventions. The former are either too mild to make a cardinal difference, or too coercive to seriously advocate. This leaves us with the technological solutions, which in turn largely fall into two bins: Genomics and biomechatronics.

The simplest route, already on the cusp of technological feasibility, is embryo selection for IQ. This could result in gains of one standard deviation per generation, and an eventual increase of as much as 300 IQ points over baseline once all IQ-affecting alleles have been discovered and optimized for (Hsu, 2014; Shulman & Bostrom, 2014). That is perhaps overoptimistic, since it assumes that the effects will remain strictly additive and will not run into diminishing returns.
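The logic behind the one-standard-deviation-per-generation figure is essentially order statistics: select the embryo that ranks highest on a noisy polygenic predictor, and the expected gain scales with the batch size and the predictor’s accuracy. Below is a minimal Monte Carlo sketch of that logic, not the Shulman & Bostrom model itself; the batch size, the within-family SD, and the predictor accuracy R are assumptions chosen purely for illustration.

```python
import random

# Toy embryo-selection model. Each embryo's true additive genetic IQ deviation
# G is drawn from the within-family distribution (assumed SD of ~0.7 * 15 IQ
# points); we pick the embryo scoring highest on a predictor P correlated R
# with G. All three parameters below are assumptions.
random.seed(1)

N_EMBRYOS = 10
WITHIN_SD = 0.7 * 15
R = 0.6  # assumed correlation between polygenic predictor and true value

def selection_gain():
    embryos = []
    for _ in range(N_EMBRYOS):
        g = random.gauss(0, WITHIN_SD)
        # Construct P so that corr(P, G) = R exactly.
        p = R * g + random.gauss(0, WITHIN_SD * (1 - R**2) ** 0.5)
        embryos.append((p, g))
    return max(embryos)[1]  # true value of the embryo ranked best by P

TRIALS = 50_000
avg_gain = sum(selection_gain() for _ in range(TRIALS)) / TRIALS
print(f"Average gain per generation in this toy model: ~{avg_gain:.1f} IQ points")
# With these assumptions the gain comes out a little under 10 points, i.e.
# somewhat below one full SD; larger batches or better predictors raise it.
```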

Even so, a world with a thousand or a million times as many John von Neumanns running about will be more civilized, far richer, and orders of magnitude more technologically dynamic than what we have now (just compare the differences in civility, prosperity, and social cohesion between regions in the same country separated by a mere half of a standard deviation in average IQ, such as Massachusetts and West Virginia). This hyperintelligent civilization’s chances of solving the WBE and/or AGI problem will be correspondingly much higher.

The problem is that getting to the promised land will take about a dozen generations, that is, at least 200-300 years. Do we really want to wait that long? We needn’t. Once technologies such as CRISPR/Cas9 mature, we can drastically accelerate the process and accomplish the same thing through direct gene editing. All this of course assumes that a concert of the world’s most powerful states doesn’t vigorously clamp down on the new technologies.

Even so, we would still remain “bounded” by human biology. For instance, womb size and metabolic load constrain brain size, and the specificities of our neural substrate place an ultimate ceiling even on “genetically corrected” human intellectual potential.

There are four potential ways to go beyond biology, presented below from “most realistic” to “most sci-fi”:

Neuropharmacology: Nootropics already exist, but they do not increase IQ by any significant amount and are unlikely to do so in the future (Bostrom, 2014).

Biomechatronics: The development of neural implants to augment human cognition beyond its peak biological potential. The first start-ups, focused for now on treatment as opposed to enhancement, are beginning to appear, such as Kernel, where the futurist Randal Koene is the head scientist. This “cyborg” approach promises a more seamless, and likely safer, integration with ems and/or intelligent machines, whensoever they might appear – which is why Elon Musk is a proponent of it. However, there’s a good chance that meaningful brain-machine interfaces will be very hard to implement (Bostrom, 2014).

Nanotechnology: Nanobots could potentially optimize neural pathways, or even create their own foglet-based neural nets.

Direct Biosingularity: If WBE and/or superintelligence prove to be very hard or intractable, or come with “minor” issues such as a lack of rigorous solutions to the AI alignment problem or the permanent loss of conscious experience (Johnson, 2016), then we might attempt a direct biosingularity – for instance, Nick Bostrom suggests the development of novel synthetic genes, and even more “exotic possibilities” such as vats full of complexly structured cortical tissue or “uplifted” transgenic animals, especially elephants or whales that can support very large brains (Bostrom, 2014). The terminal result of a true biosingularity might be some kind of “ecotechnic singleton,” e.g. Stanisław Lem’s Solaris, a planet dominated by a globe-spanning sentient ocean.

Since it would remain bounded by the speed of neuronal chemical reactions, it is safe to say that the biosingularity would be a much slower affair than The Age of Em or a superintelligence explosion, not to mention the technosingularity that would likely soon follow either of those two events. However, human civilization in this scenario might still eventually achieve the critical mass of cognitive power needed to solve WBE or AGI, thus setting off the chain reaction that leads to the technosingularity.


(4) Eschaton

Nick Bostrom defined existential risk thus: “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential” (Bostrom, 2002).

We can divide existential risks into four main bins: Geoplanetary; Anthropic; Technological; and Philosophical.

In any given decade, a gamma ray burst or even a very big asteroid could snuff us out in our earthly cradle. However, the background risk is both constant and extremely low, so it would be cosmically bad luck for a geoplanetary Götterdämmerung to do us in just as we are about to enter the posthuman era.

There are three big sources of “anthropic” existential risk: Nuclear war, climate change, and the exhaustion of high-EROEI energy sources.

Fears of atomic annihilation are understandable, but even a full-scale thermonuclear exchange between Russia and the US is survivable, and would not result in the collapse of industrial civilization à la A Canticle for Leibowitz or the Fallout video games, let alone human extinction (Kahn, 1960; Kearny, 1979). This was true during the Cold War and it is doubly true today, when nuclear weapons stocks are much lower. To be sure, some modest percentage of the world population would die, and a majority of the capital stock in the warring nations would be destroyed, but as Herman Kahn might have said, this is a tragic but nonetheless distinguishable outcome compared to a true “existential risk.”

Much the same can be said of anthropogenic climate change. While it would probably do more harm than good, at least in the medium-term (Stager, 2011), even the worst outcomes like a clathrate collapse will most likely not translate into James Lovelock’s apocalyptic visions of “breeding pairs” desperately eking out a hardscrabble survival in the Arctic. The only truly terminal outcome would be a runaway greenhouse effect that turns Earth into Venus, but there is simply nowhere near enough carbon on our planetary surface for that to happen.

As regards global energy supplies, while the end of high-density fossil fuels might somewhat reduce living standards relative to what they would otherwise have been, there is no evidence it would cause economic decline, let alone technological regression back to Olduvai Gorge conditions, as some of the most alarmist “doomers” have claimed. We still have a lot of fat to cut! Ultimately, the material culture of even an energy-starved country like Cuba compares very favorably to that of 95% of all humans who have ever lived. Besides, there are still centuries’ worth of coal reserves left on the planet, and nuclear and solar power have been exploited to only a small fraction of their potential.

By far the biggest technological risk is malevolent AGI, so much so that entire research outfits such as MIRI have sprung up to work on it. However, it is so tightly coupled to the Technosingularity scenario that I will refrain from further commentary on it here.

This leaves mostly just the “philosophical,” or logically derived, existential risks. For instance, the computer simulation we are in might end (Bostrom, 2003) – perhaps because we are not interesting enough (if we fail to reach the technosingularity), or for lack of hardware to simulate an intelligence explosion (if we do). Another disquieting possibility is implied by the foreboding silence all around us – as Enrico Fermi asked, “Where is everyone?” Perhaps we are truly alone. Or perhaps alien post-singularity civilizations stay silent for a good reason.

We began blithely broadcasting our presence to the void more than a century ago, so if there is indeed a “superpredator” civilization keeping watch over the galaxy, ready to swoop down at the first sign of a potential rival (e.g. for the simulation’s limited computing resources), then our doom may already have long been written in the stars. However, unless they have figured out how to subvert the laws of physics, their response will be bounded by the speed of light. As such, the question of whether it takes us half a century or a millennium to solve the intelligence problem – and by extension, all other problems, including space colonization – assumes the most cardinal importance!

[Image: Vladimir Manyukhin, Tower of Sin]

(5) The Age of Malthusian Industrialism (or, “Business as Usual”)

The 21st century turns out to be a disappointment in all respects. We do not merge with the Machine God, nor do we descend back into the Olduvai Gorge by way of the Fury Road. Instead, we get to experience the true torture of seeing the conventional, mainstream forecasts of all the boring, besuited economists, businessmen, and sundry beigeocrats pan out.

Human genetic editing is banned by government edict around the world, to “protect human dignity” in the religious countries and “prevent inequality” in the religiously progressive ones. The 1% predictably flout these regulations at will, improving their progeny while keeping the rest of the human biomass down where they believe it belongs, but the elites do not have the demographic weight to compensate for plummeting average IQs as dysgenics decisively overtakes the FLynn Effect.

We discover that Kurzweil’s cake is a lie. Moore’s Law stalls, and the current buzz over deep learning turns into a permanent AI winter. Robin Hanson dies a disappointed man, though not before cryogenically freezing himself in the hope that he will be revived as an em. But Alcor goes bankrupt in 2145, and when it is discovered that somebody had embezzled the funds set aside for just such a contingency, nobody can be found to pay to keep those weird ice mummies around. They are perfunctorily tossed into a ditch, and whatever vestigial consciousness their frozen husks might still have possessed seeps and dissolves into the dirt along with their thawing lifeblood. A supermall is built on their bones at what is now an extremely crowded location in the Phoenix megapolis.

For the old concerns about graying populations and pensions are now ancient history. Because fertility preferences, like all aspects of personality, are heritable – and thus ultracompetitive in a world where the old Malthusian constraints have been relaxed – the “breeders” have long overtaken the “rearers” as a percentage of the population, and humanity is now in the midst of an epochal baby boom that will last centuries. Just as the human population rose tenfold from 1 billion in 1800 to 10 billion by 2100, so it will rise by yet another order of magnitude in the next two or three centuries. But this demographic expansion is highly dysgenic, so global average IQ falls by a standard deviation and technology stagnates. Sometime towards the middle of the millennium, the population will approach 100 billion souls and soar past the carrying capacity of the global industrial economy.
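The population numbers here are less extreme than they might sound. A quick compound-growth check (the tenfold rise and the 200-300 year horizon are from the scenario above; the comparison with historical growth rates is my own addition) shows the required rates are well within the historical range:

```python
import math

# Average annual growth rates implied by the population trajectory sketched
# above. The 1800-2100 tenfold rise is from the text; the 200- and 300-year
# horizons for the next tenfold rise are the scenario's own assumption.
cases = [
    ("1 bn (1800) -> 10 bn (2100)", 10, 300),
    ("10 bn -> 100 bn over 200 years", 10, 200),
    ("10 bn -> 100 bn over 300 years", 10, 300),
]
for label, factor, years in cases:
    rate = math.exp(math.log(factor) / years) - 1
    print(f"{label}: ~{rate * 100:.2f}%/yr")

# A tenfold rise needs only ~0.77%/yr sustained over 300 years, or ~1.16%/yr
# over 200, comfortably below the 20th-century peak of roughly 2%/yr.
```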

Then things will get pretty awful.

But as they say, every problem contains the seed of its own solution. Gnon sets to winnowing the population, culling the sickly, the stupid, and the spendthrift. As the neoreactionary philosopher Nick Land notes, waxing Lovecraftian, “There is no machinery extant, or even rigorously imaginable, that can sustain a single iota of attained value outside the forges of Hell.”

In the harsh new world of Malthusian industrialism, Idiocracy starts giving way to A Farewell to Alms, the eugenic fertility patterns that undergirded IQ gains in Early Modern Britain and paved the way to the industrial revolution. A few more centuries of the most intelligent and hard-working having more surviving grandchildren, and we will be back to where we are today, capable of having a second stab at solving the intelligence problem, and this time able to draw from a vastly bigger population for the task.

Assuming that a Tyranid hive fleet hasn’t gobbled up Terra in the intervening millennium…

[Image: 2061.su, Longing for Home]

The Forking Paths of the Third Millennium

In response to criticism that he was wasting his time on an unlikely scenario, Robin Hanson pointed out that even if there was just a 1% chance of The Age of Em coming about, studying it was well worth his while, considering the sheer number of future consciousnesses and the amount of potential suffering at stake.

Although I can imagine some readers considering some of these scenarios as less likely than others, I think it’s fair to say that all of them are at least minimally plausible, and that most people would also assign a greater than 1% likelihood to a majority of them. As such, they are legitimate objects of serious consideration.

My own probability assessment is as follows (a quick check of the totals is sketched after the list):

(1) (a) Direct Technosingularity – 25%, if Kurzweil/MIRI/DeepMind are correct, with a probability peak around 2045, and most likely to be implemented via neural networks (Lin & Tegmark, 2016).

(2) The Age of Em – <1%, since we cannot obtain functional models even of 40-year-old microchips by scanning them, to say nothing of biological organisms (Jonas & Kording, 2016).

(3) (a) Biosingularity to Technosingularity – 50%, since the genomics revolution is just getting started, and governments are unlikely to want to rigorously suppress it, let alone to succeed at doing so. And if AGI is harder than the optimists say, and takes considerably longer than mid-century to develop, then it’s a safe bet that IQ-augmented humans will come to play a critical role in eventually developing it. I would put the probability peak for a technosingularity arising from a biosingularity at around 2100.

(3) (b) Direct Biosingularity – 5%, if we decide that proceeding with AGI is too risky, or that consciousness both has cardinal inherent value and is only possible with a biological substrate.

(4) Eschaton – 10%, of which: (a) Philosophical existential risks – 5%; (b) Malevolent AGI – 1%; (c) Other existential risks, primarily technological ones – 4%.

(5) The Age of Malthusian Industrialism – 10%, with about even odds on whether we manage to launch the technosingularity the second time round.
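As a quick sanity check on these numbers (plugging in 0.5% as a stand-in for the “<1%” Age of Em figure), the assessments sum to roughly 100%:

```python
# Scenario probabilities from the list above; "<1%" for the Age of Em is
# represented here as 0.5% purely as a stand-in.
scenarios = {
    "Direct Technosingularity": 25.0,
    "The Age of Em": 0.5,
    "Biosingularity to Technosingularity": 50.0,
    "Direct Biosingularity": 5.0,
    "Eschaton": 10.0,
    "The Age of Malthusian Industrialism": 10.0,
}
print(f"Total: {sum(scenarios.values()):.1f}%")  # ~100.5%
```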

There is a huge amount of literature on four of these five scenarios. The most famous book on the technosingularity is Ray Kurzweil’s The Singularity is Near, though you could make do with Vernor Vinge’s classic article The Coming Technological Singularity. Robin Hanson’s The Age of Em is the book on its subject. Some of the components of a potential biosingularity are already within our technological horizon – Stephen Hsu is worth following on this topic, though as regards biomechatronics, for now it remains more sci-fi than science (obligatory nod to the Deus Ex video game franchise). The popular literature on existential risks of all kinds is vast, with Nick Bostrom’s Superintelligence being the definitive work on AGI risks. It is also well worth reading his many articles on philosophical existential risks.

Ironically, by far the biggest lacuna is with regards to the “business as usual” scenario. It’s as if the world’s futurist thinkers have been so consumed with the most exotic and “interesting” scenarios (e.g. superintelligence, ems, socio-economic collapse, etc.) that they have neglected to consider what will happen if we take all the standard economic and demographic projections for this century, apply our understanding of economics, psychometrics, technology, and evolutionary psychology to them, and stretch them out to their logical conclusions.

The resultant Age of Malthusian Industrialism is not only easier to imagine than many of the other scenarios, and by extension easier for modern people to connect with, but is also genuinely interesting in its own right. It is also very important to understand well, because it is by no means a “good scenario,” even if it is perhaps the most “natural” one: it will eventually entail unimaginable amounts of suffering for untold billions a few centuries down the line, when the time comes to balance the Malthusian equation. We would also have to spend an extended period under an elevated level of philosophical existential risk. That would be the price of state regulations that block the path to a biosingularity today.

Sources

Bostrom, N. (2002). Existential risks. Journal of Evolution and Technology / WTA, 9(1), 1–31.

Bostrom, N. (2003). Are We Living in a Computer Simulation? The Philosophical Quarterly, 53(211), 243–255.

Bostrom, N. (2006). What is a Singleton? Linguistic and Philosophical Investigations, 5(2), 48–54.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Dutton, E., Van Der Linden, D., & Lynn, R. (2016). The negative Flynn Effect: A systematic literature review. Intelligence, 59, 163–169.

Good, I. J. (1965). Speculations Concerning the First Ultraintelligent Machine. In F. Alt & M. Ruminoff (Eds.), Advances in Computers, volume 6. Academic Press.

Hanson, R. (2016). The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford University Press.

Hsu, S. D. H. (2014, August 14). On the genetic architecture of intelligence and other quantitative traits. arXiv [q-bio.GN]. Retrieved from http://arxiv.org/abs/1408.3421

Johnson, M. (2016). Principia Qualia: the executive summary. Open Theory. Retrieved from http://opentheory.net/2016/12/principia-qualia-executive-summary/

Jonas, E., & Kording, K. (2016). Could a neuroscientist understand a microprocessor? bioRxiv. Retrieved from http://www.biorxiv.org/content/early/2016/05/26/055624.abstract

Kahn, H. (1960). On thermonuclear war (Vol. 141). Cambridge Univ Press.

Karlin, A. (2015). Introduction to Apollo’s Ascent. The Unz Review. Retrieved from http://www.unz.com/akarlin/intro-apollos-ascent/

Kearny, C. H. (1979). Nuclear war survival skills. NWS Research Bureau.

Korotaev, A. V., & Khaltourina, D. (2006). Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends in Africa. Editorial URSS.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin.

Lin, H. W., & Tegmark, M. (2016, August 29). Why does deep and cheap learning work so well? arXiv [cond-mat.dis-nn]. Retrieved from http://arxiv.org/abs/1608.08225

Markov, A. V., & Korotayev, A. V. (2007). Phanerozoic marine biodiversity follows a hyperbolic trend. Palaeoworld, 16(4), 311–318.

Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing.

Sandberg, A. (2014). Monte Carlo model of brain emulation development. Retrieved from https://www.fhi.ox.ac.uk/reports/2014-1.pdf

Shulman, C., & Bostrom, N. (2014). Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer? Global Policy, 5(1), 85–92.

Stager, C. (2011). Deep Future: The Next 100,000 Years of Life on Earth. Macmillan.

Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Retrieved from https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

 