Fundamentally solve the “intelligence problem,” and all other problems become trivial.
The problem is that this problem is a very hard one, and our native wit is unlikely to suffice. Moreover, because problems tend to get harder, not easier, as you advance up the technological ladder (Karlin, 2015), in a “business as usual” scenario with no substantial intelligence augmentation we will effectively only have a 100-200 year “window” to effect this breakthrough before global dysgenic fertility patterns rule it out entirely for a large part of the next millennium.
To avoid a period of prolonged technological and scientific stagnation, with its attendant risks of collapse, our global “hive mind” (or “noosphere”) will at a minimum have to sustain and preferably sustainably augment its own intelligence. The end goal is to create (or become) a machine, or network of machines, that recursively augment their own intelligence – “the last invention that man need ever make” (Good, 1965).
In light of this, there are five main distinct ways in which human (or posthuman) civilization could develop in the next millennium.
(1) Direct Technosingularity
The development of artificial general intelligence (AGI), which should quickly bootstrap itself into a superintelligence – defined by Nick Bostrom as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Bostrom, 2014). Especially if this is a “hard” takeoff, the superintelligence will also likely become a singleton, an entity with global hegemony (Bostrom, 2006).
Many experts predict AGI could appear by the middle of the 21st century (Kurzweil, 2005; Müller & Bostrom, 2016). This should quickly auto-translate into a technological singularity, henceforth “technosingularity,” whose utilitarian value for humanity will depend on whether we manage to solve the AI alignment problem (i.e., whether we manage to figure out how to persuade the robots not to kill us all).
The technosingularity will creep up on us, and then radically transform absolutely everything, including the very possibility of any further meaningful prognostication – it will be “a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control” (Vinge, 1993). The “direct technosingularity” scenario is likely if AGI turns out to be relatively easy, as the futurist Ray Kurzweil and DeepMind CEO Demis Hassabis believe.
(2) The Age of Em
The development of Whole Brain Emulation (WBE) could accelerate the technosingularity, if it is relatively easy and is developed before AGI. As the economist Robin Hanson argues in his book The Age of Em, untold quintillions of emulated human minds, or “ems,” running trillions of times faster than biological wetware, should be able to effect a transition to true superintelligence and the technosingularity within a couple of human years (Hanson, 2016). This assumes that em civilization does not self-destruct, and that AGI does not ultimately prove to be an intractable problem. A simple Monte Carlo simulation by Anders Sandberg hints that WBE might be achieved by the 2060s (Sandberg, 2014).
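Sandberg's result can be illustrated with a toy Monte Carlo in the same spirit. To be clear, every parameter below is an illustrative assumption of mine, not Sandberg's: the draw combines an uncertain hardware requirement extrapolated along a Moore's-law-style doubling curve with an independent scanning-readiness delay, and takes the later of the two.

```python
import random
import statistics

random.seed(42)

def wbe_year_sample():
    """One draw of an (illustrative) WBE arrival year.

    Assumed, not Sandberg's actual parameters:
    - compute needed: 10^18 to 10^25 FLOPS, log-uniform
    - baseline supercomputer in 2020: 10^17 FLOPS
    - effective doubling time: 1.5-3 years, uniform
    - scanning/neuroscience readiness: an independent 20-60 year delay
    """
    needed_exp = random.uniform(18, 25)        # log10 of FLOPS required
    doublings = (needed_exp - 17) / 0.301      # log10(2) ~= 0.301
    doubling_time = random.uniform(1.5, 3.0)
    hardware_year = 2020 + doublings * doubling_time
    scan_year = 2020 + random.uniform(20, 60)
    return max(hardware_year, scan_year)       # need both hardware and scans

samples = [wbe_year_sample() for _ in range(100_000)]
print("median arrival year:", round(statistics.median(samples)))
```

Even with these made-up inputs, the median lands in roughly the same mid-to-late-century band, which is the point of the exercise: the conclusion is driven more by compounding hardware growth than by any single parameter.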
(3) Biosingularity

Deus Ex: Human Revolution.
We still haven’t come close to exhausting our biological and biomechatronic potential for intelligence augmentation. The level of biological complexity has increased hyperbolically since the appearance of life on Earth (Markov & Korotayev, 2007), so even if both WBE and AGI turn out to be very hard, it might still be perfectly possible for human civilization to continue eking out huge further increases in aggregate cognitive power. Enough, perhaps, to kickstart the technosingularity.
There are many possible paths to a biosingularity.
The simplest one is through demographics: The tried and tested method of population growth (Korotayev & Khaltourina, 2006). As “technocornucopians” like Julian Simon argue, more people equals more potential innovators. However, only a tiny “smart fraction” can meaningfully contribute to technological progress, and global dysgenic fertility patterns imply that its share of the world population is going to go down inexorably now that the FLynn effect of environmental IQ increases is petering out across the world, especially in the high IQ nations responsible for most technological progress in the first place (Dutton, Van Der Linden, & Lynn, 2016). In the long-term “business as usual” scenario, this will result in an Idiocracy incapable of any further technological progress and at permanent risk of a Malthusian population crash should average IQ fall below the level necessary to sustain technological civilization.
As such, dysgenic fertility will have to be countered by eugenic policies or technological interventions. The former are either too mild to make a cardinal difference, or too coercive to seriously advocate. This leaves us with the technological solutions, which in turn largely fall into two bins: Genomics and biomechatronics.
The simplest route, already on the cusp of technological feasibility, is embryo selection for IQ. This could result in gains of one standard deviation per generation, and an eventual increase of as much as 300 IQ points over baseline once all IQ-affecting alleles have been discovered and optimized for (Hsu, 2014; Shulman & Bostrom, 2014). That is perhaps overoptimistic, since it assumes that the effects will remain strictly additive and will not run into diminishing returns.
Even so, a world with a thousand or a million times as many John von Neumanns running about will be more civilized, far richer, and orders of magnitude more technologically dynamic than what we have now (just compare the differences in civility, prosperity, and social cohesion between regions in the same country separated by a mere half of a standard deviation in average IQ, such as Massachusetts and West Virginia). This hyperintelligent civilization’s chances of solving the WBE and/or AGI problem will be correspondingly much higher.
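The “million von Neumanns” claim follows from ordinary normal-distribution arithmetic: shifting a population’s mean even modestly multiplies the far tail enormously. A minimal sketch, with an illustrative threshold of IQ 160 and a one-standard-deviation shift:

```python
from math import erfc, sqrt

def frac_above(threshold_iq, mean_iq, sd=15.0):
    """Fraction of a normal IQ distribution lying above a threshold."""
    z = (threshold_iq - mean_iq) / sd
    return 0.5 * erfc(z / sqrt(2))  # normal survival function

baseline = frac_above(160, 100)  # ~3e-5, i.e. roughly 1 in 30,000
shifted = frac_above(160, 115)   # same threshold, mean raised by one SD
print(f"tail multiplier for a 1 SD shift: {shifted / baseline:.0f}x")
```

One standard deviation of mean gain multiplies the +4 SD tail by roughly forty; two or three standard deviations, of the kind embryo selection might eventually deliver, push the multiplier into the hundreds and thousands.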
The problem is that getting to the promised land will take about a dozen generations, that is, at least 200-300 years. Do we really want to wait that long? We needn’t. Once technologies such as CRISPR/Cas9 mature, we can drastically accelerate the process and accomplish the same thing through direct gene editing. All this of course assumes that a concert of the world’s most powerful states doesn’t coordinate to vigorously clamp down on the new technologies.
Even so, we would still remain “bounded” by human biology. For instance, womb size and metabolic load constrain brain size, and the specifics of our neural substrate place an ultimate ceiling even on “genetically corrected” human intellectual potential.
There are four potential ways to go beyond biology, presented below from “most realistic” to “most sci-fi”:
Neuropharmacology: Nootropics already exist, but they do not increase IQ by any significant amount and are unlikely to do so in the future (Bostrom, 2014).
Biomechatronics: The development of neural implants to augment human cognition beyond its peak biological potential. The first start-ups, based for now on treatment as opposed to enhancement, are beginning to appear, such as Kernel, where the futurist Randal Koene is the head scientist. This “cyborg” approach promises a more seamless, and likely safer, integration with ems and/or intelligent machines, whensoever they might appear – this is the reason why Elon Musk is a proponent of this approach. However, there’s a good chance that meaningful brain-machine interfaces will be very hard to implement (Bostrom, 2014).
Nanotechnology: Nanobots could potentially optimize neural pathways, or even create their own foglet-based neural nets.
Direct Biosingularity: If WBE and/or superintelligence prove to be very hard or intractable, or come with “minor” issues such as a lack of rigorous solutions to the AI alignment problem or the permanent loss of conscious experience (Johnson, 2016), then we might attempt a direct biosingularity – for instance, Nick Bostrom suggests the development of novel synthetic genes, and even more “exotic possibilities” such as vats full of complexly structured cortical tissue or “uplifted” transgenic animals, especially elephants or whales that can support very large brains (Bostrom, 2014). The terminal result of a true biosingularity might be some kind of “ecotechnic singleton,” e.g. Stanisław Lem’s Solaris, a planet dominated by a globe-spanning sentient ocean.
Since it is bounded by the speed of neuronal chemical reactions, it is safe to say that the biosingularity will be a much slower affair than The Age of Em or a superintelligence explosion, not to mention the technosingularity that would likely soon follow either of those two events. However, human civilization in this scenario might still eventually achieve the critical mass of cognitive power needed to solve WBE or AGI, thus setting off the chain reaction that leads to the technosingularity.
(4) Eschaton
Nick Bostrom defined existential risk thus: “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential.” (Bostrom, 2002)
We can divide existential risks into four main bins: Geoplanetary; Anthropic; Technological; and Philosophical.
In any given decade, a gamma ray burst or even a very big asteroid could snuff us out in our earthly cradle. However, the background risk is both constant and extremely low, so it would be cosmically bad luck for a geoplanetary Götterdämmerung to do us in just as we are about to enter the posthuman era.
There are three big sources of “anthropic” existential risk: Nuclear war, climate change, and the exhaustion of high-EROEI energy sources.
Fears of atomic annihilation are understandable, but even a full-scale thermonuclear exchange between Russia and the US is survivable, and will not result in the collapse of industrial civilization à la A Canticle for Leibowitz or the Fallout video games, let alone human extinction (Kahn, 1960; Kearny, 1979). This was true during the Cold War and it is doubly true today, when nuclear weapons stocks are much lower. To be sure, some modest percentage of the world population will die, and a majority of the capital stock in the warring nations will be destroyed, but as Herman Kahn might have said, this is a tragic but nonetheless distinguishable outcome compared to a true “existential risk.”
Much the same can be said of anthropogenic climate change. While it would probably do more harm than good, at least in the medium-term (Stager, 2011), even the worst outcomes like a clathrate collapse will most likely not translate into James Lovelock’s apocalyptic visions of “breeding pairs” desperately eking out a hardscrabble survival in the Arctic. The only truly terminal outcome would be a runaway greenhouse effect that turns Earth into Venus, but there is simply nowhere near enough carbon on our planetary surface for that to happen.
As regards global energy supplies, while the end of high-density fossil fuels might somewhat reduce living standards relative to what they would have otherwise been, there is no evidence it would cause economic decline, let alone technological regression back to Olduvai Gorge conditions as some of the most alarmist “doomers” have claimed. We still have a lot of fat to cut! Ultimately, the material culture even of an energy-starved country like Cuba compares favorably with that of 95% of all humans who have ever lived. Besides, there are still centuries’ worth of coal reserves left on the planet, and nuclear and solar power have been exploited to only a small fraction of their potential.
By far the biggest technological risk is malevolent AGI, so much so that entire research outfits such as MIRI have sprung up to work on it. However, it is so tightly coupled to the Technosingularity scenario that I will refrain from further commentary on it here.
This leaves mostly just the “philosophical,” or logically derived, existential risks. For instance, the computer simulation we are in might end (Bostrom, 2003) – perhaps because we are not interesting enough (if we fail to reach technosingularity), or for lack of hardware to simulate an intelligence explosion (if we do). Another disquieting possibility is implied by the foreboding silence all around as – as Enrico Fermi asked, “Where is everyone?” Perhaps we are truly alone. Or perhaps alien post-singularity civilizations stay silent for a good reason.
We began to blithely broadcast our presence to the void more than a century ago, so if there is indeed a “superpredator” civilization keeping watch over the galaxy, ready to swoop down at the first sign of a potential rival (e.g. for the simulation’s limited computing resources), then our doom may have already long been written in the stars. However, unless they have figured out how to subvert the laws of physics, their response will be bounded by the speed of light. As such, the question of whether it takes us half a century or a millennium to solve the intelligence problem – and by extension, all other problems, including space colonization – assumes the most cardinal importance!
Vladimir Manyukhin, Tower of Sin.
(5) The Age of Malthusian Industrialism (or, “Business as Usual”)
The 21st century turns out to be a disappointment in all respects. We do not merge with the Machine God, nor do we descend back into the Olduvai Gorge by way of the Fury Road. Instead, we get to experience the true torture of seeing the conventional, mainstream forecasts of all the boring, besuited economists, businessmen, and sundry beigeocrats pan out.
Human genetic editing is banned by government edict around the world, to “protect human dignity” in the religious countries and “prevent inequality” in the religiously progressive ones. The 1% predictably flout these regulations at will, improving their progeny while keeping the rest of the human biomass down where they believe it belongs, but the elites do not have the demographic weight to compensate for plummeting average IQs as dysgenics decisively overtakes the FLynn Effect.
We discover that Kurzweil’s cake is a lie. Moore’s Law stalls, and the current buzz over deep learning turns into a permanent AI winter. Robin Hanson dies a disappointed man, though not before cryogenically freezing himself in the hope that he would be revived as an em. But Alcor goes bankrupt in 2145, and when it is discovered that somebody had embezzled the funds set aside for just such a contingency, nobody can be found to pay to keep those weird ice mummies around. They are perfunctorily tossed into a ditch, and whatever vestigial consciousness their frozen husks might have still possessed seeps and dissolves into the dirt along with their thawing lifeblood. A supermall is built on their bones around what is now an extremely crowded location in the Phoenix megapolis.
For the old concerns about graying populations and pensions are now ancient history. Because fertility preferences, like all aspects of personality, are heritable – and thus ultracompetitive in a world where the old Malthusian constraints have been relaxed – the “breeders” have long overtaken the “rearers” as a percentage of the population, and humanity is now in the midst of an epochal baby boom that will last centuries. Just as the human population rose tenfold from 1 billion in 1800 to 10 billion by 2100, so it will rise by yet another order of magnitude in the next two or three centuries. But this demographic expansion is highly dysgenic, so global average IQ falls by a standard deviation and technology stagnates. Sometime towards the middle of the millennium, the population will approach 100 billion souls and will soar past the carrying capacity of the global industrial economy.
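The implied growth rates here are undramatic, which is the point: an order-of-magnitude rise needs only modest sustained fertility above replacement. A quick sanity check on the arithmetic (the 250-year horizon is my own pick from within the stated “two or three centuries”):

```python
from math import log

def implied_growth(pop0, pop1, years):
    """Continuously compounded annual growth rate taking pop0 to pop1."""
    return log(pop1 / pop0) / years

r1 = implied_growth(1, 10, 300)    # 1800 -> 2100, 1bn -> 10bn
r2 = implied_growth(10, 100, 250)  # the posited next order of magnitude
print(f"{r1:.2%} per year, then {r2:.2%} per year")
```

Both rates come out under one percent per year, i.e. well below the ~2% per year the world actually sustained at its 1960s demographic peak.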
Then things will get pretty awful.
But as they say, every problem contains the seed of its own solution. Gnon sets to winnowing the population, culling the sickly, the stupid, and the spendthrift. As the neoreactionary philosopher Nick Land notes, waxing Lovecraftian, “There is no machinery extant, or even rigorously imaginable, that can sustain a single iota of attained value outside the forges of Hell.”
In the harsh new world of Malthusian industrialism, Idiocracy starts giving way to A Farewell to Alms, the eugenic fertility patterns that undergirded IQ gains in Early Modern Britain and paved the way to the industrial revolution. A few more centuries of the most intelligent and hard-working having more surviving grandchildren, and we will be back to where we are today, capable of having a second stab at solving the intelligence problem but able to draw from a vastly bigger population for the task.
Assuming that a Tyranid hive fleet hadn’t gobbled up Terra in the intervening millennium…
2061.su, Longing for Home
The Forking Paths of the Third Millennium
In response to criticism that he was wasting his time on an unlikely scenario, Robin Hanson pointed out that even if there was just a 1% chance of The Age of Em coming about, studying it was well worth his while considering the sheer number of future consciousnesses and the amount of potential suffering at stake.
Although I can imagine some readers considering some of these scenarios as less likely than others, I think it’s fair to say that all of them are at least minimally plausible, and that most people would also assign a greater than 1% likelihood to a majority of them. As such, they are legitimate objects of serious consideration.
My own probability assessment is as follows:
(1) (a) Direct Technosingularity – 25%, if Kurzweil/MIRI/DeepMind are correct, with a probability peak around 2045, and most likely to be implemented via neural networks (Lin & Tegmark, 2016).
(2) The Age of Em – <1%, since we cannot obtain functional models even of 40-year-old microchips from scanning them, to say nothing of biological organisms (Jonas & Kording, 2016).
(3) (a) Biosingularity to Technosingularity – 50%, since the genomics revolution is just getting started and governments are unlikely to want, let alone be able, to rigorously suppress it. And if AGI is harder than the optimists say, and will take considerably longer than mid-century to develop, then it’s a safe bet that IQ-augmented humans will come to play a critical role in eventually developing it. I would put the probability peak for a technosingularity from a biosingularity at around 2100.
(3) (b) Direct Biosingularity – 5%, if we decide that proceeding with AGI is too risky, or that consciousness both has cardinal inherent value and is only possible with a biological substrate.
(4) Eschaton – 10%, of which: (a) Philosophical existential risks – 5%; (b) Malevolent AGI – 1%; (c) Other existential risks, primarily technological ones: 4%.
(5) The Age of Malthusian Industrialism – 10%, with about even odds on whether we manage to launch the technosingularity the second time round.
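As a bookkeeping check, these estimates are intended to exhaust the possibility space, and indeed they sum to roughly unity (treating “<1%” as 1%):

```python
# Scenario probabilities as stated above, in percent.
scenarios = {
    "Direct Technosingularity": 25,
    "The Age of Em": 1,  # stated as "<1%", rounded up here
    "Biosingularity to Technosingularity": 50,
    "Direct Biosingularity": 5,
    "Eschaton": 10,  # = 5% philosophical + 1% malevolent AGI + 4% other
    "The Age of Malthusian Industrialism": 10,
}
total = sum(scenarios.values())
print(f"total: {total}%")  # within rounding of 100%
```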
There is a huge amount of literature on four of these five scenarios. The most famous book on the technosingularity is Ray Kurzweil’s The Singularity is Near, though you could make do with Vernor Vinge’s classic article The Coming Technological Singularity. Robin Hanson’s The Age of Em is the book on its subject. Some of the components of a potential biosingularity are already within our technological horizon – Stephen Hsu is worth following on this topic, though as regards biomechatronics, for now it remains more sci-fi than science (obligatory nod to the Deus Ex video game franchise). The popular literature on existential risks of all kinds is vast, with Nick Bostrom’s Superintelligence being the definitive work on AGI risks. It is also well worth reading his many articles on philosophical existential risks.
Ironically, by far the biggest lacuna is with regards to the “business as usual” scenario. It’s as if the world’s futurist thinkers have been so consumed with the most exotic and “interesting” scenarios (e.g. superintelligence, ems, socio-economic collapse, etc.) that they have neglected to consider what will happen if we take all the standard economic and demographic projections for this century, apply our understanding of economics, psychometrics, technology, and evolutionary psychology to them, and stretch them out to their logical conclusions.
The resultant Age of Malthusian Industrialism is not only easier to imagine than many of the other scenarios, and by extension easier for modern people to connect with, but also genuinely interesting in its own right. It is also very important to understand well, because it is by no means a “good scenario,” even if it is perhaps the most “natural” one: it will eventually entail unimaginable amounts of suffering for untold billions a few centuries down the line, when the time comes to balance the Malthusian equation. We will also have to spend an extended period under an elevated level of philosophical existential risk. This is the price we would pay for state regulations that block the path to a biosingularity today.
Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing.
Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Retrieved from https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html