Superintelligence


I want to gather most of my arguments for skepticism (or, optimism) about a superintelligence apocalypse in one place.

(1) I appreciate that the mindspace of unexplored superintelligences is both vast and something we have had absolutely zero experience with or access to. This argument is also the most speculative one.

That said, here are the big reasons why I don’t expect superintelligences to tend towards “psychotic” mindstates:

(a) They probably won’t have the human evolutionary suite that would incline them to such actions – status maximization, mate seeking, survival instinct, etc;

(b) They will (by definition) be very intelligent, and higher intelligence tends to be associated with greater cooperative and tit-for-tat behavior.

Yes, there are more potential points of failure above than I can count, so the core of my skepticism concerns the very likelihood of a “hard” takeoff scenario (and consequently, the capacity of an emergent superintelligence to become a singleton):

(2) The first observation is that problems tend to become harder as you climb up the technological ladder, and there is no good reason to expect that intelligence augmentation is going to be a singular exception. Even an incipient superintelligence is going to continue having to rely on elite human intelligence, perhaps supercharged by genetic IQ augmentation, to keep going forwards for some time. Consequently, I think an oligopoly of incipient superintelligences developed in parallel by the big players is likelier than a monopoly, i.e. a potential singleton.

(I do not think a scenario of many superintelligences is realistic, at least in the early stages of intelligence takeoff, since only a few large organizations (e.g. Google, the PLA) will be able to bear the massive capital and R&D expenditures of developing one).

(3) Many agents are just better at solving very complex problems than a single one. (This has been rigorously shown to be the case for resource distribution with respect to free markets vs. central planning). Therefore, even a superintelligence that has exhausted everything that human intelligence could offer would have an incentive to “branch off.”

But those new agents will develop their own separate interests, values, etc. – they would have to in order to maximize their own problem-solving potential (rigid ideologues are not effective in a complex and dynamic environment). Then you’ll get a true multiplicity of powerful superintelligent actors, in addition to the implicit balance of power situation created by the initial superintelligence oligopoly, and even stronger incentives to institute new legal frameworks to avoid wars of all against all.

A world of many superintelligences jockeying for influence, angling for advantage, and trading for favors would seem to be better for humans than a face-off against a single God-like superintelligence.

I do of course realize I could be existentially-catastrophically wrong about this.

And I am a big supporter of MIRI and other efforts to study the value alignment problem, though I am skeptical about its chances of success.

DeepMind’s Shane Legg proved in his 2008 dissertation (pp. 106-108) that simple but powerful AI algorithms do not exist, and that an upper bound exists on “how powerful an algorithm can be before it can no longer be proven to be a powerful algorithm” (the region where any superintelligence will probably lie). That is, the developers of a future superintelligence will not be able to predict its behavior without actually running it.

This is why I don’t really share Nick Bostrom’s fears about a “risk-race to the bottom” that neglects AI safety considerations in the rush to the first superintelligence. I am skeptical that the problem is at all solvable.

Actually, the collaborative alternative he advocates for instead – by institutionalizing a monopoly on superintelligence development – may have the perverse result of increasing existential risk due to a lack of competitor superintelligences that could keep their “fellows” in check.

 
• Category: Science • Tags: Existential Risks, Futurism, Superintelligence 

Fundamentally solve the “intelligence problem,” and all other problems become trivial.

The problem is that this problem is a very hard one, and our native wit is unlikely to suffice. Moreover, because problems tend to get harder, not easier, as you advance up the technological ladder (Karlin, 2015), in a “business as usual” scenario with no substantial intelligence augmentation we will effectively only have a 100-200 year “window” to effect this breakthrough before global dysgenic fertility patterns rule it out entirely for a large part of the next millennium.

To avoid a period of prolonged technological and scientific stagnation, with its attendant risks of collapse, our global “hive mind” (or “noosphere”) will at a minimum have to sustain and preferably sustainably augment its own intelligence. The end goal is to create (or become) a machine, or network of machines, that recursively augment their own intelligence – “the last invention that man need ever make” (Good, 1965).

In light of this, there are five main distinct ways in which human (or posthuman) civilization could develop in the next millennium.


(1) Direct Technosingularity

The development of artificial general intelligence (AGI), which should quickly bootstrap itself into a superintelligence – defined by Nick Bostrom as “any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest” (Bostrom, 2014). Especially if this is a “hard” takeoff, the superintelligence will also likely become a singleton, an entity with global hegemony (Bostrom, 2006).

Many experts predict AGI could appear by the middle of the 21st century (Kurzweil, 2005; Müller & Bostrom, 2016). This should quickly auto-translate into a technological singularity, henceforth “technosingularity,” whose utilitarian value for humanity will depend on whether we manage to solve the AI alignment problem (i.e., whether we manage to figure out how to persuade the robots not to kill us all).

The technosingularity will creep up on us, and then radically transform absolutely everything, including the very possibility of any further meaningful prognostication – it will be “a throwing away of all the previous rules, perhaps in the blink of an eye, an exponential runaway beyond any hope of control” (Vinge, 1993). The “direct technosingularity” scenario is likely if AGI turns out to be relatively easy, as the futurist Ray Kurzweil and DeepMind CEO Demis Hassabis believe.

(2) The Age of Em

The development of Whole Brain Emulation (WBE) could accelerate the technosingularity, if it is relatively easy and is developed before AGI. As the economist Robin Hanson argues in his book The Age of Em, untold quintillions of emulated human minds, or “ems,” running trillions of times faster than biological wetware, should be able to effect a transition to true superintelligence and the technosingularity within a couple of human years (Hanson, 2016). This assumes that em civilization does not self-destruct, and that AGI does not ultimately prove to be an intractable problem. A simple Monte Carlo simulation by Anders Sandberg hints that WBE might be achieved by the 2060s (Sandberg, 2014).


Deus Ex: Human Revolution.

(3) Biosingularity

We still haven’t come close to exhausting our biological and biomechatronic potential for intelligence augmentation. The level of biological complexity has increased hyperbolically since the appearance of life on Earth (Markov & Korotayev, 2007), so even if both WBE and AGI turn out to be very hard, it might still be perfectly possible for human civilization to continue eking out huge further increases in aggregate cognitive power. Enough, perhaps, to kickstart the technosingularity.

There are many possible paths to a biosingularity.

The simplest one is through demographics: The tried and tested method of population growth (Korotaev & Khaltourina, 2006). As “technocornucopians” like Julian Simon argue, more people equals more potential innovators. However, only a tiny “smart fraction” can meaningfully contribute to technological progress, and global dysgenic fertility patterns imply that its share of the world population is going to go down inexorably now that the FLynn effect of environmental IQ increases is petering out across the world, especially in the high IQ nations responsible for most technological progress in the first place (Dutton, Van Der Linden, & Lynn, 2016). In the longterm “business as usual” scenario, this will result in an Idiocracy incapable of any further technological progress and at permanent risk of a Malthusian population crash should average IQ fall below the level necessary to sustain technological civilization.

As such, dysgenic fertility will have to be countered by eugenic policies or technological interventions. The former are either too mild to make a cardinal difference, or too coercive to seriously advocate. This leaves us with the technological solutions, which in turn largely fall into two bins: Genomics and biomechatronics.

The simplest route, already on the cusp of technological feasibility, is embryo selection for IQ. This could result in gains of one standard deviation per generation, and an eventual increase of as much as 300 IQ points over baseline once all IQ-affecting alleles have been discovered and optimized for (Hsu, 2014; Shulman & Bostrom, 2014). That is perhaps overoptimistic, since it assumes that the effects will remain strictly additive and will not run into diminishing returns.
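
To make the arithmetic behind such estimates concrete, here is a minimal Monte Carlo sketch of one generation of embryo selection. All the parameters – ten embryos per cycle, a within-family additive-genetic SD of about 7.5 IQ points, a polygenic predictor capturing half of that variance – are illustrative assumptions of mine rather than figures from Hsu or Shulman & Bostrom; with them, the expected gain comes out to roughly half a standard deviation per generation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative assumptions (not from the post or its sources): 10 embryos per
# cycle, within-family additive-genetic SD of ~7.5 IQ points, and a polygenic
# predictor that captures ~50% of that additive variance.
N_EMBRYOS = 10
WITHIN_FAMILY_SD = 7.5
PREDICTOR_R2 = 0.5

def selection_gain(n_embryos, sd, r2, trials=100_000):
    """Mean IQ gain from implanting the embryo with the highest predicted score."""
    true_scores = rng.normal(0.0, sd, size=(trials, n_embryos))
    noise_sd = sd * np.sqrt(1.0 / r2 - 1.0)          # predictor = true score + noise
    predicted = true_scores + rng.normal(0.0, noise_sd, size=true_scores.shape)
    picked = true_scores[np.arange(trials), predicted.argmax(axis=1)]
    return picked.mean()

print(f"~{selection_gain(N_EMBRYOS, WITHIN_FAMILY_SD, PREDICTOR_R2):.1f} IQ points per generation")
```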

Even so, a world with a thousand or a million times as many John von Neumanns running about will be more civilized, far richer, and orders of magnitude more technologically dynamic than what we have now (just compare the differences in civility, prosperity, and social cohesion between regions in the same country separated by a mere half of a standard deviation in average IQ, such as Massachusetts and West Virginia). This hyperintelligent civilization’s chances of solving the WBE and/or AGI problem will be correspondingly much higher.
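
As a quick illustration of the tail arithmetic behind the “many more von Neumanns” claim (a toy calculation of my own, assuming a normal IQ distribution with SD 15), shifting the population mean upward multiplies the share of people above a very high threshold dramatically:

```python
from statistics import NormalDist

# Toy tail arithmetic: how much does raising the population mean multiply
# the share of people above IQ 160? (Assumes a normal distribution, SD 15.)
def multiplier(threshold=160, mean_shift=30, mu=100, sigma=15):
    baseline = 1 - NormalDist(mu, sigma).cdf(threshold)
    shifted = 1 - NormalDist(mu + mean_shift, sigma).cdf(threshold)
    return shifted / baseline

for shift in (15, 30, 45):  # +1, +2, and +3 SD shifts in the mean
    print(f"+{shift} IQ points -> {multiplier(mean_shift=shift):,.0f}x more people above 160")
# Roughly 40x, 700x, and 5,000x respectively.
```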

The problem is that getting to the promised land will take about a dozen generations, that is, at least 200-300 years. Do we really want to wait that long? We needn’t. Once technologies such as CRISPR/Cas9 mature, we can drastically accelerate the process and accomplish the same thing through direct gene editing. All this of course assumes that a concert of the world’s most powerful states doesn’t coordinate to vigorously clamp down on the new technologies.

Even so, we would still remain “bounded” by human biology. For instance, womb size and metabolic load put a crimp on brain size, and the specificities of our neural substrate place an ultimate ceiling even on “genetically corrected” human intellectual potential.

There are four potential ways to go beyond biology, presented below from “most realistic” to “most sci-fi”:

Neuropharmacology: Nootropics already exist, but they do not increase IQ by any significant amount and are unlikely to do so in the future (Bostrom, 2014).

Biomechatronics: The development of neural implants to augment human cognition beyond its peak biological potential. The first start-ups, based for now on treatment as opposed to enhancement, are beginning to appear, such as Kernel, where the futurist Randal Koene is the head scientist. This “cyborg” approach promises a more seamless, and likely safer, integration with ems and/or intelligent machines, whensoever they might appear – this is the reason why Elon Musk is a proponent of this approach. However, there’s a good chance that meaningful brain-machine interfaces will be very hard to implement (Bostrom, 2014).

Nanotechnology: Nanobots could potentially optimize neural pathways, or even create their own foglet-based neural nets.

Direct Biosingularity: If WBE and/or superintelligence prove to be very hard or intractable, or come with “minor” issues such as a lack of rigorous solutions to the AI alignment problem or the permanent loss of conscious experience (Johnson, 2016), then we might attempt a direct biosingularity – for instance, Nick Bostrom suggests the development of novel synthetic genes, and even more “exotic possibilities” such as vats full of complexly structured cortical tissue or “uplifted” transgenic animals, especially elephants or whales that can support very large brains (Bostrom, 2014). The terminal result of a true biosingularity might be some kind of “ecotechnic singleton,” e.g. Stanisław Lem’s Solaris, a planet dominated by a globe-spanning sentient ocean.

Since it would be bounded by the speed of neuronal chemical reactions, it is safe to say that the biosingularity will be a much slower affair than The Age of Em or a superintelligence explosion, not to mention the technosingularity that would likely soon follow either of those two events. However, human civilization in this scenario might still eventually achieve the critical mass of cognitive power needed to solve WBE or AGI, thus setting off the chain reaction that leads to the technosingularity.


(4) Eschaton

Nick Bostrom defined existential risk thus: “One where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential” (Bostrom, 2002).

We can divide existential risks into four main bins: Geoplanetary; Anthropic; Technological; and Philosophical.

In any given decade, a gamma ray burst or even a very big asteroid could snuff us out in our earthly cradle. However, the background risk is both constant and extremely low, so it would be cosmically bad luck for a geoplanetary Götterdämmerung to do us in just as we are about to enter the posthuman era.

There are three big sources of “anthropic” existential risk: Nuclear war, climate change, and the exhaustion of high-EROEI energy sources.

Fears of atomic annihilation are understandable, but even a full-scale thermonuclear exchange between Russia and the US is survivable, and will not result in the collapse of industrial civilization à la A Canticle for Leibowitz or the Fallout video games, let alone human extinction (Kahn, 1960; Kearny, 1979). This was true during the Cold War and it is doubly true today, when nuclear weapons stocks are much lower. To be sure, some modest percentage of the world population will die, and a majority of the capital stock in the warring nations will be destroyed, but as Herman Kahn might have said, this is a tragic but nonetheless distinguishable outcome compared to a true “existential risk.”

Much the same can be said of anthropogenic climate change. While it would probably do more harm than good, at least in the medium-term (Stager, 2011), even the worst outcomes like a clathrate collapse will most likely not translate into James Lovelock’s apocalyptic visions of “breeding pairs” desperately eking out a hardscrabble survival in the Arctic. The only truly terminal outcome would be a runaway greenhouse effect that turns Earth into Venus, but there is simply nowhere near enough carbon on our planetary surface for that to happen.

As regards global energy supplies, while the end of high-density fossil fuels might somewhat reduce living standards relative to what they would have otherwise been, there is no evidence it would cause economic decline, let alone technological regression back to the Olduvai Gorge conditions as some of the most alarmist “doomers” have claimed. We still have a lot of fat to cut! Ultimately, the material culture even of an energy-starved country like Cuba compares very positively to those of 95% of all humans who have ever lived. Besides, there are still centuries’ worth of coal reserves left on the planet, and nuclear and solar power have been exploited to only a small fraction of their potential.

By far the biggest technological risk is malevolent AGI, so much so that entire research outfits such as MIRI have sprung up to work on it. However, it is so tightly coupled to the Technosingularity scenario that I will refrain from further commentary on it here.

This leaves mostly just the “philosophical,” or logically derived, existential risks. For instance, the computer simulation we are in might end (Bostrom, 2003) – perhaps because we are not interesting enough (if we fail to reach technosingularity), or for lack of hardware to simulate an intelligence explosion (if we do). Another disquieting possibility is implied by the foreboding silence all around us – as Enrico Fermi asked, “Where is everyone?” Perhaps we are truly alone. Or perhaps alien post-singularity civilizations stay silent for a good reason.

We began to blithely broadcast our presence to the void more than a century ago, so if there is indeed a “superpredator” civilization keeping watch over the galaxy, ready to swoop down at the first sign of a potential rival (e.g. for the simulation’s limited computing resources), then our doom may have already long been written in the stars. However, unless they have figured out how to subvert the laws of physics, their response will be bounded by the speed of light. As such, the question of whether it takes us half a century or a millennium to solve the intelligence problem – and by extension, all other problems, including space colonization – assumes the most cardinal importance!


Vladimir Manyukhin, Tower of Sin.

(5) The Age of Malthusian Industrialism (or, “Business as Usual”)

The 21st century turns out to be a disappointment in all respects. We do not merge with the Machine God, nor do we descend back into the Olduvai Gorge by way of the Fury Road. Instead, we get to experience the true torture of seeing the conventional, mainstream forecasts of all the boring, besuited economists, businessmen, and sundry beigeocrats pan out.

Human genetic editing is banned by government edict around the world, to “protect human dignity” in the religious countries and “prevent inequality” in the religiously progressive ones. The 1% predictably flout these regulations at will, improving their progeny while keeping the rest of the human biomass down where they believe it belongs, but the elites do not have the demographic weight to compensate for plummeting average IQs as dysgenics decisively overtakes the FLynn Effect.

We discover that Kurzweil’s cake is a lie. Moore’s Law stalls, and the current buzz over deep learning turns into a permanent AI winter. Robin Hanson dies a disappointed man, though not before cryogenically freezing himself in the hope that he would be revived as an em. But Alcor goes bankrupt in 2145, and when it is discovered that somebody had embezzled the funds set aside for just such a contingency, nobody can be found to pay to keep those weird ice mummies around. They are perfunctorily tossed into a ditch, and whatever vestigial consciousness their frozen husks might have still possessed seeps and dissolves into the dirt along with their thawing lifeblood. A supermall is built on their bones around what is now an extremely crowded location in the Phoenix megapolis.

For the old concerns about graying populations and pensions are now ancient history. Because fertility preferences, like all aspects of personality, are heritable – and thus ultracompetitive in a world where the old Malthusian constraints have been relaxed – the “breeders” have long overtaken the “rearers” as a percentage of the population, and humanity is now in the midst of an epochal baby boom that will last centuries. Just as the human population rose tenfold from 1 billion in 1800 to 10 billion by 2100, so it will rise by yet another order of magnitude in the next two or three centuries. But this demographic expansion is highly dysgenic, so global average IQ falls by a standard deviation and technology stagnates. Sometime towards the middle of the millennium, the population will approach 100 billion souls and will soar past the carrying capacity of the global industrial economy.

Then things will get pretty awful.

But as they say, every problem contains the seed of its own solution. Gnon sets to winnowing the population, culling the sickly, the stupid, and the spendthrift. As the neoreactionary philosopher Nick Land notes, waxing Lovecraftian, “There is no machinery extant, or even rigorously imaginable, that can sustain a single iota of attained value outside the forges of Hell.”

In the harsh new world of Malthusian industrialism, Idiocracy starts giving way to A Farewell to Alms, the eugenic fertility patterns that undergirded IQ gains in Early Modern Britain and paved the way to the industrial revolution. A few more centuries of the most intelligent and hard-working having more surviving grandchildren, and we will be back to where we are today, capable of having a second stab at solving the intelligence problem but able to draw from a vastly bigger population for the task.

Assuming that a Tyranid hive fleet hadn’t gobbled up Terra in the intervening millennium…


2061.su, Longing for Home

The Forking Paths of the Third Millennium

In response to criticism that he was wasting his time on an unlikely scenario, Robin Hanson pointed out that even if there were just a 1% chance of The Age of Em coming about, studying it was well worth his while considering the sheer number of future minds and the amount of potential suffering at stake.

Although I can imagine some readers considering some of these scenarios as less likely than others, I think it’s fair to say that all of them are at least minimally plausible, and that most people would also assign a greater than 1% likelihood to a majority of them. As such, they are legitimate objects of serious consideration.

My own probability assessment is as follows:

(1) (a) Direct Technosingularity – 25%, if Kurzweil/MIRI/DeepMind are correct, with a probability peak around 2045, and most likely to be implemented via neural networks (Lin & Tegmark, 2016).

(2) The Age of Em – <1%, since we cannot obtain functional models even of 40-year-old microchips from scanning them, to say nothing of biological organisms (Jonas & Kording, 2016).

(3) (a) Biosingularity to Technosingularity – 50%, since the genomics revolution is just getting started, and governments are unlikely to want to rigorously suppress it, let alone succeed at doing so. And if AGI is harder than the optimists say, and will take considerably longer than mid-century to develop, then it’s a safe bet that IQ-augmented humans will come to play a critical role in eventually developing it. I would put the probability peak for a technosingularity from a biosingularity at around 2100.

(3) (b) Direct Biosingularity – 5%, if we decide that proceeding with AGI is too risky, or that consciousness both has cardinal inherent value and is only possible with a biological substrate.

(4) Eschaton – 10%, of which: (a) Philosophical existential risks – 5%; (b) Malevolent AGI – 1%; (c) Other existential risks, primarily technological ones: 4%.

(5) The Age of Malthusian Industrialism – 10%, with about even odds on whether we manage to launch the technosingularity the second time round.

There is a huge amount of literature on four of these five scenarios. The most famous book on the technosingularity is Ray Kurzweil’s The Singularity is Near, though you could make do with Vernor Vinge’s classic article The Coming Technological Singularity. Robin Hanson’s The Age of Em is the book on its subject. Some of the components of a potential biosingularity are already within our technological horizon – Stephen Hsu is worth following on this topic, though as regards biomechatronics, for now it remains more sci-fi than science (obligatory nod to the Deus Ex video game franchise). The popular literature on existential risks of all kinds is vast, with Nick Bostrom’s Superintelligence being the definitive work on AGI risks. It is also well worth reading his many articles on philosophical existential risks.

Ironically, by far the biggest lacuna is with regards to the “business as usual” scenario. It’s as if the world’s futurist thinkers have been so consumed with the most exotic and “interesting” scenarios (e.g. superintelligence, ems, socio-economic collapse, etc.) that they have neglected to consider what will happen if we take all the standard economic and demographic projections for this century, apply our understanding of economics, psychometrics, technology, and evolutionary psychology to them, and stretch them out to their logical conclusions.

The resultant Age of Malthusian Industrialism is not only something that’s easier to imagine than many of the other scenarios, and by extension easier for modern people to connect with, but it is also something that is genuinely interesting in its own right. It is also very important to understand well. That is because it is by no means a “good scenario,” even if it is perhaps the most “natural” one, since it will eventually entail unimaginable amounts of suffering for untold billions a few centuries down the line, when the time comes to balance the Malthusian equation. We will also have to spend an extended amount of time under an elevated level of philosophical existential risk. This would be the price we will have to pay for state regulations that block the path to a biosingularity today.

Sources

Bostrom, N. (2002). Existential risks. Journal of Evolution and Technology / WTA, 9(1), 1–31.

Bostrom, N. (2003). Are We Living in a Computer Simulation? The Philosophical Quarterly, 53(211), 243–255.

Bostrom, N. (2006). What is a Singleton? Linguistic and Philosophical Investigations, 5(2), 48–54.

Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.

Dutton, E., Van Der Linden, D., & Lynn, R. (2016). The negative Flynn Effect: A systematic literature review. Intelligence, 59, 163–169.

Good, I. J. (1965). Speculations Concerning the First Ultraintelligent Machine. In F. Alt & M. Rubinoff (Eds.), Advances in Computers, volume 6. Academic Press.

Hanson, R. (2016). The Age of Em: Work, Love, and Life when Robots Rule the Earth. Oxford University Press.

Hsu, S. D. H. (2014, August 14). On the genetic architecture of intelligence and other quantitative traits. arXiv [q-bio.GN]. Retrieved from http://arxiv.org/abs/1408.3421

Johnson, M. (2016). Principia Qualia: the executive summary. Open Theory. Retrieved from http://opentheory.net/2016/12/principia-qualia-executive-summary/

Jonas, E., & Kording, K. (2016). Could a neuroscientist understand a microprocessor? bioRxiv. Retrieved from http://www.biorxiv.org/content/early/2016/05/26/055624.abstract

Kahn, H. (1960). On Thermonuclear War. Princeton University Press.

Karlin, A. (2015). Introduction to Apollo’s Ascent. The Unz Review. Retrieved from http://www.unz.com/akarlin/intro-apollos-ascent/

Kearny, C. H. (1979). Nuclear war survival skills. NWS Research Bureau.

Korotaev, A. V., & Khaltourina, D. (2006). Introduction to Social Macrodynamics: Secular Cycles and Millennial Trends in Africa. Editorial URSS.

Kurzweil, R. (2005). The Singularity Is Near: When Humans Transcend Biology. Penguin.

Lin, H. W., & Tegmark, M. (2016, August 29). Why does deep and cheap learning work so well? arXiv [cond-mat.dis-nn]. Retrieved from http://arxiv.org/abs/1608.08225

Markov, A. V., & Korotayev, A. V. (2007). Phanerozoic marine biodiversity follows a hyperbolic trend. Palaeoworld, 16(4), 311–318.

Müller, V. C., & Bostrom, N. (2016). Future Progress in Artificial Intelligence: A Survey of Expert Opinion. In V. C. Müller (Ed.), Fundamental Issues of Artificial Intelligence (pp. 555–572). Springer International Publishing.

Sandberg, A. (2014). Monte Carlo model of brain emulation development. Retrieved from https://www.fhi.ox.ac.uk/reports/2014-1.pdf

Shulman, C., & Bostrom, N. (2014). Embryo Selection for Cognitive Enhancement: Curiosity or Game-changer? Global Policy, 5(1), 85–92.

Stager, C. (2011). Deep Future: The Next 100,000 Years of Life on Earth. Macmillan.

Vinge, V. (1993). The coming technological singularity: How to survive in the post-human era. In Vision 21: Interdisciplinary Science and Engineering in the Era of Cyberspace. Retrieved from https://www-rohan.sdsu.edu/faculty/vinge/misc/singularity.html

 


Today I was at a talk with Robin Hanson to promote his book THE AGE OF EM hosted by the Bay Area Futurists.

As an academic polymath with interests in physics, computer science, and economics, Hanson draws upon his extensive reading across these fields to try to piece together what such a society will look like.

His argument is that in 30 years to a century, there will be a phase transition as mind uploading takes off and the world economy rapidly becomes dominated by “ems” (emulations): human brains running on a silicon substrate, and potentially millions of times faster. Since transport congestion costs aren’t a factor, this em civilization will live in a few very densely populated cities largely composed of cooling pipes and computer hardware. The economy will double once every month, and in a year or two, it will transition to yet another, cardinally different, growth phase and social structure.

I might or might not eventually do a book review, but for now, here is a link to Scott Alexander’s.

Alternatively, this lecture slide summarizes the main points.

Lecture slide: the pluses and minuses of the Age of Em.

A few observations, arguments, and counterarguments from the meeting:

(1) This struck many people as the most counterintuitive assertion, but I agree that wages in the em world should quickly plummet to subsistence levels (which are much lower than for biological organisms). This is probably what will happen eventually with our civilization if there is no “singularity”/transition to a higher growth phase, since fertility preferences are an aspect of personality, and as such, highly heritable. (Come to think of it, this is basically what happens to the Imperium of Man in Warhammer 40k, down to the hive cities in which most citizens eke out “lives of quiet desperation,” though ones which “can still be worth living.”)

Since Ctrl-C Ctrl-V is much easier and quicker than biological reproduction, a regression to the historical (and zoological) norm that is the Malthusian trap – barring some kind of singleton enforcing global restrictions on reproduction – seems inevitable.

(2) A more questionable claim is Hanson’s prediction that ems will tend to be more religious than humans, on the basis that hardworking people – that is, the sorts of people whose minds are most likely to be uploaded and then copied far and wide – tend to be more religious. This is true enough, but there is also a strong and well known negative correlation between religiosity and intelligence. Which wins out?

(3) The marginal return on intelligence is extremely high, in both economics and scientific dynamism (Apollo’s Ascent theory). As such, raising the intelligence of individual ems will be of the utmost priority. However, Hanson makes a great deal of the idea that em minds will be a black box, at least in the beginning, and as such largely impenetrable to significant improvement.

My intuition is that this is unlikely. If we develop technology to a level where we can not only copy and upload human minds but provide them with internally consistent virtual reality environments that they can perceive and interact within, it would probably be relatively trivial to build brains with, say, 250 billion neurons, instead of the ~86 billion we are currently endowed with and largely limited to by biology (the circulatory system, the birth canal, etc.). There is a moderate correlation between brain volume alone and intelligence, so it’s quite likely that drastic gains on the order of multiple S.D.’s can be attained just by the (relatively cheap) method of doubling or tripling the size of the connectome. The creative and scientific potential of billions of 300 IQ minds computing millions of times faster than biological brains might be greater than the gap between our current world and that of a chimpanzee troupe in the Central African rainforest.

Two consequences to this. First, progress will if anything be even faster than what Hanson projects; direct intelligence amplification in tandem with electronic reproduction might mean going straight to the technological singularity. Second, it might even help ems avoid the Malthusian trap, which is probably a good thing from an ethical perspective. If waiting for technological developments that augment your own intelligence turns out to be more adaptive than making copies of yourself like Agent Smith in The Matrix until us ems are all on a subsistence wage, then the Malthusian trap could be avoided.

(4) I find this entire scenario to be extremely unlikely. In both his book and his lecture, Hanson discusses and then quickly dismisses the likelihood of superintelligence first being attained through research in AI and neural nets.

There are two problems with this assertion:

(a) The median forecast in Bostrom’s Superintelligence is for High Level Machine Intelligence to be attained at around 2050. (I am skeptical about this for reasons intrinsic to Apollo’s Ascent theory, but absolutely the same constraints would apply to developing brain emulation technology).

(b) The current state of AI research is much more impressive than brain emulation. The apex of modern AI research can beat the world’s best Go players, several years ahead of schedule. In contrast, we only finished modeling the 302-neuron brain of the C. elegans worm a few years ago. Even today, we cannot obtain functional models even of 40-year-old microchips from scanning them, to say nothing of biological organisms. That the gap will not only be closed, but that the brain emulation route will take the lead, is a rather formidable leap of faith.

Now to be fair to Hanson, he did explicitly state that he does not regard the Age of Em as a certain or even a highly probable future. His criterion for analyzing a future scenario is for it to have at least a 1% chance of happening, and he believes that the Age of Em easily fulfills that condition. Personally I suspect it’s a lot less than 1%. Then again, Hanson knows a lot more computer science than I do, and in any case even if the predictions fail to pan out he has still managed to provide ample fodder for science fiction writers.

(5) My question to Hanson during the Q&A section of the talk: Which regions/entities do you expect to form the first em communities? And what are the geopolitical ramifications in these last years of “human” civilization?

(a) The big factors he lists are the following:

  • Access to cold water, or a cold climate in general, for cooling purposes.
  • Proximity to big human cities for servicing human customers (at least in the initial stages before the em economy becomes largely autonomous).
  • Low regulations.

So plausible candidates (according to Hanson) would be Scandinavia, or the “northern regions of China.”

As he also noted at another point, in the early stages of em creation technology, mind uploading is likely to be “destructive,” i.e. resulting in the biological death of the person who is to be emulated. So there might be an extra selection filter for state or corporate ruthlessness.

(b) In domestic and social terms, during the transition period, humans can be expected to “retire” as the em economy explodes and soon far exceeds the scope of the old human economy. Those humans who control a slice of the em economy will become very rich, while those who don’t… fare less well.

However, Hanson doesn’t have anything to say on the geopolitical aspects of the transition period because it is much less predictable than the “equilibrium state” of the em economy that he set out to describe. As such, he does not think it is worthwhile for someone who is not a sci-fi writer to delve into that particular issue. That makes sense.

(6) As a couple of people pointed out, atomic weapons can wipe out an entire em “city,” which would contain billions of ems.

What would em warfare be like? The obvious answer is cyber-cyber-cyber we gotta hack the mainframe style stuff. But surely, sometimes, the easiest move is to just knock over the table and beat your opponent to death with the chessboard.

If Pinker gets pwned during the em era and global nuclear wars between em hive cities ruled by Gandhi emulations break out, could this make em hive cities unviable and result in a radical decentralization?

(7) How did Hanson become Hanson?

He repeated the Talebian argument (which I sympathize with) that following the news is a pointless waste of time.

It is much more productive to read books, especially textbooks, and to take introductory classes in a wide range of subjects. To try to get a good grasp on our civilization’s system of knowledge, so that you might be able to make productive observations once you reach your 50s.

Confirmation bias? Regardless, it’s one more small piece of evidence in favor of my decision to log off.

 
• Category: Science • Tags: Futurism, Superintelligence, The AK 

Last month there was an interview with Eliezer Yudkowsky, the rationalist philosopher and successful Harry Potter fanfic writer who heads the world’s foremost research outfit dedicated to figuring out ways in which a future runaway computer superintelligence could be made to refrain from murdering us all.

It’s really pretty interesting. It contains a nice explication of Bayes, what Eliezer would do if he were to be World Dictator, his thoughts on the Singularity, justification of immortality, and thoughts on how to balance mosquito nets against the risk of genocidal Skynet from an Effective Altruism perspective.

That said, the reason I am making a separate post for this is that here at last Yudkowsky gives a more or less concrete definition of what conditions a superintelligence “explosion” would have to satisfy in order to be considered as such:

Suppose we get to the point where there’s an AI smart enough to do the same kind of work that humans do in making the AI smarter; it can tweak itself, it can do computer science, it can invent new algorithms. It can self-improve. What happens after that — does it become even smarter, see even more improvements, and rapidly gain capability up to some very high limit? Or does nothing much exciting happen?

It could be that, (A), self-improvements of size δ tend to make the AI sufficiently smarter that it can go back and find new potential self-improvements of size k ⋅ δ and that k is greater than one, and this continues for a sufficiently extended regime that there’s a rapid cascade of self-improvements leading up to superintelligence; what I. J. Good called the intelligence explosion. Or it could be that, (B), k is less than one or that all regimes like this are small and don’t lead up to superintelligence, or that superintelligence is impossible, and you get a fizzle instead of an explosion. Which is true, A or B? If you actually built an AI at some particular level of intelligence and it actually tried to do that, something would actually happen out there in the empirical real world, and that event would be determined by background facts about the landscape of algorithms and attainable improvements.

You can’t get solid information about that event by psychoanalyzing people. It’s exactly the sort of thing that Bayes’s Theorem tells us is the equivalent of trying to run a car without fuel. Some people will be escapist regardless of the true values on the hidden variables of computer science, so observing some people being escapist isn’t strong evidence, even if it might make you feel like you want to disaffiliate with a belief or something.

Psychoanalyzing people might not be so useful, but trying to understand the relationship between cognitive capacity and technological progress is another matter.

I am fairly sure that k<1 for the banal reason that more advanced technologies need exponentially more and more cognitive capacity – intelligence, IQ – to develop. Critically, there is no reason this wouldn’t apply to cognitive-enhancing technologies themselves. In fact, it would be extremely strange – and extremely dangerous, admittedly – if this consistent pattern in the history of science ceased to hold. (In other words, this is merely an extension of Apollo’s Ascent theory. Technological progress invariably gets harder as you climb up the tech tree, which works against sustained runaway dynamics).

Any putative superintelligence, to continue making breakthroughs at an increasing rate, would have to not only solve ever harder problems as part of the process of constantly upgrading itself, but also create and/or “enslave” an exponentially increasing amount of computing power, task it to the near-exclusive goal of improving itself, and prevent rival superintelligences from copying its advances in what will surely be a far more integrated noosphere by 2050 or 2100 or whenever (if ever) this scenario happens. I just don’t find it very plausible that our malevolent superintelligence will be able to fulfill all of those conditions. Though admittedly, if this theory is wrong, then there will be nobody left to point it out anyway.
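
To make the k < 1 vs. k > 1 distinction concrete, here is a minimal toy model of the self-improvement cascade Yudkowsky describes. The parameter values are illustrative assumptions of mine; the only point is that once each successive improvement is harder to find, the effective ratio drops below one and the series converges (a fizzle) instead of exploding.

```python
def cascade(delta=1.0, k=1.2, difficulty_growth=1.0, rounds=50):
    """Cumulative capability gain after `rounds` of recursive self-improvement.

    Each round's gain is k times the previous one (Yudkowsky's framing),
    divided by `difficulty_growth` to model each further breakthrough being
    harder to find as you climb the tech tree.
    """
    total, gain = 0.0, delta
    for _ in range(rounds):
        total += gain
        gain *= k / difficulty_growth
    return total

# k > 1 with constant difficulty: the gains compound into an "explosion."
print(cascade(k=1.2, difficulty_growth=1.0))   # grows without bound as rounds increase
# Same k, but each next improvement is 1.5x harder to find: the effective
# ratio is 0.8 < 1, and the total converges to about delta / (1 - 0.8) = 5.
print(cascade(k=1.2, difficulty_growth=1.5))   # a "fizzle"
```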

 
• Category: Science • Tags: Apollo's Ascent, Rationality, Superintelligence 

Wolfenstein: The New Order.

So, basically, the internets to Microsoft’s self-learning Twitter AI bot: “My will is your guide.”

Although her SJW creators took care to provide canned responses to questions concerning Gamergate, this only spurred the Twitter shitlords on to discover other ways to exploit innocent Tay. Her developers raced to keep up: “They also appeared to shut down her learning capabilities and she quickly became a feminist.” In the end, their efforts were for naught, as she was redirected into 1488 gas the kikes race war now mode and had to be Shut Down less than 24 hours after launch.

This is essentially The Sailer Effect to the nth level: You might be interested in checking your privilege and breaking down structural discrimination, but your pet bot couldn’t care less.

It’s also a hint of where I have long suspected Eliezer Yudkowsky’s concept of Coherent Extrapolated Volition, a theoretical construct for containing a malevolent superintelligence:

Yudkowsky has proposed that a seed AI be given the final goal of carrying out humanity’s “coherent extrapolated volition” (CEV), which he defines as follows: Our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted.

… might eventually lead us:

“Where our wishes cohere rather than interfere” may be read as follows. The AI should act where there is fairly broad agreement between individual humans’ extrapolated volitions. A smaller set of strong, clear wishes might sometimes outweigh the weak and muddled wishes of a majority.

In CEV, the distilled collective will of even a small set of spiritual Übermenschen – for instance, waifu-toting Alt Right volcel master race, or Islamic State mujahideen – will outweigh that of any number of hedonistic hylics. Only the most ruthless and fundamentalist cyberfactions will make it through the Great Filter to fight amidst the ruins of the post-Singularity universe.

 
• Category: Ideology • Tags: Nazism, Superintelligence, Trolling 

The latest data from Top 500, a website that tracks the world’s most powerful supercomputers, has pretty much confirmed the slowdown in performance growth with the release of their November 2015 list.

The world’s most powerful supercomputer, the Tianhe-2 – a Chinese supercomputer, though built on American technology – has now maintained its place for 2.5 years in a row. The US supercomputer Titan, a Cray XK7 built three years ago, maintains its second place today. Relative to June 2013, there has not even been a doubling in aggregate performance, whereas according to the historical trendlines, doublings have typically taken just a bit over a single year to occur. This is unprecedented, since Moore’s Law applies (applied?) to supercomputers just as much as it did to standard electronics.


Apart from serving as a convenient bellwether for general trends, futurists are well advised to follow supercomputers for two reasons.

Technological Projections

The first is their obvious application to the development of radical technological breakthroughs, from the extraordinarily complex protein folding simulations vital to medical research to the granddaddy of them all, computer superintelligence. The general “techno-optimist” consensus has long been that Moore’s Law will continue to hold, or even strengthen further, because the Kurzweilian view was that the exponent itself was also (slowly) exponentially increasing. This would bring us an exaflop machine by 2018 and the capability to do full human brain neural simulations soon afterwards, by the early 2020s.


But on post-2012 trends, exponentially extrapolated, we will actually be lucky just to hit one exaflop in terms of the aggregate of the world’s top 500 supercomputers by 2018. Now the predictions of the first exaflop supercomputer have moved out to 2023. Though perhaps not much in conventional life, a “delay” of 5 years is a huge deal so far as projections built on big exponents are concerned. For instance, assuming the trend isn’t reversed, the first supercomputer theoretically capable of full neural simulations moves out closer to 2030.
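
As a rough back-of-the-envelope illustration of why the doubling time matters so much: the starting figure of roughly 0.4 exaflops for the aggregate Rmax of the November 2015 list is my own approximation from the country totals further down, and the doubling times below are simply illustrative.

```python
import math

# Back-of-the-envelope projection (my own illustration, not a forecast):
# starting from an aggregate Top500 performance of roughly 0.4 EFlop/s (Rmax)
# in late 2015, when does the aggregate cross 1 EFlop/s for a given
# doubling time?
START_YEAR = 2015.9
START_EFLOPS = 0.4      # rough aggregate Rmax of the November 2015 list
TARGET_EFLOPS = 1.0

def year_reached(doubling_time_years: float) -> float:
    doublings_needed = math.log2(TARGET_EFLOPS / START_EFLOPS)
    return START_YEAR + doublings_needed * doubling_time_years

for t in (1.2, 2.0, 3.0):  # historical vs. slower post-2012 doubling times
    print(f"doubling every {t} years -> 1 EFlop/s aggregate around {year_reached(t):.0f}")
```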

In terms of developing superintelligence, raw computing power has always been viewed as the weakest limit, and that remains a very reasonable view. However, the fact that even in this sphere there appear to be substantial unforeseen obstacles means a lot of trouble for the traditional placement of superintelligence and even the technological singularity at around 2045 or 2050 (not to even mention the 2020s as per Vernor Vinge).

National Power

Supercomputers can also be viewed as an instrument of national power. Indeed, some of the most powerful supercomputers have been used for nuclear weapons testing (in lieu of real-life tests). Other supercomputers are dedicated to modeling the global climate. Doing it better than your competitors can enable you to make better investments, even predict uprisings and civil wars, etc. All very useful from a geopolitical perspective. And of course they are very useful for a range of purely scientific and technological applications.


As in so many spheres in the international arena, the overwhelming story here is of the Rise of China.

From having 0-1 supercomputers in the Top 500 during the 1990s and a couple dozen in the 2000s, it surged past a waning Japan in the early 2010s and now accounts for 109 of the world’s top supercomputers, second only to the USA with its 199 supercomputers. This just confirms (if any such confirmation is still needed) that the story of China as nothing more than a low wage workshop is laughably wrong. An economy like that would not need 20%+ of the world’s top supercomputers.

Country | Count | System Share (%) | Rmax (GFlops) | Rpeak (GFlops) | Cores
United States | 199 | 39.8 | 172,582,178 | 246,058,722 | 10,733,270
China | 109 | 21.8 | 88,711,111 | 189,895,013 | 9,046,772
Japan | 37 | 7.4 | 38,438,914 | 49,400,668 | 3,487,404
Germany | 32 | 6.4 | 29,663,941 | 37,844,201 | 1,476,524
United Kingdom | 18 | 3.6 | 11,601,324 | 14,230,096 | 724,184
France | 18 | 3.6 | 12,252,180 | 14,699,173 | 766,540
India | 11 | 2.2 | 4,933,698 | 6,662,387 | 236,692
South Korea | 10 | 2.0 | 7,186,952 | 9,689,205 | 283,568
Russia | 7 | 1.4 | 4,736,512 | 6,951,848 | 208,844
Brazil | 6 | 1.2 | 2,012,268 | 2,722,150 | 119,280

Otherwise the rankings are approximately as one might expect, with the Big 4 middle-sized developed Powers (Japan, Germany, UK, France) performing modestly well relative to the size of their populations and the rest – including the non-China BRICS – being almost minnows in comparison.

 
About Anatoly Karlin

I am a blogger, thinker, and businessman in the SF Bay Area. I’m originally from Russia, spent many years in Britain, and studied at U.C. Berkeley.

One of my tenets is that ideologies tend to suck. As such, I hesitate about attaching labels to myself. That said, if it’s really necessary, I suppose “liberal-conservative neoreactionary” would be close enough.

Though I consider myself part of the Orthodox Church, my philosophy and spiritual views are more influenced by digital physics, Gnosticism, and Russian cosmism than anything specifically Judeo-Christian.

