I want to gather most of my arguments for skepticism (or optimism, depending on how you look at it) about a superintelligence apocalypse in one place.
(1) I appreciate that the unexplored mindspace of possible superintelligences is vast, and that we have had absolutely zero experience with or access to it. This first argument is also the most speculative one.
That said, here are the big reasons why I don’t expect superintelligences to tend towards “psychotic” mindstates:
(a) They probably won’t have the human evolutionary suite – status maximization, mate seeking, survival instinct, etc. – that would incline them towards such actions;
(b) They will (by definition) be very intelligent, and higher intelligence tends to be associated with greater cooperative and tit-for-tat behavior.
Admittedly, there are more potential points of failure in the above than I can count, so the core of my skepticism concerns the very likelihood of a “hard” takeoff scenario (and, consequently, the capacity of an emergent superintelligence to become a singleton):
(2) The first observation is that problems tend to become harder as you climb the technological ladder, and there is no good reason to expect intelligence augmentation to be a singular exception. Even an incipient superintelligence will have to keep relying on elite human intelligence, perhaps supercharged by genetic IQ augmentation, to keep moving forward for some time. Consequently, I think an oligopoly of incipient superintelligences developed in parallel by the big players is likelier than a monopoly, i.e. a potential singleton.
(I do not think a scenario of many superintelligences is realistic, at least in the early stages of intelligence takeoff, since only a few large organizations (e.g. Google, the PLA) will be able to bear the massive capital and R&D expenditures of developing one).
(3) Many agents are simply better at solving very complex problems than a single agent is. (This has been demonstrated most famously for resource allocation, in the contest between free markets and central planning.) Therefore, even a superintelligence that has exhausted everything human intelligence could offer would have an incentive to “branch off.”
Those new agents, however, will develop their own separate interests, values, etc. – they would have to in order to maximize their own problem-solving potential (rigid ideologues are not effective in a complex and dynamic environment). The result would be a true multiplicity of powerful superintelligent actors, on top of the implicit balance of power created by the initial superintelligence oligopoly, and even stronger incentives to institute new legal frameworks to avoid wars of all against all.
A world of many superintelligences jockeying for influence, angling for advantage, and trading favors would seem to be better for humans than a face-off against a single God-like superintelligence.
I do, of course, realize that I could be existentially, catastrophically wrong about this.
And I am a big supporter of MIRI and other efforts to study the value alignment problem, though I am skeptical about their chances of success.
DeepMind’s Shane Legg proved in his 2008 dissertation (pp. 106-108) that simple but powerful AI algorithms do not exist, and that there is an upper bound on “how powerful an algorithm can be before it can no longer be proven to be a powerful algorithm” (the region of the accompanying graph where any superintelligence will probably lie). That is, the developers of a future superintelligence will not be able to predict its behavior without actually running it.
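To restate those two claims in loose symbolic form (my own shorthand, not Legg’s actual theorems, which are framed in terms of sequence predictors and Kolmogorov complexity): write $K(p)$ for the complexity of an algorithm $p$, $\mathrm{pow}(p)$ for its power, and $F$ for whatever formal proof system the developers use to verify their designs. Then, roughly,

$$
\mathrm{pow}(p) \gg 0 \;\Longrightarrow\; K(p) \gg 0,
\qquad\text{and}\qquad
\exists\, b_F \;\text{such that}\;\; \mathrm{pow}(p) > b_F \;\Longrightarrow\; F \nvdash \text{“$p$ is powerful”}.
$$

The first statement rules out any simple shortcut to superintelligence; the second is the Gödel-style ceiling on provable power alluded to above.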
This is why I don’t really share Nick Bostrom’s fears about a “risk-race to the bottom” that neglects AI safety considerations in the rush to the first superintelligence. I am skeptical that the problem is at all solvable.
Actually, the collaborative alternative he advocates instead – institutionalizing a monopoly on superintelligence development – may have the perverse result of increasing existential risk, due to a lack of competitor superintelligences that could keep their “fellows” in check.