Existential Risks


I want to gather most of my arguments for skepticism (or, optimism) about a superintelligence apocalypse in one place.

(1) I appreciate that the mindspace of possible superintelligences is vast, unexplored, and something we have had absolutely zero experience with or access to. This is also the most speculative of my arguments.

That said, here are the big reasons why I don’t expect superintelligences to tend towards “psychotic” mindstates:

(a) They probably won’t have the human evolutionary suite that would incline them to such actions – status maximization, mate seeking, survival instinct, etc;

(b) They will (by definition) be very intelligent, and higher intelligence tends to be associated with more cooperative, tit-for-tat behavior.

Yes, there are more potential points of failure in the above than I can count, which is why the core of my skepticism concerns the very likelihood of a “hard” takeoff scenario (and, consequently, the capacity of an emergent superintelligence to become a singleton):

(2) The first observation is that problems tend to become harder as you climb the technological ladder, and there is no good reason to expect intelligence augmentation to be a singular exception. Even an incipient superintelligence will have to keep relying on elite human intelligence, perhaps supercharged by genetic IQ augmentation, to keep moving forward for some time. Consequently, I think an oligopoly of incipient superintelligences developed in parallel by the big players is likelier than a monopoly, i.e. a potential singleton.

(I do not think a scenario of many superintelligences is realistic, at least in the early stages of intelligence takeoff, since only a few large organizations (e.g. Google, the PLA) will be able to bear the massive capital and R&D expenditures of developing one).

(3) Many agents are often better at solving very complex problems than a single one. (The classic demonstration is resource allocation, where decentralized markets outperform central planning.) Therefore, even a superintelligence that has exhausted everything human intelligence can offer would have an incentive to “branch off.”

But those new agents will develop their own separate interests, values, etc. – they would have to in order to maximize their own problem-solving potential (rigid ideologues are not effective in a complex and dynamic environment). You would then get a true multiplicity of powerful superintelligent actors, on top of the implicit balance of power created by the initial superintelligence oligopoly, and even stronger incentives to institute new legal frameworks to avoid wars of all against all.

A world of many superintelligences jockeying for influence, angling for advantage, and trading for favors would seem to be better for humans than a face-off against a single God-like superintelligence.

I do of course realize I could be existentially-catastrophically wrong about this.

And I am a big supporter of MIRI and other efforts to study the value alignment problem, though I am skeptical about its chances of success.

DeepMind’s Shane Legg proved in his 2008 dissertation (pp. 106-108) that simple but powerful AI algorithms do not exist, and that there is an upper bound on “how powerful an algorithm can be before it can no longer be proven to be a powerful algorithm” (the region of Legg’s graph where any superintelligence will probably lie). That is, the developers of a future superintelligence will not be able to predict its behavior without actually running it.
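
For the mathematically inclined, here is a very loose paraphrase of the flavor of those results in terms of Kolmogorov complexity K(·). This is my own informal restatement under simplifying assumptions, not Legg’s exact theorem statements:

(i) No simple yet powerful predictor: if an algorithm p correctly predicts every computable sequence x with K(x) ≤ n, then roughly K(p) ≥ n − O(1); a predictor cannot be much simpler than the hardest sequences it masters.

(ii) A provability ceiling: within any fixed formal system there is some complexity level beyond which an algorithm’s predictive power can no longer be proven inside that system, even when the algorithm genuinely has it.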

This is why I don’t really share Nick Bostrom’s fears about a “risk-race to the bottom” that neglects AI safety considerations in the rush to the first superintelligence. I am skeptical that the problem is at all solvable.

Actually, the collaborative alternative he advocates instead – institutionalizing a monopoly on superintelligence development – may have the perverse result of increasing existential risk, due to the lack of competitor superintelligences that could keep their “fellows” in check.

 
• Category: Science • Tags: Existential Risks, Futurism, Superintelligence 


Image by Kerihobo.

While everybody is discussing the tantalizing possibility that this far-off star with its strange dimming patterns hosts an alien megastructure, perhaps a Dyson Sphere under construction, there are even more exotic scenarios out there.

For instance, why not the ruins of one? One of the obvious (if pessimistic) solutions to the Fermi Paradox is that space is a war of all against all, with every surviving alien civilization soon realizing that it can’t afford to show its head above the cosmic parapets. Due to the vast distances involved across space and time, stealth is surely the decisive factor in space warfare, so the offensive reigns supreme over the defensive. Chuck a big, cool clump of dense matter at very high velocity towards a point where it is likely to intersect the path of a rival space civilization, and the guys at the receiving end would hardly have any time to know what hit them, let alone where it came from.
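
To put rough numbers on why such a kinetic strike would be so hard to see coming or to survive, here is a back-of-the-envelope sketch in Python. The mass, velocity, and detection range are purely illustrative assumptions of mine, not figures from any source:

# Back-of-the-envelope numbers for a kinetic impactor; all inputs are
# illustrative assumptions, not figures from the post.
C = 3.0e8                # speed of light, m/s
MEGATON_TNT = 4.184e15   # joules per megaton of TNT

mass = 1.0e9             # kg: a dense clump roughly 90 m across
velocity = 0.1 * C       # m/s: ten percent of light speed

# Classical kinetic energy is a fine approximation at 0.1c
# (the relativistic correction is under one percent).
kinetic_energy = 0.5 * mass * velocity ** 2
print(f"Impact energy: {kinetic_energy:.2e} J "
      f"(~{kinetic_energy / MEGATON_TNT:.1e} megatons TNT)")

# Warning time if the cold, dark object is only spotted one light-hour
# out (generous for something with no drive signature).
detection_distance = 3600 * C  # metres in one light-hour
print(f"Warning time: {detection_distance / velocity / 3600:.0f} hours")

On these assumptions the impact delivers on the order of a hundred million megatons of TNT with roughly ten hours of warning, which is the sense in which the offense dominates.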

It is thus possible that xenocidal aggressiveness is an evolved behavior across all surviving alien civilizations. Just as any good or trusting creature dreamt up by mortals and given flesh in the northern Chaos Wastes of the world of Warhammer gets instantly killed by stronger and more evil entities, so too, perhaps, the less paranoid and aggressive space civilizations get snuffed out as soon as they make their existence known to the cruel gods of the heavens.

Or maybe Nick Bostrom is correct and we are living in a simulation – with the catch that computing resources are limited and cannot support more than a certain number of superintelligent civilizations and their subsimulations, to say nothing of some kind of Kurzweilian “the universe wakes up” intelligence saturation scenario. Maybe that explains the “supervoid.” A singularitarian civilization attempted to “wake up” the universe in an expanding radius from its home planet, and got their section of space Ctrl-Alt-Deleted by The Architect for their trouble. Since then, other advanced civilizations have logically deduced what must have happened, and universally agreed – without any consultation, naturally – to adopt the Lannisterian code that everyone who isn’t us is an enemy.

Or maybe the very observation of KIC 8462852 at this moment in history is an elaborate trap. For instance, here is a particularly paranoid but not implausible scenario from a comment on a Less Wrong article by the Russian futurist Alexey Turchin on the risks of passive SETI:

A comment by JF: For example the lack of SETI-attack so far may itself be a cunning ploy: At first receipt of the developing Solar civilization’s radio signals, all interstellar ‘spam’ would have ceased (and interference stations of some unknown (but amazing) capability and type set up around the Solar System to block all incoming signals recognizable to its computers as of intelligent origin), in order to get us ‘lonely’ and give us time to discover and appreciate the Fermi Paradox, and even get those so philosophically inclined to despair, desperate that this means the Universe is apparently hostile by some standards. Then, when desperate, we suddenly discover, slowly at first, partially at first, and then with more and more wonderful signals, the fact that space is filled with bright enticing signals (like spam). The blockade, cunning as it was (analogous to Earthly jamming stations), was yet a prelude to a slow ‘turning up’ of preplanned intriguing signal traffic. If as Earth had developed we had intercepted cunning spam followed by the agonized ‘don’t repeat our mistakes’ final messages of tricked and dying civilizations, only a fool would heed the enticing voices of SETI spam. But now, a SETI attack may benefit from the slow unmasking of a cunning masquerade as first a faint and distant light of infinite wonder, only at the end revealed as the headlight of an onrushing cosmic train…

Or maybe it really is something very banal, like a cloud of disintegrating comets…

 
• Category: Science • Tags: Existential Risks, Space Exploration 
About Anatoly Karlin

I am a blogger, thinker, and businessman in the SF Bay Area. I’m originally from Russia, spent many years in Britain, and studied at U.C. Berkeley.

One of my tenets is that ideologies tend to suck. As such, I hesitate to attach labels to myself. That said, if it’s really necessary, I suppose “liberal-conservative neoreactionary” would be close enough.

Though I consider myself part of the Orthodox Church, my philosophy and spiritual views are more influenced by digital physics, Gnosticism, and Russian cosmism than anything specifically Judeo-Christian.