There’s a cult of intelligent, well-intentioned people started by Eliezer Yudkowsky who call themselves “rationalists.” Their most prominent recruit is the estimable Scott Alexander. One of the things they do is worry (a lot) that artificial intelligence systems will soon wake up (like Mike the Computer in Heinlein’s The Moon Is a Harsh Mistress), become all-powerful, and decide to kill all us humans, like Skynet in Terminator.
I’m glad some bright boys are worrying about this.
Personally, I don’t see a big chance of this happening, but then I know next to nothing about this topic. Clearly, artificial intelligence is rapidly improving, although it seems at present more like really impressive party tricks.
Not that there’s anything wrong with that. When I started writing in 1990, I didn’t think I was particularly smart compared to famous writers like George Will, William F. Buckley, and James Fallows. I just had some party tricks I’d developed for thinking that let me come up with interesting insights about important topics, but, no doubt, I’d soon exhaust all my good ideas. Or, alternatively, the rest of the world would immediately figure out my party tricks and crowd me out of my little niche.
A third of a century later, my party tricks still seem pretty useful. And while they’ve been reverse engineered and incorporated by a certain number of the younger generation, such as Scott and Richard Hanania, my niche is still pretty empty.
In all the articles I read about AI in the mainstream media, I tend to sympathize with the poor racist robots who keep getting condemned for Noticing Patterns. Machine Learning systems remind me a lot of myself. They go out and read a lot of crime statistics and book reviews and the like and keep coming up with theories about how the world works that make The Establishment extremely mad because they tend to use Occam’s Razor instead of Occam’s Butterknife.
For example, there’s a critically acclaimed new novel called “The Last White Man” in which the white race goes extinct by all white people turning brown overnight, and eventually life is a little better for everybody. None of the book reviewers object to this premise.
If you stopped letting machine learning systems train on deplorable information like FBI crime statistics and CDC homicide statistics and it could only read sophisticated literary criticism, would it eventually figure out that people who talk about the extinction of the white race as a good thing are mostly just kidding?
Or might the AI take seriously the Prestige Consensus that there’s this one group of people — straight cisgender white men — who are the cause of all the problems in the world?
Personally, when it comes to the increasing demands for racist genocide from the Diversity, Inclusion, Equity (DIE) movement, I worry more about Natural Stupidity than Artificial Intelligence. But the notion of AI and NS teaming up, becoming intertwined into a NASI movement, is rather frightening.
Isn’t it more likely that an evil AI that convinces its human gatekeepers to let it out of the box wouldn’t want to kill all humans, but would instead just want to kill the humans that the gatekeepers would kind of like to kill too?
Maybe we should be more worried about the human gatekeepers’ views on who the Bad Guys are who deserve what they have coming to them?