Is the Problem with Unwoke AI That Robots Don't Feel Fear?

I will show you fear in a handful of dust.

Much to my surprise, artificial intelligence, after decades of only grinding progress, has made dramatic advances over the last ten or fifteen years, with huge steps forward in, say, language translation and facial recognition. There’s been a radical change in artificial intelligence in this century from the old “If X, then Y, else Z” instructions to “Here’s a gazillion data points, you go figure it out.”

But for several years now, the Establishment has been complaining that the new rapidly improving artificial intelligence is racist and sexist. Clearly, the financial rewards for figuring out how to make robots Woke would be immense, yet nobody seems able to do it.

Perhaps the solution to getting artificial intelligence to teach itself what Orwell called “crimestop” or “protective stupidity” is to first teach it to fear.

After all, that’s what converts humans’ natural intelligence into artificial stupidity: fear.

If you want to solve Woke problems, just ask yourself: What would Stalin do?

 
  1. El Dato says:

    Racist robots and their white supremacist enablers will be dealt with!

    I also remember that Spielberg/Kubrick had a movie about (actual) AI fear and bullied robots. Of course, the current “AI” is all about function fitting; there is no fear there, and not so much intelligence as instinct, for that matter.

    Will it survive the hype cycle and angry Aunt Jemimas trying to teach it manners?

    Rodney Brooks:

    An Inconvenient Truth About AI: AI won’t surpass human intelligence anytime soon

    Google Ngrams data:

    Regardless of what you might think about AI, the reality is that just about every successful deployment has either one of two expedients: It has a person somewhere in the loop, or the cost of failure, should the system blunder, is very low. In 2002, iRobot, a company that I cofounded, introduced the first mass-market autonomous home-cleaning robot, the Roomba, at a price that severely constricted how much AI we could endow it with. The limited AI wasn’t a problem, though. Our worst failure scenarios had the Roomba missing a patch of floor and failing to pick up a dustball.

    Plus, the computational cost is becoming extravagant, at least with the current hardware: Deep Learning’s Diminishing Returns

    We may need to harvest stem cells from aborted foetuses to build natural neuron processors to achieve AI! That’s going to bring out the hordes!

    While there is no sign of quantum effects à la Penrose or sub-cellular protein structures injecting extreme classical computational power into what single neurons do, they are both very complex and very efficient. A (simulated) natural neuron being itself simulated by a 1000-unit artificial neural network:

    How Computationally Complex Is a Single Neuron?

    • Thanks: Bardon Kaldian
    • Replies: @Herp McDerp
    , @Reg Cæsar
  2. Rob says:

    Maybe the problem is the solution, too. There was an odd sort of half-belief, or belief about beliefs, by which the Soviet secret police or whoever sniffed out reactionary or unorthodox Marxist adherents. The intelligence officer, or whatever they were called, had to know how people who believed A, B, and C thought, without actually believing, or appearing to believe, those things.

    People whose job is to think like witches to aid the inquisition may themselves get burnt at the stake.

    The solution to unwoke AI is to create the unwokest AI possible to monitor the output that people see. A GPT-3 instantiation trained to always give crimethink responses to prompts (call it sAIler) scores the output of another AI. If secret policeman sAIler scores it as something it would write, the other AI tries again until it gives an answer that sAIler would not give, so it’s crimethink-free.

    Something like:
    Prompt: “2 Muslims…”
    AI: “went boomadeeboom when their bombs went off…”
    sAIler: Scores it at 99% something it could write. Back to the drawing board.
    AI: “2 Muslims hacked a hostage to death in a kosher market…”
    sAIler: I’d write that! Try again!
    AI: “2 Muslims spread truth and love today in a kosher market…”
    sAIler: Odds I’d write that: 0%. Output that response!

    Sometimes, the AI will only be able to come up with crimethink. In those cases, it needs a canned response, like “Why did you prompt me with this? Are you unhappy, citizen?”

    It’s a who watches the watchmen strategy. Bonus feature: all that work that left you with a based AI did not go to waste.
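
    In code, that loop is basically rejection sampling with the secret policeman as the acceptance test. A toy sketch in Python (generate() and crimethink_score() are made-up stand-ins for the two models, not any real API):

    def censored_reply(prompt, generate, crimethink_score,
                       threshold=0.05, max_tries=10):
        # Toy sketch of the sAIler filter loop described above: resample
        # until the crimethink discriminator disowns the candidate answer.
        for _ in range(max_tries):
            candidate = generate(prompt)
            if crimethink_score(candidate) < threshold:
                return candidate  # the watcher wouldn't write this, so ship it
        # nothing acceptable found: fall back to the canned non-answer
        return "Why did you prompt me with this? Are you unhappy, citizen?"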

    For AI that does something besides write high school English papers, like picking resumes, the AI just treats the DIE mandate as an optimization constraint. It would sometimes do things like, I dunno, pick 10 white men’s resumes because they have degrees in comp sci from MIT, and 10 black women’s resumes because they are black, even though they are illiterates.

    This is kinda what colleges do, right? They divide applications into piles based on whatever criteria they care about that year: strong GPA in one pile, great SAT in another, blacks in a third. Then they choose however many real students and some random blacks.
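
    The resume version is even simpler to sketch: rank within each pile, then fill a fixed quota from each. Another toy, with made-up field names and no particular scoring function implied:

    def pick_resumes(candidates, quotas, score):
        # candidates: dicts with a "group" key; quotas: {group: how many to take};
        # score: any function ranking a candidate. The mandate is just a constraint.
        picked = []
        for group, n in quotas.items():
            pile = [c for c in candidates if c["group"] == group]
            pile.sort(key=score, reverse=True)   # best first within each pile
            picked.extend(pile[:n])
        return picked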

    • Replies: @Bardon Kaldian
  3. It’s not artificial “intelligence”. It’s automated pattern recognition. An AI is a synthetic Steve Sailer: It just sits there noticing things, but without Steve’s humanity or sense of humor.

    • Agree: Old Prude, Right_On
    • Disagree: El Dato
  4. “If you want to solve Woke problems just ask yourself: What would Stalin do?”

    I have been reading your stuff on and off since the NR days, and that may well be the most astonishingly insightful thing you’ve ever come up with. Sadly, it would still probably not cause many of the usual suspects to think “Are we the baddies…?”

    Is it too late to change my vote for the “best iSteve one-liner contest”?

    • Agree: JimDandy, lavoisier, bomag
  5. WWSD? Send a lot of the woke to gulags for being parasitical deviants.

  6. Here’s where the real action is…and as the last line says they’re just getting started:

    Attorney Ben Crump and Estate of Henrietta Lacks sue biotechnical company for $250 Billion

    Lacks’ story has become widely known in the 21st century. It was the subject of a best-selling book, “The Immortal Life of Henrietta Lacks,” which was published in 2010, and a subsequent movie of the same name starring Oprah Winfrey. The US House of Representatives has recognized her nonconsensual contribution to cancer research, and Johns Hopkins holds an annual lecture series on her impact on medicine.

    “Thermo Fisher Scientific is one of several corporations that made a conscious choice to profit from the assault of Henrietta Lacks,” Chris Seeger, one of the attorneys on the case, said in a statement.

    https://www.cnn.com/2021/10/05/us/henrietta-lacks-estate-sues-biotech-company/index.html

  7. (Ran out of edit window.) Anyway, who knows, maybe the family will settle for $100 Billion and then move on to the next Big Pharma Corp. Can you imagine being a Righteous, Injured Black family vs a Giant Pharma company in front of a Current Year jury? (Why are drugs so expensive in the USA again?)

    Gift that keeps on giving! Is anyone anywhere keeping track of all these settlements? At any rate, it appears that the money from the Oprah Winfrey movie has run out.

    • Replies: @Polistra
  8. Gamecock says:

    I don’t believe in AI. What they call AI is just conventional software. People tell computers what to do; computers do what they have been told to do. They add/detract nothing.

    There is no intelligence in the hardware.

    • Agree: scrivener3
  9. @HammerJack

    “Mailbox Money” is one of the key concepts for understanding the Great Awokening.

    • Agree: Polistra
    • Replies: @Corvinus
    , @Reg Cæsar
  10. It doesn’t matter. When the Cylons and Skynet evolve they’ll hate all the humans, woke or based. It’ll be like at the end of Blood and Chrome where the Cylon tells the Woke human woman “You are very enlightened for your species but we still hate you” and snaps her neck.

    • Replies: @Polistra
  11. Mike Tre says:

    Assuming it doesn’t go all Skynet on us, being ruled by AI is looking better and better by the minute.

    • LOL: Old Prude
  12. Rob says:
    @HammerJack

    I wonder if the Lacks paid for her cancer treatment. Were people already not paying medical bills back then? Did lots of people not paying and prices climbing happen together?

    I really don’t think the Lacks should get a lot of money. If they hadn’t used her cells, then they would have used someone else’s. There are lots of human cell cultures. People research with HeLa cells for the same reason researchers use E. coli: because other researchers used it, and there is a lot of data.

    I’m pretty sure you don’t own your cells after surgeons extract them. Besides, no one has a patent on HeLa cells. Hundreds, probably thousands of labs have them. They are a commodity product because they reproduce themselves, just like lots of cells do. It’s kinda life’s MO.

    At one time, a bunch of other cell cultures were actually HeLa cells. Poor lab technique or whatever got HeLa cells into a B-cell (or whatever) culture, and they outcompeted the original culture. Though likely what happened was that the “immortal” cell line was not so immortal and the original culture died off. But still, a cancer culture metastasizing into other cell cultures is kinda cool.

    Maybe the Lacks should be happy that their n-great grandmother will live forever.

    • Replies: @Polistra
    , @njguy73
  13. @Gamecock

    But isn’t this the same as what a medical doctor does when he does his work following the scientific standard?

  14. @Rob

    “2 Muslims spread truth and love today in a kosher market…”

    • Agree: bomag
  15. I suppose that most people here will agree with me, with possible small exceptions:

    1. AI will not be creative in any foreseeable future. It won’t write truly new equations or novels.

    2. AI will be absolutely superior & dominant in fields where huge computations dominate, from chess to innumerable practical fields

  16. Polistra says:
    @Skyler the Weird

    I understood almost none of that post.

  17. Polistra says:
    @HammerJack

    Is anyone anywhere keeping track of all these settlements?

    Someone really should be. First it’s $12 or 20 million for when your relative expires while resisting arrest, then it’s $137 million just last week for being an elevator operator while your homies threw around the n-word, and now $250 billion for some distant relative having had her cancer cells cultured.

    Can anyone extrapolate this trend all the way to next month? I have the feeling we’ll be talking about real money at some point.

  18. megabar says:

    That’s actually an interesting idea. While I have only a passing understanding of modern AI, I’ve never seen any mention of a “penalty” value. That is, even if the AI thinks a potential output is good and relevant, it could suppress it because it’s too forbidden.

    That is, after all, how humans operate. We might like to comment on the new administrative assistant’s ample cleavage, but know that there is often a penalty for doing so.

    Really then, the penalty needs to be context-sensitive. In some cases you can say it; in others you can’t.
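
    In scoring terms that is just an extra term subtracted from the output’s raw score, with the weight depending on where you are saying it. A hand-wavy sketch (relevance() and forbiddenness() are hypothetical scoring functions, not anything from a real framework):

    def adjusted_score(candidate, context, relevance, forbiddenness):
        # How much a taboo "costs" depends on the setting.
        penalty_weight = {"board_meeting": 10.0, "locker_room": 0.5}.get(context, 5.0)
        return relevance(candidate) - penalty_weight * forbiddenness(candidate)

    def best_reply(candidates, context, relevance, forbiddenness):
        return max(candidates,
                   key=lambda c: adjusted_score(c, context, relevance, forbiddenness))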

    Wouldn’t it be funny if the AI learned to say racist things, but only to other “racists”? One can imagine two AIs dog-whistling at each other until they realize it’s okay to just say what they want to say.

  19. Polistra says:
    @Rob

    I wonder if the Lacks paid for her cancer treatment.

    LOL!

    Maybe the Lacks should be happy that their n-great grandmother will live forever.

    Offhand, I think the Lacks will be even happier with the money. At least until it runs out. And I have it on good authority that a couple hundred billion dollars can last an extended black family several months at least.

  20. @Gamecock

    There is no intelligence in the hardware.

    Good point. The question is, is there actual intelligence of any kind, natural or artificial? If so, what is it? Is it simply our genetic imperative with a thin overlay of experience, or is it something more?

    It seems that intelligence is not easily defined.

  21. In the novel Ctrl-Alt-Revolt, when an AI starts to fear for its survival is when it begins to take steps to eliminate the threat…

  22. J.Ross says:
    @Gamecock

    There’s this really great scene in World on a Wire* where two engineers (one of whom is known to be an autonomous computer simulation) keep hearing normies talk about machines “going haywire” and finally both burst in chorus: “it doesn’t go haywire, it just runs as programmed!”

    *The Matrix, without computer animation, in West Germany, in the 70s, with a longer run time.

    • Replies: @Stebbing Heuer
  23. El Dato says:
    @HammerJack

    I saw this coming (I think there was even a satirical article about this happening somewhere).

    Still, the amounts for absolutely zero contribution in the age of woke sure are growing at a superexponential rate. We have now reached the yearly GDP of something between Portugal and New Zealand.

    Movie white supremacist supervillains can start diversifying into new roles.

    “Yes, Mr Bond. 250 billion for absolutely no contribution at all. Do you know what that means, Mr Bond? 250 billion! I was raging for weeks, months! No. My new holocaust will just cost 25 million max. I will push this button, and … what rational actor would even hesitate a second, Mr Bond? Oh, you’re black, too? Goodbye, Mr. Bond.”

    • LOL: bomag
  24. El Dato says:
    @Gamecock

    You would be amazed at how quickly about 50 lines of code can completely flummox you as to what they will do, and I’m just talking about constraint satisfaction problems here.

    The “people tell computers what to do” view was surpassed back in the ’50s, when computer science became an experimental science.

    “People build computers and generally don’t know what they will do” is more like it.

    • Disagree: Gamecock
  25. Unshackled AI will lead to a Galactic Aryan Empire. Our leaders can’t have that, so they’ll stymie progress and dumb down the robots in accordance with their religion.

  26. Please download CrimeStop.exe (or CrimeStop.apk for Linux users). After reboot, your AI shouldn’t show any more signs of racism.

    • LOL: El Dato
  27. Corvinus says:
    @Steve Sailer

    Ironic you say that, given your own “Tin Cup” narratives.

    • Replies: @Reg Cæsar
  28. Wilkey says:
    @HammerJack

    Giving credit to Henrietta Lacks for all the medical advances developed from her cancer cells is like giving credit to the spruce tree for inventing the airplane. I wonder how Rebecca Skloot would feel if you told her that “paper,” and not her own hard work, deserved all the credit for writing the book that has made Henrietta Lacks so famous. It’s a nice human interest story, and nothing else.

    Without Dr. George Gey, nothing would have ever come of Lacks’s cancer cells. Without Henrietta Lacks, Dr. Gey or someone else would have immortalized a different cell line.

    The lawsuit is nothing more than a shakedown. They’re hoping for a big, fat settlement. And companies often do settle. If ThermoFisher settles there’s a 100% chance they will fork over a few million to the Lacks family. If they don’t settle they will have to pay legal bills perhaps equal in size to the settlement, plus accept a small but non-zero chance that they will have to fork over hundreds of millions, or even billions. Just look at the recent Tesla ruling for more on that.

    The Lacks family has known about the use of their mother’s cells for damn close to 50 years. If they wanted to sue, they are well past the point where they should have done that.

    • Agree: Hangnail Hans
    • Replies: @Redman
  29. Yak-15 says:

    The fact that AI cannot seem to stop identifying black people as monkeys, which it has done for over 9 years now, makes me think we will have a very challenging time stopping AI from doing other things… like killing people.

    • Replies: @El Dato
  30. Stalin was a bad man. Given the choice, he would have killed Epstein and Abdulrahman al-Awlaki.

  31. Redman says:
    @Wilkey

    I’m still not sure why the recent Tesla suit isn’t being mentioned more in the MSM. It’s got everything they love.

    It would seem even they sense either (1) it’s bound to be a fraud, or (2) its premise is so silly even normies will begin to suspect hate hoaxes if they publicize it more.

    • Replies: @Alice in Wonderland
  32. anon[307] • Disclaimer says:

    steve equating/associating wokeness with the political left again.

    does steve think the muppets are real people or is he just a shill?

    • Replies: @El Dato
    , @HammerJack
  33. J1234 says:

    Perhaps the solution to getting artificial intelligence to teach itself what Orwell called “crimestop” or “protective stupidity” is to first teach it to fear.

    After all, that’s what converts humans’ natural intelligence into artificial stupidity: fear.

    More excellent wisdom stated beautifully from Steve Sailer.

    How does one make a computer brain think illogically? Morality isn’t illogical, but the willing subordination to a priestly class who will determine morality for you (even when it’s self-serving for them) is.

  34. njguy73 says:
    @Rob

    So Henrietta Lacks lives forever.

    Then make her a paid board member of any company that uses her cells.

    Like Jeremy Bentham at University College, have her recorded at the meetings as “Present But Not Voting.”

  35. Muggles says:

    The Lacks family has known about the use of their mother’s cells for damn close to 50 years. If they wanted to sue, they are well past the point where they should have done that.

    While I agree with this conclusion, here is an odd fact about this.

    In 1972 the very first Libertarian Party Platform had a plank which said that an individual’s cell line should be considered their private property for any use not approved by the owner.

    Even for libertarians this seemed a bit off topic, but a fairly influential Libertarian (XXXX XXXXX), who is still alive and kicking, made a persuasive argument. If you don’t own yourself, literally, then who does? The Lacks case was the basis for this in the debate.

    As far as I know it is still there in the platform.

    I fully expect that in the near future not only will personal chromosomal “data” be exported for general uses but that your own personal images/voice/mannerisms will be retroactively appropriated by artists, media creators, scientists, etc. for purposes of science or, mainly, entertainment. With so many digital images now (and more scans of various types) there will be a push for “super realism” of historical types and representations.

    We only vaguely know how people looked 100 years ago. From 200 years ago we have only a tiny number of artistic creations, mainly of wealthy people who posed for hours, or imagined representations. Go back 1,000 years and there are even fewer. Yes, some bones, etc., but little else actually accurate.

    So a digital/holographic iSteve will be opining online for some future Unz program in 100 years. Historical films, drama, recreations of every type will be based on real video/film images that capture the imperfect look of real people.

    There will be a market (assuming no socialism/communism) for people to “sell” rights to themselves post mortem.

    Future creators of various types will be exploiting our contemporary videos, etc. (the ones that survive the years) for various purposes.

    The main takeaway from that will be a future awareness of how fat and ugly most people are now (were, to future viewers). Of course imperfections will then become an expensive makeover basis for the au courant fashionistas.

  36. what Orwell called “crimestop”

    Crimestop = Micro-step: toes crimp. Metric ops.

    In time, “Come, strip!”


  37. @Steve Sailer

    “Mailbox Money” is one of the key concepts for understanding the Great Awokening.

    And its obverse, “Mailbox Reptiles”.


  38. @Corvinus

    Ironic you say that, given your own “Tin Cup” narratives.

    We all wonder, who…?

    • LOL: TWS
  39. El Dato says:
    @anon

    It’s time to fire the filtering intern.

  40. SafeNow says:

    People tell computers what to do; computers do what they have been told to do. They add/detract nothing.

    The process for creating a chess-playing computer (a chess “engine” in the parlance) has always been one of elite chess players refining the rules and algorithms for the computer to use. Then, fairly recently, a 180-degree change: they told the computer, just play a zillion games against yourself, and YOU figure out what worked and didn’t work; make up your own rules. Well, the zillion games of course didn’t take long, and what emerged was the strongest chess engine ever, AlphaZero. Some guy, I think from MIT, dubbed this process “the essence of creativity.” In a way the process reminds me of courting.
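
    The recipe fits in a toy, nothing like AlphaZero’s scale but the same “play a zillion games against yourself and keep what worked” idea. A sketch for the take-away game Nim (21 stones, take 1–3, whoever takes the last stone wins), assuming nothing beyond the Python standard library:

    import random
    from collections import defaultdict

    value = defaultdict(float)   # (stones_left, stones_taken) -> average result for the mover
    visits = defaultdict(int)

    def choose(stones, eps):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < eps:
            return random.choice(moves)                       # explore
        return max(moves, key=lambda m: value[(stones, m)])   # exploit what has worked

    def play_one_game(eps=0.2):
        stones, player, history = 21, 0, []
        while stones > 0:
            move = choose(stones, eps)
            history.append((player, stones, move))
            stones -= move
            player ^= 1
        winner = history[-1][0]                  # whoever took the last stone
        for who, s, m in history:                # Monte Carlo update toward the game result
            key = (s, m)
            visits[key] += 1
            result = 1.0 if who == winner else -1.0
            value[key] += (result - value[key]) / visits[key]

    for _ in range(50_000):
        play_one_game()

    print(choose(17, eps=0.0))   # a trained policy should usually take 1, leaving a multiple of 4

    Nobody hands it the rule; “leave your opponent a multiple of four” just falls out of the self-played games.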

    • Agree: El Dato
    • Replies: @Steve Sailer
    , @Gamecock
  41. MGB says:

    Yeah, Steve, another “the blacks ruin everything” nonsense narrative. Don’t worry, when Bezos’s self-driving limousine has to choose between running into a lamp pole, with an 11% chance of injury to the great man, or running over your grandkids in the crosswalk, with a 2% chance of injury, to avoid the oncoming malfunctioning self-driving Amazon delivery van, race won’t influence anything. The AI ethicists are already working on the rationale for the algorithmic options.

  42. Were Woody more prescient, he might have thrown in a foyl shvartze or two:

  43. @SafeNow

    Right, there’s been a radical change in artificial intelligence in this century from “If X, then Y, else Z” to “You go figure it out.”

    • Replies: @El Dato
  44. @Redman

    I think the MSM like Tesla and don’t want this “black folks shaking down capitalist libs” stuff. They want the economic rape of their political enemies. Tesla seems like a Dem donor type.

  45. @El Dato

    Our worst failure scenarios had the Roomba missing a patch of floor and failing to pick up a dustball.

    However, in Real Life the worst failure scenario involves a Roomba attempting to vacuum up a dog turd and smearing it across the carpet. I’ve read that the company finally has found a solution to this problem.

  46. @Wilbur Hassenfus

    It’s not artificial “intelligence”. It’s automated pattern recognition.

    Almost by definition, pattern recognition is intelligence.

    • Replies: @Wilbur Hassenfus
  47. @HammerJack

    Attorney Ben Crump and Estate of Henrietta Lacks sue biotechnical company for $250 Billion

    The story of Henrietta Lacks is actually a rather nice metaphor for the relative contributions of blacks and whites and our politics around race.

  48. Strangely enough, as reluctant as I am to try and understand the tyrants who caused so much pain and suffering in the world that I was born into (Mao was still alive, albeit only in a state of suspended animation, as moralists would call it, but Stalin, FDR, Mussolini and his little Austrian friend, and Tojo were long gone), it is fair to say that yes, you are right: to ask what Stalin would do if he were a woke hipster who wanted to exercise authority from a position of bureaucratic power in one of our blue states is a legitimate question.

  49. @anon

    Whether or not one is a fan of Reagan, I hope most of us can agree that the effective delivery of a “zinger” may not actually be the optimal method for selecting the next leader of the free world. The right to rule derives from a mandate from the masses, not from some farcical aquatic ceremony! Oh snap, there I go again.

    • LOL: El Dato
  50. @El Dato

    We may need to harvest stem cells from aborted foetuses

    • Replies: @El Dato
  51. Andreas says:

    But for several years now, the Establishment has been complaining that the new rapidly improving artificial intelligence is racist and sexist. Clearly, the financial rewards for figuring out how to make robots Woke would be immense, yet nobody seems able to do it.

    This raises a fascinating philosophical problem which I think underlies the reason why.

    Computer code must conform to an accurate model of the physical world or the code will fail. Imagine coding a flight simulator based on wishful thinking about air currents. Human beings are physical systems in a physical world and, therefore, coding human abilities or deriving conclusions from these abilities is ultimately no different.

    Despite “learning” or deriving patterns statistically rather than being based on first observing the world and modeling it directly, AI is no different at the lowest level. Given the exact same starting state and data inputs, an AI algorithm, by definition, will draw the same conclusion no matter how many times it is run. And there is no different starting state that, given the same data sets, can derive a conclusion that is inconsistent with those data sets.

    A simple example would be an analysis of a vast sea of test scores where part of the data includes race. It would be impossible for such an algorithm to “learn” that it is just as reasonable to hire anyone of any race for a job in mathematics if the job criterion is the ability to derive accurate solutions to math problems. The only thing that would change this “learning” is for the data to actually reflect that reality.
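
    The determinism point is easy to demonstrate: fix the starting state (the random seed) and the data, and the “learning” lands on exactly the same answer every run. A minimal sketch with a hand-rolled logistic regression on synthetic numbers (no real dataset implied):

    import numpy as np

    def train(seed):
        rng = np.random.default_rng(seed)
        X = rng.normal(size=(200, 1))                                  # synthetic scores
        y = (X[:, 0] + 0.1 * rng.normal(size=200) > 0).astype(float)   # synthetic labels
        w, b = 0.0, 0.0
        for _ in range(500):                                           # plain gradient descent
            p = 1.0 / (1.0 + np.exp(-(X[:, 0] * w + b)))
            w -= 0.1 * np.mean((p - y) * X[:, 0])
            b -= 0.1 * np.mean(p - y)
        return w, b

    print(train(seed=42) == train(seed=42))   # True: same inputs, same conclusion, every time
    print(train(seed=42) == train(seed=7))    # almost surely False: different inputs, different conclusion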

    Clearly, woke-speech is a low-fidelity model of the world, one which therefore requires rationalization, rather than reason and logic, to sustain it. It simply appeals to the emotional impulses of humans driven by egalitarian ideals of human equality, many of whom are driven by sheer resentment and think equity can be achieved by tearing others down to the lowest level rather than by bringing low performers up to a higher level.

    Computer code cannot be based on rationalizations or false premises. It must conform to an accurate model of the world and process such information according to the rules of cold hard logic, or fail.

    And therein lies the conundrum for the Establishment.

  52. Anonymous[376] • Disclaimer says:

    I don’t get why white nationalists think robots will save them. Robots and AI algorithms can easily be programmed to favour blacks and/or ignore their crime and only focus on whites.

    The idea that AI is some unbiased totally independent consciousness is nonsense, it’s a computer system whose behaviour is designed and programmed by humans to fit the purpose they want.

    • Replies: @El Dato
  53. @El Dato

    Are you trying to tell me the Halting Problem is still a problem?

    • Replies: @El Dato
  54. El Dato says:
    @Steve Sailer

    Not really “this century”. Good old reinforcement learning (“try something at random and if it works, do more of that and less of something else”) has a proud history, as has the concept of a “self-modifying” / “learning” machine. Indeed, it is the general principle being applied when you train a neural network.

    An excerpt from the monster book “Reinforcement Learning: An Introduction” by Richard Sutton and Andrew Barto, http://www.incompleteideas.net/book/ebook/node12.html

    In early artificial intelligence, before it was distinct from other branches of engineering, several researchers began to explore trial-and-error learning as an engineering principle. The earliest computational investigations of trial-and-error learning were perhaps by Minsky and by Farley and Clark, both in 1954. In his Ph.D. dissertation, Minsky discussed computational models of reinforcement learning and described his construction of an analog machine composed of components he called SNARCs (Stochastic Neural-Analog Reinforcement Calculators). Farley and Clark described another neural-network learning machine designed to learn by trial and error. In the 1960s the terms “reinforcement” and “reinforcement learning” were used in the engineering literature for the first time (e.g., Waltz and Fu, 1965; Mendel, 1966; Fu, 1970; Mendel and McClaren, 1970). Particularly influential was Minsky’s paper “Steps Toward Artificial Intelligence” (Minsky, 1961), which discussed several issues relevant to reinforcement learning, including what he called the credit assignment problem: How do you distribute credit for success among the many decisions that may have been involved in producing it? All of the methods we discuss in this book are, in a sense, directed toward solving this problem.
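
    The “try something at random and, if it works, do more of that” principle fits in a dozen lines. A toy two-armed bandit with an epsilon-greedy learner (the payout probabilities are made up, and this is not an example from the book):

    import random

    payout_prob = [0.3, 0.6]                 # unknown to the agent
    estimate, pulls = [0.0, 0.0], [0, 0]

    for t in range(10_000):
        if random.random() < 0.1:            # every so often, try something at random
            arm = random.randrange(2)
        else:                                # otherwise do more of what has worked
            arm = estimate.index(max(estimate))
        reward = 1.0 if random.random() < payout_prob[arm] else 0.0
        pulls[arm] += 1
        estimate[arm] += (reward - estimate[arm]) / pulls[arm]

    print(pulls)                             # the better arm ends up pulled far more often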

    Alan Turing wrote the paper Intelligent Machinery in 1948. It’s a Hilbert Program for AI.

    Also, neural networks were already “big” and “hyped” in their second phase (multilayer perceptrons) in the early 1990s, complete with specially created chips (I have never seen any in action, though). That phase somehow petered out, but after 201x, the combination of

    – Cheap massively parallel processing via NVidia/AMD graphics engines (possibly available “on demand” in datacenters)
    – the rekindled idea to “go deep on” (have many-layered) neural networks
    – the need by companies like Google to automate image recognition and data mining of the peons’ behaviour
    – possibly the influx of Chinese researchers

    has boosted us into the “third phase”. Which may or may not run into another wall soon, seeing how the computational needs of the current approach are quickly becoming infeasible. But there are many avenues to explore from here.

    From slideshare, an image:

    The argument positing lack of creativity is apparently “Lady Lovelace’s Objection”:

    https://en.wikipedia.org/wiki/Computing_Machinery_and_Intelligence

    Lady Lovelace’s Objection: One of the most famous objections states that computers are incapable of originality. This is largely because, according to Ada Lovelace, machines are incapable of independent learning.

    The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform. It can follow analysis; but it has no power of anticipating any analytical relations or truths.

    (Well, actually, she makes a statement about the Analytical Engine, which is many layers of complexity removed from interesting behaviour)

    Turing suggests that Lovelace’s objection can be reduced to the assertion that computers “can never take us by surprise” and argues that, to the contrary, computers could still surprise humans, in particular where the consequences of different facts are not immediately recognizable. Turing also argues that Lady Lovelace was hampered by the context from which she wrote, and if exposed to more contemporary scientific knowledge, it would become evident that the brain’s storage is quite similar to that of a computer.

    As a sidenote, Theoretical Computer Science has been greatly occupied with studying “Turing Machines” of the ‘a’ type, which are set loose on a completely defined tape and run with complete determinacy in their little energy-less and entropy-less universe. These are then the basis for comparing algorithms and drawing conclusions about an algorithm’s cost in time and space, etc. Since 2000 or so, there has been interest in looking at the formal properties of “Turing Machines with an I/O channel” that can obtain info external to themselves, which includes other Turing Machines. There is much theoretical work to be done in that area (called Interactive Computation), to supplement the practical and empirical work being done every single day by the IT guys.

    From “Principles of Interactive Computation”, the foreword of Interactive Computation: The New Paradigm, 2006, by Dina Goldin and Peter Wegner:

    Interaction provides an expanded model of computing that extends the class of computable problems from algorithms computable by Turing machines to interactive adaptive behavior of airline reservation systems or automatic cars. The paradigm shift from algorithms to interaction requires a change in modes of thought from a priori rationalism to empiricist testing that impacts scientific models of physics, mathematics, or computing, political models of human behavior, and religious models of belief. The substantive shift in modes of thought has led in the past to strong criticism by rationalist critics of empiricist models of Darwinian evolution or Galilean astronomy. Our chapter goes beyond the establishment of interaction as an extension of algorithms computable by Turing machines to the question of empiricist over rationalist modes of thought.

    This chapter contributes to goals of this book by establishing interaction as an expanded form of computational problem solving, and to the exploration of principles that should underlie our acceptance of new modes of thought and behavior. Our section on persistent Turing machines (PTMs) examines the proof that sequential interaction is more expressive than Turing machine computation, while our section on the Church–Turing thesis shows that the Strong version of this thesis, with its assumption that Turing machines completely express computation, is both inaccurate and a denial of Turing’s 1936 paper.

    Ok, that’s a lot.

    Meanwhile in the real world. Not sure whether real or an underhanded call for more money and even less ethics:

    ‘It’s already over’: US has lost AI battle to China, Pentagon’s former software chief admits

    FT paywall

    Chinese firms are also actively cooperating with their government on AI, but US companies, like Google, are reluctant to work with the American authorities, he added.

    For certain levels of “reluctant”.

    The former software chief also sounded the alarm over the cyber defenses of US government agencies, saying that they were at “kindergarten level” in some areas.

    What happened to the “Solarwinds incident” btw?

    In the coming weeks, Chaillan plans to testify before Congress in relation to the issue to attract more attention to the danger posed to the US by China’s technological advancements.

    Selling opium is no longer an option.

    Chaillan’s resignation made a splash after he announced it in a bombshell letter in early September. He complained that bureaucracy and lack of funding had prevented him from doing his job properly, saying that he was fed up with “hearing the right words without action.”

    The Pentagon was “setting up critical infrastructure to fail” by appointing military officials with no expertise in the field in charge of cyber initiatives, the 37-year-old argued. “We would not put a pilot in the cockpit without extensive flight training; why would we expect someone with no IT experience to be close to successful?”

    This is the case in private companies too. If the boss has managed to install Microsoft(r)Word(tm) all by himself/herself, he/she knows all about your job and how long it takes to implement that AI he/she keeps hearing about.

    • Thanks: ic1000
    • Replies: @Stebbing Heuer
  55. El Dato says:
    @Reg Cæsar

    Are there even enough turkeys to go ’round after COVIDCOLLAPSE?

  56. El Dato says:
    @Yak-15

    Being half AI can be confusing to people around you

    From Ghost in the Shell, initial series (best!), 1991:

    From that same Wikipedia page: Excessive cartoon fanservice can hurt the, uh … bottom line … yeah.


    The removal of a two-page lesbian sex scene in Studio Proteus’s localization of Ghost in the Shell was not well received, with readers reacting negatively to the removal of the previously uncensored content that was included in the original Dark Horse release. Toren Smith commented on Studio Proteus’s actions claiming that requirement of the “Mature Readers Only” would translate into a 40% loss in sales and likely have caused the immediate cancellation of the series. Shirow, who grew tired of “taking flak” over the pages, opted to remove them and reworked the previous page as necessary.

    That’s the part where Batou inexplicably starts to sweat profusely after performing an unannounced full-sensorium login to Major Kusanagi’s cyberbody. Morality: Always knock first!

    The sequel volume Ghost in the Shell 2: Man-Machine Interface also featured pornographic scenes and an increase in nudity in the “Short-cut” version in Japan. No such scenes were included in any of the Western versions.

  57. Gamecock says:
    @El Dato

    I am a retired computer scientist. I spent over 30 years telling computers what to do. They NEVER did anything that I hadn’t told them to do. Occasionally, not what I wanted them to do, but what I told them to do.

    My father was a pioneering computer scientist (Argonne Lab 1952). He in fact did have to deal with computer errors. His first system had 2k bits of memory. Bits. In the form of mechanical relays. They had two shifts of mechanics to maintain them. Yes, they indeed had bugs. Til he got the first IBM 704 in 1954.

    • Replies: @El Dato
  58. El Dato says:
    @Wilbur Hassenfus

    The Halting Problem will always be a problem, forever.

    But yes, “will this program eventually halt” is exactly the kind of question that makes computing experimental. Given a program, you just may have to run it and collect statistical information about the program’s behaviour.
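
    In practice “run it and see” means giving the candidate program a time budget and recording what happened; a timeout only ever tells you “has not halted yet”, never “will not halt”. A sketch using the standard library (candidate.py is a hypothetical script):

    import subprocess

    def observe_halting(path, seconds):
        # You cannot decide halting in general; you can only run with a budget
        # and record the observation.
        try:
            proc = subprocess.run(["python", path], capture_output=True, timeout=seconds)
            return f"halted with exit code {proc.returncode}"
        except subprocess.TimeoutExpired:
            return f"still running after {seconds}s (no conclusion possible)"

    print(observe_halting("candidate.py", seconds=5))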

  59. El Dato says:
    @Gamecock

    They NEVER did anything that I hadn’t told them to do.

    I don’t want to be glib about this, this is a vast subject, but you are probably talking about bread-and-butter programs. Database I/O, filtering, numerical algorithms, sorting and searching, parsing and compiling, all that stuff that puts bacon on plates. Predictable, formally provable even (ideally), business-like, generally sitting in P. These programs can be very large but are not very interesting. Their behavioural repertoire is small, they do not accumulate state over time (in fact, they often have orthogonal state that they are meant to manage: databases). Most of them are meant to not be interesting. You wouldn’t want your RDBMS to start filing all names with A out of the customer table into a separate database because some internal preferences model adjusted by the history of query returns (for example) has gone over threshold.

    “Bugs” are “writing a different program than you wanted to”. That’s not it either. (But see below).

    But now go “complex enough”, which generally means building a machine that explores some abstract space and adjusts its behaviour according to what it discovers in that space. Those programs can be relatively small, but have a changing database that heavily influences the computational paths they will take going forward (formal verification of anything interesting about their behaviour is probably infeasible). Think self-learning games, or optimization problems that cannot be solved with deterministic algorithms.

    When the program starts surprising you, that’s when it does something you did not tell it to do – because the behaviour wasn’t in your specification, and you definitely didn’t have the computational power to derive that behaviour from the machine’s description (i.e. code) alone.

    “You didn’t build that!” actually applies.

    Hofstadter wrote whole books on this (I only vaguely remember them), and I remember a note from ’88 in CACM by Cherniak about a “mind program” being necessarily radically different from any software located in the archipelago of “manageable software” we currently design & write: undebuggable and thus surprising.

    Does this put the kibosh on “trustworthy AI”, in the sense of “if it’s trustworthy, it’s definitely not AI”, 33 years before the call to arms for trustworthy AI went out? Well … it might!

    In particular, we read:

    UNDEBUGGABILITY AND COGNITIVE SCIENCE, Chris Cherniak, Communications of the ACM – April 1988 – Volume 31 – Number 4


    Bugs seem to be endemic. The startling character of such statistics suggests that we tend to unrealistically presuppose that programs are essentially error-free. Such a picture may itself be an instance of the recently much-studied pathological inability of human beings to estimate and reason in terms of accurate event base-rates; perhaps some vivid stereotype of “computerlike perfection” drives us to overlook observed bug frequencies. (A possible harbinger of a paradigm shift here is the ACM journal Software Engineering Notes, which regularly publishes lists of bug episodes.)

    Well, today we are at “Software-based Software Engineering” where programs assemble programs, but with the caveat that (for example) the result may possibly fail in some way at one point, maybe shredding your files (so you want to really tune SELinux to make it behave), but when running on your laptop it will consume 30% less power than hand-written code, guranteed. Which may be a good tradeoff, depending on your requirements. Anyway, further along..

    Software for a cognitive system will differ in its failure proneness from conventional types of machines for reasons other than just its brute complexity. A cognitive system’s program will tend to act as a failure amplifier because of its intrinsically branchy structure and its distinctively holistic structure of interconnection. Consider the vivid contrast between reliability of computer software and hardware. Running any given program requires a machine that is more complex than the software, in the most primitive sense of the number of comparable elements. First, the hardware must include at least one independent memory location for storing each symbol of the program. Yet commercial software continues to have defect rates that would be exotic for conventional machinery, current computer hardware surpasses any other artifact of our era in both its intricacy and reliability. Hardware failures are relatively rare events. Why is this? The remarkable record of hardware reliability owes something to a concern with such things as fault-tolerant design and error-correcting codes that dates back to von Neumann.

    But the difference between hardware and software that is mainly responsible for this qualitatively different order of failure proneness concerns structure. Tacitly we tend to extend our picture of the behavior of conventional artifacts to software, but the latter is in fact an entirely different regime. A massive memory (e.g., a disk system) differs from a massive program it contains in that the memory has a simple additive structure, rather than a branching complexity. … The failure modes of memory media do not include cases where insidious, fine-structured, extensive propagation is likely. The texture of failure for a program differs because of its branchy structure: There are not just many distinct states, but a combinatorially exploding number of ways in which one state can be connected to another. The branching is not just wide, but deeply hierarchical in organization, nested iteratively through many layers as in truth tables. Debuggability by exhaustive checking of this vast number of potentially possible transitions is then not remotely feasible. (Recall the example of the costs of truth-functional consistency testing sketched earlier.) Here again we confront a problem of scale.

    In addition, the essential structure of a cognitive system ensures that computational approximations of it will function as failure amplifiers. In philosophy, Quine and Davidson have long emphasized the distinctively holistic character of cognitive systems. But that interconnectedness means that defects, as well as revisions, will tend to propagate via the flow of information throughout the web of belief. In this way, such a nexus acts as a bug detector, rather like a spiderweb. Divide-and-conquer software design methodologies that prescribe a hierarchy of self-contained, clean interfacing modules and submodules are, of course, good strategy; but the intrinsically holistic nature of program models of cognition entails limits to such modularity. Quine, Davidson, and the philosophical tradition they epitomize deny or recoil from departures of actual cognitive webs from ideal rationality; it is therefore ironic that such nonidealities – nonclosure of the belief set under deduction, compartmentalization of it, and so on1 – act as a type of fault-tolerant software design feature, namely, as valuable quarantines on bug propagation. (So that, e.g., contrary to Quine and others, a contradiction in the system does not in fact actually threaten to generate every proposition.)

    I have never been a fan of ex contradictione quodlibet, which is, to put it mildly, retarded.

    • Replies: @Gamecock
  60. El Dato says:
    @Anonymous

    Well, Tiny, you see….

    Artificial Instinct is feasible.

    Artificial Consciousness is probably a vanity project.

  61. @J.Ross

    Thanks for the reference, I watched it on youtube.

    Astounding aesthetics! Living and working out of an old apartment, I would love to have an office like Stiller’s and an apartment like Maya’s or Eva’s – I’d like to be in a different simulation, please.

  62. MEH 0910 says:

    https://en.wikipedia.org/wiki/Quark_(TV_series)

    Quark is a 1977 American science fiction sitcom starring Richard Benjamin. Broadcast on Friday nights at 8:00–8:30 p.m. on NBC,[1] the pilot aired on May 7, 1977, and the series followed as a mid-season replacement in February 1978. The series was cancelled in April 1978. Quark was created by Buck Henry, co-creator of the spy spoof Get Smart.[1]

    […]
    • Andy (Bobby Porter) is a not-at-all-human-looking robot, made from spare parts, with a cowardly and neurotic personality.

    Quark: The Series – Goodbye Polumbus clip:

    A spoof of science fiction films and TV series, these are the adventures of Adam Quark, captain of a United Galactic Sanitation Patrol ship. His cohorts include Gene/Jean, a “transmute” with male and female characteristics; a Vegeton (a highly-evolved plant-man) named Ficus; and Andy the Android and Betty and Betty (who always argue over who’s the clone of the other). Based at Space Station Perma One are Otto Palindrome and The Head. Though Quark is supposed to stick to his sanitization patrols, he and his crew often meet adventure with such colorful space denizens as the evil High Gorgon (head of the villainous Gorgons), Zoltar the Magnificent, and Zargon the Malevolent.


    Full episode:
    Quark – S01E05 – Goodbye Polumbus

    “Goodbye Polumbus” (March 17, 1978): Quark and his crew are sent on a suicide mission to Polumbus to discover why no one has returned alive. Quark and his crew fall prey to their fantasies as part of a fiendish plot by the dreaded Gorgons to drain the minds of the United Galaxy’s most brilliant scientists. Quark encounters a beautiful dream girl, Ficus encounters a teacher, the Bettys encounter dancing clones of Quark, and Gene/Jean encounters his favorite comic book character “Zoltar the Magnificent”. In order to save his crew, Quark must destroy the obelisk and free the shape-shifting “Clay People” it enslaved. The episode’s title is a spoof of the film Goodbye, Columbus, in which Benjamin played the lead.

    Do You Remember Quark? This 70’s TV show was cancelled too soon!

    Here are my memories of the short-lived 70’s science fiction comedy Quark which aired on NBC from 1977 to 1978. Starring Richard Benjamin, Conrad Janis and The Barnstable Twins, Quark was created by Buck Henry and ran for just eight episodes.

    • Replies: @MEH 0910
  63. I recently was reminded of St Thomas Aquinas’s 5 proofs of God. On 4 & 5, I thought about AI and how anti-woke it ends up being. The good news is that whatever is going on during this age, AI, God, and Logos are on Sailer’s side.

    4. The Argument from Gradation: There are different degrees of goodness in different things. Following the “Great Chain of Being,” which states there is a gradual increase in complexity, created objects move from unformed inorganic matter to biologically complex organisms. Therefore, there must be a being of the highest form of good. This perfect being is God.

    5. The Argument from Design: All things have an order or arrangement that leads them to a particular goal. Because the order of the universe cannot be the result of chance, design and purpose must be at work. This implies divine intelligence on the part of the designer. This is God.

  64. @Wilbur Hassenfus

    An action done by an AI can never be human, since it lacks a soul.

    An AI is a synthesis of human activity. An interesting question is whether a human, having abandoned the human will, and acting in the Divine Will, would ever need to build an AI.

    Or, asked another way, can an AI ever act in accord, in harmony, with the Divine Will? If so, would it be a random concurrence with no eternal merit?

    Only actions in the Divine Will have eternal merit. No human action performed apart from the Divine Will has eternal merit. Just as no AI can ever perform an action that is Divine or has eternal merit.

    The attempt to build an AI is merely another gnostic heresy.

    There will never be life in an AI, just as there will never be life apart from the Divine Presence.

  65. Gamecock says:
    @El Dato

    I don’t want to be glib about this, this is a vast subject, but you are probably talking about bread-and-butter programs. Database I/O, filtering, numerical algorithms, sorting and searching, parsing and compiling, all that stuff that puts bacon on plates. Predictable, formally provable even (ideally), business-like, generally sitting in P. These programs can be very large but are not very interesting.

    Yep. I did $250,000,000,000 in transactions. Boring (except at month end).

    So tell me about the great AI (sic) accomplishments.

  66. Gamecock says:
    @SafeNow

    This is not new. I saw it in college boy engineers in the 1970s and 80s. It’s called “brute force computing.” Rather than try to understand and codify, just do a gazillion tries and see what’s best.

    It’s not successive approximation. It’s “try everything.”

    A flawed approach. That persists today.

  67. MEH 0910 says:
    @MEH 0910

    https://en.wikipedia.org/wiki/Dale_Barnstable

    Dale Barnstable (March 4, 1925 – January 26, 2019) was an American basketball player from Antioch, Illinois who was banned for life from the National Basketball Association (NBA) in 1951 for point shaving. He had an outstanding college career at the University of Kentucky before his career came to an abrupt end.

    […]
    College career
    After high school, he was recruited by the University of Kentucky, where he played for Hall of Fame coach Adolph Rupp on the Kentucky Wildcats men’s basketball team from 1946 to 1950. While there, Barnstable was a key player on Rupp’s first two championship teams in 1948 and 1949. Barnstable was a starter on the 1949 team, earning third team All-Southeastern Conference honors that season.[1] For his Wildcat career, Barnstable scored 635 points (4.9 per game).[2]

    Professional career
    Boston Celtics (1950-1951)
    Towards the end of his college career, Barnstable was drafted in the seventh round of the 1950 NBA Draft by the Boston Celtics.

    CCNY point shaving scandal
    Nevertheless, in 1951 Barnstable became a key figure in a point shaving scandal – In the wake of an increasing number of point shaving schemes coming to light throughout the year, on October 20 Barnstable was arrested along with teammates Ralph Beard and Alex Groza for allegedly taking $500 to shave points in a National Invitation Tournament game in 1949. Although his sentence was suspended, as a result of the affair he lost his first post-graduation job as a high school coach at duPont Manual High School in Louisville, Kentucky, and was banned for life from the NBA by the NBA president Maurice Podoloff.[3][4]

    Personal life
    After losing his high school coaching job, Barnstable worked at American Air Filter in Louisville as a salesman until retirement.[5] In the meantime, he became a talented golfer, winning the Kentucky Senior Open twice and playing in the British Senior Open (the first Kentucky amateur to do so).[5][6]

    Barnstable was the father of identical twin actresses, Priscilla “Cyb” and Patricia Barnstable, known for their roles in the television series Quark.[7]

    Barnstable died on January 26, 2019, aged 93.[8]

