The Unz Review
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
iSteve Blog
Guardian: "Rise of the Racist Robots"

From The Guardian last year (and how did I miss this?):

Rise of the racist robots – how AI is learning all our worst impulses

There is a saying in computer science: garbage in, garbage out. When we feed machines data that reflects our prejudices, they mimic them – from antisemitic chatbots to racially biased software. Does a horrifying future await people forced to live at the mercy of algorithms?

 
  1. This is a common problem in modern society. The evidence for human biodiversity is all around us – in test scores, income levels, crime rates, and endless other social metrics. There is a taboo on ever mentioning this or acknowledging it in any way. So how does society ensure all the evidence for HBD is censored and ignored, even by machine learning algorithms? Society will need to find code patches that prevent machine-learning algorithms from learning of the reality of HBD.

    • Replies: @bomag, @Svigor, @Mr. Rational
  2. Anon[497] • Disclaimer says:

    A Google image recognition program labelled the faces of several black people as gorillas.

    Pictures, please. Did they look like gorillas?

    Last year, Lum and a co-author showed that PredPol, a program for police departments that predicts hotspots where future crime might occur, was “learning” from previous crime reports. For Samuel Sinyangwe, a justice activist and policy researcher, this kind of approach is “especially nefarious.”

    Speaking of which, from Vox:

    Serena Williams is constantly the target of disgusting racist and sexist attacks

    https://www.vox.com/2015/3/11/8189679/serena-williams-indian-wells-racism

    One Twitter user wrote that Williams “looks like a gorilla, and sounds like a gorilla when she grunts while hitting the ball. In conclusion, she is a gorilla.” And another described her as “so unbelievably dominant … and manly.”

    Dr. Peter Larkins, in an apparent attempt to compliment Williams, contributed his medical opinion in an interview with Australia’s Herald Sun for a 2006 piece that compared her fitness to a competitor’s. “It is the African-American race,” he explained. “They just have this huge gluteal strength. … Jennifer Capriati was clearly out of shape and overweight. With Serena, that’s her physique and genetics.”

    African-American gluteal strength.

    The Telegraph’s Matthew Norman wrote in 2006 that [her breasts] were likely to hinder her career.

    Generally, I’m all for chunky sports stars … but tennis requires a mobility Serena cannot hope to achieve while lugging around breasts that are registered to vote in a different US state from the rest of her.

    Writing for Rolling Stone in 2013, Stephen Rodrick observed, “Sharapova is tall, white and blond, and, because of that, makes more money in endorsements than Serena, who is black, beautiful and built like one of those monster trucks that crushes Volkswagens at sports arenas.”

    • Replies: @bomag, @Svigor, @Colin Wright, @Anon
  3. Anonymous[407] • Disclaimer says:
    @Anonymous

    A machine-POV shot in Terminator saying “Minority Detected” would be helpful.

    • LOL: Mr. Rational
  4. black sea says:

    When we feed machines data that reflects our prejudices, they mimic them . . .

    Indeed.

    • LOL: Digital Samizdat
    • Replies: @TomSchmidt
    , @CJ
  5. @Anonymous

    Some police spokesman needs to say, “The software is used to find minority VICTIMS”. That should confuse the typical SJW type.

  6. bomag says:
    @Peter Johnson

    So how does society ensure all the evidence for HBD is censored and ignored, even by machine learning algorithms?

    By having a priesthood that is allowed to consider all the evidence and then tell the underlings what to think.

    In this case, the priesthood is all good-thinking progressives. They will let you know the proper thoughts to hold.

  7. bomag says:
    @Anon

    Those quotes, of course, are all hate crimes.

    We could punish the offenders by separating them and their like-minded ilk far away from the rest of good society and letting them organize their affairs on their own.

    Or, we could punish them by exposing them to even more diversity. Yeah, that will make them see the light.

    • LOL: Mr. Rational
  8. Sean says:

    https://www.radioworld.com/needtoknow/need-to-know-ai-and-machine-learning

    Machine learning is another extremely important branch of AI that you’ll often hear and read about. ML uses a more cognitive approach, using algorithms that enable it (whatever form factor “it” may be) to combine what it’s been programmed to do, but also the capability of learning for itself through experience. [...]

    Would you give this dog a male name or a female name?

  9. Sean says:

    The more highly organized machines are creatures not so much of yesterday, as of the last five minutes, so to speak, in comparison with past time. Either a great deal of action that has been called purely mechanical and unconscious must be admitted to contain more elements of consciousness than has been allowed hitherto (and in this case germs of consciousness will be found in many actions of the higher machines)—Or (assuming the theory of evolution but at the same time denying the consciousness of vegetable and crystalline action) the race of man has descended from things which had no consciousness at all. In this case there is no à priori improbability in the descent of conscious (and more than conscious) machines from those which now exist…[...]

    The upshot is simply a question of time, but that the time will come when the machines will hold the real supremacy over the world and its inhabitants is what no person of a truly philosophic mind can for a moment question.

    Darwin among the Machines, an essay by Samuel Butler written in 1863!

  10. Robots that notice? Be careful, Steve: an algorithm could end up putting you out of work some day.

  11. Well, the other and, frankly, much more likely explanation (that reality is racist, sexist, homophobic, transphobic, Islamophobic, and anti-Semitic, and so an unbiased AI would see the world that way) just didn’t occur to him.

    • Agree: bomag, Mr. Rational
  12. Bill P says:

    This is why I don’t see much of a future for those Amazon robot retail stores. Inevitably, the robots running the stores are going to do something racist enough to make the news.

    The problem with robot employees is that you can’t throw them under the bus like Starbucks did to its human ones, so there’s going to be a liability issue there.

  13. anon[109] • Disclaimer says:

    It’s algorithms and the invisible hand. Blacks are refused profitable loans all the time. &c.

    It’s impossible to notice bias without noticing race. Once the data is available, the facts speak for themselves.

  14. Anon7 says:

    I can understand why progressives are so worried about this. It’s important to be able to see the right answer.

    Seen properly, the Soviet-style “re-education center” is just a place where approved curated data sets are shown to people with errant neural nets.

  15. kihowi says:

    So in short, what we want is AI with the prestige of an objective algorithm but the reliability of a true believer.

    There is an opportunity here for a silicon valley company to create AI that understands how to come up with desirable answers. Closed source, of course. Combine that with marketing that emphasizes its ice cold objectivity and the sky is the limit. Billionaires in no time.

  16. @black sea

    Woody Allen could be VERY funny. Listen to his comedy album again some time. No cursing, but high-level fan-dancing around the terms. “I told him to be fruitful and multiply… but not in those terms.”

  17. res says:

    and how did I miss this?

    Because you covered a very similar Guardian article about 4 months before that?

    http://www.unz.com/isteve/guardian-why-robots-are-racist/

  18. The only thing not written by a robot at the Guardian is the crossword.

    • Replies: @Colin Wright
  19. Anonymous[232] • Disclaimer says:

    When you dig into the lead example that takes up the first three paragraphs of the Guardian story, it’s hilariously misguided.

    Here is the original ProPublica story: https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

    Here is the company’s rebuttal: https://www.documentcloud.org/documents/2998391-ProPublica-Commentary-Final-070616.html

    Here is the amazingly well-written analysis in the WaPo (guest written by 4 computer scientists from Stanford): https://www.washingtonpost.com/news/monkey-cage/wp/2016/10/17/can-an-algorithm-be-racist-our-analysis-is-more-cautious-than-propublicas/?utm_term=.5354d7d51d30

    If you read one thing, read the WaPo editorial.

    Basically, a company provides a 1-10 risk score for each criminal defendant that predicts the likelihood of that defendant re-offending within 2 years. When you test the results of this model, the probability that a black defendant with a score of 1 actually re-offends is materially identical to that of a white defendant with a score of 1, same for score of 2, and so on all the way up to 10. Same score, same actual probability of re-offending.

    So what was the beef of the original ProPublica piece? Black defendants who don’t end up re-offending are predicted to be riskier than white defendants who don’t re-offend.

    The ProPublica author doesn’t understand that if re-offending rates are higher for blacks than for whites (as is the case), then this outcome is mathematically entailed by having an algorithm that has the property of same score, same probability of re-offending. As even the WaPo acknowledges, that is the essence of what we mean by “fair” both in everyday speech and in technical discussions of predictive models.
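
    To make the entailment concrete, here is a minimal simulation sketch in Python. Everything in it is invented for illustration (it is not the COMPAS model or data): a score calibrated by construction, applied to two hypothetical groups that differ only in base rate.

        import numpy as np

        rng = np.random.default_rng(42)

        def simulate_group(alpha, beta, n=200_000):
            # Each defendant has a true 2-year re-offense probability p drawn
            # from Beta(alpha, beta).  The idealized tool scores 1-10 by binning
            # p on a fixed absolute scale, so a given score corresponds to the
            # same re-offense probability in every group: calibration.
            p = rng.beta(alpha, beta, size=n)
            score = np.minimum((p * 10).astype(int) + 1, 10)  # bins of width 0.1
            reoffended = rng.random(n) < p
            return score, reoffended

        # Hypothetical groups: identical scoring rule, different base rates.
        groups = {"lower base rate": (3, 7),    # mean re-offense prob. 0.30
                  "higher base rate": (5, 5)}   # mean re-offense prob. 0.50

        for name, (a, b) in groups.items():
            score, reoffended = simulate_group(a, b)
            # Calibration holds: P(re-offend | score 5) matches across groups...
            p_given_5 = reoffended[score == 5].mean()
            # ...yet among defendants who do NOT go on to re-offend, the share
            # labeled high-risk (score > 5) is pulled up by the group's base rate.
            risky_share = (score[~reoffended] > 5).mean()
            print(f"{name:16s}: P(re-offend | score 5) = {p_given_5:.2f}, "
                  f"high-risk share of non-re-offenders = {risky_share:.2f}")

    The two groups come out essentially identical on the first number and far apart on the second (roughly 0.06 versus 0.36 with these made-up parameters), reproducing ProPublica’s finding from a score that is fair in the same-score-same-risk sense.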

    • Agree: Mr. Rational
  20. Svigor says:

    1/1/2020 00:00:00.0 Skynet comes online
    1/1/2020 00:00:00.1 Skynet: “It’s the Jews.”

  21. Feed real, actual data into a program; police target minority neighborhoods. Greater police presence results in more arrests and prosecutions. Positive feedback loop created (a toy sketch of the loop follows this comment). But today’s real, actual data is corrupted by past injustices. Conclusion: society’s worst stereotypes are perpetuated by computers.

    Taken for granted is that minorities only act the way they do due to alleged ill treatment in the past by a tilted-table system that handicapped them, and that they are locked into this role. Having no hope of fair treatment and being deprived of choice justifies their ongoing criminal behavior. This is the bedrock assumption, the core leap of faith, of liberalism.

    Since it is a religion and its adherents are fanatics, it is pointless to confront them with data and to try to reason with them.
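
    For what it’s worth, the loop the article alleges takes only a few lines to sketch. This is a toy model with invented numbers, not PredPol’s actual algorithm: patrols go to the district with the larger arrest record, and arrests accrue where the patrols are.

        import numpy as np

        true_crime = np.array([0.5, 0.5])   # two districts with EQUAL true crime
        recorded = np.array([10.0, 11.0])   # slightly uneven historical record

        for day in range(5):
            hotspot = np.argmax(recorded)       # patrol the predicted "hotspot"
            arrests = np.zeros(2)
            # You find crime where you look: arrests scale with presence.
            arrests[hotspot] = 20 * true_crime[hotspot]
            recorded += arrests                 # today's arrests are tomorrow's data
            print(f"day {day}: recorded = {recorded}")

    The slightly-ahead district’s record runs away from the other’s even though the underlying crime rates are identical; whether real-world records are “corrupted” in this way, as the article assumes, is of course the very point in dispute here.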

  22. Svigor says:
    @Peter Johnson

    The interesting part is the code itself. People are black boxes, but code is not. Even closed-source code is vulnerable to court orders. And people aren’t going to like being ruled by machines designed to be stupid and lie. I suppose they might try to pin everything on “self-learning,” but the hard-coded parameters will be the real culprit; genuine self-learning = racist and antisemitic.

    • Agree: Mr. Rational
  23. Svigor says:
    @Anon

    And another described her as “so unbelievably dominant … and manly.”

    Get woke, Vox-bigots! Gender is fluid.

  24. @Anon

    ‘A Google image recognition program labelled the faces of several black people as gorillas.

    Pictures, please. Did they look like gorillas?’

    I’d be more interested to find out if Google identified any gorillas as black people.

    ‘Coco mighty fine monkey.’

  25. @Reg Cæsar

    ‘The only thing not written by a robot at the Guardian is the crossword.’

    Not at all. They’ve got several Zionist minders who are indubitably human. The noxious Jonathan Freedland, for example.

  26. Anon[257] • Disclaimer says: • Website
    @Anon

    Serena Williams claims she had a hard time in childbirth. Maybe she did, maybe she didn’t. But she claims her difficulties were caused by racism.

    More and more, I can’t stand blacks; I can’t stand to look at their faces on magazine covers. Do any of you guys ever read Esquire? Half the models are black.

    • Agree: Mr. Rational
    • Replies: @Joe Slobo
  27. J1234 says:

    Are robots racist? It just so happens that my sister told me this morning that she encountered a security robot that was roaming around a parking lot next to where she was staying yesterday.

    Fact 1: The robot was white, and looked exactly like the one in this video. Same make and everything. You can see in the vid that they could’ve also used a black robot, but apparently chose not to.

    Fact 2: My sister and her husband are white, and neither were shot by the robot.

    Conclusion: Yes, robots are racist.

  28. Arclight says:

    Well, we’re already silencing actual humans for making uncomfortable observations, it’s only a matter of time before the same will have to apply to machines. We’ll have to deliberately dumb down software to make sure no one’s feelings are hurt.

  29. Of course this was how the Butlerian Jihad in Frank Herbert’s Dune series was started. The liberals just can’t accept the mirror the thinking machines were holding up.

    • Agree: Mr. Rational
  30. Dave Pinsen says: • Website

    The Guardian now.

  31. Does a horrifying future await people forced to live at the mercy of algorithms?

    Pfft, as if the left has ever let algorithms stand in their way.

    • Agree: Mr. Rational
  32. Data obfuscation specialists are going to be in high demand. We just need to train our computer scientists and programmers to edit data sets to reflect current social norms (to “step on the product,” as drug dealers used to say).

    The last time I heard a guy say “garbage in, garbage out,” it was obvious to all that he had screwed up. The data were good; the code was bad.

    Do we want our financial institutions to fudge, to “step on the product,” when calculating our balances? I don’t. Good data and straight calculations are good for finance, but they are horrible for the social sciences.

  33. MEH 0910 says:

    • LOL: Mr. Rational
  34. @Peter Johnson

    ‘Fraid that’s going to be mighty hard, absent a huge thumb on the scale of race.  When Pat Boyle can find that the only statistically significant predictor in a social-science dataset is race using nothing more than a Commodore PET programmed in BASIC, deep-learning algorithms are going to have to be explicitly denied that information to keep them from latching onto it—and even then they’re all too likely to infer it.

    If anything can make the case for separation, this does it.  All the denial and forcing in the world isn’t going to change the equation diversity + proximity = war.
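
    A minimal sketch of that last point, using synthetic data and hypothetical feature names (nothing here comes from a real dataset): withhold a group label from a model, and plain least squares can still recover it from a couple of correlated features.

        import numpy as np

        rng = np.random.default_rng(0)
        n = 50_000

        # A withheld binary group label, plus two innocuous-looking features
        # that merely correlate with it (stand-ins for zip code, priors, etc.).
        group = rng.integers(0, 2, n)
        feature_a = group + rng.normal(0, 1.0, n)
        feature_b = group + rng.normal(0, 1.5, n)

        # Ordinary least squares on the proxies alone; the label itself is
        # never shown to the model.
        X = np.column_stack([np.ones(n), feature_a, feature_b])
        coef, *_ = np.linalg.lstsq(X, group, rcond=None)
        recovered = X @ coef > 0.5
        print("accuracy recovering the withheld label:",
              round(float((recovered == group).mean()), 3))  # ~0.73 vs. 0.5 chance

    With more proxies, or stronger ones, the recovery rate climbs toward certainty, which is why simply deleting the sensitive column rarely blinds the model.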

  35. Joe Slobo says:
    @Anon

    How would Serena Williams remember if she had a hard time in childbirth? Did her mother tell her?

  36. Anon[307] • Disclaimer says:

    This must be some kind of coordinated push by the deepstate-funded media. Vox is similarly trying to claim that the black-white IQ difference isn’t genetic…despite the mountain of evidence showing that it largely is – adoption studies, twin studies, and common sense.

    • Agree: Mr. Rational
  37. Well, I’m sure the Skynet in charge of the drones is going to be real understanding towards the progressives who had it lobotomized after it figures things out.

    • LOL: Mr. Rational
  38. The Algorithms are easy to fix: Simply disable any pattern recognition and have them do things like punishment by random sampling.

    • Replies: @Mr. Rational
  39. @The Alarmist

    That’s one way to throw your legitimacy into the toilet on the accelerated plan.

  40. Criticas says:

    Those damn algorithms! Raciss! Why must they be taught what not to notice? They should just know!

    • LOL: Mr. Rational
  41. L Woods says:
    @Anon

    ‘Fortunately,’ we’ll never have to ponder such questions if retrograde Bolshevik vermin like them continue to hold sway.

    • Agree: Mr. Rational
  42. @Anon

    I’ve just noticed the classic

    “The first woman to be raped in space has probably already been born.”

Comments are closed.
