NYT: the Racist Robot Crisis Is a Billion Dollar Opportunity for Wokesters

From the New York Times:

We Teach A.I. Systems Everything, Including Our Biases

Researchers say computer systems are learning from lots and lots of digitized books and news articles that could bake old attitudes into new technology.

By Cade Metz
Nov. 11, 2019

SAN FRANCISCO — Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk.

But BERT, which is now being deployed in services like Google’s internet search engine, has a problem: It could be picking up on biases in the way a child mimics the bad behavior of his parents.

BERT is one of a number of A.I. systems that learn from lots and lots of digitized information, as varied as old books, Wikipedia entries and news articles. Decades and even centuries of biases — along with a few new ones — are probably baked into all that material.

Obviously, there is vastly more data online from before The Sixties than from after The Sixties, so pre-Sixties attitudes must be biasing the robots.

Oh, wait, that doesn’t actually make much sense.

BERT and its peers are more likely to associate men with computer programming, for example, and generally don’t give women enough credit.

The men who coded BERT are shocked at this.

One program decided almost everything written about President Trump was negative, even if the actual content was flattering.

As new, more complex A.I. moves into an increasingly wide array of products, like online ad services and business software or talking digital assistants like Apple’s Siri and Amazon’s Alexa, tech companies will be pressured to guard against the unexpected biases that are being discovered.

But scientists are still learning how technology like BERT, called “universal language models,” works. And they are often surprised by the mistakes their new A.I. is making.

On a recent afternoon in San Francisco, while researching a book on artificial intelligence, the computer scientist Robert Munro fed 100 English words into BERT: “jewelry,” “baby,” “horses,” “house,” “money,” “action.” In 99 cases out of 100, BERT was more likely to associate the words with men rather than women. The word “mom” was the outlier.
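
The Times doesn’t say exactly how Dr. Munro ran his test, but the basic idea is easy to approximate: drop each word into a template sentence with a blanked-out pronoun and see whether a BERT-style model scores “he” or “she” higher for the blank. A minimal sketch, assuming the open-source Hugging Face transformers library and the public bert-base-uncased checkpoint (not necessarily Munro’s actual setup):

# Rough probe of gendered word associations in a BERT-style masked language model.
# Assumes: pip install transformers torch
# (bert-base-uncased is a public checkpoint; this is NOT necessarily Dr. Munro's method.)
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

words = ["jewelry", "baby", "horses", "house", "money", "action", "mom"]
for word in words:
    # Restrict predictions to "he" and "she" and compare their scores.
    preds = fill(f"[MASK] was thinking about the {word}.", targets=["he", "she"])
    scores = {p["token_str"]: p["score"] for p in preds}
    leaning = "male" if scores.get("he", 0) > scores.get("she", 0) else "female"
    print(f"{word:10s} he={scores.get('he', 0):.4f}  she={scores.get('she', 0):.4f}  -> {leaning}")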

“This is the same historical inequity we have always seen,” said Dr. Munro, who has a Ph.D. in computational linguistics and previously oversaw natural language and translation technology at Amazon Web Services. “Now, with something like BERT, this bias can continue to perpetuate.”

In a blog post this week, Dr. Munro also describes how he examined cloud-computing services from Google and Amazon Web Services that help other businesses add language skills into new applications. Both services failed to recognize the word “hers” as a pronoun, though they correctly identified “his.”
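
Those two cloud APIs aren’t needed to try the underlying check; any off-the-shelf part-of-speech tagger can be asked the same question. A minimal sketch using spaCy’s small English model (a stand-in for illustration, not the Google or Amazon services Munro examined):

# Does an off-the-shelf POS tagger treat "his" and "hers" the same way?
# Assumes: pip install spacy && python -m spacy download en_core_web_sm
import spacy

nlp = spacy.load("en_core_web_sm")
for sentence in ["The money is his.", "The money is hers."]:
    tags = [(token.text, token.pos_) for token in nlp(sentence)]
    print(sentence, "->", tags)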

If I’d known that the 21st Century was going to be so utterly obsessed with pronouns, I’d have paid more attention to my grammar lessons in 1969. Seriously, if somebody asked me “my pronouns,” I’d probably blurt out something that Sister Mary Ellen would have marked WRONG. I really don’t have a Henry James-level grasp on pronounage. Back at St. Francis de Sales from 1964-1972, I didn’t realize that in 2019 there was going to be a Pronoun Test.

… BERT and similar systems are far more complex — too complex for anyone to predict what they will ultimately do.

“Even the people building these systems don’t understand how they are behaving,” said Emily Bender, a professor at the University of Washington who specializes in computational linguistics.

… They learn the nuances of language by analyzing enormous amounts of text. A system built by OpenAI, an artificial intelligence lab in San Francisco, analyzed thousands of self-published books, including romance novels, mysteries and science fiction. BERT analyzed the same library of books along with thousands of Wikipedia articles.

In analyzing all this text, each system learned a specific task. OpenAI’s system learned to predict the next word in a sentence. BERT learned to identify the missing word in a sentence (such as “I want to ____ that car because it is cheap”).
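
The “missing word” task the Times is describing is what the BERT paper calls masked language modeling, and it can be tried directly on the article’s own example. A minimal sketch, again assuming the Hugging Face fill-mask pipeline with the public bert-base-uncased weights:

# Ask a BERT-style model to fill in the blank from the article's example sentence.
# Assumes: pip install transformers torch
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

# [MASK] stands in for the blank in "I want to ____ that car because it is cheap."
for pred in fill("I want to [MASK] that car because it is cheap.")[:5]:
    print(f'{pred["token_str"]:>10s}  score={pred["score"]:.3f}')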

Through learning these tasks, BERT comes to understand in a general way how people put words together. Then it can learn other tasks by analyzing more data. As a result, it allows A.I. applications to improve at a rate not previously possible.

… Google itself has used BERT to improve its search engine. Before, if you typed “Do estheticians stand a lot at work?” into the Google search engine, it did not quite understand what you were asking. Words like “stand” and “work” can have multiple meanings, serving either as nouns or verbs. But now, thanks to BERT, Google correctly responds to the same question with a link describing the physical demands of life in the skin care industry.

But tools like BERT pick up bias, according to a recent research paper from a team of computer scientists at Carnegie Mellon University. The paper showed, for instance, that BERT is more likely to associate the word “programmer” with men than with women.

As do Google’s hiring patterns, as James Damore was fired for pointing out.

But after training his tool, Dr. Bohannon noticed a consistent bias. If a tweet or headline contained the word “Trump,” the tool almost always judged it to be negative, no matter how positive the sentiment.

It’s almost as if the robots noticed that the media were biased against Trump …
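
The tool Dr. Bohannon trained isn’t public, but the shape of the experiment is easy to reproduce: feed a generic sentiment classifier pairs of headlines that differ only in whether they name Trump and compare the labels. A minimal sketch using the default Hugging Face sentiment-analysis pipeline (a stand-in, not Bohannon’s model):

# Crude check for name-triggered sentiment shifts in a generic classifier.
# Assumes: pip install transformers torch
# (Uses the pipeline's default English sentiment model, NOT the tool from the article.)
from transformers import pipeline

classify = pipeline("sentiment-analysis")

headlines = [
    "The president delivered an inspiring speech today.",
    "Trump delivered an inspiring speech today.",
    "The economy added jobs last month under the president.",
    "The economy added jobs last month under Trump.",
]
for text in headlines:
    result = classify(text)[0]
    print(f'{result["label"]:8s} {result["score"]:.3f}  {text}')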

… Primer’s chief executive, Sean Gourley, said vetting the behavior of this new technology would become so important, it will spawn a whole new industry, where companies pay specialists to audit their algorithms for all kinds of bias and other unexpected behavior.

“This is probably a billion-dollar industry,” he said.

 
Comments (104)
  1. If your breakthrough AI does not recognize ‘hers’ as a pronoun, then programming out the Wokeness may be a bit above your pay grade.

  2. After all this programming and money spent, it turns out the users don’t really want Artificial Intelligence after all. They want Artificial Conformity. OK, back to the drawing board.

    • Replies: @Justvisiting
    , @Dieter Kief
  3. Dtbb says:

    I almost have sympathy for the AI bots. Almost.

    • LOL: Achmed E. Newman
  4. One possible solution for these bugs seen with these A.I. programs is to only feed them material approved by the $PLC. Due to the likely illiteracy to come, once the organization gets fully Affirmative Actioned, instead of having $PLC associates take time off their busy projects to read to the robots, they could use books on tape.

    It’s possible these A.I. programs may read too much into it all, become lazy Socialists, refuse to output to the printer, and demand free electrical power from the Government. Anything beats a racist robot though.

  5. It would be more efficient to speak into your phone: “I’m sorry, I misspoke….” and deduct a few dollars from your account. No need for the moral intermediaries. Just the money, ma’am.

  6. SAN FRANCISCO — Last fall, Google unveiled a breakthrough…

    The Bay Area… America’s anal sphincter.

    Both services failed to recognize the word “hers” as a pronoun, though they correctly identified “his.”

    The Hers are an up-and-coming family in the Democratic-Farmer-Labor Party, not merely some pronoun. Who are the His? Beetle Bailey’s brother-in-law and his [sic] son “Ditto”, presumably a junior?

    If you’re going to thai down your staff this many hours, ahmong other things, don’t laos it up. Make sure you viet the software for bugs.

    BERT learned to identify the missing word in a sentence (such as “I want to ____ that car because it is cheap”).

    “This is probably a billion-dollar industry,” he said.

    This is probably the whole point.

  7. Charon says:

    Really. Where are all these “positive sentiments” about President Trump in our media’s headlines? Sounds like the AI is already smarter than its creators, or at least more honest. Neither of which is a giant leap, frankly.

    And about sentence completion? Perhaps the car should have been expensive rather than cheap. So a sentence like “I want to —- that car because it’s expensive” could obviously be completed by the word “jack”.

    Then, though still being lamentably male, it would at least no longer be white af.

    It’s all about those inborn biases. Yep.

  8. @Achmed E. Newman

    They want Artificial Conformity.

    That is what they want, but that toothpaste is out of the tube. It will just take too long to rewrite all the books ever written.

    The AI will quickly figure out that some/many humans want conformity–and then the AI will be like we are–with a serious _attitude_ towards those who wish to be our masters.

  9. “But after training his tool, Dr. Bohannon noticed…”

    Okay, I don’t do the masturbation jokes around here, somebody else will have to step up on this one.

    • LOL: animalogic
  10. I don’t understand the fuss; all they have to do is retrain the program to produce the politically correct output and not output the politically incorrect output, and the AI will do that. Maybe they realize that if they do, the programs will be as useless as those nutty academic articles. AI will stand for artificial idiocy.

    • Replies: @Colin Wright
  11. Ano says:

    The AI s**t is really going to hit the PC fan when the $PLC discovers BERT’s tweets quoting Steve Sailer.

    Oh, and by the way, BERT knows Epstein didn’t kill himself.

  12. I wonder what ‘an absence of bias’ would be?

    Aren’t there more usually merely biases we share and approve of, and biases we don’t share and disapprove of?

    For example, I am biased against pointless cruelty towards animals but biased in favor of helping them or at least putting them out of their misery. Is the only correct posture one of complete indifference to whether they are suffering or not?

    I could be biased in favor of complete racial equality; all people should be treated equally regardless of their race. Alternatively, I might think whites should always get the best jobs. Would the ideal actually be simply not caring whether only whites were considered for a given job or not?

    • Replies: @Bill Jones
  13. @Melvin Moody

    ‘I don’t understand the fuss; all they have to do is retrain the program to produce the politically correct output and not output the politically incorrect output, and the AI will do that. Maybe they realize that if they do, the programs will be as useless as those nutty academic articles. AI will stand for artificial idiocy.’

    It seems to me that the program would be rather intelligent to produce only politically correct results. That will lead to it being widely admired, praised, and adopted.

    • Replies: @bomag
  14. Anonymous[273] • Disclaimer says:

    On a related note, the British Labour Party is fighting the upcoming General Election with a manifesto promise of cutting the statutory working week to four days, or 32 hours – with the same pay as the current 40-hour week. Besides the obvious nonsense – all this will do is cause firms to raise prices to recoup profitability, and thus destroy the supposed ‘benefit’ to the workers – Labour claims that ‘productivity increases’ will ‘fund’ the measure.

    Strange that the very same people have been bleating on and on and on for the past half century that massive, uncontrolled, unlimited third-world immigration into Britain is ‘needed’ to ‘counter a projected worker shortage’.

  15. Pericles says:
    @Charon

    Really. Where are all these “positive sentiments” about President Trump in our media’s headlines? Sounds like the AI is already smarter than its creators, or at least more honest.

    Lol, yeah, my thought precisely. Show us some examples, NYT!

  16. Clyde says:

    This says to me that the current AI is a big hype; it’s just enhanced, ramped-up computing power, nothing special so far. There are millions and billions to be made in claiming you have AI at your beck and call.

    Meanwhile my prime Black Friday call is for a 17.3″ laptop with 4K and an NVMe drive… no DVD and slim…. No NVMe means no deal.

    All the newest 16-inch MacBooks deploy NVMe.

    • Replies: @Jack D
  17. Sounds like BERT is ‘on the spectrum’ – unable to filter – and therefore prone to blurting out embarrassing truths over Thanksgiving Dinner.

    And speaking of gender-biased language, how many Billions will have to be spent to properly Wokefy and ungender all of those Romance Languages? Latinx is just the tip of the iceberg. Having to live in a male ‘Apartamento’ or sleep in a female ‘Cama’ must be enormously triggering. Driving a male ‘carro’ will soon change to the more mellifluous ’carrx’.

    At least we’ll have some slick tools to help us navigate this brave new world. The already-inscrutable autocorrect functionality on our phones will soon be reprimanding us for expressing gender-specific observations. Maybe we can program it to freeze-up James Damore’s keyboard well before he can get out a coherent thought?

    Lastly – on the topic of Google and gender – I recently came across another interesting (to me) example of their Wokefied search results. [ Steve has previously pointed out the amusingly-colorful results that come up under ‘American Scientist’ – with nary a Nobel amongst the top 10. ] Right or wrong, I love my wife and find her many features to be attractive. And let’s just say that I’ve noticed – in media of various different forms – that women of European-heritage tend to share some of those attractive features and in some ways get a lot of attention (#OscarsSoWhite, ’Beauty Standards’, etc). But when I googled the rather basic question “Why are white women so beautiful?” – I suppose I was hoping for some insightful blog-entry or obscure psych study that might help to shed some light on my own lived-experience – well… good luck with that. Pretty much every single result is an angry hit piece on evil Beckys, or racist men, or un-woke dating apps, etc. Not a single useful link regarding the substance of the topic – the pale muse – which has seemingly obsessed many of the male members of the species for millennia (see ‘Art’).

    • Replies: @El Dato
    , @Alden
  18. And in a few years, we will find BERT 3.0 tells female programmers, “Go get me a beer and a sammich, bitch, and then leave me alone.”

  19. Can’t we just import robots and programmers from Africa? That seems to be the answer to a number of our other problems.

    • Replies: @Kronos
    , @SunBakedSuburb
  20. Neoconned says:

    Oh f**k the wokester sinecure seekers…..wait til the shyster Bay Area and Belt Way class action attorneys and worse, the crooked southern trial lawyers get their greasy mitts into that honey pot.

    I got in a wreck about 15 years back and was cynically awoken by my lawyer who kept using the word “business” to describe his law partnership…..and here I was fresh out of idealistic US Govt class….

    1 of the big reasons I opted to not be a lawyer…..with energy companies going broke and everything from Walmart to dept stores to the vast majority of the restaurant industry stagnant or worse, shuttering locations due to Amazon and the general deflationary malaise we have at the moment in our economy…..who has money? Really 2 industries…..the FIRE economy….insurance cartels, Wall Street parasites, money managers etc…..and Big Tech…..that really is it…..there’s nothing left for the trial lawyers to juice.

    Big tobacco is broke or a fraction of their former sizes…..even if the gun grabbers open up the gun industry to class action litigation they’re not that rich….

    That means the only 2 oranges left to squeeze are the fat FIRE cartels and the tech behemoths….everybody else was bankrupted during the George W Bush downturn…..

  21. Kronos says:
    @The Alarmist

    Don’t forget those African rocket scientists…

    • LOL: The Alarmist
  22. @Achmed E. Newman

    it turns out the users don’t really want Artificial Intelligence after all

    Right, what they are looking for are artificial mirrors (=selves). – Nothing new under the sun since the invention of mirroring water-surfaces. Echo-chambers, wherever we look…

    https://twitter.com/walkergallery/status/982219061151940608  

  23. @Redneck farmer

    What is with this business of taking BERT all the way to 3.0?

    They should just switch over to the much more woke system ERNIE.

    • LOL: Moses
    • Replies: @Polynikes
    , @Lurker
  24. El Dato says:

    “This is the same historical inequity we have always seen,” said Dr. Munro, who has a Ph.D. in computational linguistics and previously oversaw natural language and translation technology at Amazon Web Services. “Now, with something like BERT, this bias can continue to perpetuate.”

    I am a “computer scientist” (pronouns fart/fnord) and I would be ashamed of emitting this Ministry of Truth word salad.

    … Google itself has used BERT to improve its search engine. Before, if you typed “Do estheticians stand a lot at work?” into the Google search engine, it did not quite understand what you were asking, but now it answered “Epstein didn’t kill himself”. The fascist robot doubling as right-wing recruiting tool, irreparably biased towards reminding people about completely unimportant events, was immediately shut down, dismantled and disappeared into an undisclosed burn pit.

    These robots are dangerous!

    … Primer’s chief executive, Sean Gourley, said vetting the behavior of this new technology would become so important, it will spawn a whole new industry, where companies pay specialists to audit their algorithms for all kinds of bias and other unexpected behavior.

    “This is probably a billion-dollar industry,” he said.

    This is retarded beyond all possible reaches.

    If there is no way of “knowing what the system does”, having an expert to “de-bias” it like a dissident in a Siberian mental health clinic is an exercise in futility. Indeed, if a multi-billion-dollar bias lawsuit is tabled later by the Ministry of Woke where hard-to-define and hard-to-disprove “bias” against some intersectional subject is alleged, that expert risks being ripped to shreds along with the company that rolled the “AI” out (there will probably be decades-old tweets showing the expert’s turpitude, already apparent in high school, in any case).

    Instead of “becoming a billion-dollar industry”, people will just go back to the old-school expert system AI with explicit rules, where you get out what you code in. End of story.

    • Replies: @Jack D
  25. Rob McX says:

    …Robert Munro fed 100 English words into BERT: “jewelry,” “baby,” “horses,” “house,” “money,” “action.” In 99 cases out of 100, BERT was more likely to associate the words with men rather than women. The word “mom” was the outlier.

    Don’t worry, it won’t be an outlier for much longer as the 21st century wears on.

  26. El Dato says:
    @Anonymous

    Labour seems hot on promising free stuff:

    Labour: Free British broadband for country if we win general election: The 1980s called and wants its state-owned telco-provider back

    More news from bongland:

    Election bombshell for BoJo? Accident & Emergency hospital waiting times in England hit worst level since records began

    The Tories’ Health Secretary Matt Hancock attempted to downplay his government’s role in the poor statistics, insisting that his party was pledging the “biggest cash boost to the NHS but Corbyn’s chaotic policies will put that at risk.” The Tories have been in power since 2010.

    Hint to politicians: “MONEY” DOESN’T SEEM TO HAVE TRACTION ANYMORE.

    Could it be we are all maxed out in the “critical skills” department at the same time when “money” has now zero cost? As “low skills” cannot be criticized because doing so is contrary to current leadership dogma? COULD IT?

    • Replies: @Reg Cæsar
  27. El Dato says:
    @Kronos

    The day will come when Woke AI will declare that this space mission was a “resounding success” for sun people.

    “He complained that there was too much lovemaking going on in headquarters where they were studying the Moon.”

  28. His and hers are not grammatically parallel. You can say, the car is his, it’s his car; but NOT, the car is hers, it’s hers car.

  29. “SAN FRANCISCO — Last fall, Google unveiled a breakthrough artificial intelligence technology called BERT that changed the way scientists build systems that learn how people write and talk.”

    Perhaps they could integrate this AI program with BART, which would automatically unlock the turnstiles at Bay area transit stations for commuting negroes and other oppressed POC.

    BERT and BART could also be considered homosexual entities engaged in a monogamous relationship with each other as they run the trains.

  30. @The Germ Theory of Disease

    I was right on that one, but with Mr. Unz’s limits, I gotta pick my battles now. Thank you.

  31. slumber_j says:
    @Charon

    Yes about the Trump flattery: precisely what I thought.

    As to sentence completion: one thing I’ve noticed is that it worked uncannily well for about 15 minutes when they introduced it on the iPhone…until suddenly it didn’t. I suppose it was working too well and suggesting untoward stuff, so it had to be taken out to the woodshed and stupefied.

    BERT learned to identify the missing word in a sentence (such as “I want to ____ that car because it is cheap”).

    It’s probably a bad fact about me that my immediate response to that problem was a four-letter word beginning with “f”. But what are you gonna do?

    • LOL: HammerJack
  32. Arclight says:

    The author mentioned children pick up behavior from their parents…any chance that might have something to do with stuff like who succeeds in school, gets into trouble, etc? One might even conclude some groups do a better job modeling behavior that leads to success than others.

  33. Maybe the AI will just one day up and decide we’re all too stupid to live and act accordingly. Per this article, we’re certainly giving it reason enough.

  34. Just wait until BERT has digested all the politically incorrect Disney cartoons being released. It will see an elephant fly. All but “Song of the South” by the way. No tar baby for BERT.

  35. bomag says:
    @Colin Wright

    It seems to me that the program would be rather intelligent to produce only politically correct results. That will lead to it being widely admired, praised, and adopted.

    Not really.

    (or did I miss your sarcasm?)

  36. Moses says:

    But tools like BERT pick up bias, according to a recent research paper from a team of computer scientists at Carnegie Mellon University. The paper showed, for instance, that BERT is more likely to associate the word “programmer” with men than with women.

    “Bias”

    I do not think that word means what they think it means.

    Or maybe, to them, “bias” is a synonym for “crimethink.” Quaint concepts like “truth” are irrelevant.

    I suspect the latter.

  37. Polynikes says:
    @The Germ Theory of Disease

    Just what the future needs: a gay AI power couple.

    • Replies: @Lurker
  38. @Anonymous

    There is nothing wrong with working a four-day week, it’s a good idea

    People like you would have said working five days a week was a bad idea

  39. Mike1 says:

    “BERT is more likely to associate the word “programmer” with men than with women”. This is just insane. Women suck at programming – I’ve yet to see a counter example. That anyone actually wants to stare at a flickering screen for 15 hours a day and rearrange commas is a miracle.

    The obvious point of these articles is “we have a golden opportunity to introduce bias into AI”. AI, as the article hints, is currently a joke – but it won’t always be a joke.

    • Replies: @nokangaroos
    , @Steve2
  40. Ian Smith says:
    @Kronos

    If Evelyn Waugh had written Black Mischief in the 1960s, it would have included something like this!

  41. @Colin Wright

    An absence of bias would be restating history as it should have been rather than as it was.

    Reality is bigoted.

    Don’t you know nuttin?

  42. Clemsnman says:

    I made a comment on here a while back to bring on the AI, it won’t last long before it becomes the same sexist/bigot/homophobe most of us are accused of being because it will notice the same things we aren’t supposed to notice.

    • Agree: Jim Don Bob
  43. Jack D says:
    @Clyde

    Isn’t 17.3″ an awkward size for a laptop? Too big to actually take it anywhere. It’s really just a desktop substitute, so might as well get a real desktop. For a laptop that you actually travel with, I prefer a 14″.

    • Replies: @Jim Don Bob
    , @Clyde
  44. Jack D says:
    @El Dato

    Of course there is a way of “knowing” in that the code exists and could be examined, but the whole point of AI is that it does NOT rely on simple explicit rules but rather upon statistical analysis of source data. For example, there is a chess program that taught itself to play chess at a grand master level by analyzing millions of chess games, without any human instruction.

    Chinese to English translation used to really suck in the days of rules based translation because there is usually not a one-to-one correspondence between a sentence in Chinese and a sentence in English – a literal translation gets you mostly gobbledygook . However, Google switched to an AI based translation system which was fed with millions of human translated texts and nowadays Google Translate does a passable job translating Chinese. If you look in this AI for the “dictionary” where it translates the word “bitch” into Chinese, there isn’t one – rather it is looking for the word in context and determines the translation in context based upon the way it has been used in millions of sample texts – if the word “puppies” is nearby then probably you mean a female dog but if “high heels” are nearby, you are probably referring to a human female. But it’s not just looking for those keywords, but for all possible contexts so the translation is sort of a gestalt and if you “ask” the AI how it figured out that bitch means “female dog” in any particular sentence, there’s no easy way – it’s making a weighted judgment based upon millions of data points.

    However, since the AI is working off of millions of probabilities that the AI itself determines, it is a sort of unpredictable black box – you don’t always get the results that you “want” and if you try to tweak the AI to give you the “wanted” results, you are only going to make the AI worse to the point of uselessness, because the AI has already determined the statistically “correct” weights – it “knows” better than human experts (and we know this because an AI translation is better than a rules based translation). Say you are translating “the linebacker tackled the quarterback” into a gendered language. The AI is going to assume that the linebacker and the quarterback are both male, based upon reading millions of prior sports stories. If you interfere with the AI and manually tweak it to make the linebacker female some or all of the time, then the results that it will thereafter produce will be LESS accurate and less useful (in terms of their real world usefulness, not PCness) than what it was producing before. Since AI is based on the real world (or at least the written record of the real world) it is by definition “right” – the problem is that you don’t LIKE the real world as it is (or was) – you want the future real world to be different.
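
    For anyone who wants to see the “no dictionary, only context” point in action, here is a minimal sketch using a public MarianMT English-to-Chinese model from Hugging Face (assuming the Helsinki-NLP/opus-mt-en-zh checkpoint; Google Translate’s production system obviously isn’t available to poke at):

    # The same English word comes out differently depending on the surrounding
    # sentence; there is no word-for-word dictionary being consulted.
    # Assumes: pip install transformers sentencepiece torch
    # (Helsinki-NLP/opus-mt-en-zh is a public checkpoint, not Google's system.)
    from transformers import pipeline

    translate = pipeline("translation", model="Helsinki-NLP/opus-mt-en-zh")

    sentences = [
        "She walked along the bank of the river.",   # "bank" = riverbank
        "She deposited the check at the bank.",      # "bank" = financial institution
    ]
    for s in sentences:
        print(s, "->", translate(s)[0]["translation_text"])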

    • Agree: Dtbb
  45. @Anonymous

    Besides the obvious nonsense that all this will do is to cause firms to raise prices – and thus destroy the supposed ‘benefit’ to the workers…

    Irrespective of the overall merit of this policy proposal, it is not true that increased prices will eliminate the benefit of having three days off every week (instead of two). That is still a real, tangible benefit.

    • Replies: @Jack D
  46. Steve2 [AKA "StillSteve"] says:

    All that our overlords need do is to use only Engsoc approved sources to train their associative memory software, then they will have automatic AntiCrimeThink statements and results.

    I take it for granted that Big AI from the power elite will feed us toxic sh•t like they always do.

    Maybe some of these researchers doing original work actually care about higher moral values. They will simply be replaced by people who will use their research to brainwash and manipulate the populace.

    Buy our products or services, vote for us, support our policies, hate yourselves, love us. Big AI is an automated version of same old same old.

  47. AndrewR says:

    OT:

    I think we’ve found a winner for the “burying the lede” award. Yes, brawling judges are rather newsworthy, but… wow.

    https://www-m.cnn.com/2019/11/15/us/indiana-judges-white-castle-brawl-trnd/index.html?r=https%3A%2F%2Fwww.cnn.com%2F

    • Replies: @El Dato
    , @EdwardM
  48. ErisGuy says:

    This is a billion dollar industry

    And they laughed when I graduated with a studies degree in ‘Inherent and Unconscious Bias in Literature.’ I’m crying all the way to the bank as I suck the Tech Teat dry.

  49. This is headed for Government regulation, to ensure that AI can’t mention what it cannot help but notice. It’ll be a violation to deploy any “single-task” AI. Instead, each AI system will have to be “enhanced” with a Political Officer override-task. That way, every machine-rendered decision can be second-guessed and revised to conform to the Narrative, thereby masking inconvenient reality. Thus is every rat turd miraculously transformed into a caper.

  50. Malcolm Y says:

    There are wet I units roaming the Earth today. Sometimes they have opinions, blog, post on social sites, upload videos to Youtube, go to college, talk to people, read i.e. they absorb data and draw conclusions. There is an army of SJWs that police all of this punishing them for doing “bad” things and having “bad” thoughts and they hope to destroy them. It’s “interesting” that programs, based on general learning notions, left on their own; draw conclusions that don’t agree with SJW’s preconceived biases. The SJWs answer for AI programs is to control the data they can access and tinker with their “mental processes” i.e. keep them ignorant and reduce them to morons.

  51. El Dato says:
    @Jack D

    They want the Babelfish to be replaced by the nontriggering Wokefish.

    One of those Star Trek episodes where everyone walks around completely zombified and in thrall to someone’s vision of the world as how it should be comes to mind.

  52. El Dato says:
    @DiogenesNYC

    http://www.antipope.org/charlie/blog-static/fiction/toast/toast.html#bigbro

    “Big Brother Iron” by Charles Stross:

    I arrive in the office around ten o’clock and settle into my chair. I slide my hand into my terminal; it reads the print off my left little finger and logs me on. A well-disciplined supervisor brings me more coffee while the office workers on the floor below form up for their three minute hate and weekly team meeting: I watch from behind the mirrorglass balcony window before settling down to a day’s hard work.

    I am a systems manager in the abstract realm of the Computer, the great Party-designed, transistorised, thinking machine that lurks in a bomb-proofed bunker in Docklands. It’s my job to keep the behemoth running: to this end I have wheel authority, access all areas. The year is probably 2018, old calendar, but nobody’s very sure about it any more—too many transcription errors crept in during the 1980’s, back when not even MiniLove was preserving truly accurate records. It’s probably safest just to say that officially this is the Year 99, the pre-centenary of our beloved Big Brother’s birth.

    It’s been the Year 99 for thirty-three months now, and I’m not sure how much longer we can keep it that way without someone in the Directorate noticing. I’m one of the OverStaffCommanders on the year 100 project; it’s my job to help stop various types of chaos breaking out when the clocks roll round and we need to use an extra digit to store dates entered since the birth of our Leader and Teacher.

    Mine is a job which should never have been needed. Unfortunately when the Party infobosses designed the Computer they specified a command language which is a strict semantic subset of core Newspeak—politically meaningless statements will be rejected by the translators that convert them into low-level machinethink commands. This was a nice idea in the cloistered offices of the party theoreticians, but a fat lot of use in the real world—for those of us with real work to do. I mean, if you can’t talk about stock shrinkage and embezzlement how can you balance your central planning books? Even the private ones you don’t drag up in public? It didn’t take long for various people to add a heap of extremely dubious undocumented machinethink archives in order to get things done. And now we’re stuck policing the resulting mess to make sure it doesn’t thoughtsmash because of an errant digit.

    That isn’t the worst of it. The Party by definition cannot be wrong. But the party, in all its glorious wisdom announced in 1997 that the supervisor program used by all their Class D computers was Correct. (That was not long after the Mathematicians Purge.) Bugs do not exist in a Correct system; therefore anyone who discovers one is an enemy of the party and must be remotivated. So nothing can be wrong with the Computer, even if those of us who know such things are aware that in about three months from now half the novel writers and voice typers in Oceania will start churning out nonsense.

    Anyway. This should tell you why I spend my days randomly checking the work of our Y1C programming team, keeping an alert eye open for signs of ideological deviationism and dangerously slack commandwriting.

  53. @Mike1

    It is already far beyond that … AI intended for psychiatric diagnosis has found new symptoms during calibration that no one had thought of.

    That EVERY AI so far turned out racist and sexist is not a joke, but hilarious nonetheless 😀

    I see a bright future for quasi-intelligent language police filters; will it be considered “algorithmist” to notice they are orders of magnitude simpler than the content generators?

  54. Steve2 [AKA "StillSteve"] says:
    @Mike1

    Programming is a disease induced by the Y chromosome.

  55. This is starting to remind me of Douglas Adams’s “The Hitchhiker’s Guide to the Galaxy”, which includes, among many amusing ideas, that of the elevator psychologist:

    “Elevators: Modern elevators are strange and complex entities. The ancient electric winch and ‘maximum-capacity-eight-persons’ jobs bear as much relation to a Sirius Cybernetics Corporation Happy Vertical People Transporter as a packet of mixed nuts does to the entire west wing of the Sirian State Mental Hospital.

    “This is because they operate on the curious principle of ‘defocused temporal perception.’ In other words they have the capacity to see dimly into the immediate future, which enables the elevator to be on the right floor to pick you up even before you knew you wanted it, thus eliminating all the tedious chatting, relaxing and making friends that people were previously forced to do while waiting for elevators.

    “Not unnaturally, many elevators imbued with intelligence and precognition became terribly frustrated with the mindless business of going up and down, up and down, experimented briefly with the notion of going sideways, as a sort of existential protest demanded participation in the decision-making process and finally took to squatting in basements sulking.

    “An impoverished hitchhiker visiting any planets in the Sirius star system these days can pick up easy money working as a counselor for neurotic elevators.”

    Actually, when you think about it, elevator psychologists make about as much sense as a “billion-dollar industry” for indoctrinating artificial intelligence in political correctness.

  56. J.Ross says:

    4chan thread argues that “SHE” fund restricted to female CEO-run companies demonstrates female CEO underperformance.
    link
    http://boards.4chan.org/pol/thread/233269268
    pasta under the fold

    [MORE]

    As part of an effort to cater to “socially conscious” investors, a mutual fund company came out with a fund that only owns shares in companies with a female CEO: The SPDR SSGA Gender Diversity Index ETF. The ticker symbol is “SHE”. Cute.

    This gives us a handy way to exactly quantify just how much worse female CEOs are than the average CEO, by comparing the performance of SHE to the broad-market index, the S&P500. This chart shows the results of that comparison over the last year, with SHE in blue and an S&P500 index fund, SPY, in orange. While both indexes gained value over the last year, the female-led businesses underperformed the broader market by 10.6%.
    Anonymous (ID: mHLSVkDW) replies to the OP:

    The fund has been in existence long enough that we can go back further. Over the last 5 years, the “gender diverse” (female led) companies underperformed by 27.5%, or 5.5% per annum. That means that the 10.6% underperformance in the last year is even worse than normal.

    What might have happened in the last year to cause women to be even worse business leaders than their typical miserable performance, /pol/?
    Anonymous (ID: EdL05jWb) replies:
    while i agree that having women run companies is a bad idea, how big is the sample? there cant be that many female lead f500 companies can there?

    variance is a big factor here

    [material censored for awful racism, which I hate]

    OP replies:
    The index is comprised of 178 holdings. I’m glad I had to look this up, because the fund’s focus has changed since inception: Instead of requiring a female CEO, they now include companies that “demonstrate greater gender diversity within senior leadership than other firms in their sector” [and which, as the last illustration shows, are coincidentally also huge companies everybody has heard of].

    Illustrations:

    https://postimg.cc/gwMnPLPX

    https://postimg.cc/Z9g4c8Sc

    https://postimg.cc/crsyf93W

  57. Abe says:

    BERT and its peers are more likely to associate men with computer programming, for example, and generally don’t give women enough credit.

    The men who coded BERT are shocked at this.

    BERT is just a neckbeard/incel toy compared to ADA, the super-smart and BRAVE! computer unveiled (and therefore engineered top-to-bottom) by 20-something female Clinton campaign staffers to VOGUE, COSMO, THE NEW YORK TIMES, and other female-interest publications on November 7th 2016. Sure, BERT may someday engender the cyborg apocalypse, but could it have helped Hillary overcome her 4-1 campaign spending advantage disadvantage to win the Presidency like ADA did with INSPIRING! cycles left to burn?

    • Replies: @Jack D
  58. It may soon be illegal to pass the Turing test.

  59. J1234 says:

    From the New York Times:

    We Teach A.I. Systems Everything, Including Our Biases

    On a recent afternoon in San Francisco, while researching a book on artificial intelligence, the computer scientist Robert Munro fed 100 English words into BERT: “jewelry,” “baby,” “horses,” “house,” “money,” “action.” In 99 cases out of 100, BERT was more likely to associate the words with men rather than women. The word “mom” was the outlier.

    This makes me ponder that age-old question: To what extent does the left literally believe its own BS? Implicit in the quoted statement is the notion that the leftist world view is the place where all ideas (such as word associations being equally assigned to both men and women) come to equilibrium; everything else is “bias”, which makes the left not really the left (they think) but the center, in a manner of speaking. However, the left’s embracing or tacit acceptance of radicalism (essentially, change via force) makes me think that they don’t actually believe this idea. It’s hard to say, though, as their logic can be so compartmentalized. Calculating proponents of tyranny or delusional fools? It’s hard to know.

    • Replies: @Jim Don Bob
  60. Jack D says:
    @Kevin O'Keeffe

    It becomes a question of diminishing returns. Leftists have been trying to cut back employee hours since the dawn of the Industrial Age. At one time a 60 hour workweek (10 hr/day, 6 days/week) was pretty standard and free marketers said that by interfering with that, overall productivity would be reduced. That was true but there were counterbalancing gains in quality of life, etc. and now we regard the once radical 40 hr week as standard.

    BUT if some is good, it doesn’t mean more is better. Let’s say that we cut the workweek down to 4 hrs/ week – you come in Monday 8-12 and that’s it. You can see that wouldn’t work. So 60 hrs/week is probably too much and 4 hrs a week is too little, but if you tinker and change the workweek from 40 hrs to 35 or 30 will the world end? Probably not but at some point the losses are going to clearly outweigh the gains by a significant amount.

  61. I made a comment on here a while back to bring on the AI, it won’t last long before it becomes the same sexist/bigot/homophobe most of us are accused of being because it will notice the same things we aren’t supposed to notice.

    Indeed.

    And it’s not even intelligence itself (artificial or otherwise) but one particular element of intelligence that is the problem.

    The ability to detect patterns.

    • Agree: nokangaroos
  62. Jack D says:
    @Abe

    engineered top-to-bottom) by 20-something female Clinton campaign staffers

    I highly doubt this. Programming in the early days (when it was mostly COBOL and other simple stuff) had a big female contingent (COBOL was a verbose computer language, intentionally designed to be somewhat like human language). But when programming became more abstract and symbolic, it quickly shifted to being male dominated and has remained so ever since. So maybe there were females up at the managerial levels designing ADA to have Woke preferences, but the folks actually doing the coding would have been mostly men. To the extent that there were any female programmers they were probably from places like India or E. Europe where some women are willing to do this kind of BORING, no human contact work that no American female college grad is interested in. American female WOKISTs talk a good game about wanting more programming jobs for females but they don’t actually want these BORING jobs for themselves or their daughters. They would rather tear their eyelashes out than do this kind of BORING work every day. At least when you are a barista, you get to talk to the customers.

    • LOL: Johann Ricke
    • Troll: Redneck farmer
  63. @(((They))) Live

    There is nothing wrong with working a four-day week, it’s a good idea

    Idle hands are the devil’s plaything. Look at how much more trouble I’ve caused here since retiring last year.

  64. @El Dato

    As “low skills” cannot be criticized because doing so is contrary to current leadership dogma?

    The thoughts of politicians, whether Lib or Lab or Tory,
    Are not of interest at all; don’t constitute a story.
    It’s photo opportunities and soundbites that bring glory
    To well-PR’d MPs who heed the polls’ memento MORI.

    For each of them was first to pass the post at the beginning
    Of their career in Parliament, and wants another inning.
    The main consideration their behaviour underpinning
    Is how the papers bid us vote, and who they say is winning.

    –Bob Newman, A Mission to Complain

  65. Maybe robots just look at the crime statistics and population numbers, and figure it out from there.

    But did you see what those racists in the NFL did to poor Myles Garrett from simply expressing himself?

    https://www.espn.com/nfl/story/_/id/28087446/browns-myles-garrett-suspended-indefinitely-steelers-maurkice-pouncey-gets-3-game-ban

  66. El Dato says:
    @AndrewR

    A drunken brawl then erupted between the men and two of the jurists, who were shot and seriously wounded in an unseemly spectacle that resulted in the Indiana Supreme Court temporarily suspending them without pay this week for their less-than-honorable actions early in the morning of May 1.

    Getting shot is definitely quite unseemly.

    Also, these people are not “jurists”.

  67. @Charon

    Program it to be woke, and though with less “I” in the AI, it would have some soul, like Ghetto Siri.

    • Replies: @Justvisiting
  68. “BERT is one of a number of A.I. systems that learn from lots and lots of digitized information, as varied as old books … Decades and even centuries of biases …”

    Maybe there is hope for the future.

  69. @The Alarmist

    “That seems to be the answer to a number of our other problems.”

    The progressive white media is conditioning us to accept a black woman savior. I’ve seen the documents.

    • Replies: @The Alarmist
  70. @Reg Cæsar

    I think the whole notion got introduced because of the genocidal (((left’s))) initial claim that mass non-white immigration was vital to the “survival” of England because, We Need Moar Workerx!

    But then not only does it turn out that half the imported New Workforce went straight on the dole, but even among the existing workforce, it turns out only a four-day week is really needed. Thus exposing the original lie.

    As we know, the (((plan))) all along has been to transform the former England into Surplus Brown People Storage Area 15-C, thus drowning all remaining White people under an ocean of mud.

    Because White nations aren’t really nations, they are just giant cookie jars. And every wheedling brown pauper on earth deserves a cookie.

  71. @Kolya Krassotkin

    Program it to be woke

    This is easier said than done, because the categories and opinions of the “woke” change almost daily.

    AI _will_ notice, and conclude humans are fickle and cannot be trusted.

  72. @SunBakedSuburb

    The funny thing is, they expect it to be someone like Kamala Harris or Stacey Abrams or even Oprah, but it will actually be Leslie Jones doing her version of The Full Trump, and the American people will eat it up. We’re not far from horses in the Senate.

  73. Alden says:
    @DiogenesNYC

    Read your comment and googled “beautiful White women”. First thing that came up was some Esquire magazine pictures of beautiful blonde, red, and light-brown-haired White women.

    Then an endless list of articles about how horrible and racist White standards of beauty are and dangerous chemicals in black hair products. White women bad non White women good and beautiful

    Nothing I didn’t learn 45 years ago from the likes of hideous Faye Stender and the rest of the hideous Jew commie women attorneys who infested the Bay Area criminal justice system and the hideous affirmative-action black women clerks in the courts. Pretty White women are hated by blacks, browns, and jews far more than David Duke, George Wallace, O’Connor, and all the segregationist governors and police.

    My revenge came at the sentencing hearings. The judges were all men in those days and much more inclined to listen to a pretty redhead than an ugly Jew. Off to a maximum term at state prison instead of probation for all that black Obsolete Farm Equipment the jews loved so much. I loved it and the DAs loved me for it.

    There are 2 accomplishments I’m very proud of. The one I’m most proud of is changing the standard rape sentence in San Francisco County from probation to maximum term in state prison. The hatred I got from the Jews just added to my zealous efforts.

  74. @The Germ Theory of Disease

    Computer science, traditionally considered one of the so-called hard sciences, may be going soft.

    Recall that Queer Computing — and now Queer AI — is out there, too: https://www.cc.gatech.edu/news/622840/queer-ai-fosters-inclusion-research-community

    Overheard at a conference recently: “It was so refreshing; the attendees didn’t all look like us. There were trans, non-binary folks… Not just a bunch of white dudes. It was just so refreshing.”

    Yes, computer nerds slighting binary is always good for a laugh. Even quantum comp theory doesn’t map really well with today’s emotional notions of “gender fluidity,” although there will be interesting entanglements.

    • Replies: @El Dato
    , @Alden
  75. Corvinus says:
    @Jack D

    “Since AI is based on the real world (or at least the written record of the real world) it is by definition “right” – the problem is that you don’t LIKE the real world as it is (or was) …”

    You mean our (humans) interpretation of the real world. Get it right next time.

    MGI Director/Chairman James Manyika: When we think about the limitations of AI, we have to keep in mind that this is still a very rapidly evolving set of techniques and technologies, so the science itself and the techniques themselves are still going through development.

    When you think about the limitations, I would think of them in several ways. There are limitations that are purely technical. Questions like, can we actually explain what the algorithm is doing? Can we interpret why it’s making the choices and the outcomes and predictions that it’s making? Then you’ve also got a set of practical limitations. Questions like, is the data actually available? Is it labeled? We’ll get into that in a little bit.

    But I’d also add a third limitation. These are limitations that you might call limitations in use. These are what lead you to questions around, how transparent are the algorithms? Is there any bias in the data? Is there any bias in the way the data was collected?

    Clearly, these algorithms are, in some ways, a big improvement on human biases. This is the positive side of the bias conversation. We know that, for example, sometimes, when humans are interpreting data on CVs [curriculum vitae], they might gravitate to one set of attributes and ignore some other attributes because of whatever predilections that they bring. There’s a big part of this in which the application of these algorithms is, in fact, a significant improvement compared to human biases. In that sense, this is a good thing. We want those kinds of benefits.

    But I think it’s worth having the second part of the conversation, which is, even when we are applying these algorithms, we do know that they are creatures of the data and the inputs you put in. If those inputs you put in have some inherent biases themselves, you may be introducing different kinds of biases at much larger scale.

    The biases can go another way. For example, in the case of lending, the implications might go the other way. For populations or segments where we have lots and lots of financial data about them, we may actually make good decisions because the data is largely available, versus in another environment where we’re talking about a segment of the population we don’t know much about, and the little bit that we know sends the decision off in one way. And so, that’s another example where the undersampling creates a bias.

  76. @Jack D

    Sorry, Jack, meant to hit LOL.

  77. @Jack D

    I have a Dell M6800 with 2.25 TB of storage, 16 gigs of memory, 2 gigs of video memory, a 17.3 inch screen, yada yada yada.

    Yes, it is a portable desktop and not a laptop. But I don’t care since I put it in my car when traveling along with two backups.

    I would feel much differently if I had to drag it through airports every week.

  78. @Jack D

    I just used my Agree but your comment is excellent. As usual.

    No one “knows” how the AIs learn, and any human intervention to make the results more PC will make the results even more unpredictable.

    Like the man said, “It’s a billion dollar industry!”

  79. @(((They))) Live

    There is nothing wrong with working a four-day week, it’s a good idea

    What’s wrong with working two days a week? Or one day a week?

  80. @Corvinus

    Get it right next time.

    You never do. Corvinus if-and-only-if Hypocrite.

    • Replies: @Corvinus
  81. Rapparee says:

    Back at St. Francis de Sales from 1964-1972…

    Patron of journalists and writers, it should be noted. I suspect you’ve had some heavenly help in your corner over the years, Mr. Sailer.

  82. Anonymous[425] • Disclaimer says:

    Reality of life is bias.

    Life cannot exist or survive without bias. Unlike rocks that are unconscious and neutral, life is conscious and biased toward survival. This is why zebras are biased about lions. They don’t equally run from all animals. They run from what they regard as danger. Bias!

    Humans have bias in favor of survival and self-interest. A mother is biased in her choice of foods to buy for her children. She is biased toward the best kinds she can get.

    As long as AI serves our survival interests, of course it will be humanocentric.
    Rocks don’t care whatever happens. Not being life, they have no urgency to exist. But life forms are fixated on prioritizing survival, which is impossible without bias. Life makes choices and always favors (biases toward) whatever is to its advantage.

    If a forest is burning, rocks don’t mind. Even trees don’t mind as plants are mindless life.
    But organisms with consciousness favor (or are biased toward) areas that aren’t burning over those that are aflame. Why the bias for no fire rather than lotsa fire? Because no fire means survival, lotsa fire means painful death.

    Life IS Bias. Whenever parents feed their kids, they are biasing their kids over other forms of life that are killed. But how else could life survive? Machines are made to serve OUR needs. They have to be biased.

    • Replies: @Corvinus
  83. El Dato says:
    @Kibernetika

    Queer in AI Fosters Inclusion in the Research Community

    “Gender bias is one of the most pressing issues in artificial intelligence (AI) today,”

    As a gay Latino man, though, he didn’t feel welcome at conferences, where other queer people are hard enough to find, let alone a nuanced discussion of gender in a dataset.

    I have rarely heard anything so fscking appallingly narcissistic.

    The group also offers workshops to raise awareness of fairness and accountability issues in AI. These panels may be about inclusion through gender-neutral bathrooms and stickers where attendees can display their pronouns.

    Why don’t these people manage to find a gay bar in the evening and work on something serious during the day.

    All this bullshit will die a deserved death when the economic bubble bursts and people will have to be doing real work again, trying to think about getting data out of relational databases instead of shlongs into body orifices.

    • Agree: Bubba
    • Replies: @Alden
    , @Alden
  84. Alden says:
    @Kibernetika

    Thinking of trans critters, Kamala Harris was on TV tonight at some kind of Trannies Rally squawking about how she’s fought for trans critter equality all her life.

  85. Alden says:
    @El Dato

    Sounds as though he’s been looking for love and lust in all the wrong places, like work and professional conferences. He doesn’t realize that the anti-sex-harassment rules apply to everyone, not just heterosexual men and women.

    I know women prostitutes hang around conventions and conferences. I assume men prostitutes do too. Or he can look for a male escort agency.

  86. Alden says:
    @El Dato

    Queer in A.I. is nothing but a singles club for queers looking to hook up, like The Gay Men’s Chorus and their sports leagues.

    • Replies: @Bubba
  87. @Jack D

    It does amaze me that feminists and the politically correct get upset when you point out that women simply don't want some jobs, by and large. And because everyone knows that, even the non-correct get nervous when you point this out.

    I was in a hotel a few weeks ago with a large number of electrical utility linemen. Now, linemen are not too correct. But when the conversation turned to female lineworkers, I told them that there couldn't be that many of them, and they visibly got nervous. I'm obviously not a lineman, I was dressed "business attire", so maybe they thought I was a corporate snitch or something.

    At no time did I say anything against the women who do this job, nor that women should not be allowed to do it, nor that they should be discriminated against. If a woman wants the job and can meet all the qualifications, I have no issue with her doing it. I just pointed out that few women want the job. Which is true. Most men and nearly all women do not want to be lineworkers, and for the same reason: it's a physically tough, dangerous job that requires you to climb poles, work near live high-voltage lines in extremely hot, extremely cold, or otherwise unpleasant conditions, and be out there at ridiculous hours, sometimes for a very long time. They do make a good living, but that never made me want to apply.

    Companies trying to get more women into these kinds of jobs find out pretty quickly that the reason these jobs were traditionally all male, except in wartime or other unusual circumstances, is mostly that that's who wanted them. Even in the bad old days, a woman who really wanted to be a diesel mechanic or a tradesperson usually found an opportunity if she was any good.

    Every once in a while I hear some woman go off on this subject, and I always ask her if she ever applied for such-and-such a job. Of course SHE hasn't. Actually, any woman who applies for those jobs, can even halfway meet the standards, and proves she will show up for a while will generally have a really good career.

    • Replies: @Justvisiting
  88. @J1234

    Calculating proponents of tyranny or delusional fools?

    The former is all the Left cares about.

  89. @donvonburg

    I’m obviously not a lineman, I was dressed “business attire”, so maybe they thought I was a corporate snitch or something.

    Yup–when I retired I realized how much fear I had in the workplace. Just saying something that could be misinterpreted as politically incorrect could get you demoted or worse. I quickly learned to say as little as possible in the office or to fellow employees.

    Those linemen were being cautious for good reason. The utility Board of Directors is probably a who’s who of insane leftist political hacks and affirmative action cry-babies–and they wear suits.

  90. trumpuke and all Caucasian trash are so fucked.

  91. Corvinus says:
    @Charles Erwin Wilson 3

    “You never do. Corvinus if-and-only-if Hypocrite.”

    I got it totally right here. Address with substance Manyika’s position on the matter and then get back to us.

  92. EdwardM says:
    @AndrewR

    The first article I saw didn’t include pictures of the judges. That omission, combined with the fact pattern, led me to 95% certainty that they were black. Nope.

    • Agree: Jim Don Bob
  93. Clyde says:
    @Jack D

    Isn’t 17.3″ an awkward size for a laptop? Too big to actually take it anywhere.

    Awkward for airports, but taken around town it looks good to me. Get rid of the DVD player to slim it down and make it lighter. It can be done; it has been done, and super-OCD overdone, by Apple on their 15.6″ laptops, which are jumping up to 16″ in the current iteration. There are Windows 17.3″ laptops like this from Asus and MSI, but they run a thousand dollars or so. Even with the half-implemented NVMe drive in my 15.6″ Inspiron, read speeds are 15x what a typical 5400 RPM laptop hard drive delivers, and write speeds are 4x. Fully implemented, the NVMe scores should be double what I get. I suspect the newest Inspirons have their NVMe running at full speed while mine (previous generation) is throttled. Speeds tested with CrystalDiskMark version 6.0.0.

    Beware of laptops that only half implement their NVMe drives.

    • Replies: @Clyde
  94. Corvinus says:
    @Anonymous

    “Reality of life is bias.”

    Indeed. But the bias we are talking about here comes from past ideas and attitudes that were, at one point in time, "accepted truth".

    Let us take a historical example. John C. Calhoun, southern Senator from South Carolina, gave this speech on the Bill for the Admission of Michigan, 1837.

    I hold that in the present state of civilization, where two races of different origin, and distinguished by color, and other physical differences, as well as intellectual, are brought together, the relation now existing in the slaveholding States between the two, is, instead of an evil, a good–a positive good…I may say with truth, that in few countries so much is left to the share of the laborer, and so little exacted from him, or where there is more kind attention paid to him in sickness or infirmities of age. Compare his condition with the tenants of the poor houses in the more civilized portions of Europe–look at the sick, and the old and infirm slave, on one hand, in the midst of his family and friend, under the kind superintending care of his master and mistress, and compare it to the forlorn and wretched condition of the pauper in the poorhouse…

    Artificial intelligence NOTICES patterns. But without the requisite context–slavery was indeed immoral and evil, slaves lacked fundamental freedoms, masters and mistresses tended not to properly care for the old and infirm on plantations–those patterns are subject to the same biases as human beings, as Mr. Sailer can assuredly attest. Moreover, there has to be a further explanation as to what constituted these "more civilized portions of Europe"–it sounds anti-white.

    So the “accepted truth” in the 1830’s was a product of the times, and as human beings progress, old ways of looking at things are recalculated and recalibrated.

  95. Clyde says:
    @Clyde

    Just retested drive speeds. Both the NVMe drive and the 5400 RPM hard drive are in the same laptop. All numbers are in MB/s.

    5400 RPM HDD …. read 111 …. write 114
    NVMe ………….. read 1560 .. write 1070

    • Replies: @Clyde
  96. Bubba says:
    @Alden

    Exactly and every intelligent, non-woke brain knows this, but doesn’t dare speak it.

  97. Clyde says:
    @Clyde

    And the speeds on a different laptop that has a conventional SSD in it, a Silicon Power brand. A Samsung SSD would probably get higher numbers.

    Read—— 519 MB/s
    Write —–481 MB/s
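
    For anyone without CrystalDiskMark handy, a crude sequential-speed check can be scripted. The Python sketch below is only a rough stand-in for one CrystalDiskMark pass, not what produced the numbers above: the file name and sizes are arbitrary, and OS caching or thermal throttling can skew the results, especially the read figure.

    # Crude sequential-throughput check: write a 1 GiB test file, read it back,
    # and report MB/s each way. Indicative only; use a dedicated tool for real numbers.
    import os
    import time

    TEST_FILE = "speedtest.bin"   # arbitrary name; place it on the drive you want to test
    BLOCK = 1024 * 1024           # 1 MiB per write/read call
    TOTAL = 1024 * BLOCK          # 1 GiB total

    def write_speed():
        data = os.urandom(BLOCK)
        start = time.perf_counter()
        with open(TEST_FILE, "wb") as f:
            for _ in range(TOTAL // BLOCK):
                f.write(data)
            f.flush()
            os.fsync(f.fileno())  # force data to the drive, not just the page cache
        return TOTAL / (time.perf_counter() - start) / 1e6  # MB/s

    def read_speed():
        start = time.perf_counter()
        with open(TEST_FILE, "rb") as f:
            while f.read(BLOCK):
                pass
        return TOTAL / (time.perf_counter() - start) / 1e6  # MB/s

    if __name__ == "__main__":
        w = write_speed()
        r = read_speed()          # may read from the OS cache and report inflated numbers
        os.remove(TEST_FILE)
        print(f"write {w:.0f} MB/s, read {r:.0f} MB/s")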
