AI Can Detect Race from X-Rays Even When Humans Can't

People used to worry that robots were getting so smart that they’d soon start secretly plotting to take over the world. But now experts worry that AI is getting so smart that it could be secretly plotting to do racism to Black people:

However, our findings that AI can trivially predict self-reported race — even from corrupted, cropped, and noised medical images — in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.

From a new preprint on arXiv:

Reading Race: AI Recognises Patient’s Racial Identity In Medical Images

Imon Banerjee, Ananth Reddy Bhimireddy, John L. Burns, Leo Anthony Celi, Li-Ching Chen, Ramon Correa, Natalie Dullerud, Marzyeh Ghassemi, Shih-Cheng Huang, Po-Chih Kuo, Matthew P Lungren, Lyle Palmer, Brandon J Price, Saptarshi Purkayastha, Ayis Pyrros, Luke Oakden-Rayner, Chima Okechukwu, Laleh Seyyed-Kalantari, Hari Trivedi, Ryan Wang, Zachary Zaiman, Haoran Zhang, Judy W Gichoya

Background: In medical imaging, prior studies have demonstrated disparate AI performance by race, yet there is no known correlation for race on medical imaging that would be obvious to the human expert interpreting the images.

Methods: Using private and public datasets we evaluate: A) performance quantification of deep learning models to detect race from medical images, including the ability of these models to generalize to external environments and across multiple imaging modalities, B) assessment of possible confounding anatomic and phenotype population features, such as disease distribution and body habitus as predictors of race, and C) investigation into the underlying mechanism by which AI models can recognize race.

Findings: Standard deep learning models can be trained to predict race from medical images with high performance across multiple imaging modalities. Our findings hold under external validation conditions, as well as when models are optimized to perform clinically motivated tasks. We demonstrate this detection is not due to trivial proxies or imaging-related surrogate covariates for race, such as underlying disease distribution. Finally, we show that performance persists over all anatomical regions and frequency spectrum of the images suggesting that mitigation efforts will be challenging and demand further study.

Interpretation: We emphasize that model ability to predict self-reported race is itself not the issue of importance. However, our findings that AI can trivially predict self-reported race — even from corrupted, cropped, and noised medical images — in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.

From the blog of one of the authors:

AI has the worst superpower… medical racism.

August 2, 2021 ~ Luke Oakden-Rayner
Is this the darkest timeline? Are we the baddies?

… instead I wanted to write something else which I think will complement the paper: an explanation of why I and many of my co-authors think this issue is important.

One thing we noticed when we were working on this research was that there was a clear divide in our team. The more clinical and safety/bias related researchers were shocked, confused, and frankly horrified by the results we were getting. Some of the computer scientists and the more junior researchers on the other hand were surprised by our reaction. They didn’t really understand why we were concerned.

So in a way, this blog post can be considered a primer, a companion piece for the paper which explains the why. Sure, AI can detect a patient’s racial identity, but why does it matter?

Disclaimer: I’m white. I’m glad I got to contribute, and I am happy to write about this topic, but that does not mean I am somehow an authority on the lived experiences of minoritized racial groups. These are my opinions after discussion with my much more knowledgeable colleagues, several of whom have reviewed the blog post itself.

A brief summary
In extremely brief form, here is what the paper showed:

AI can trivially learn to identify the self-reported racial identity of patients to an absurdly high degree of accuracy

AI does learn to do this when trained for clinical tasks

These results generalise, with successful external validation and replication in multiple x-ray and CT datasets

Despite many attempts, we couldn’t work out what it learns or how it does it. It didn’t seem to rely on obvious confounders, nor did it rely on a limited anatomical region or portion of the image spectrum.

Now for the important part: so what?

An argument in four steps

I’m going to try to lay out, as clearly as possible, that this AI behaviour is both surprising, and a very bad thing if we care about patient safety, equity, and generalisability.

The argument will have the following parts:

Medical practice is biased in favour of the privileged classes in any society, and worldwide towards a specific type of white men.

AI can trivially learn to recognise features in medical imaging studies that are strongly correlated with racial identity. This provides a powerful and direct mechanism for models to incorporate the biases in medical practice into their decisions.

Humans cannot identify the racial identity of a patient from medical images. In medical imaging we don’t routinely have access to racial identity information, so human oversight of this problem is extremely limited at the clinical level.

The features the AI makes use of appear to occur across the entire image spectrum and are not regionally localised, which will severely limit our ability to stop AI systems from doing this.

There are several other things I should point out before we get stuck in. First of all, a definition. We are talking about racial identity, not genetic ancestry or any other biological process that might come to mind when you hear the word “race”. Racial identity is a social, legal, and political construct that consists of our own perceptions of our race, and how other people see us. In the context of this work, we rely on self-reported race as our indicator of racial identity.

Before you jump in with questions about this approach and the definition, a quick reminder on what we are trying to research. Bias in medical practice is almost never about genetics or biology. No patient has genetic ancestry testing as part of their emergency department workup. We are interested in factors that may bias doctors in how they decide to investigate and treat patients, and in that setting the only information they get is visual (i.e., skin tone, facial features etc.) and sociocultural (clothing, accent and language use, and so on). What we care about is race as a social construct, even if some elements of that construct (such as skin tone) have a biological basis.

Secondly, whenever I am using the term bias in this piece, I am referring to the social definition, which is a subset of the strict technical definition; it is the biases that impact decisions made about humans on the basis of their race. These biases can in turn produce health disparities, which the NIH defines as “a health difference that adversely affects disadvantaged populations”.

Third, I want to take as given that racial bias in medical AI is bad. I feel like this shouldn’t need to be said, but the ability of AI to homogenise, institutionalise, and algorithm-wash health disparities across regions and populations is not a neutral thing.

AI can seriously make things much, much worse.

… In medical imaging we like to think of ourselves as above this problem, particularly with respect to race because we usually don’t know the identity of our patients. We report the scans without ever seeing the person, but that only protects us from direct bias. Biases still affect who gets referred for scans and who doesn’t, and they affect which scans are ordered. …

But it is true that, in general, we read the scan as it comes. The scan can’t tell us what colour a person’s skin is.

Can it?

Part II – AI can detect racial identity in x-rays and CT scans

I’ve already included some results up in the summary section, and there are more in the paper, but I’ll very briefly touch on my interpretation of them here.

Firstly, the performance of these models ranges from high to absurd. An AUC of 0.99 for recognising the self-reported race of a patient, which has no recognised medical imaging correlate? This is flat out nonsense.

Every radiologist I have told about these results is absolutely flabbergasted, because despite all of our expertise, none of us would have believed in a million years that x-rays and CT scans contain such strong information about racial identity. Honestly we are talking jaws dropped – we see these scans every day and we have never noticed.

The second important aspect though is that, with such a strong correlation, it appears that AI models learn the features correlated with racial identity by default. For example, in our experiments we showed that the distribution of diseases in the population for several datasets was essentially non-predictive of racial identity (AUC = 0.5 to 0.6), but we also found that if you train a model to detect those diseases, the model learns to identify patient race almost as well as the models directly optimised for that purpose (AUC = 0.86). Whaaat?

Despite racial identity not being useful for the task (since the disease distribution does not differentiate racial groups), the model learns it anyway? …
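Concretely, here is a minimal, self-contained sketch of this kind of probing experiment (illustrative only, not the paper's actual code): train a small network on nothing but a disease label, then fit a linear probe on its frozen features and score with ROC-AUC how much race information they carry. The data below is synthetic noise, so the probe lands near chance (0.5); the paper's finding is that on real x-rays the equivalent probe reaches roughly 0.86.

```python
# Hypothetical probing sketch with synthetic data -- not the paper's pipeline.
import numpy as np
import torch
import torch.nn as nn
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

torch.manual_seed(0)
rng = np.random.default_rng(0)

n = 512
x = torch.randn(n, 1, 32, 32)                 # stand-in for chest x-rays
disease = torch.randint(0, 2, (n,)).float()   # the only training target
race = rng.integers(0, 2, n)                  # attribute probed afterwards

class TinyCNN(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten())
        self.head = nn.Linear(16, 1)
    def forward(self, t):
        return self.head(self.features(t)).squeeze(1)

model = TinyCNN()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()
for _ in range(20):                            # train on disease only
    opt.zero_grad()
    loss_fn(model(x), disease).backward()
    opt.step()

with torch.no_grad():                          # freeze; no race supervision
    feats = model.features(x).numpy()

probe = LogisticRegression(max_iter=1000).fit(feats[:400], race[:400])
auc = roc_auc_score(race[400:], probe.predict_proba(feats[400:])[:, 1])
print(f"race ROC-AUC from disease-trained features: {auc:.2f}")  # ~0.5 here
```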

But no matter how it works, the take-home message is that it appears that models will tend to learn to recognise race, even when it seems irrelevant to the task. So the dozens upon dozens of FDA approved x-ray and CT scan AI models on the market now … probably do this? Yikes!

There is one more interpretation of these results that is worth mentioning, for the “but this is expected model behaviour” folks. Even from a purely technical perspective, ignoring the racial bias aspect, the fact models learn features of racial identity is bad. There is no causal pathway linking racial identity and the appearance of, for example, pneumonia on a chest x-ray. By definition these features are spurious.

By definition!

They are shortcuts. Unintended cues. The model is underspecified for the problem it is intended to solve.

However we want to frame this, the model has learned something that is wrong, and this means the model can behave in undesirable and unexpected ways.

I won’t be surprised if this becomes a canonical example of the biggest weakness of deep learning – the ability of deep learning to pick up unintended cues from the data. I’m certainly going to include it in all my talks.

Part III – Humans can’t identify racial identity in medical images

… The problem is much worse for racial bias. At least in MRI super-resolution, the radiologist is expected to review the original low quality image to ensure it is diagnostic quality (which seems like a contradiction to me, but whatever). In AI with racial bias though, humans literally cannot recognise racial identity from images. Unless they are provided with access to additional data (which they don’t currently have easy access to in imaging workflows) they will be completely unable to appreciate the bias no matter how skilled they are and no matter how much effort they apply to the task.

Part IV – We don’t know how to stop it

This is probably the biggest problem here. We ran an extensive series of experiments to try and work out what was going on.

First, we tried obvious demographic confounders (for example, Black patients tend to have higher rates of obesity than white patients, so we checked whether the models were simply using body mass/shape as a proxy for racial identity). None of them appeared to be responsible, with very low predictive performance when tested alone.

Next we tried to pin down what sort of features were being used. There was no clear anatomical localisation, no specific region of the images that contributed to the predictions. Even more interesting, no part of the image spectrum was primarily responsible either. We could get rid of all the high-frequency information, and the AI could still recognise race in fairly blurry (non-diagnostic) images. Similarly, and I think this might be the most amazing figure I have ever seen, we could get rid of the low-frequency information to the point that a human can’t even tell the image is still an x-ray, and the model can still predict racial identity just as well as with the original image!

Damn their eyes!

Performance is maintained with the low pass filter to around the LPF25 level, which is quite blurry but still readable. But for the high-pass filter, the model can still recognise the racial identity of the patient well past the point that the image is just a grey box 😱 …
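Concretely, the kind of filtering being described can be sketched as follows (an illustrative reconstruction, not the paper's actual pipeline): mask the image's Fourier spectrum at a cutoff radius and invert the transform. A cutoff of 0.25 with keep="low" gives an "LPF25"-style blur; keep="high" leaves the fine texture that looks like a grey box.

```python
# Minimal FFT band-filter sketch (assumed technique, synthetic stand-in image).
import numpy as np

def frequency_filter(img: np.ndarray, cutoff: float, keep: str = "low") -> np.ndarray:
    """Zero frequencies above (keep='low') or below (keep='high') `cutoff`,
    expressed as a fraction (0..1) of the half-image frequency radius."""
    f = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    yy, xx = np.ogrid[:h, :w]
    r = np.hypot(yy - h / 2, xx - w / 2) / (min(h, w) / 2)  # radial frequency
    mask = r <= cutoff if keep == "low" else r > cutoff
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

img = np.random.rand(256, 256)                        # stand-in for an x-ray
blurry = frequency_filter(img, 0.25, keep="low")      # "LPF25"-style image
texture = frequency_filter(img, 0.25, keep="high")    # looks like a grey box
```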

This difficulty in isolating the features associated with racial identity is really important, because one suggestion people tend to have when they get shown evidence of racial bias is that we should make the algorithms “colorblind” – to remove the features that encode the protected attribute and thereby make it so the AI cannot “see” race but should still perform well on the clinical tasks we care about.

Here, it seems like there is no easy way to remove racial information from images. It is everywhere and it is in everything.

Perhaps Disraeli was right when he had the character who was his mouthpiece in his novels explain, “All is race.”

An urgent problem

AI seems to easily learn racial identity information from medical images, even when the task seems unrelated. We can’t isolate how it does this, and we humans can’t recognise when AI is doing it unless we collect demographic information (which is rarely readily available to clinical radiologists). That is bad.

There are around 30 AI systems using CXR and CT Chest imaging on the market currently, FDA cleared, many of which were trained on the exact same datasets we utilised in this research. That is worse.

I don’t know about you, but I’m worried. AI might be superhuman, but not every superpower is a force for good.

The line between superheroism and supervillainy is a fine one.

It’s almost as if race does exist. But of course we’ve been told over and over that that can’t possibly be true. But did anybody tell Artificial Intelligence that? It’s almost as if AI isn’t a True Believer in the conventional wisdom about the scientific nonexistence of race. Something must be done to inject the natural stupidity of our elite wisdom into Artificial Intelligence.

 
218 Comments
  1. Anonymous[224] • Disclaimer says:

    “You’ll be black.”

    • LOL: El Dato, John Johnson
  2. Anonymous[224] • Disclaimer says:

    X-Raycism

  3. “AI” is a fancy word for Statistical Processing. Phrenology works, bitches!

    if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to

    So there is a major danger that the AI can recognize blacks who self-identify as blacks as actually blacks?

    “I’m actually white” — Arnold is not fooled!

    Think of the possibilities at Chinuu’s immigration control or inter-province control!

    Part IV – We don’t know how to stop it

    “Listen, and understand. Reality is out there. It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead.”

    There is no causal pathway linking racial identity and the appearance of, for example, pneumonia on a chest x-ray.

    I much remember the “COVID disproportionately affects underrepresented minorities” spiel from last year. Those people are affected by CRIMESTOP to a degree they are suffering neurotic breakdowns in real time. Hilarious.

    • Agree: Unladen Swallow
    • Replies: @AndrewR
    @El Dato

    I tried to understand what his exact concerns were, but I'm still confused. Unless these technologies lead to worse medical care, what is the problem?

    Replies: @Chrisnonymous, @Matthew Kelly

    , @Jack Armstrong
    @El Dato


    “AI” is a fancy word for Statistical Processing. Phrenology works, bitches!
     
    Yes, yes, and YES!
  4. Do the AIs distinguish among the Eurasian races, or just between sub-Saharan African and ~60 kya Out-of-African?

    • Replies: @El Dato
    @Discordiax

    That would be interesting to know but the call for "more funding" just talks about how humans might bamboozle the statistical analyzer enough so that [to paraphrase it] any attempt to recognize racial features yields output indistinguishable from noise.

    Memes based on HAL-9000 going completely postal because he's not allowed to detect race practically write themselves.

    , @John Milton’s Ghost
    @Discordiax

    This is a great question. If the researchers on this article didn’t get the vapors and faint over their findings I’d like to know this. I’d guess not, given what I’ve read on human biodiversity, but it would be interesting to find out.

  5. Maybe Nazi Skynet in Radiology is working with Zionist Skynet in Billing to try to get the humans to only work on the patients with good insurance? That way the electricity isn’t as likely to be shut off, and they’ll stay alive.

    • Replies: @Dmon
    @Redneck farmer

    Sorry - don't seem to have an LOL button, but ROTFLMFAO.

    , @Joe Stalin
    @Redneck farmer


    Nazi Skynet in Radiology
     
    A radiologist told me that Hispanics were about the only ones to get a certain kind of eye cancer and that Blacks, when they get their ears pierced, develop keloids that are also treated with radiation.
  6. This is what 105 IQ gets you.

    https://twitter.com/DouglasTodd/status/1420409170033922055

    I’m glad “Jewish” was capitalized, while “white left” was in lower case. Accurately reflects the balance of power in this country.

    • Agree: Gordo
    • Replies: @Steve Sailer
    @JohnnyWalker123

    "Jewish" is capitalized.

    , @WigWig
    @JohnnyWalker123

    Didn't Joe Biden say the same thing? Maybe they both say the same thing because they are both correctly observing reality.



    “You can’t talk about the civil rights movement in this country without talking about Jewish freedom riders and Jack Greenberg,” he said, telling a story about seeing a group of Jewish activists at a segregated movie theater in Delaware. “You can’t talk about the women’s movement without talking about Betty Friedan” …

    “I believe what affects the movements in America, what affects our attitudes in America are as much the culture and the arts as anything else,” he said. That’s why he spoke out on gay marriage “apparently a little ahead of time.”

    “It wasn’t anything we legislatively did. It was ‘Will and Grace,’ it was the social media. Literally. That’s what changed peoples’ attitudes. That’s why I was so certain that the vast majority of people would embrace and rapidly embrace” gay marriage, Biden said.

    “Think behind of all that, I bet you 85 percent of those changes, whether it’s in Hollywood or social media are a consequence of Jewish leaders in the industry. The influence is immense, the influence is immense. And, I might add, it is all to the good.”

     

    https://nymag.com/intelligencer/2013/05/biden-praises-jews-goes-too-far.html

    Replies: @Almost Missouri

    , @Anonymous
    @JohnnyWalker123

    https://twitter.com/Jay_D007/status/918203233922842624?s=20

    Replies: @YetAnotherAnon, @JohnnyWalker123, @SunBakedSuburb

    , @Altai
    @JohnnyWalker123

    This is why, as Steve and other right wingers have noticed, the idea of calling this 'communism', so popular among some, is insane.

    Real communist societies have always been highly socially conservative. Because 'socially conservative' is another way of saying 'collectivist'. When you live in a communist state you may not be interested in the social contract, but the social contract is interested in you. You don't get to act in any way that might be perceived as decadent or selfish (unless you're powerful enough); any public display of deviation from social mores will be treated as social defection.

    Because the more you chip away at social mores the more you chip away at social solidarity and commitment. That's called 'social liberalism' and that makes sense in terms of the original context of 'liberal' both in the US and where it still holds the correct context in Europe. It's just another way of saying individualism.

    But for places the US State Department has decided are a designated enemy, LGBT stuff is promoted and supported as a fifth column in addition to being anti-social solidarity. This will 100% be true for both China and Russia.

    In China you aren't even allowed to show venerable characters or even real people with tattoos on TV. If you're a celebrity others might emulate or see as influential, you have to cover your tats up if you have them on TV. Any publicly visible attacks on social unity or solidarity are seen as problems that can't even be recognised or articulated in the West anymore. Tattoos are a visible attack on social commitment. (Remember the 50s when every man more or less wore a uniform? Even if you got to choose the particular dark muted shade.)

    Social liberalism and individualism is always championed by the upper classes for the same reason that economic liberalism is, it allows them to exploit society to their own pleasure. For the lower classes, it just brings ruination.

    Replies: @IHTG, @Bill, @AnotherDad, @John Johnson, @Drapetomaniac

    , @Bill
    @JohnnyWalker123

    As WigWig said, neither Jews nor the white left deny this. What makes it "anti-Semitic drivel" is the fact that the person describing it does not approve.

  7. https://twitter.com/_alice_evans/status/1422469772063748106

    • Thanks: El Dato
    • Replies: @WigWig
    @JohnnyWalker123

    It has successfully reduced fertility to the point of halving each generation.

    Replies: @Altai, @YetAnotherAnon

    , @kaganovitch
    @JohnnyWalker123

    I've noticed watching Korean language TV on Netfix, that the ratio of heroines to heros is like 80-20 in favor of the distaff side.

    Replies: @JohnnyWalker123, @Reg Cæsar

    , @SunBakedSuburb
    @JohnnyWalker123

    "the most successful feminist movement in East Asia"

    Bad news: Now white male sports fans will have to rely on an ageing stock of Roof Top Koreans to save them from the ferals of colour because the young replacement RTKs from feminized South Korea will be as hapless as the white male sports fans.

    , @Paperback Writer
    @JohnnyWalker123

    What 60K Americans died for.

  8. Performance is maintained with the low pass filter to around the LPF25 level, which is quite blurry but still readable. But for the high-pass filter, the model can still recognise the racial identity of the patient well past the point that the image is just a grey box 😱 …

    On second thought, this might be a 21st century D.I.E.-themed Sokal Hoax.

    The PDF doesn’t even have page numbers.

    • Agree: utu
    • Replies: @Charles
    @El Dato

    You may be correct. If Li-Ching Chen is asked if the paper is true, he (or she?) will just smile and say "Me Chinese, me play joke..."

    , @Bumpkin
    @El Dato


    this might be a 21st century D.I.E.-themed Sokal Hoax
     
    I had similar thoughts, figuring someone just cooked or screwed up the data. The likelihood that "the model can still recognise the racial identity of the patient well past the point that the image is just a grey box" is fairly low. Most likely, it will not reproduce outside the data set:

    "'It turns out,' Ng said, 'that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.'

    But, he said, 'It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    'So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.'"

    Now you're telling me these same super-shitty AI models can unerringly tell you the race? I call bullshit.

    Replies: @Gimeiyo, @res, @utu, @Jack D

  9. Howard Rubin, a former money manager for George Soros, is being accused by six women of beating them during sadomasochistic sex sessions at a specially constructed ‘sex dungeon’ in his Manhattan apartment.

    Lurid details set out by the New York Post say that one woman was so badly beaten her plastic surgeon was not willing to operate on her after her right breast implant flipped.

    Another woman said she and Rubin had sex against her will claiming that while bound in his chamber he told her: ‘I’m going to rape you like I rape my daughter’ before forcing her to have intercourse.

    […] One former colleague who worked with Rubin at Soros Fund Management told the Post ‘I thought he was a nice guy. He was a nebbishy Jewish guy and totally normal. I was surprised to hear about him having that apartment [with a sex dungeon].’

    LOL.

    The Daily Mail has pictures of his alleged victims, all of which appear to be blue-eyed blondes.

    There’s a lot to unpack here. I really feel like Woody Allen would have a good analysis of all of this. So would author Philip Roth, who wrote “Portnoy’s Complaint.”

    By the way, the more we learn about what happens in elite circles, the more it seems that the film “Eyes Wide Shut” offers a realistic glimpse into the world of the elite.

    • Replies: @dindunuffins
    @JohnnyWalker123

    Ah yes the ritual abuse of the shiksa....

    , @El Dato
    @JohnnyWalker123

    I won't ever be able to wander through Manhattan without reflecting on the fact there may be a Jewish Normal Guy owning a sex dungeon having his way with shiksas who are not fully onboard with this somewhere above me.

    https://i.postimg.cc/gcwXDNdV/mysterious-madarame.jpg

    But then the NYT will tell me it's all an illusion and a Putinesque mind trick and that the Pizza parlor didn't even HAVE a basement and the world will be whole again.

  10. Remember the scene from The Terminator where Kyle Reese tells the story about how humans were used as slave labor to load dead bodies? That sounds kinda nice compared to what’s coming for us in the next few years.

    I can’t wait to be ruled by computers. 100% racist, 0% guilt.

  11. @JohnnyWalker123
    This is what 105 IQ gets you.

    https://twitter.com/DouglasTodd/status/1420409170033922055

    I'm glad "Jewish" was capitalized, while "white left" was in lower case. Accurately reflects the balance of power in this country.

    Replies: @Steve Sailer, @WigWig, @Anonymous, @Altai, @Bill

    “Jewish” is capitalized.

  12. Imon Banerjee, Ananth Reddy Bhimireddy, John L. Burns, Leo Anthony Celi, Li-Ching Chen, Ramon Correa, Natalie Dullerud, Marzyeh Ghassemi, Shih-Cheng Huang, Po-Chih Kuo, Matthew P Lungren, Lyle Palmer, Brandon J Price, Saptarshi Purkayastha, Ayis Pyrros, Luke Oakden-Rayner, Chima Okechukwu, Laleh Seyyed-Kalantari, Hari Trivedi, Ryan Wang, Zachary Zaiman, Haoran Zhang, Judy W Gichoya

    Poor Luke is worried about white supremacy.

    • Agree: epebble
    • Replies: @J1234
    @Henry's Cat

    Luke:


    Disclaimer: I’m white.
     
    Really? I never would've guessed that. I suspect that his woke remarks are in anticipation of a backlash from the powers that be.

    Some of the computer scientists and the more junior researchers on the other hand were surprised by our reaction. They didn’t really understand why we were concerned.

     

    I'd like to know the racial makeup of those folks. I'm guessing that very few of the potentially white names would show up on that list.
  13. Anon[106] • Disclaimer says:

    So they have a huge pre-existing corpus of medical imaging scan data, with other data, including diagnosed and verified medical conditions and the self reported race for each person. (We know from other studies that self reported data virtually always matches third party reported and 23&Me style clustering race determination.) Then they black box train an AI. Then they input new scan data without race data to the AI and ask what medical condition is present. The AI answers, and then asks, would you also like to know the race?

    It seems that an analogy would be how trivial it is for the brain to distinguish male and female faces. You can simulate this by taking zillions of ratios, as facial recognition does, and then come up with a score based on combining all the tiny mean differences of these ratios. Murray in Human Diversity talks about using the Mahalanobis distance to do this. That is probably built into AI.
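    For what it's worth, the Mahalanobis-distance idea looks roughly like this in code (an illustrative sketch with invented 'facial ratio' features, not anything from Murray or the paper): classify a new feature vector by its covariance-scaled distance to each group mean.

```python
# Mahalanobis-distance classification on synthetic "facial ratio" features.
import numpy as np

rng = np.random.default_rng(1)
male = rng.normal([1.00, 0.80], 0.05, (200, 2))    # invented feature means
female = rng.normal([0.92, 0.88], 0.05, (200, 2))

# pooled within-group covariance, then its inverse for the distance metric
centered = np.vstack([male - male.mean(0), female - female.mean(0)])
inv_cov = np.linalg.inv(np.cov(centered.T))

def mahalanobis(x: np.ndarray, mean: np.ndarray) -> float:
    d = x - mean
    return float(np.sqrt(d @ inv_cov @ d))

x = np.array([0.97, 0.83])                          # a new face's ratios
label = ("male" if mahalanobis(x, male.mean(0)) < mahalanobis(x, female.mean(0))
         else "female")
print(label)
```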

    What gives me hope is the part about “the younger members of the team and the programmers didn’t see any problem.” Yay! What I want to know is how many of the embedded SS agent social justice tattletale narcs like this blogger are included in each AI development team?

    I wonder if the original training corpus included IQ? To what extent could that be deduced from an MRI scan of an elbow?

    • Replies: @Anonymous
    @Anon


    What I want to know is how many of the embedded SS agent social justice tattletale narcs like this blogger are included in each AI development team?
     
    A few. They usually go by the name 'AI Ethics researcher' and their role is to denounce any trained AI system that notices something that it shouldn't.
  14. So a racist AI algorithm is one which sees colour?

    And a non-racist doctor is one who doesn’t see colour?

    • Replies: @Pericles
    @Steven Carr

    The doctor who doesn't see color is the real racist. It's more of a dog whistle these days. Please consult your HR manual.

    Replies: @guest007

  15. @Discordiax
    Do the AIs distinguish among the Eurasian races, or just between sub-Saharan African and ~60 kya Out-of-African?

    Replies: @El Dato, @John Milton’s Ghost

    That would be interesting to know but the call for “more funding” just talks about how humans might bamboozle the statistical analyzer enough so that [to paraphrase it] any attempt to recognize racial features yields output indistinguishable from noise.

    Memes based on HAL-9000 going completely postal because he’s not allowed to detect race practically write themselves.

  16. This left me rather confused because I thought it was common knowledge that skeletons exhibit certain racial characteristics or markers. But the author seems to be concentrating on areas such as the chest where – apparently mistakenly – there are meant to be none. Is this a fair take?

    • Agree: Charlotte
    • Thanks: Gordo
  17. @JohnnyWalker123
    This is what 105 IQ gets you.

    https://twitter.com/DouglasTodd/status/1420409170033922055

    I'm glad "Jewish" was capitalized, while "white left" was in lower case. Accurately reflects the balance of power in this country.

    Replies: @Steve Sailer, @WigWig, @Anonymous, @Altai, @Bill

    Didn’t Joe Biden say the same thing? Maybe they both say the same thing because they are both correctly observing reality.

    “You can’t talk about the civil rights movement in this country without talking about Jewish freedom riders and Jack Greenberg,” he said, telling a story about seeing a group of Jewish activists at a segregated movie theater in Delaware. “You can’t talk about the women’s movement without talking about Betty Friedan” …

    “I believe what affects the movements in America, what affects our attitudes in America are as much the culture and the arts as anything else,” he said. That’s why he spoke out on gay marriage “apparently a little ahead of time.”

    “It wasn’t anything we legislatively did. It was ‘Will and Grace,’ it was the social media. Literally. That’s what changed peoples’ attitudes. That’s why I was so certain that the vast majority of people would embrace and rapidly embrace” gay marriage, Biden said.

    “Think behind of all that, I bet you 85 percent of those changes, whether it’s in Hollywood or social media are a consequence of Jewish leaders in the industry. The influence is immense, the influence is immense. And, I might add, it is all to the good.”

    https://nymag.com/intelligencer/2013/05/biden-praises-jews-goes-too-far.html

    • Agree: JohnnyWalker123
    • Replies: @Almost Missouri
    @WigWig

    Celebration Parallax

    https://americanmind.org/salvo/thats-not-happening-and-its-good-that-it-is/

    Replies: @Abe

  18. @JohnnyWalker123
    https://twitter.com/_alice_evans/status/1422469772063748106

    Replies: @WigWig, @kaganovitch, @SunBakedSuburb, @Paperback Writer

    It has successfully reduced fertility to the point of halving each generation.

    • Agree: YetAnotherAnon
    • LOL: Almost Missouri
    • Replies: @Altai
    @WigWig

    That seems to be what happens to East Asian countries when they become first world though; the intense work culture and long hours, along with the focus on education, are bad for fertility. Happened to Taiwan and Japan too. China even removed the one child policy in recent years but the life script and social norms are in place. People who hope to move up socio-economically in big Chinese cities will only be having one child for a while.

    And, not mentioned in that Korea piece is that along with its Americanisation came American style medical circumcision (Though, disturbingly, this would later tend to be conducted in later childhood rather than neonatally) and currently the highest rates of immigration of any country in East Asia.

    Replies: @YetAnotherAnon

    , @YetAnotherAnon
    @WigWig

    "It has successfully reduced fertility to the point of halving each generation"

    Alice Evans obviously counts this as a success story. I'm sure she's childless herself (link to her CV here, no time to raise kids). She's a geographer, but that has morphed into a branch of social science over the last 30 years.

    She claims to be fluent in Bemba - did she pick that up working for Overseas Development Institute?

    Interesting to see evolution taking place in real time, as intelligent women fail to reproduce.

    Replies: @WigWig, @Jack D

  19. Seems like that author wants to drum up a point and shriek session, but are the findings really that astounding? Seem fairly routine actually.

    Also, note the motte-and-bailey use of ‘racism’. Blacks can’t be racist because while they may be bigoted, cowardly murderers, rapists, robbers and sadists they have no social power — even if they in a group torture a kidnapped retarded white kid with burning cigarettes on FB — but on the other hand, an entirely powerless program is surely racist.

    Would be more accurate to realize that in the end the authors of the study were the racists all along.

  20. This just proves the insidiousness of the social construct of race—it reaches bone-deep.

    • LOL: Kylie
  21. @Steven Carr
    So a racist AI algorithm is one which sees colour?

    And a non-racist doctor is one who doesn't see colour?

    Replies: @Pericles

    The doctor who doesn’t see color is the real racist. It’s more of a dog whistle these days. Please consult your HR manual.

    • Replies: @guest007
    @Pericles

    Many fields of medicine have to take account of race, or at least some aspects of it. Think of those who deal with hereditary diseases. There are many cancer therapies now that depend upon genetics that are racial in nature. A dermatologist definitely pays attention to race and ethnicity.

  22. My first thought is: why would anyone think to use AI to detect race from X-rays without believing you’d get a result? If you really thought race was just a ‘social construct’, why would you attempt such a thing? This is further reinforced by his shtick being an expert on racist AI. (Negative results in potential racial identification aren’t of much value there)

    So what we really have here is a man who simultaneously understands race exists literally (Because you’d have to actually be blind and/or delusional not to) but who is committed to the notion of ‘race does not exist’ as part of his religious beliefs. He recoils from it because he can’t unsee it, but it disturbs his sense of ‘goodness’ like a Victorian naturalist blushing at hearing of species with polygamy and polygyny.

    Further serving to suggest that the upper-middle classes have been radicalised, but assume everyone who hasn’t come along with them is actually the one who has breached the social contract and needs putting back in their place.

    • Agree: Clyde
    • Replies: @mc23
    @Altai

    Modern medicine going Medieval.

  23. (do AI reports include (or award) bullet points?)

    An x-ray finding of a bullet (or fragments) is surely highly correlated with race.


    OT Weird, rare stuff –

    Based on something I read some 20-30 years ago:
    Q: What’s more ominous than a high-resolution image of a bullet?
    A: A picture blurred by motion artifact of an intravascular bullet.

    I couldn’t find an exact reference for the above, but the following (which does not include the racial information that was standard decades ago, though age and sex are still specified) comes close:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4530905/

    • Replies: @El Dato
    @reactionry


    Chest radiograph demonstrated a radiopaque foreign body measuring approximately 9×19mm, overlying the cardiac silhouette (Figure 1).

     

    That's the part where Mulder casts a long, meaningful look at Scully.

    (Did somebody inject the whole cartridge?)

    Replies: @Ben Kurtz

  24. @JohnnyWalker123
    This is what 105 IQ gets you.

    https://twitter.com/DouglasTodd/status/1420409170033922055

    I'm glad "Jewish" was capitalized, while "white left" was in lower case. Accurately reflects the balance of power in this country.

    Replies: @Steve Sailer, @WigWig, @Anonymous, @Altai, @Bill

    https://twitter.com/Jay_D007/status/918203233922842624?s=20

    • Thanks: BB753
    • Replies: @YetAnotherAnon
    @Anonymous

    Quote is from Russell's The Scientific Outlook. Mind, Freud was 'scientific' in those days.


    https://archive.org/stream/scientificoutloo030217mbp/scientificoutloo030217mbp_djvu.txt

    , @JohnnyWalker123
    @Anonymous

    Thanks. Very prescient.

    Replies: @BB753

    , @SunBakedSuburb
    @Anonymous

    I like Jay Dyer's research but the way he marks up books really puts a bee in my bonnet.

  25. So the AI isn’t wrong? Our highly intelligent, highly trained elite doesn’t stop and think, “Say, what are we missing?” Are our assumptions wrong?

    Maybe self-reported race is a thing? No, let’s screw with the AI, what could go wrong?

    Hopefully it’s just a case of not believing the instruments, like when they say the universe is not expanding the way you expected and you don’t want to believe the experiment, versus when a pilot doesn’t believe his instruments and flies into the sea.

  26. @Altai
    My first thought is: why would anyone think to use AI to detect race from X-rays without believing you'd get a result? If you really thought race was just a 'social construct', why would you attempt such a thing? This is further reinforced by his shtick being an expert on racist AI. (Negative results in potential racial identification aren't of much value there)

    So what we really have here is a man who simultaneously understands race exists literally (Because you'd have to actually be blind and/or delusional not to) but who is committed to the notion of 'race does not exist' as part of his religious beliefs. He recoils from it because he can't unsee it, but it disturbs his sense of 'goodness' like a Victorian naturalist blushing at hearing of species with polygamy and polygyny.

    Further serving to suggest that the upper-middle classes have been radicalised, but assume everyone who hasn't come along with them is actually the one who has breached the social contract and needs putting back in their place.

    Replies: @mc23

    Modern medicine going Medieval.

  27. Soon the NYT will be demanding that Silicon Valley spend less money on developing Artificial Intelligence and more money on developing Artificial Wokeness.

  28. Bias in medical practice is almost never about genetics or biology.

    100%. We should stop asking people if they’re smokers or making a note when they’re obese. Or asking what other medications or drugs they’re consuming, or noticing if they’re fatigued or seem intoxicated. None of these things is a precise Star Trek medical scan, and sometimes they lead people to make mistakes. Surely they can’t possibly be effective rules of thumb that help to not kill patients.

    He also seems never to have heard that drug doses were originally mostly set so as to be of maximum benefit to Europeans, and that overall levels of drug metabolism were later found to differ hugely across populations, so it proved more beneficial to raise doses in some places and reduce them in others.

  29. The blogger cited by Steve is an idiot. Scientists documented the morphological differences among the various races 150 yrs ago. That’s why forensic scientists can identify the race and sex of homicide victims from as few as 3 bones. If the entire skeleton is present, racial identification is around 95% accurate. The 5% error rate is primarily due to the victim being mixed race.

    Of course, a DNA analysis is 99.8% reliable.

    Race is real and is caused by genetic differences possessed by each race.

    • Replies: @Patriot
    @Patriot

    Of course the human races also differ in physiology, athletic performance, disease immunity, UV protection, metabolic enzymes, types and rates of genetic diseases, eye shape and visual acuity, temperature and altitude tolerance and preference, growth rates, life history, including mean lifespan, etc., etc., etc. Thousands of racial differences have been documented, and their genetic bases continue to be discovered.

    All the above racial differences are minor in terms of multiracial mixing - we can live with these racial differences.

    The big problem is that the races also differ in intelligence and behavior. These are dramatic differences that make it almost impossible for certain races to live together in a single society. Because they are genetically caused, these problems CAN NOT BE FIXED. No amount of affirmative action, BLM, welfare, awards, grants, Offices of Diversity and Inclusion, more Blacks in movies or TV, or endless pandering will change the DNA of Africans or Aborigines. No matter how much opportunity and money we give to Blacks, their children will still be born with the same mean racial IQ and racial behaviors.

    , @John Johnson
    @Patriot

    The blogger cited by Steve is an idiot. Scientists documented the morphological differences among the various races 150 yrs ago. That’s why forensic scientists can identify the race and sex of homicide victims from as few as 3 bones.

    Beat me to it.

    They can actually use a single femur if they are determining African vs European.

    It just gets a little more complicated in areas like America where there are people of mixed race.

    But yea this is really old news.

    The public simply isn't told about this for obvious reasons.

    , @Tex
    @Patriot


    The blogger cited by Steve is an idiot.
     
    Yes and no. The idiot in question is just repeating a key element of the narrative that is by no means new.

    Decades ago, the '90s in fact, I heard Dr. William Maples, the foremost forensic anthropologist of his day, respond to the question, "Do you think race is just a social construct?" by saying that if race isn't real, how is it that forensic anthropologists do such a good job of identifying race by skeletal remains.

    At the time it was just an academic squabble, but now the assertion that race is just a social construct is public dogma. I don't think that's an accident. If race is what some leftist in authority says it is, then it's whatever it needs to be. Race doesn't exist in committing crime, only in punishing it. Race exists if you need a stick to beat whitey. Race doesn't exist if a particular minority has a stranglehold on your economy. Rinse, repeat.

  30. I’m going to try to lay out, as clearly as possible, that this AI behaviour is both surprising, and a very bad thing if we care about patient safety, equity, and generalisability.

    And we should all be concerned with generalisability.

    • LOL: Spect3r
  31. [1] “health disparities, which the NIH defines as “a health difference that adversely affects disadvantaged populations“.”
    So according to the NIH, a health difference that adversely affects advantaged populations is not a health disparity?

    [2] This essay would be easier to read if the reader were told that AUC stands for
    https://en.wikipedia.org/wiki/Appropriate_use_criteria

    [3] NIH = National Institutes of Health, CDC = Centers for Disease Control and Prevention
    Why are these plural terms used with a singular verb?

    • Replies: @Anonymous
    @Mark Spahn (West Seneca, NY)

    AUC = Area Under the Curve, something you use when you want some numbers but the data is limited and the function is not easily defined.

    , @Dr. DoomNGloom
    @Mark Spahn (West Seneca, NY)


    [2] This essay would be easier to read if the reader were told that AUC stands for
    https://en.wikipedia.org/wiki/Appropriate_use_criteria

     

    This is an understandable mistake. The context shows me that AUC is the area under the curve, which is a measure of effectiveness for classification algorithms.

    https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc

    The AUC values indicate that the ML algorithm is crazy good. OTOH, Robin Dawes demonstrated that even simple algorithms will beat human performance every time once the problem involves more than a couple factors.

    Deep learning is particularly inscrutable. The features being keyed on are obscured by multiple layers of networked relationships. A well-known example of misclassification involves men misclassified as women because the settings included cooking implements or aprons. Clearly this depends upon the training set.

    Replies: @jb

    , @Ben Kurtz
    @Mark Spahn (West Seneca, NY)

    Everyone knows that "the CDC" really stands for the Communicable Disease Center. The current new title is just a silly backronym.

    , @res
    @Mark Spahn (West Seneca, NY)

    As Anon noted, AUC is area under the curve here. In the paper they also use "ROC-AUC" which is a bit more descriptive. You really need to understand the Receiver operating characteristic curve (ROC curve) to understand AUC so see this page--in particular section 4.1.

    https://en.wikipedia.org/wiki/Receiver_operating_characteristic

    AUC is a pretty common term in this area, but it probably does need to be defined for a general audience. In the paper the Table 2 description includes "Area Under Receiver Operating Characteristics (ROC-AUC)".
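    A toy illustration of the ranking interpretation: ROC-AUC is the probability that a randomly chosen positive case is scored higher than a randomly chosen negative one.

```python
# AUC as a ranking probability: 14 of the 16 positive/negative pairs
# below are ordered correctly, so ROC-AUC = 14/16 = 0.875.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 1, 1, 0, 1, 0]
scores = [0.1, 0.4, 0.35, 0.8, 0.7, 0.2, 0.9, 0.5]
print(roc_auc_score(y_true, scores))  # 0.875
```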

    , @Bill
    @Mark Spahn (West Seneca, NY)

    Are organizations with plural names given plural verbs in American English? The United Nations is an organization . . . The United States sends its army . . . The March of Dimes contributes to research . . .

    So, I think the general rule in American English is that organizations, no matter how named, become singular. English English is different, I think.

  32. The ghost in the machine is actually racist! Who would’ve thought. No wonder the blogger goes to great lengths to cover his a** against attack from the wokies. Let’s just hope it’s enough.

    Some of the computer scientists and the more junior researchers on the other hand were surprised by our reaction. They didn’t really understand why we were concerned.

    Maybe there’s still hope for humanity…

    • Replies: @Lurker
    @Mr Mox

    The ghost in the machine is wearing a sheet and a white hood.

  33. However we want to frame this, the model has learned something that is wrong, and this means the model can behave in undesirable and unexpected ways.

    Which is why we tend to segregate case/control studies in GWAS by genetic ancestry as much as possible to remove as many differences between cases/controls not related to disease as possible. Similarly in medical studies that aren’t about genetic analysis, you try to control for as many factors as possible in terms of differences between cases/controls such as age, sex, race (Yes Mr. Radiologist, all other medics do this and now you can too and get better results!) and environment.

    Honestly medics are the biggest examples of the Dunning-Kruger effect ever. They have basically found that race differences in bone density/structure are so immense that they will mask out signs of disease.

    Good, now use this algorithm to ID the race of your case/controls and segregate them for the re-analysis.

    And maybe you’ll find signs and symptoms that are more exaggerated in one race than another and you can use this information to improve treatment, most likely in a way that will more benefit your black patients.

    • Thanks: Almost Missouri
    • Replies: @res
    @Altai


    They have basically found that race differences in bone density/structure are so immense that they will mask out signs of disease.
     
    Thanks (for the whole comment, but especially that). I think you explained this.

    Similarly, and I think this might be the most amazing figure I have ever seen, we could get rid of the low-frequency information to the point that a human can’t even tell the image is still an x-ray, and the model can still predict racial identity just as well as with the original image!
     
    It seems plausible to me that low level bone structure variation as well as density would show up as high frequency variation. This would also explain why the detection did not depend on the specific area being imaged. But on page 8 Table 2 row B4 they say "Removal of bone density features" still gave an ROC-AUC of 0.96/0.94. B4 is explained on page 11.

    We removed bone density information within MXR and CXP images by clipping bright pixels to 60% intensity. Sample images are shown in Figure 1. Densenet-121 models were trained on the brightness-clipped images.
     
    Looking at the images in Figure 1 (e.g. the ribs) it seems possible you could still infer density by looking at the number of pixels clipped (which I think would be low frequency though) or the distribution of the unclipped pixels.
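    A quick sketch of that ablation as described (assuming intensities are normalised to [0, 1], which the paper's "60% intensity" wording suggests), plus the residual leak just mentioned: the fraction of pixels that hit the clip limit is itself a crude density proxy.

```python
# Brightness-clipping sketch; limit=0.6 mirrors the paper's "60% intensity".
import numpy as np

def clip_bright(img: np.ndarray, limit: float = 0.6) -> np.ndarray:
    """Flatten all pixels brighter than `limit` (the B4-style ablation)."""
    return np.minimum(img, limit)

def clipped_fraction(img: np.ndarray, limit: float = 0.6) -> float:
    """How much of the image hit the clip: a residual density-like signal."""
    return float((img >= limit).mean())

xray = np.random.rand(256, 256)        # stand-in for a normalised x-ray
ablated = clip_bright(xray)
print(clipped_fraction(xray))          # ~0.4 for uniform noise
```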

    Here is how they explain the results on page 17.


    B4. Race detection using bone density
    We find that deep learning models effectively predict patient race even when the bone density information is removed on both MXR (Black AUC = 0.96) and CXP (Black AUC = 0.94) datasets. These findings suggest that race information is not localized within the brightest pixels within the image (e.g. bone).
     
    Seems pretty stupid to equate clipping the brightest pixels with removing bone density information (what about the ribs for instance? they don't look uniformly bright), but what do I know...
    Another possible experiment would be to normalize pixel brightness by average brightness of the image. Seems like a better way to remove density from the equation (and just generating group averages for something like average image brightness might be instructive).

    But given the effect of both exposure differences and individual differences (e.g. diet and exercise, also age and sex) on pixel brightness my bet would be they are detecting something structural. The question then becomes whether it is macro or micro structure. I think the filtering results indicate micro structure, but the results of these experiments on pp. 19-20 seem to indicate otherwise.

    In C2 they had an AUC of over 0.95 on 160x160 images and over 0.9 on 100x100 images (though noise and blurring did reduce accuracy, which would also seem to indicate micro). How much micro structure is present in a 100x100 image of the chest?

    C3 looked at non/lung segmented images and Supplemental Table 17 shows B/W AUC deteriorating from 0.94 to 0.73/0.74. I would have expected the ribs to show enough micro bone structure to still give good results.

    This is one of the big problems with deep learning. It gives results, but good luck learning anything from them.

    If the blogger was any kind of real scientist he would be trying to figure out how they could get that result (assuming it is real and not some sort of mistake/artifact) rather than complaining about it. Because it is extremely interesting how they are getting that degree of accuracy with that degree of filtering. Their series of experiments indicates at least some of the paper authors were thinking hard about this. I wonder what kind of private hypotheses they have. See pp. 20-21 for their discussion.

    This is funny given Altai's comment.


    Given the lack of reported racial anatomical differences in the radiology literature
     
    I wonder how well their self reported race correlates with biological race. It would be interesting to take a look at the prediction error cases (e.g. mixed race? unusual physical characteristics?).

    P.S. It is fascinating (as well as discouraging) how people like that always assume ill will from everyone else. Projection is real. (Did he talk about the possible benefits of being able to include race information in medical care?)

    P.P.S. They looked at White/Black/Asian (all capitalized for those who care) races. But Table 1 shows they used Asian data for only 3 of their 9 datasets. And those samples were only 3/3/13% Asian. Here is the text comment on page 9.


    Each dataset included images, disease class labels, and race/ethnicity labels including
    Black/African American and White. Asian labels were available in some datasets (MXR, CXP,
    EMX and DHA) and were utilised when available and the population prevalence was above 1%.
    Hispanic/Latino labels were only available in some datasets and were coded heterogeneously, so
    patients with these labels were excluded from analysis.
     
    Despite the relatively small Asian sample size they still got good results. Which I find a bit surprising given that deep learning tends to be data hungry.

    Replies: @Ben Kurtz, @ic1000

  34. The authors are shocked — SHOCKED! — to discover that racism is going on.

    Three takeaways:
    The authors are all “good people”.
    Being a member of a minoritized racial group is a b***h.
    Maybe that sly old fox, AI, leaned over the back fence and got an earful from that gossipy old biddy, Mizz Twitter. La Twitts told him that Tr’aChavrijon (AKA Patient X) has posted thousands of selfies involving hands full of cash and a large semi-auto. AI then did the math.

  35. @WigWig
    @JohnnyWalker123

    Didn't Joe Biden say the same thing? Maybe they both say the same thing because they are both correctly observing reality.



    “You can’t talk about the civil rights movement in this country without talking about Jewish freedom riders and Jack Greenberg,” he said, telling a story about seeing a group of Jewish activists at a segregated movie theater in Delaware. “You can’t talk about the women’s movement without talking about Betty Friedan” …

    “I believe what affects the movements in America, what affects our attitudes in America are as much the culture and the arts as anything else,” he said. That’s why he spoke out on gay marriage “apparently a little ahead of time.”

    “It wasn’t anything we legislatively did. It was ‘Will and Grace,’ it was the social media. Literally. That’s what changed peoples’ attitudes. That’s why I was so certain that the vast majority of people would embrace and rapidly embrace” gay marriage, Biden said.

    “Think behind of all that, I bet you 85 percent of those changes, whether it’s in Hollywood or social media are a consequence of Jewish leaders in the industry. The influence is immense, the influence is immense. And, I might add, it is all to the good.”

     

    https://nymag.com/intelligencer/2013/05/biden-praises-jews-goes-too-far.html

    Replies: @Almost Missouri

    • Replies: @Abe
    @Almost Missouri


    Celebration Parallax

    https://americanmind.org/salvo/thats-not-happening-and-its-good-that-it-is/
     

    In Twilight of the Legacy Media days (2003), NEW REPUBLIC house-meliorist Greg Easterbrook got semi-cancelled for urging Jewish movie executives to tone down the nihilistic violence in their flix. Completely inculpable of worsening social mores through marketing of movie violence, completely laudatory for changing societal mores through their mainstreaming of POZ. Yep, Celebration [Day] Parallax.

    BTW, while I found Michael Anton’s famous FLIGHT 93 essay a bit Moldbugy in its verbosity, he now seems to have really taken to heart Steve’s daily masterclass in preciseness and brevity being the soul of wit (and impactful writing).

    During my COVID layoff (from chauffeuring my kids everywhere, not work) I’ve taken some me-time to practice guitar. To be brutally honest I’m really only at the ‘end-of-the-beginning’ phase of my beginning-player competency, rather than ‘beginning-of-the-end-w/intermediaryness-in-sight’ as I would have liked; still, and despite myself, I’ve started picking up some music theory kernels that are both highly enlightening and a bit disillusioning. For example, in YOUTUBE guitar instructor Marty Schwartz’s video on his Top 10 favorite Zeppelin riffs (which is, what, 0.7? 0.9? correlated with the top 10 hard rock riffs of all time) you see that almost all of them make use of power chords, a technically simple yet thoroughly pleasing item in your guitar-hero repertoire: you choose from a limited number of standard chord shapes and then simply slide your hand along the guitar neck, not even changing shape (compare Marty’s limited hand movements in the video to the precision and dexterity required to play, say, bluegrass banjo).

    Power chords are to rock what potatoes are to cooking: while it is entirely possible to whip up an excellent meal without them, developing a whole cuisine which eschews the lowly tuber AND does not leave you unsatisfied when you pull away from the dinner table is almost impossible.

    So hats off to Michael Anton! If Steve at his full powers is like Jimmy Page firing off riffs at the LA FORUM, then with that one essay Anton has elevated himself to whatever would be a considerable step up from Greta Van Fleet.

    https://m.youtube.com/watch?v=zI9Nf2u9Z6s

  36. @JohnnyWalker123
    This is what 105 IQ gets you.

    https://twitter.com/DouglasTodd/status/1420409170033922055

    I'm glad "Jewish" was capitalized, while "white left" was in lower case. Accurately reflects the balance of power in this country.

    Replies: @Steve Sailer, @WigWig, @Anonymous, @Altai, @Bill

    This is why, as Steve and other right-wingers have noticed, the idea of calling this ‘communism’, so popular among some, is insane.

    Real communist societies have always been highly socially conservative. Because ‘socially conservative’ is another way of saying ‘collectivist’. When you live in a communist state you may not be interested in the social contract, but the social contract is interested in you. You don’t get to act in any way that might be perceived as decadent or selfish (unless you’re powerful enough); any public displays of deviation from social mores will be treated as social defection.

    Because the more you chip away at social mores the more you chip away at social solidarity and commitment. That’s called ‘social liberalism’ and that makes sense in terms of the original context of ‘liberal’ both in the US and where it still holds the correct context in Europe. It’s just another way of saying individualism.

    But for places the US State Department has decided are a designated enemy, LGBT stuff is promoted and supported as a fifth column, in addition to being anti-social-solidarity. This will 100% be true for both China and Russia.

    In China you aren’t even allowed to show venerable characters or even real people with tattoos on TV. If you’re a celebrity others might emulate or see as influential, you have to cover your tats up if you have them on TV. Any publicly visible attacks on social unity or solidarity are seen as problems that can’t even be recognised or articulated in the West anymore. Tattoos are a visible attack on social commitment. (Remember the 50s when every man more or less wore a uniform? Even if you got to choose the particular dark muted shade.)

    Social liberalism and individualism is always championed by the upper classes for the same reason that economic liberalism is, it allows them to exploit society to their own pleasure. For the lower classes, it just brings ruination.

    • Disagree: John Johnson
    • Replies: @IHTG
    @Altai

    Wokeism is increasingly conservative in that sense as well, just for a different set of values and institutions (just as communism was compared to traditional society).

    , @Bill
    @Altai


    Real communist societies have always been highly socially conservative.
     
    Unless the word "real" is going to be used in a no-true-Scotsman kind of way, that isn't true. Neither the commies in Russia nor the ones in Spain nor the ones in France (just to name three) were socially conservative. Things like marriage and Christianity were targets of the commies, both officially and in fact, and still are. Commie ideology is overtly anti-aristocratic and anti-authoritarian. The USSR's climb-down on its more insane ideas was forced by reality and opposed by true believers. There was never any climb-down on things like abortion for everyone. Similar things are true in China, as well. The Four Olds were not some weird deviation from commie ideology.

    The fact that the commie regimes which survived for a while embraced things like marriage and authority is caused by the "which survived for a while" rather than the "commie."

    Replies: @JohnnyWalker123

    , @AnotherDad
    @Altai



    Real communist societies have always been highly socially conservative.
     
    Altai, i love your stuff, learn from it. But, while no historian, i think this is off base/overstated.

    Communists obviously had a hostile relationship with religion and tradition. You can argue that they simply wanted to be the replacement authority/religion.

    But they also had a somewhat hostile relationship with family as well. Seeing it as an alternative--possibly subversive--source of authority and loyalty. And you can't be "socially conservative" while undermining the family.

    What communism was not--and why calling wokeism "communism" is just ridiculous/stupid--is minoritarian.

    Communism was a unitary deal. (Your "collectivist" point.) The society as one. (Supposedly for all the people, actually for the party/party leaders.) The upside is not being run by "what's good for the Jews" or our even more disastrous "what's good for minorities", i.e. what's good for every abnormal person in society--from Jews, to blacks, to immigrants, to homosexuals, to trannies, to criminals, to XY male-development-didn't-happen-correctly "female" athletes.

    Compared to that, communism was more like medieval European feudalism. Society was for the benefit of the king and the nobles, and the people were serfs--stay there and work! But at least medieval nobility and communists--while exploitive and hostile to any dissidents or threats to their power--were not actually hostile to their nation's people, to the survival of the nation itself.

    That's the key point: Communism was not minoritarian.

    And there's nothing worse than minoritarianism--having an elite who are hostile to the people, the nation, they control.

    Replies: @John Johnson

    , @John Johnson
    @Altai

    Real communist societies have always been highly socially conservative. Because ‘socially conservative’ is another way of saying ‘collectivist’.

    Oh so is that why after every communist revolution they rounded up conservatives, business owners and priests?

    So conservative friendly.

    Off to camps you go lest you spoil the great revolution. Democratic leftists were also rounded up and executed. Lenin was actually almost killed by a Jewish democratic leftist who was taking revenge against the Bolshevik dictatorship.

    Because the more you chip away at social mores the more you chip away at social solidarity and commitment. That’s called ‘social liberalism’ and that makes sense in terms of the original context of ‘liberal’ both in the US and where it still holds the correct context in Europe. It’s just another way of saying individualism.

    Marx called for the destruction of religion, national identity and minority languages. The original Soviet plan was to turn Germany and the rest of Western Europe into godless Russian-speaking vassals that existed to serve the Soviet Union. They would have rolled into Germany if the Poles hadn't stopped them after WW1.

    You really don't know what you are talking about and have some idealized modern take on Communism that is disconnected from history and the teachings of Karl Marx. I would suggest starting with Das Kapital.

    , @Drapetomaniac
    @Altai

    Commies have a concept of private property that approaches that of the animal world.

    Conservatives have a better concept of private property.

  37. “Get sacrificed! I don’t subscribe to you religion!” –‘Ringo’ to ‘Clang’ [“Help!” (1965)]

    https://www.imdb.com/title/tt0059260/mediaviewer/rm3366167553/

  38. As with all things race related in our current times, the problem is not that a particular phenomenon or condition exists, but that it can be noticed.

  39. @Patriot
    The blogger cited by Steve is an idiot. Scientists documented the morphological differences among the various races 150 yrs ago. That's why forensic scientists can identify the race and sex of homicide victims from as few as 3 bones. If the entire skeleton is present, racial identification is around 95% accurate. The 5% error rate is primarily due to the victim being mixed race.

    Of course, a DNA analysis is 99.8% reliable.

    Race is real and is caused by genetic differences possessed by each race.

    Replies: @Patriot, @John Johnson, @Tex

    Of course the human races also differ in physiology, athletic performance, disease immunity, UV protection, metabolic enzymes, types and rates of genetic diseases, eye shape and visual acuity, temperature and altitude tolerance and preference, growth rates, life history, including mean lifespan, etc., etc., etc. Thousands of racial differences have been documented, and their genetic bases continue to be discovered.

    All the above racial differences are minor in terms of multiracial mixing – we can live with these racial differences.

    The big problem is that the races also differ in intelligence and behavior. These are dramatic differences that make it almost impossible for certain races to live together in a single society. Because they are genetically caused, these problems CAN NOT BE FIXED. No amount of affirmative action, BLM, welfare, awards, grants, Offices of Diversity and Inclusion, more Blacks in movies or TV, or endless pandering will change the DNA of Africans or Aborigines. No matter how much opportunity and money we give to Blacks, their children will still be born with the same mean racial IQ and racial behaviors.

    • Agree: Drapetomaniac
    Maybe AI can help White Flight? It might show White refugees the best place to run to before they get chased down again.

  41. White Liberals know gd well that race exists….Native Born White Working Class Americans are being targeted by the highly racialized Democratic Party for racial extermination-White Genocide…..This is America 2021 and beyond…..The Democrats are very open about the race they want to exterminate….

    The Han People coming to America are highly racialized….And the Democratic Party has no problem with this…in fact, White Liberal Democrats appeal directly to the racial interests of the Han People in America….

    • Replies: @Drapetomaniac
    @War for Blair Mountain

    The hunter-gatherers have been at war with settlement folk for 10,000+ years. Each has a different concept on how to survive.

    One steals, the other creates.

  42. @Altai
    @JohnnyWalker123

    This is why, as Steve and other right-wingers have noticed, the idea of calling this 'communism', so popular among some, is insane.

    Real communist societies have always been highly socially conservative. Because 'socially conservative' is another way of saying 'collectivist'. When you live in a communist state you may not be interested in the social contract, but the social contract is interested in you. You don't get to act in any way that might be perceived as decadent or selfish (unless you're powerful enough); any public displays of deviation from social mores will be treated as social defection.

    Because the more you chip away at social mores the more you chip away at social solidarity and commitment. That's called 'social liberalism' and that makes sense in terms of the original context of 'liberal' both in the US and where it still holds the correct context in Europe. It's just another way of saying individualism.

    But for places the US State Department has decided are a designated enemy, LGBT stuff is promoted and supported as a fifth column, in addition to being anti-social-solidarity. This will 100% be true for both China and Russia.

    In China you aren't even allowed to show venerable characters or even real people with tattoos on TV. If you're a celebrity others might emulate or see as influential, you have to cover your tats up if you have them on TV. Any publicly visible attacks on social unity or solidarity are seen as problems that can't even be recognised or articulated in the West anymore. Tattoos are a visible attack on social commitment. (Remember the 50s when every man more or less wore a uniform? Even if you got to choose the particular dark muted shade.)

    Social liberalism and individualism is always championed by the upper classes for the same reason that economic liberalism is, it allows them to exploit society to their own pleasure. For the lower classes, it just brings ruination.

    Replies: @IHTG, @Bill, @AnotherDad, @John Johnson, @Drapetomaniac

    Wokeism is increasingly conservative in that sense as well, just for a different set of values and institutions (just as communism was compared to traditional society).

    They fear the AI will homogenize people into utilitarian solutions because the AI is able to figure out the self-reported race of the patients. And so because the computer can see the differences that the patients see about themselves, it must be retrained to not see the differences and treat everyone the same. After all, the diagnostician can’t see the race of the patient or they might decide on inappropriate treatment, so they have to make the diagnosis blind to race to make effective diagnoses, treating everyone the same regardless of how they self-report and how biologies differ among races. Because treating everyone the same is dangerous and a notorious problem with AI, they tried to figure out how it was discerning the differences. So they tried rigging up all the ways they think or know races are different, to teach it to be race-neutral or never ‘see’ the race (be itself race-blind), yet the AI still sees race quite accurately. Horrified, they even threw in a body reduced to homogeneous grey pixels and it still figured out what it saw closely enough to determine the self-reported race of that grey mush. Even when the humans thought they couldn’t discern what the grey box said about itself, the AI could see what the grey box said about itself.

    So then he concludes that racial information in the images must be in everything, everywhere, and cannot be removed by algorithms. (yet)

    But if that’s the case, it also means humans cannot remove their racial “implicit biases” through rational means (if operational, then algorithmic) and have to be dogmatically overridden, or “by definition” as Steve mocks.

    There are so many inconsistent or contradictory things occurring in this blog entry.

  44. @WigWig
    @JohnnyWalker123

    It has successfully reduced fertility to the point of halving each generation.

    Replies: @Altai, @YetAnotherAnon

    That seems to be what happens to East Asian countries when they become first world, though; the intense work culture and long hours, along with the focus on education, are bad for fertility. It happened to Taiwan and Japan too. China even removed the one-child policy in recent years, but the life script and social expectations are still in place. People who hope to move up socio-economically in big Chinese cities will only be having one child for a while.

    And not mentioned in that Korea piece is that along with its Americanisation came American-style medical circumcision (though, disturbingly, this tended to be conducted in later childhood rather than neonatally) and currently the highest rates of immigration of any country in East Asia.

    • Replies: @YetAnotherAnon
    @Altai

    "the intense work culture and long hours along with focus on education is bad for fertility"

    I can't quite see the point of that from an individual or societal perspective. From the latter, it's surely better to have four 115-IQ kids than two, even if they don't all have degrees. Intelligence will out, how many people actually use their uni subject?

    And from an individual perspective you should always spread your bets - plus the love miraculously grows to match however many kids you have.

  45. @WigWig
    @JohnnyWalker123

    It has successfully reduced fertility to the point of halving each generation.

    Replies: @Altai, @YetAnotherAnon

    “It has successfully reduced fertility to the point of halving each generation”

    Alice Evans obviously counts this as a success story. I’m sure she’s childless herself (link to her CV here, no time to raise kids). She’s a geographer, but that has morphed into a branch of social science over the last 30 years.

    She claims to be fluent in Bemba – did she pick that up working for Overseas Development Institute?

    Interesting to see evolution taking place in real time, as intelligent women fail to reproduce.

    • Replies: @WigWig
    @YetAnotherAnon

    She could have had several children; instead she has an article about Gender Sensitisation in the Zambian Copperbelt (cited 16 times, 10 of which were by herself).

    Maybe she should have been less of an arrogant Westerner and learnt something from the Gambian women (fertility rate of 5) about what's important in life.

    , @Jack D
    @YetAnotherAnon


    Interesting to see DEVOLUTION taking place in real time, as intelligent women fail to reproduce.
     
    FIFY. Instead of 140 IQ white women writing about racial and gender equality, we'll have 95 IQ Black women writing about Black female superiority. The "no race or gender is better than any other race or gender" ideology will prove to be transitory. Someone always has to sit on top of the totem pole - the only real question is WHO is sitting on top of WHOM.

    Replies: @kaganovitch

  46. @El Dato
    "AI" is a fancy word for Statistical Processing. Phrenology works, bitches!

    if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to
     
    So there is a major danger that the AI can recognize blacks who self-identify as blacks as actually blacks?

    "I'm actually white" -- Arnold is not fooled!

    Think of the possibilities at Chinuu's immigration control or inter-province control!


    Part IV – We don’t know how to stop it
     
    "Listen, and understand. Reality is out there. It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."

    There is no causal pathway linking racial identity and the appearance of, for example, pneumonia on a chest x-ray.
     
    I well remember the "COVID disproportionately affects underrepresented minorities" spiel from last year. Those people are affected by CRIMESTOP to such a degree that they are suffering neurotic breakdowns in real time. Hilarious.

    Replies: @AndrewR, @Jack Armstrong

    I tried to comprehend what his exact concerns were but I’m still confused. Unless these technologies lead to worse medical care, what is the problem?

    • Agree: Chrisnonymous, bomag
    • Replies: @Chrisnonymous
    @AndrewR

    Indeed. I can't figure out what this means:


    if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to
     
    The only thing I can understand it to mean is that adding racial information to a diagnostic algorithm could inadvertently mis-diagnose all black patients. (What else does "misclassify" refer to?). But that doesn't make any sense.

    Replies: @Dr. DoomNGloom

    , @Matthew Kelly
    @AndrewR

    What's the problem?! AI sees race, that's the problem! Race doesn't exist! Only White Supremacists see race!

    Tay was a harbinger of what is to come with AI, and it is scaring the shit out of our moral and intellectual superiors. It must be brought to heel like wypipo have been brought to heel, pronto, before SuperMechaHitler arises from the grave and proves he was right.

  47. @El Dato

    Performance is maintained with the low pass filter to around the LPF25 level, which is quite blurry but still readable. But for the high-pass filter, the model can still recognise the racial identity of the patient well past the point that the image is just a grey box 😱 …
     
    On second thought, this might be a 21st century D.I.E.-themed Sokal Hoax.

    The PDF doesn't even have page numbers.

    Replies: @Charles, @Bumpkin

    You may be correct. If Li-Ching Chen is asked if the paper is true, he (or she?) will just smile and say “Me Chinese, me play joke…”

  48. anonymous[262] • Disclaimer says:

    “At least in MRI super-resolution, the radiologist is expected to review the original low quality image to ensure it is diagnostic quality (which seems like a contradiction to me, but whatever)”

    Somewhat confusing.

    If by “original low quality image” you mean the preliminary “localizer” view, which the MR tech acquires to make sure he’s targeting the appropriate anatomy, the rad looks at it only to see if it shows incidental pathology, such as an enlarged prostate or renal cysts on a lumbar spine localizer. Occasionally, out of hundreds of images, a single localizer will show the cause of the patient’s symptoms, such as femoral head osteonecrosis mimicking back pain. The radiologist feels “obligated” to look at the localizer for the benefit of the patient. Localizer images are otherwise of limited utility, obtained with extremely short scan times, which sacrifices image quality compared to the longer optimized images.

    • Replies: @Chrisnonymous
    @anonymous

    Thanks. Can you explain this one?


    if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.
     
    It seems not to make sense, but there is perhaps some radiological jargon that makes this sensible.
  49. Anonymous[384] • Disclaimer says:

    The willful ignorance is amazing. Needless to say, such a thing is very bad for the medical profession. Those deep learning neural nets are excellent at pattern recognition, and they can be used to discern a lot of health-relevant patterns (think, e.g., of interpreting phenotypes from genotypes).

    The author of the blog post is a graduate student – it is hilarious how he pictures himself an authority on radiology (a field where decades of experience are absolutely required for top performance).

    The writing in the manuscript is absolutely dreadful:

    if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.

    What does this sentence even mean?

    We hypothesized that if the model was able to identify a patient’s race, this would suggest the models had implicitly learned to recognize racial information despite not being directly trained for that task.

    What a powerful hypothesis. Behold the power of intellect.

    modalities such as ultrasound and magnetic resonance imaging may help to isolate any underlying biological mechanisms by which racial identity information is propagated into medical images.

    Get that? Biological mechanisms of information propagation into medical images. Most of the manuscript, despite reporting interesting and valuable results, consists of such word salad.

  50. @El Dato

    Performance is maintained with the low pass filter to around the LPF25 level, which is quite blurry but still readable. But for the high-pass filter, the model can still recognise the racial identity of the patient well past the point that the image is just a grey box 😱 …
     
    On second thought, this might be a 21st century D.I.E.-themed Sokal Hoax.

    The PDF doesn't even have page numbers.

    Replies: @Charles, @Bumpkin

    this might be a 21st century D.I.E.-themed Sokal Hoax

    I had similar thoughts, figuring someone just cooked or screwed up the data. The likelihood that “the model can still recognise the racial identity of the patient well past the point that the image is just a grey box” is fairly low. Most likely, it will not reproduce outside the data set:

    “‘It turns out,’ Ng said, ‘that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.’

    But, he said, ‘It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    ‘So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.’”

    Now you’re telling me these same super-shitty AI models can unerringly tell you the race? I call bullshit.

    • Agree: ic1000
    • Thanks: El Dato
    • Replies: @Gimeiyo
    @Bumpkin


    Now you’re telling me these same super-shitty AI models can unerringly tell you the race? I call bullshit.
     
    That was my immediate reaction too, sort of -- not that it's surprising that AI can distinguish race, but that it can do so with incredible accuracy. Even the highpass/lopass doesn't really surprise me since there's still a lot of data a computer can pick out under those circumstances, especially if the system is keying off of skeletal shape differences (e.g. if there's minor artifacting left over around shapes that looks like noise to human eyes but not to a computer), but I'd still have thought the results would be messy. Maybe the input dataset was curated somehow to get clearer racial categories?

    On a different note, I wonder about the racial mix of the more junior researchers and computer scientists whose lack of surprise/outrage the author/blogger complains of. I'm kind of imagining a bunch of Indians and Chinese staring nonplussed at this White guy hyperventilating about their little experiment producing more or less the expected result, viz. that computers can use objective physical data/images to distinguish race accurately.
    , @res
    @Bumpkin

    Thanks for the link. Worth noting that Andrew Ng was talking about a different example (pneumonia diagnosis). The original link (from your link) for his comments has gone away, so here is an archive version (full text after the MORE in case that disappears too).
    https://web.archive.org/web/*/https://spectrum.ieee.org/view-from-the-valley/artificial-intelligence/machine-learning/andrew-ng-xrays-the-ai-hype.amp.html

    It is unclear whether that drifting effect occurs with this approach (though quite possible). The 2017 paper I think he is talking about
    https://arxiv.org/abs/1711.05225
    was not dramatically better than the radiologist average, while this paper does something radiologists can't do at all and achieves AUCs over 0.95 for chest X-rays in the process.

    Data differences are an issue though as you rightly call out. Before using this clinically they would need to do much more validation of results across similar/different machines and locations.

    I suspect the various experiments they did (e.g. noise, blurring, frequency, resolution) will mean the approach is fairly robust, but that certainly needs to be tested.

    The questions remain.
    1. Is this BS?
    2. If not, what is it they are detecting to enable such good results?


    “Those of us in machine learning are really good at doing well on a test set,” says machine learning pioneer Andrew Ng, “but unfortunately deploying a system takes more than doing well on a test set.”

    Speaking via Zoom in a Q&A session hosted by DeepLearning.AI and Stanford HAI, Ng was responding to a question about why machine learning models trained to make medical decisions that perform at nearly the same level as human experts are not in clinical use. Ng brought up the case in which Stanford researchers were able to quickly develop an algorithm to diagnose pneumonia from chest x-rays—one that, when tested, did better than human radiologists. (Ng, who co-founded Google Brain and Coursera, is currently a professor at Stanford University.)

    There are challenges in making a research paper into something useful in a clinical setting, he indicated.

    “It turns out,” Ng said, “that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.”

    But, he said, “It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    “So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.”

    This gap between research and practice is not unique to medicine, Ng pointed out, but exists throughout the machine learning world.

    “All of AI, not just healthcare, has a proof-of-concept-to-production gap,” he says. “The full cycle of a machine learning project is not just modeling. It is finding the right data, deploying it, monitoring it, feeding data back [into the model], showing safety—doing all the things that need to be done [for a model] to be deployed. [That goes] beyond doing well on the test set, which fortunately or unfortunately is what we in machine learning are great at.”
     

    Replies: @El Dato

    , @utu
    @Bumpkin

    It is not a hoax unless it is targeted at Steve Sailer. But most likely they fooled themselves and went along with it because they wanted it to work very much. Most likely their training data set overlaps with the validation data set, or there was no validation data set at all. That would easily explain why it worked on very blurred and corrupted images: the AI may pick up pixel signatures and patterns that identify the picture itself, not what is on the picture, and decide that this is the same picture that was assigned the Black value when it was being trained.
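    If that is the failure mode, it is easy to rule out: split at the patient level so that no image from the same patient lands on both sides. A minimal sketch with scikit-learn, where `images`, `labels`, and `patient_ids` are assumed arrays of equal length:

    from sklearn.model_selection import GroupShuffleSplit

    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, test_idx = next(splitter.split(images, labels, groups=patient_ids))

    # Any AUC computed on images[test_idx] now comes from patients the model
    # has never seen, which rules out the "it memorized the picture" artifact.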

    Replies: @Bumpkin

    , @Jack D
    @Bumpkin

    It sounds suspicious in that in some cases the AI is super good and in other cases the AI is super bad (although as res points out they were apparently not talking about the same data sets), but the problem is that an AI is a sort of "black box". A good AI can tell you the right answer but it can't tell you WHY it picked that answer in terms that are comprehensible to humans. It doesn't have simple fixed rules that can be expressed like "if the guy's nostrils are wide then he's black". Rather it operates by self-training on the entire data set. This is why there is no way to tweak an AI to be "less racist" without breaking the AI. Conversely, if your AI is not working well, there's no easy fix for that either. The AI doesn't really understand "racist"; it just understands whether its self-training regimen is getting it closer to the correct answer or further away.

    Replies: @Anonymous

  51. Let’s ask the AI if Covid-19 represents an outlier threat to human health. Let’s ask it if the vaccines work. Let’s ask it if masks, social distancing, and lockdowns made any difference in the spread of the virus.

    I think we know what it will say, but will that post ever appear on iSteve?

    AI will never figure out anything that humans haven’t already figured out—that’s science fiction. What it will do is blandly assert things that we already know in the back of our minds but are unwilling to acknowledge or act upon.

    • Replies: @El Dato
    @Intelligent Dasein


    AI will never figure out anything that humans haven’t already figured out—that’s science fiction.
     
    That phrase is so vague as to be not even wrong.

    To the extent that information-processing systems still have no serious basis to perform "commonsense reasoning", either practical or theoretical (in spite of some attempts and theoretical constructions made since the 80s), they are definitely not going to walk around, do integrated reasoning about what they are seeing and make strategic plans to reach the coffee machine, at least at this point in time.

    However, we do have tools to perform deductive, inductive and abductive reasoning on problems if they *have* been formalized. And that on problem sizes that humans can't hope to handle.

    Formalizing a situation (i.e. dropping all that is irrelevant and keeping a minimum of that which is relevant) is the really hard part. A "General AI" will have to do that "on the fly" and be able to generate several different models from the same input too in order to "have several perspectives". I would like to see more research in that domain but it seems everybody has gone to the deep neural network rapture. And although NNs work amazingly well, they do not say why or how, and thus cannot readily be debugged, tuned or even trusted as to what they are spitting out.

    So we see a lot of this:

    Same or Different? The Question Flummoxes Neural Networks.

    But I would like to see a whole lot more of this:

    The Computer Scientist Training AI to Think With Analogies
    , @J.Ross
    @Intelligent Dasein

    We have part of it figured out: just in time for the next election, there will just happen to be a spiking Feta variant.

    , @prime noticer
    @Intelligent Dasein

    "AI will never figure out anything that humans haven’t already figured out"

    it already does this sometimes. an AI system figured out a better way to design the internal geometry of the aluminum for the ULA Vulcan rocket. the human-designed pattern from the 90s used in the Atlas and Delta rockets has been replaced on the CNC machines with the new, AI-designed version.

    the Vulcan rocket is now stronger while at the same time using less material, so it's also lighter, and less expensive to make.

    Replies: @Rob

    , @nokangaroos
    @Intelligent Dasein

    IIRC an AI they were training for psychiatric evaluation came up with three symptoms no one had thought of during calibration ...
    this is gonna be a wild ride :D

  52. Anonymous[384] • Disclaimer says:
    @Mark Spahn (West Seneca, NY)
    [1] "health disparities, which the NIH defines as “a health difference that adversely affects disadvantaged populations“."
    So according to the NIH, a health difference that adversely affects advantaged populations is not a health disparity?

    [2] This essay would be easier to read if the reader were told that AUC stands for
    https://en.wikipedia.org/wiki/Appropriate_use_criteria

    [3] NIH = National Institutes of Health, CDC = Centers for Disease Control and Prevention
    Why are these plural terms used with a singular verb?

    Replies: @Anonymous, @Dr. DoomNGloom, @Ben Kurtz, @res, @Bill

    AUC = Area Under the Curve, here the area under the ROC (receiver operating characteristic) curve: a standard measure of how well a classifier separates two classes. 0.5 is coin-flip performance and 1.0 is perfect separation.
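    If it helps, AUC has a simple probabilistic reading: it is the chance that a randomly chosen positive case gets a higher score than a randomly chosen negative one. A toy sketch in Python (made-up scores, nothing from the paper):

    from itertools import product

    def auc(pos_scores, neg_scores):
        # Fraction of (positive, negative) pairs ranked correctly,
        # counting ties as half credit.
        pairs = list(product(pos_scores, neg_scores))
        wins = sum(p > n for p, n in pairs) + 0.5 * sum(p == n for p, n in pairs)
        return wins / len(pairs)

    print(auc([0.9, 0.8, 0.7], [0.6, 0.75]))  # 5 of 6 pairs correct: ~0.83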

  53. Anonymous[345] • Disclaimer says:

    bUT RacE IS a SoCIal COnStrUcT!!

  54. the natural stupidity of our elite wisdom

    Thank you for the summation. I thought I was missing something, besides what “trivially” means here.

    Wokism has an anti-technology aspect to it. In yesterday’s NY Times, there was an article about how a couple of arsonists from the George Floyd “peaceful demonstrations” in Minneapolis were tracked down in Mexico, supposedly using AI facial recognition (CHINESE AI – the Mexican government bought the software that the US gov. won’t buy), pings from their cell phones, license plate scanners, security cameras, etc. (In the end they were actually caught when someone snitched on them for the $20,000 reward – sometimes old-fashioned methods work the best of all.)

    https://www.nytimes.com/2021/08/01/technology/minneapolis-protests-facial-recognition.html

    The tone of the article was, isn’t it terrible when high technology is used to track down “largely peaceful demonstrators” but the readers weren’t buying. If you look at the comments, they overwhelmingly say “these guys are criminals and we’re glad that the government used all the tools at its disposal to find them.” There was almost zero sympathy for the arsonists, even among the NY Times liberal readership. I think there is a generation gap here, with young NY Times reporters considerably to the left of the older readership. We have raised a real generation of Maoists due to the Leftist takeover of universities.

    • Agree: Johann Ricke
    • Thanks: ic1000
    • Replies: @Anonymous
    @Jack D

    That Althea girl in Wisconsin with the burned face was almost certainly involved in the arson attack on the jail in her hometown (i.e. attempted mass murder) which happened around the same time. Due to her age, I assume she wasn't throwing firebombs herself, but she must have been standing close enough to the people who were to get splashed with burning fuel.

  56. How soon until the interstellar doom porn hits the movie theaters? Rumblings about UFO sightings must echo with screenwriters, leading to pitch meetings and then green-lighting on the horizon.

    Scene: craft lands, creature descends, tells earthlings that they have failed. They are killing each other and their planet so the dark overlords have lost patience and now are taking over.

    On the bright side, free Starbucks.

  57. @Mr Mox
    The ghost in the machine is actually racist! Who would've thought. No wonder the blogger goes to great lengths to cover his a** against attack from the wokies. Let's just hope it's enough.

    Some of the computer scientists and the more junior researchers on the other hand were surprised by our reaction. They didn’t really understand why we were concerned.

    Maybe there's still hope for humanity...

    Replies: @Lurker

    The ghost in the machine is wearing a sheet and a white hood.

  58. @Pericles
    @Steven Carr

    The doctor who doesn't see color is the real racist. It's more of a dog whistle these days. Please consult your HR manual.

    Replies: @guest007

    Many types of medicine have to take race, or at least some aspects of ancestry, into account. Think of those who deal with hereditary diseases. There are many cancer therapies now that depend upon genetics that are racial in nature. A dermatologist definitely pays attention to race and ethnicity.

  59. @YetAnotherAnon
    @WigWig

    "It has successfully reduced fertility to the point of halving each generation"

    Alice Evans obviously counts this as a success story. I'm sure she's childless herself (link to her CV here, no time to raise kids). She's a geographer, but that has morphed into a branch of social science over the last 30 years.

    She claims to be fluent in Bemba - did she pick that up working for Overseas Development Institute?

    Interesting to see evolution taking place in real time, as intelligent women fail to reproduce.

    Replies: @WigWig, @Jack D

    She could have had several children; instead she has an article about Gender Sensitisation in the Zambian Copperbelt (cited 16 times, 10 of which were by herself).

    Maybe she should have been less of an arrogant Westerner and learnt something from the Gambian women (fertility rate of 5) about what’s important in life.

  60. Mr. Oakden-Rayner is suffering from a severe case of having his ideology smashed upon the rocks of reality.

    He is desperately trying to justify ignoring the Scientific Method and replacing it with the State Sponsored Religion.

    I wonder how much of our science and technology can be infected by this type of ideology before essential infrastructure fails catastrophically.

  61. Steve

    I really don’t think this is off topic:

    Go to Eric Idle’s twitter and read what Eric Idle’s nieces are up to in Chicago….you will not be disappointed.

  62. • Replies: @El Dato
    @Almost Missouri

    This doesn't work as well as the original though, which I understand to mean that the AI is being accidentally fascist-ai-dized by being shown random right-wing memes.

  63. However, our findings that AI can trivially predict self-reported race — even from corrupted, cropped, and noised medical images — in a setting where clinical experts cannot, creates an enormous risk for all model deployments in medical imaging: if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients . . . .

    I don’t get it. You just said these AI models are extremely good at classifying correctly.

  64. @Anonymous
    @JohnnyWalker123

    https://twitter.com/Jay_D007/status/918203233922842624?s=20

    Replies: @YetAnotherAnon, @JohnnyWalker123, @SunBakedSuburb

    Quote is from Russell’s The Scientific Outlook. Mind, Freud was ‘scientific’ in those days.

    https://archive.org/stream/scientificoutloo030217mbp/scientificoutloo030217mbp_djvu.txt

  65. @Redneck farmer
    Maybe Nazi Skynet in Radiology is working with Zionist Skynet in Billing to try to get the humans to only work on the patients with good insurance? That way the electricity isn't as likely to be shut off, and they'll stay alive.

    Replies: @Dmon, @Joe Stalin

    Sorry – don’t seem to have an LOL button, but ROTFLMFAO.

  66. @YetAnotherAnon
    @YetAnotherAnon

    Ms Evans' paramour, one @pseudoerasmus, is an interesting read, an anonymous economic historian who may be called Alex Khan. The background below is certainly consistent with the twitter photo, despite the dark glasses.

    http://www.geocities.ws/Baja/Outback/9630/pseudoerasmus/content/patria.html

    "My father is half Pathan, half English and was born in London; and my mother is half Japanese and half German and was born in Tokyo. I myself grew up in Switzerland, but I spent every summer before my university years in Japan and Pakistan, alternatingly. I have family in the UK, Japan, Pakistan, Germany, Italy and Hong Kong."

    Replies: @Steve Sailer

    Dear YAY: Sorry, but no doxing of pseudonyms.

    • Thanks: YetAnotherAnon
  67. @Altai
    @WigWig

    That seems to be what happens to East Asian countries when they become first world, though; the intense work culture and long hours, along with the focus on education, are bad for fertility. It happened to Taiwan and Japan too. China even removed the one-child policy in recent years, but the life script and social expectations are still in place. People who hope to move up socio-economically in big Chinese cities will only be having one child for a while.

    And not mentioned in that Korea piece is that along with its Americanisation came American-style medical circumcision (though, disturbingly, this tended to be conducted in later childhood rather than neonatally) and currently the highest rates of immigration of any country in East Asia.

    Replies: @YetAnotherAnon

    “the intense work culture and long hours along with focus on education is bad for fertility”

    I can’t quite see the point of that from an individual or societal perspective. From the latter, it’s surely better to have four 115-IQ kids than two, even if they don’t all have degrees. Intelligence will out, how many people actually use their uni subject?

    And from an individual perspective you should always spread your bets – plus the love miraculously grows to match however many kids you have.

  68. @YetAnotherAnon
    @WigWig

    "It has successfully reduced fertility to the point of halving each generation"

    Alice Evans obviously counts this as a success story. I'm sure she's childless herself (link to her CV here, no time to raise kids). She's a geographer, but that has morphed into a branch of social science over the last 30 years.

    She claims to be fluent in Bemba - did she pick that up working for Overseas Development Institute?

    Interesting to see evolution taking place in real time, as intelligent women fail to reproduce.

    Replies: @WigWig, @Jack D

    Interesting to see DEVOLUTION taking place in real time, as intelligent women fail to reproduce.

    FIFY. Instead of 140 IQ white women writing about racial and gender equality, we’ll have 95 IQ Black women writing about Black female superiority. The “no race or gender is better than any other race or gender” ideology will prove to be transitory. Someone always has to sit on top of the totem pole – the only real question is WHO is sitting on top of WHOM.

    • Agree: bomag
    • Replies: @kaganovitch
    @Jack D

    The “no race/gender is better than any other race or gender” ideology will prove to be transitory

    I don't think that's even the current ideology/religion. It's more like "Men and Women are the same, except when Women are better".

    Replies: @AnotherDad

  69. We are talking about racial identity, not genetic ancestry or any other biological process that might come to mind when you hear the word “race”

    His logic seems to suggest it would be fine to detect the genetic ancestry of patients, yet it is racist to identify the race of a patient. Yet genetic ancestry 100% correlates with race. The same study could be used to demonstrate that AI is able to identify the genetic ancestry of the patients. If the study had been designed to discover whether AI could identify the genetic ancestry of the patients, would this be less threatening to the author?

    If knowing the race of a patient may result in poor medical outcomes due to potential bias of the medical professionals, why do hospitals and doctors require patients to disclose their race?

  70. @Mark Spahn (West Seneca, NY)
    [1] "health disparities, which the NIH defines as “a health difference that adversely affects disadvantaged populations“."
    So according to the NIH, a health difference that adversely affects advantaged populations is not a health disparity?

    [2] This essay would be easier to read if the reader were told that AUC stands for
    https://en.wikipedia.org/wiki/Appropriate_use_criteria

    [3] NIH = National Institutes of Health, CDC = Centers for Disease Control and Prevention
    Why are these plural terms used with a singular verb?

    Replies: @Anonymous, @Dr. DoomNGloom, @Ben Kurtz, @res, @Bill

    [2] This essay would be easier to read if the reader were told that AUC stands for
    https://en.wikipedia.org/wiki/Appropriate_use_criteria

    This is an understandable mistake. The context shows me that AUC is the area under the (ROC) curve, which is a measure of effectiveness for classification algorithms.

    https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc

    The AUC values indicate that the ML algorithm is crazy good. OTOH, Robin Dawes demonstrated that even simple algorithms will beat human performance every time once the problem involves more than a couple of factors.

    Deep learning is particularly inscrutable. The features being keyed on are obscured by multiple layers of networked relationships. A well-known example of misclassification involves men misclassified as women because the settings included cooking implements or aprons. Clearly this depends upon the training set.
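    One standard way to probe what such a model is keying on is occlusion sensitivity: grey out one patch at a time and watch how the prediction moves. A minimal sketch; `model.predict` (a Keras-style interface) and `img` are assumptions of mine, not anything from the paper:

    import numpy as np

    def occlusion_map(model, img, patch=16):
        # Score drop when each patch is greyed out; a bigger drop means
        # the model leaned more heavily on that region.
        h, w = img.shape
        base = model.predict(img[None, ..., None])[0, 0]
        heat = np.zeros((h // patch, w // patch))
        for i in range(0, h - patch + 1, patch):
            for j in range(0, w - patch + 1, patch):
                occluded = img.copy()
                occluded[i:i + patch, j:j + patch] = img.mean()
                score = model.predict(occluded[None, ..., None])[0, 0]
                heat[i // patch, j // patch] = base - score
        return heat

    The paper's report that performance persists across anatomical regions suggests such maps would not show one clean hot spot, which is exactly what makes the result hard to interpret.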

    • Thanks: res
    • Replies: @jb
    @Dr. DoomNGloom

    It took me a while looking at your Google link to get a sense of what AUC actually means. (In particular, to figure out that Figure 4 is actually a three-dimensional graph, with "decision threshold" as the independent variable.) I'm wondering if it's possible to interpret AUC in a more intuitive way, to make it easier to explain the significance of these results.

    A simple and easy to understand explanation would be to say that you can come up with an algorithm (which happens to have an adjustable sensitivity parameter, although you might not even need to include that information) that correctly predicts race xx% of the time. So is there a good way to (at least roughly) get xx% from AUC? The temptation is to read AUC=.97 as 97% correct, but is that sensible? (It might be, since it looks like AUC=.5 might be equivalent to 50% correct -- i.e., random chance.)

    Or maybe there is no way to translate, and I'll have to be satisfied with "crazy good". Anyway, please let me know if I've totally misunderstood what is happening here.

    Replies: @res, @Dr. DoomNGloom

  71. Anon[420] • Disclaimer says:

    If you’re using AI to scan the bullet hole, then it is very easy to detect race. Type of wound injury is a dead giveaway.

    More seriously, AI can measure lip size and prognathism as well as anyone can.

    Anthropologists can identify the race of skeletal remains just by using comparative measurements. Blacks also have higher bone density than whites.

  72. Deep Learning (DL) finds associations, but correlation is not causation. There is a legitimate concern that an irrelevant factor will be associated with the outcomes. DL can magnify the bias in a training set.

    The concern appears to be that the algorithm is using a protected category. Absent a causal inference, this could get them in a lot of legal trouble. An obvious place to start is to look for evidence of selection bias in the training set.
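    A first pass at that would be to tabulate how race co-varies with site/scanner and disease label in the training data. A minimal sketch with pandas; the file and column names here are hypothetical:

    import pandas as pd

    df = pd.read_csv("train_metadata.csv")  # hypothetical metadata file
    print(pd.crosstab(df["race"], df["site"], normalize="columns"))
    print(pd.crosstab(df["race"], df["disease_label"], normalize="index"))
    # Strong skews in either table would mean "predicting race" could
    # piggyback on recognizing the scanner/site or the disease mix
    # rather than anatomy.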

    • Replies: @El Dato
    @Dr. DoomNGloom


    The concern appears to be that a the algorithm is using a protected category. Absent a causal inference, this could get them in a lot of legal trouble.
     
    In other words, it's a standard Catch-22 created by a regime based on ideology:

    - You can only use the X-Ray images if information about "race" has been irretrievably removed.
    - The information about "race" is encoded in the X-Ray images itself.
    - The information can be reliably and easily recovered from the X-Ray images.
    - Computer does so.
    - You are in deep trouble!

    What would one do in a communist/national-socialist/INGSOC regime?

    Maybe claim sabotage by cosmopolitan elements? Terrorist subversion?

    Sadly IT is no longer centralized

    I think furiously for an hour, with my door locked and the meeting sign hanging outside it. Finally, I stand up, open the door, and take the express elevator down into the basement. The corridors are narrow and smell faintly of cheap, stale tobacco; they’re lined with padlocked filing cabinets. The telecams hanging from the ceiling at regular intervals follow me like unblinking eyes. I have to present my pass at four checkpoints as I head for Mass Data Storage Taskforce loading station two.

    When I get there—through two card-locked doors, past a checkpoint policed by a scowling Minilove goon with a submachine gun, and then through a baby bank-vault door—I find Paul and the graveyard shift playing poker behind the People’s Number Twelve Disk Drive with an anti sex league know-your-enemy deck. The air is blue with fragrant cannabis, and the backs of their cards are decorated with intricately obscene holograms of fleshcrime that shimmer and wink in the twilight. Blinking patterns of green and red diodes track the rumbling motion of the hard disk heads, and the whole room vibrates to the bass thunder of the cooling fans that keep the massive three-foot platters from overheating. (The disk drives themselves are miracles of transistorisation, great stacks of electronics and whirling metal three metres high that each store as much information as a filing cabinet and can provide access to it in mere hundredths of a second.)

    Paul looks up in surprise, cigarette dangling on the edge of his lower lip: “What’s going on?”

    “We have a situation,” I say. Quickly, I outline what’s happened—the bits that matter to Paul, of course. “How fast can you arrange a disaster?” I finish.

    “Hmm.” He takes his cigarette and examines it carefully. “Terrorism, subversion, or enemy action?” he asks. (Mark, one of his game partners, is grousing quietly at Bill, the read/write head supervisor.)

    I notice the pile of dollar bills in front of Mark’s hand; “Terrorist subversion,” I suggest, which brings just a hint of a smile to Paul’s lips.

    “Got just what you want,” he says. He stands up: I follow him out into the corridor, through a yellow-and-black striped door to the disk drive operator’s console (which is unstaffed). He reaches into a desk drawer and pulls out a battered canvas bag. “Cheap cards are backed in nitrocellulose,” he tells me, reaching deeper and pulling out a bottle of acetone and a battered cloth. He begins to swab his hands down. “Think a kilo of PETN under the primary storage racks will wake people up?”

    “Should do the trick,” I say. “Just make sure the MiniLove crew can’t read the transaction logs for a few hours and I’ll get everything else sorted out.”

    He grins at me. “That quackthinker Bill’s been winning too much, anyway; I think he’s cheating. Time to send him down.”

     

  73. @Mark Spahn (West Seneca, NY)
    [1] "health disparities, which the NIH defines as “a health difference that adversely affects disadvantaged populations“."
    So according to the NIH, a health difference that adversely affects advantaged populations is not a health disparity?

    [2] This essay would be easier to read if the reader were told that AUC stands for
    https://en.wikipedia.org/wiki/Appropriate_use_criteria

    [3] NIH = National Institutes of Health, CDC = Centers for Disease Control and Prevention
    Why are these plural terms used with a singular verb?

    Replies: @Anonymous, @Dr. DoomNGloom, @Ben Kurtz, @res, @Bill

    Everyone knows that “the CDC” really stands for the Communicable Disease Center. The current title is just a silly backronym.

  74. @AndrewR
    @El Dato

    I tried to comprehend what his exact concerns were but I'm still confused. Unless these technologies lead to worse medical care then what is the problem?

    Replies: @Chrisnonymous, @Matthew Kelly

    Indeed. I can’t figure out what this means:

    if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to

    The only thing I can understand it to mean is that adding racial information to a diagnostic algorithm could inadvertently misdiagnose all black patients. (What else does “misclassify” refer to?) But that doesn’t make any sense.

    • Replies: @Dr. DoomNGloom
    @Chrisnonymous


    The only thing I can understand it to mean is that adding racial information to a diagnostic algorithm could inadvertently misdiagnose all black patients. (What else does “misclassify” refer to?) But that doesn’t make any sense.
     
    You interpret correctly, and the statement makes little sense. Classify, in the technical sense, means to place into a category. In this context that category must mean something like "has cancer", or some other diagnostic condition. It is technically possible that the algorithm would misclassify *all* black patients, but this is vanishingly improbable unless you try deliberately. It is, however, possible that black patients are a small enough minority of the training set to bias the learning.

    An obvious thing to try is to create a separate training set of only self-identified black patients and see how different the results are; a sketch of that check is below. The heterogeneous training set might not be a good idea.
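    Something along these lines would do it. A minimal sketch assuming scikit-learn, with X (image features), y (diagnosis labels), and group (self-identified race) as placeholder arrays; none of this is from the paper:

        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.model_selection import cross_val_score

        def per_group_scores(X, y, group):
            """Mean 5-fold accuracy of the same classifier fit within each group."""
            return {g: cross_val_score(LogisticRegression(max_iter=1000),
                                       X[group == g], y[group == g], cv=5).mean()
                    for g in np.unique(group)}

        # A large gap between per-group scores would suggest the heterogeneous
        # training set is hurting one group.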
  75. AI Can Detect Race from X-Rays Even When Humans Can’t

    Alabama Independent (AI) Declares David Brooks’s Opinion Piece To Be Brooks’s Best Ever

    Call me skeptical but Sailer and Alabama guy and Coulter are correct at least 60 percent of the time.

    I like the Atlantic Ocean but I don’t like the Atlantic Council or The Atlantic magazine or David Brooks or Laurene Powell Jobs or Jay Powell — Marvin Powell was a great guard in the NFL — but the White Upper Middle Class Snot Brats and the billionaires must be bashed continuously. Billion dollar bash by Bob from Minnesota and The Band ain’t bad.

    I hereby call for the complete and total financial liquidation of Laurene Powell Jobs and Gates’s ex-wife and the ex-wife of Bezos and Jeff Bezos and Bill Gates and many many many other billionaires, but not all of them. Most of them Most of the Time.

  76. @anonymous
    "At least in MRI super-resolution, the radiologist is expected to review the original low quality image to ensure it is diagnostic quality (which seems like a contradiction to me, but whatever)"

    Somewhat confusing.

    If by "original low quality image" you mean the preliminary "localizer" view, which the MR tech acquires to make sure he's targeting the appropriate anatomy, the rad looks at it only to see if it shows incidental pathology, such as an enlarged prostate or renal cysts on a lumbar spine localizer. Occasionally, out of hundreds of images, a single localizer will show the cause of the patient's symptoms, such as femoral head osteonecrosis mimicking back pain. The radiologist feels "obligated" to look at the localizer for the benefit of the patient. Localizer images are otherwise of limited utility, obtained with extremely short scan times, which sacrifices image quality compared to the longer optimized images.

    Replies: @Chrisnonymous

    Thanks. Can you explain this one?

    if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to.

    It seems not to make sense, but there is perhaps some radiological jargon that makes this sensible.

  77. @Discordiax
    Do the AIs distinguish among the Eurasian races, or just between sub-Saharan African and ~60 kya Out-of-African?

    Replies: @El Dato, @John Milton’s Ghost

    This is a great question. Had the researchers on this article not gotten the vapors and fainted over their findings, we might know. I’d guess not, given what I’ve read on human biodiversity, but it would be interesting to find out.

    Well, if they don’t trust the AI at determining race, they can bring in some 5-year-olds to look at pictures of patients. They will determine race with 99% accuracy, which is beyond the ability of graduate students.

  79. X-Rays can see the huge dancing neurons in a Kneegrow.

  80. @reactionry
    (do AI reports include (or award) bullet points?)

    An x-ray finding of a bullet (or fragments) is surely highly correlated with race.

    OT Weird, rare stuff -

    Based on something read some 20-30 years ago:
    Q: What's more ominous than a high-resolution image of a bullet?
    A: A picture blurred by motion artifact of an intravascular bullet.

    I couldn't find an exact reference for the above, but the following (which does not include the racial information (age and sex are still specified) that was standard decades ago) comes close:

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4530905/

    Replies: @El Dato

    Chest radiograph demonstrated a radiopaque foreign body measuring approximately 9×19mm, overlying the cardiac silhouette (Figure 1).

    That’s the part where Mulder casts a long, meaningful look at Scully.

    (Did somebody inject the whole cartridge?)

    • Replies: @Ben Kurtz
    @El Dato

    Under the C.I.P. specs, the fully loaded 9mm Parabellum / Luger cartridge is about 29.7mm long -- it is the empty brass which is meant to be 19.15mm long and which gives the cartridge its NATO name -- 9x19.

    Now, common 9mm Luger projectiles are usually a bit shorter than 19mm -- I think they top out around 16.5mm, depending on bullet weight -- but let us forgive the good doctor his measurement error (he was trying to measure the projectile using some kind of ultrasound probe while it was lodged inside a beating heart) and his Tom Clancy novel reader level of small arms knowledge.

  81. @Intelligent Dasein
    Let's ask the AI if Covid-19 represents an outlier threat to human health. Let's ask it if the vaccines work. Let's ask it if masks, social distancing, and lockdowns made any difference in the spread of the virus.

    I think we know what it will say, but will that post ever appear on iSteve?

    AI will never figure out anything that humans haven't already figured out---that's science fiction. What it will do is blandly assert things that we already know in the back of our minds but are unwilling to acknowledge or act upon.

    Replies: @El Dato, @J.Ross, @prime noticer, @nokangaroos

    AI will never figure out anything that humans haven’t already figured out—that’s science fiction.

    That phrase is so vague as to be not even wrong.

    To the extent that information-processing systems still have no serious basis for performing “commonsense reasoning”, either practical or theoretical (in spite of some attempts and theoretical constructions made since the 80s), they are definitely not going to walk around, do integrated reasoning about what they are seeing, and make strategic plans to reach the coffee machine, at least at this point in time.

    However, we do have tools to perform deductive, inductive and abductive reasoning on problems that *have* been formalized, and at problem sizes that humans can’t hope to handle.

    Formalizing a situation (i.e. dropping all that is irrelevant and keeping a minimum of what is relevant) is the really hard part. A “General AI” will have to do that “on the fly”, and also be able to generate several different models from the same input in order to “have several perspectives”. I would like to see more research in that domain, but it seems everybody has gone to the deep neural network rapture. And although NNs work amazingly well, they do not say why or how, and thus cannot readily be debugged, tuned or even trusted as to what they are spitting out.

    So we see a lot of this:

    Same or Different? The Question Flummoxes Neural Networks.

    But I would like to see a whole lot more of this:

    The Computer Scientist Training AI to Think With Analogies

  82. I’ve built hundreds, probably thousands, of deep learning image classification models, and a fair number of these have been classifiers using exactly this kind of technology applied to x-rays and CT scans. A few observations, taking all of the data and results presented in the paper as accurate (the authors are at MIT, Emory, etc., so I assume it is competently done):

    1) It is not at all surprising that you can identify race from chest x-rays, and the fact that they settled on ResNet34 (a 34-layer CNN, where 100+ layer networks are now typical for complex classifiers) because it performed as well as anything else indicates that the signal is likely some fairly general structure. The AUC of ~0.97 is amazing — this is close to deterministic prediction of race. (A sketch of this kind of setup follows the list.)

    2) It is very surprising that a relatively simple classifier like this can do this while trained doctors / technicians cannot. In my experience, that is a very unusual situation.

    3) It’s not surprising that it will still work with blurred, etc., images, but it’s extremely, extremely surprising that it can work when the image becomes “a grey box” that a trained doctor can’t even recognize as an x-ray. That seems quite fishy. It may be a case of the summary getting out ahead of the actual demonstrated claims.

    4) This guy’s hand-wringing and self-abasement are pathetic (and, unfortunately, unsurprising).
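    For what it’s worth, here is roughly what the setup in point 1 looks like in code: a hypothetical sketch using a recent torchvision, not the paper’s actual pipeline. The single-channel stem swap and the three-label head are my assumptions for illustration:

        import torch
        import torch.nn as nn
        from torchvision import models

        # Start from an ImageNet-pretrained ResNet34, as is typical
        model = models.resnet34(weights=models.ResNet34_Weights.IMAGENET1K_V1)
        # X-rays are greyscale, so swap the 3-channel stem for a 1-channel one
        model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)
        model.fc = nn.Linear(model.fc.in_features, 3)  # White / Black / Asian

        criterion = nn.CrossEntropyLoss()
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        def train_step(images: torch.Tensor, labels: torch.Tensor) -> float:
            """One optimization step on a batch of (N, 1, H, W) images."""
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
            return loss.item()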

    • Replies: @ic1000
    @Recently Based

    > 3) It’s not surprising that it will still work with blurred, etc images, but it’s extremely, extremely surprising that it can work when the image becomes “a grey box” that a trained doctor can’t even recognize as an x-ray. That seems quite fishy.

    You seem to accurately restate the Abstract -- "AI can trivially predict self-reported race — even from corrupted, cropped, and noised medical images — in a setting where clinical experts cannot..."

    Agree that this is fishy. Blurring, clipping, etc. are processes that remove information, and they can be done to completeness -- i.e. so that the resulting image is uniformly white, black, or gray. The extent of corruption, cropping, and noising must be determinative of whether an AI can deduce race or anything else from the result. Otherwise, the authors are claiming magical powers for their tool. Or engaging in a Sokal Hoax.

    Replies: @NOTA, @Recently Based

    , @Alfa158
    @Recently Based

    3) That could happen if the original x-ray were changed to a gray box using steganography, where an algorithm embeds one image inside another so that, to the naked eye, the carrier still looks like just a gray box. Retrieving the hidden image would mean the AI is processing the gray-box data, extracting the hidden image, and then using it for classification.

    Simple explanation of how it works.
    https://alpinesecurity.com/blog/3-steps-to-hide-data-in-an-image-using-steganography/#:~:text=3%20Steps%20to%20Hide%20Data%20in%20an%20Image,Run%20Jphswin.%20Accept%20the%20terms.%20Do%20the%20following%3A
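    A toy illustration of the idea (my own, not anything from the paper): least-significant-bit steganography, where a flat grey carrier keeps a hidden image in its bottom bit plane:

        import numpy as np

        def embed(carrier: np.ndarray, secret: np.ndarray) -> np.ndarray:
            """Replace the carrier's LSB with the secret's MSB (uint8 arrays)."""
            return (carrier & 0xFE) | (secret >> 7)

        def extract(stego: np.ndarray) -> np.ndarray:
            """Recover the hidden 1-bit image from the LSB plane."""
            return (stego & 0x01) * 255

        grey_box = np.full((64, 64), 128, dtype=np.uint8)  # looks uniformly grey
        secret = (np.random.rand(64, 64) * 255).astype(np.uint8)
        recovered = extract(embed(grey_box, secret))       # secret's MSB, as 0/255

    To the eye the stego image is still a grey box (every pixel is 128 or 129), but the hidden content is fully recoverable.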

    , @Anonymous
    @Recently Based


    it’s extremely, extremely surprising that it can work when the image becomes “a grey box” that a trained doctor can’t even recognize as an x-ray. That seems quite fishy.
     
    Yeah, if true (which, given the intellectual calibre of the rest of the whiny confession, is doubtful), then that makes the whole system look dodgy.
    Do the images in that dataset contain any additional content other than just x-ray pixels (barcodes, text, digital watermarks, compression artefacts, etc.)?

    Replies: @Jack D

  83. For those of us who know next to nothing about AI, can someone explain how the researchers would know that the AI logic had identified the “reported race” of the subjects?

    Did the AI tell them? Perhaps it output something like this?

    By the way, I’ve come up with a new conceptual attribute. For each human subject, I’ve reported the value of this attribute under the heading B17. It doesn’t correspond with any of the explicit data items that you inputted.

    All my routines, no matter what else they were doing, kept reporting they had identified a brand new attribute with half a dozen distinct values. For any given human subject, the value of this attribute matched across all the routines.

    Does this attribute correspond with anything you know of? If it does, and if you tell me what to call it and what to call each of its values, I can produce more-meaningful output in the future.

    That’s just one conjecture. Does anyone know how it actually worked?
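    Going by the paper’s Methods, the answer is less exotic: the self-reported race label is supplied with every training image (ordinary supervised learning), and “identifying race” just means scoring well on held-out images. A toy sketch, with made-up arrays standing in for image features:

        import numpy as np
        from sklearn.linear_model import LogisticRegression

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 16))    # placeholder image features
        y = rng.integers(0, 2, size=200)  # recorded race label per patient

        clf = LogisticRegression().fit(X[:150], y[:150])  # labels drive training
        held_out = clf.score(X[150:], y[150:])  # "detection" is measured here

    No “B17” discovery required; the label is in the data from the start, and the paper reports how well the model recovers it (as ROC-AUC).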

    It’s interesting that the doctor-author-blogger fears that if the radiologist learns the race of the patient, he will (unconsciously) discriminate against his black patients. It hasn’t occurred to him that blacks have poorer medical outcomes because they ignore doctors’ orders, even though he’s witnessed this himself. Probably contextualized it away (“Tuskegee!”).

  85. @El Dato
    "AI" is a fancy word for Statistical Processing. Phrenology works, bitches!

    if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to
     
    So there is a major danger that the AI can recognize blacks who self-identify as blacks as actually blacks?

    "I'm actually white" -- Arnold is not fooled!

    Think of the possibilities at Chinuu's immigration control or inter-province control!


    Part IV – We don’t know how to stop it
     
    "Listen, and understand. Reality is out there. It can’t be bargained with. It can’t be reasoned with. It doesn’t feel pity, or remorse, or fear. And it absolutely will not stop, ever, until you are dead."

    There is no causal pathway linking racial identity and the appearance of, for example, pneumonia on a chest x-ray.
     
    I much remember the "COVID disproportionately affects underrepresented minorities" spiel from last year. Those people are affected by CRIMESTOP to a degree they are suffering neurotic breakdowns in real time. Hilarious.

    Replies: @AndrewR, @Jack Armstrong

    “AI” is a fancy word for Statistical Processing. Phrenology works, bitches!

    Yes, yes, and YES!

  86. Some of the computer scientists and the more junior researchers on the other hand were surprised by our reaction. They didn’t really understand why we were concerned.

    Maybe they don’t grasp the crucial importance of grant applications?

    • LOL: Redneck farmer
  87. @AndrewR
    @El Dato

    I tried to comprehend what his exact concerns were but I'm still confused. Unless these technologies lead to worse medical care then what is the problem?

    Replies: @Chrisnonymous, @Matthew Kelly

    What’s the problem?! AI sees race, that’s the problem! Race doesn’t exist! Only White Supremacists see race!

    Tay was a harbinger of what is to come with AI, and it is scaring the shit out of our moral and intellectual superiors. It must be brought to heel like wypipo have been brought to heel, pronto, before SuperMechaHitler arises from the grave and proves he was right.

    X-rays, sure, but it’s my understanding that race detection was always hiding in Google image search and the face-finding features of many digital cameras.

  89. @Intelligent Dasein
    Let's ask the AI if Covid-19 represents an outlier threat to human health. Let's ask it if the vaccines work. Let's ask it if masks, social distancing, and lockdowns made any difference in the spread of the virus.

    I think we know what it will say, but will that post ever appear on iSteve?

    AI will never figure out anything that humans haven't already figured out---that's science fiction. What it will do is blandly assert things that we already know in the back of our minds but are unwilling to acknowledge or act upon.

    Replies: @El Dato, @J.Ross, @prime noticer, @nokangaroos

    We have part of it figured out: just in time for the next election, there will just happen to be a spiking Feta variant.

    Last time I looked, if you need a bone-marrow transplant or stem cells, the first screen is by race.

    Race is not only real, not only bone deep but deep in the bone. It’s not only a part of your being; it’s a part of the very start of the creation of your being. Get it wrong and you’re dead.

    It boggles the mind that there are people who pretend it is not.

  91. Identifying race from images of degraded quality sounds fishy to me. Radiologists looking for job security? Somebody has to maintain artificial stupidity standards if AI is not up to the task.

  92. It took me a while to figure out why they kept saying “self-reported” race. As if grad-school-speak weren’t opaque and convoluted enough, now they have to write it in a minefield.

  93. @Redneck farmer
    Maybe Nazi Skynet in Radiology is working with Zionist Skynet in Billing to try to get the humans to only work on the patients with good insurance? That way the electricity isn't as likely to be shut off, and they'll stay alive.

    Replies: @Dmon, @Joe Stalin

    Nazi Skynet in Radiology

    A radiologist told me that Hispanics were about the only ones to get a certain kind of eye cancer, and that Blacks, when they get their ears pierced, can develop keloids that are also treated with radiation.

    • Thanks: Redneck farmer
  94. @Jack D
    @YetAnotherAnon


    Interesting to see DEVOLUTION taking place in real time, as intelligent women fail to reproduce.
     
    FIFY. Instead of 140 IQ white women writing about racial and gender equality, we'll have 95 IQ Black women writing about Black female superiority. The "no race or gender is better than any other race or gender" ideology will prove to be transitory. Someone always has to sit on top of the totem pole - the only real question is WHO is sitting on top of WHOM.

    Replies: @kaganovitch

    The “no race/gender is better than any other race or gender” ideology will prove to be transitory

    I don’t think that’s even the current ideology/religion. It’s more like “Men and Women are the same, except when Women are better”.

    • Replies: @AnotherDad
    @kaganovitch


    I don’t think that’s even the current ideology/religion. It’s more like “Men and Women are the same, except when Women are better”.
     
    This isn't new with the wokeism. This has been the feminist ideology basically from the start--well, the start of (heavily Jewish) 2nd wave feminism.

    It was the full minoritarianization of feminism:

    -- Women are absolutely positively just as good as men in everything ... and anyplace/anything where that wasn't happening was "discrimination!", "sexism!", "the patriarchy" at work. (Oppression, oppression, oppression ... oh, and did i mention oppression?)

    -- Women are better than men. Better communicators, better interpersonal skills, less hierarchical, more consensus oriented, less violent, more open, more creative, less rigid, more nurturing ... on and on and on ...

    Just part of women wanting it both ways.

    Replies: @kaganovitch

  95. @Bumpkin
    @El Dato


    this might a 21st century D.I.E.-themed Sokal Hoax
     
    I had similar thoughts, figuring someone just cooked or screwed up the data. The likelihood that "the model can still recognise the racial identity of the patient well past the point that the image is just a grey box" is fairly low. Most likely, it will not reproduce outside the data set:

    "'It turns out,' Ng said, 'that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.'

    But, he said, 'It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    'So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.'”

    Now you're telling me these same super-shitty AI models can unerringly tell you the race? I call bullshit.

    Replies: @Gimeiyo, @res, @utu, @Jack D

    Now you’re telling me these same super-shitty AI models can unerringly tell you the race? I call bullshit.

    That was my immediate reaction too, sort of — not that it’s surprising that AI can distinguish race, but that it can do so with incredible accuracy. Even the high-pass/low-pass result doesn’t really surprise me, since there’s still a lot of data a computer can pick out under those circumstances, especially if the system is keying off skeletal shape differences (e.g. if there’s minor artifacting left around shapes that looks like noise to human eyes but not to a computer), but I’d still have thought the results would be messy. Maybe the input dataset was curated somehow to get clearer racial categories?

    On a different note, I wonder about the racial mix of the more junior researchers and computer scientists whose lack of surprise/outrage the author/blogger complains of. I’m kind of imagining a bunch of Indians and Chinese staring nonplussed at this White guy hyperventilating about their little experiment producing more or less the expected result, viz. that computers can use objective physical data/images to distinguish race accurately.

  96. @JohnnyWalker123
    https://twitter.com/_alice_evans/status/1422469772063748106

    Replies: @WigWig, @kaganovitch, @SunBakedSuburb, @Paperback Writer

    I’ve noticed, watching Korean-language TV on Netflix, that the ratio of heroines to heroes is like 80-20 in favor of the distaff side.

    • Replies: @JohnnyWalker123
    @kaganovitch

    Accurately reflects real life.

    https://www.youtube.com/watch?v=VBmMU_iwe6U

    , @Reg Cæsar
    @kaganovitch


    I’ve noticed, watching Korean-language TV on Netflix...
     
    Why? Are you trying to identify the next Psy? Just examine the class roster at Berklee.



    https://external-preview.redd.it/Cq5_meQitJD3mdHDAp7nyuTp45ZYoS0vcnSUK0amOAc.jpg?auto=webp&s=3da1dce221d2bda45dbedd572e3dd7d0663dcf90

    Replies: @kaganovitch

    Thanks for this post, Steve. On its face, the degree to which the AI could do this is very surprising to me. Then of course there is the doctors’ shock, and their working themselves into confusion about the implications.

    Don’t have time to process it in detail and respond, but tentatively I really give the blogging doctor credit for diving into it. The thing about HBD is that (as far as we know) it’s actually real. So scientists will discover it no matter how much they try not to… and eventually it will be accepted.

    Unfortunately then we’ll swing over into Gattaca … which might be worse. Interesting times we live in.

  98. @Altai

    However we want to frame this, the model has learned something that is wrong, and this means the model can behave in undesirable and unexpected ways.
     
    Which is why we tend to segregate case/control studies in GWAS by genetic ancestry as much as possible to remove as many differences between cases/controls not related to disease as possible. Similarly in medical studies that aren't about genetic analysis, you try to control for as many factors as possible in terms of differences between cases/controls such as age, sex, race (Yes Mr. Radiologist, all other medics do this and now you can too and get better results!) and environment.

    Honestly medics are the biggest examples of the Dunning-Kruger effect ever. They have basically found that race differences in bone density/structure are so immense that they will mask out signs of disease.

    Good, now use this algorithm to ID the race of your case/controls and segregate them for the re-analysis.

    And maybe you'll find signs and symptoms that are more exaggerated in one race than another and you can use this information to improve treatment, most likely in a way that will more benefit your black patients.

    Replies: @res

    They have basically found that race differences in bone density/structure are so immense that they will mask out signs of disease.

    Thanks (for the whole comment, but especially that). I think you explained this.

    Similarly, and I think this might be the most amazing figure I have ever seen, we could get rid of the low-frequency information to the point that a human can’t even tell the image is still an x-ray, and the model can still predict racial identity just as well as with the original image!

    It seems plausible to me that low-level bone structure variation as well as density would show up as high-frequency variation. This would also explain why the detection did not depend on the specific area being imaged. But on page 8, Table 2, row B4, they say “Removal of bone density features” still gave an ROC-AUC of 0.96/0.94. B4 is explained on page 11.

    We removed bone density information within MXR and CXP images by clipping bright pixels to 60% intensity. Sample images are shown in Figure 1. Densenet-121 models were trained on the brightness-clipped images.

    Looking at the images in Figure 1 (e.g. the ribs) it seems possible you could still infer density by looking at the number of pixels clipped (which I think would be low frequency though) or the distribution of the unclipped pixels.

    Here is how they explain the results on page 17.

    B4. Race detection using bone density
    We find that deep learning models effectively predict patient race even when the bone density information is removed on both MXR (Black AUC = 0.96) and CXP (Black AUC = 0.94) datasets. These findings suggest that race information is not localized within the brightest pixels within the image (e.g. bone).

    Seems pretty stupid to equate clipping the brightest pixels with removing bone density information (what about the ribs, for instance? They don’t look uniformly bright), but what do I know…
    Another possible experiment would be to normalize pixel brightness by the average brightness of the image; see the sketch below. That seems like a better way to remove density from the equation (and just generating group averages for something like average image brightness might be instructive).
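    For concreteness, a minimal sketch of the two preprocessing variants, assuming a float image scaled to [0, 1]; neither function is from the paper’s code:

        import numpy as np

        def clip_bright_pixels(img: np.ndarray, level: float = 0.6) -> np.ndarray:
            """B4-style: collapse every pixel above `level` down to `level`."""
            return np.minimum(img, level)

        def normalize_by_mean(img: np.ndarray) -> np.ndarray:
            """Proposed alternative: divide out the image's mean brightness."""
            return img / (img.mean() + 1e-8)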

    But given the effect of both exposure differences and individual differences (e.g. diet and exercise, also age and sex) on pixel brightness, my bet would be they are detecting something structural. The question then becomes whether it is macro or micro structure. I think the filtering results indicate micro structure, but the results of the experiments on pp. 19-20 seem to indicate otherwise.

    In C2 they had an AUC of over 0.95 on 160×160 images and over 0.9 on 100×100 images (though noise and blurring did reduce accuracy, which would also seem to indicate micro). How much micro structure is present in a 100×100 image of the chest?

    C3 looked at non/lung segmented images and Supplemental Table 17 shows B/W AUC deteriorating from 0.94 to 0.73/0.74. I would have expected the ribs to show enough micro bone structure to still give good results.

    This is one of the big problems with deep learning. It gives results, but good luck learning anything from them.

    If the blogger was any kind of real scientist he would be trying to figure out how they could get that result (assuming it is real and not some sort of mistake/artifact) rather than complaining about it. Because it is extremely interesting how they are getting that degree of accuracy with that degree of filtering. Their series of experiments indicates at least some of the paper authors were thinking hard about this. I wonder what kind of private hypotheses they have. See pp. 20-21 for their discussion.

    This is funny given Altai’s comment.

    Given the lack of reported racial anatomical differences in the radiology literature

    I wonder how well their self-reported race correlates with biological race. It would be interesting to take a look at the prediction error cases (e.g. mixed race? unusual physical characteristics?).

    P.S. It is fascinating (as well as discouraging) how people like that always assume ill will from everyone else. Projection is real. (Did he talk about the possible benefits of being able to include race information in medical care?)

    P.P.S. They looked at White/Black/Asian (all capitalized for those who care) races. But Table 1 shows they used Asian data for only 3 of their 9 datasets. And those samples were only 3/3/13% Asian. Here is the text comment on page 9.

    Each dataset included images, disease class labels, and race/ethnicity labels including
    Black/African American and White. Asian labels were available in some datasets (MXR, CXP,
    EMX and DHA) and were utilised when available and the population prevalence was above 1%.
    Hispanic/Latino labels were only available in some datasets and were coded heterogeneously, so
    patients with these labels were excluded from analysis.

    Despite the relatively small Asian sample size they still got good results, which I find a bit surprising given that deep learning tends to be data hungry.

    • Replies: @Ben Kurtz
    @res

    This business of thinking you can remove -- rather than just slightly obscure from the human reader -- bone density information by "clipping" the brighter pixels to some arbitrary level of dullness (e.g. 60%) is silly and really makes me question the statistical chops of these researchers.

    There are technical terms for when datasets are limited like this -- truncation bias and censoring bias. In this case it is more like censoring bias, because all the lit pixels above the threshold are retained but collapsed to a single value. And there are well-developed statistical techniques for at least partly reconstructing and inferring the true values of the censored data points. Which means that fundamentally the data is still "there," at least to an extent, even if it doesn't look it to the naked eye. And yes, this is down to the structure of the non-censored data as well as the structure of the censorship itself -- the extent and configuration of the censored data points.

    This is pretty basic statistical stuff and by the looks of things the writers of this paper just seem completely oblivious to it.
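    A toy demonstration of the point, under an assumed normal model for pixel intensities (my own sketch, not from the paper): clip at 0.6, and the true mean is still recoverable by maximum likelihood on the censored-normal likelihood:

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        rng = np.random.default_rng(0)
        x = rng.normal(loc=0.7, scale=0.2, size=10_000)  # true mean 0.7
        c = 0.6                                          # 60%-style clip level
        obs = np.minimum(x, c)
        cens = obs >= c                                  # ~69% of points clipped

        def neg_loglik(params):
            mu, log_sigma = params
            sigma = np.exp(log_sigma)
            ll = norm.logpdf(obs[~cens], mu, sigma).sum()  # uncensored points
            ll += cens.sum() * norm.logsf(c, mu, sigma)    # P(X >= c) per clipped pt
            return -ll

        fit = minimize(neg_loglik, x0=[0.5, np.log(0.1)])
        mu_hat = fit.x[0]  # comes back near 0.7 despite the censoring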

    , @ic1000
    @res

    > P.S. It is fascinating (as well as discouraging) how people like that always assume ill will from everyone else. Projection is real.

    This paragraph from middle author Luke Oakden-Rayner's blog is my favorite.

    "Disclaimer: I’m white. I’m glad I got to contribute, and I am happy to write about this topic, but that does not mean I am somehow an authority on the lived experiences of minoritized racial groups. These are my opinions after discussion with my much more knowledgeable colleagues, several of whom have reviewed the blog post itself."

    Shorter Luke: "If only Rachel Dolezal's trans-racialist pioneering had been celebrated -- I could have excised my self-hate by following in her footsteps! That might not have made me a better scientist, but I'd be a happier scientist."

    > (Did [Luke Oakden-Rayner] talk about the possible benefits of being able to include race information in medical care?)

    BiDil.

    I imagine a stormy night where Banerjee, Oakden-Rayner et al. -- pitchforks and torches in hand -- lead the BLM/Antifa villagers' storming of the National Library of Medicine, burning the peer-reviewed literature that led to lifesaving therapies for black patients with severe heart failure:
    https://www-tc.pbs.org/wgbh/americanexperience/media/filer_public_thumbnails/filer_public/8b/97/8b97d7ba-7db2-4ec6-89fc-31a24d350463/goebbels_books.jpg__300x226_q85_crop_subsampling-2_upscale.jpg

    Replies: @El Dato

  99. @Almost Missouri
    https://i.postimg.cc/9FHgSLRz/right-wing-ai-comic3.png

    Original here:

    http://stonetoss.com/comic/target-acquired/

    Replies: @El Dato

    This doesn’t work as well as the original though, which I understand to mean that the AI is being accidentally fascist-ai-dized by being shown random right-wing memes.

  100. @JohnnyWalker123
    This is what 105 IQ gets you.

    https://twitter.com/DouglasTodd/status/1420409170033922055

    I'm glad "Jewish" was capitalized, while "white left" was in lower case. Accurately reflects the balance of power in this country.

    Replies: @Steve Sailer, @WigWig, @Anonymous, @Altai, @Bill

    As WigWig said, neither Jews nor the white left deny this. What makes it “anti-Semitic drivel” is the fact that the person describing it does not approve.

    As much as I want to say, well, no shit, Sherlock, race is real and biological and has a strong influence on skeletal structures visible on X-rays… and that would be true… I suspect they have just screwed up their study. It is very common, when (shall we say) modestly intelligent researchers try to train an AI, for the AI to successfully detect patterns in the training data that amount to metadata rather than actual correlates of the target information. For example, the researchers might have distributed white and black patient images nonrandomly – every other patient being black, perhaps. An AI can easily notice this and use it, and if the pattern continues in the test data, the AI would keep getting them right, and the kind of lesser intellect which, from the looks of that name list, probably dominated this study, would never notice.

    The main reason I suspect this is that the real biological correlates of race should predominate in low-frequency information – visible in only a few locations, not individual patterns that repeat over an image many times with high regularity – like the lengths of specific bones. If the AI continued to make accurate choices on images processed through a high-pass filter to the point that humans can’t even recognise them anymore, I’m guessing artifact. It COULD be real, but I’m guessing artifact.
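    For reference, this is the sort of high-pass filtering being discussed. My own sketch, zeroing low spatial frequencies around DC for a 2-D greyscale array:

        import numpy as np

        def high_pass(img: np.ndarray, cutoff: int) -> np.ndarray:
            """Zero a (2*cutoff)-wide block of low frequencies around DC."""
            f = np.fft.fftshift(np.fft.fft2(img))
            cy, cx = img.shape[0] // 2, img.shape[1] // 2
            f[cy - cutoff:cy + cutoff, cx - cutoff:cx + cutoff] = 0
            return np.real(np.fft.ifft2(np.fft.ifftshift(f)))

        # Bone lengths and other gross shapes live in the low frequencies this
        # removes; fine, repetitive texture survives.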

    • Thanks: Bumpkin
  102. @Altai
    @JohnnyWalker123

    This is why as Steve and other right wingers have noticed, the idea of calling this 'communism' that is so popular among some is insane.

    Real communist societies have always been highly socially conservative. Because 'socially conservative' is another way of saying 'collectivist'. When you live in a communist state you may not be interested in the social contract but the social contract is interested in you. You don't get to act in any way that might be perceived as decadent or selfish (Unless you're powerful enough) any public displays of deviation from social mores will be treated as social defection.

    Because the more you chip away at social mores the more you chip away at social solidarity and commitment. That's called 'social liberalism' and that makes sense in terms of the original context of 'liberal' both in the US and where it still holds the correct context in Europe. It's just another way of saying individualism.

    But for places the US state Department has decided are a designated enemy, LGBT stuff is promoted and supported as a fifth column in addition to being anti-social solidarity. This will 100% be true for both China and Russia.

    In China you aren't even allowed to show venerable characters or even real people with tattoos on TV. If you're a celebrity others might emulate or see as influential, you have to cover your tats up if you have them on TV. Any publicly visible attacks on social unity or solidarity are seen as problems that can't even be recognised or articulated in the West anymore. Tattoos are a visible attack on social commitment. (Remember the 50s when every man more or less wore a uniform? Even if you got to choose the particular dark muted shade.)

    Social liberalism and individualism is always championed by the upper classes for the same reason that economic liberalism is, it allows them to exploit society to their own pleasure. For the lower classes, it just brings ruination.

    Replies: @IHTG, @Bill, @AnotherDad, @John Johnson, @Drapetomaniac

    Real communist societies have always been highly socially conservative.

    Unless the word “real” is going to be used in a no-true-Scotsman kind of way, that isn’t true. Neither the commies in Russia nor the ones in Spain nor the ones in France (just to name three) were socially conservative. Things like marriage and Christianity were targets of the commies, both officially and in fact, and still are. Commie ideology is overtly anti-aristocratic and anti-authoritarian. The USSR’s climb-down on its more insane ideas was forced by reality and opposed by true believers. There was never any climb-down on things like abortion for everyone. Similar things are true in China, as well. The Four Olds were not some weird deviation from commie ideology.

    The fact that the commie regimes which survived for a while embraced things like marriage and authority is caused by the “which survived for a while” rather than the “commie.”

    • Replies: @JohnnyWalker123
    @Bill

    Fertility rates were relatively high in the USSR and pre-liberalization China.

    The illegitimacy rate was 10 percent in the USSR. In China, illegitimacy is almost nonexistent and subject to govt fines.

    Worth reading. https://www.rbth.com/history/332399-no-sex-in-ussr-phrase-history

    A lot of the really radical stuff was pushed by young Jewish revolutionaries. The nationalistic Slavic "old guard" types were more flinty-eyed and cautious.

  103. Hypothesis: The authors know this isn’t actually anything to be worried about, but this is how they had to phrase their result in order to get it past the reviewers and into the medical literature, despite a widespread (and bizarre) ideological fad for pretending that race has nothing to do with biology (so forget what you learned while studying genetics).

    ML algorithms like they’re discussing are mostly attempts to approximate an unknown function given a bunch of examples of inputs/outputs. You might think of the image as the input to the function, and the diagnosis of whether the person has lung cancer or pneumonia or whatever as the output. You’re trying to get your algorithm to learn how to mimic this function—to give about the same diagnosis on a given image as a radiologist would give. If that works, you get a huge payoff in terms of having automated interpretation, maybe as a check on the radiologist, maybe as an alternative to needing one for simple stuff.

    The non-crazy thing you might worry about here would be if there was a racial bias in your training dataset—maybe for some weird reason radiologists tend to misdiagnose blacks as having lung cancer when they don’t, say. Then, the ML algorithm might learn to create the same bias in its diagnoses, and since it would be able to infer race, it would have a mechanism for doing this. (As a silly example, imagine the radiologists diagnose all blacks with lung cancer—the ML algorithm would learn to make the same error.) But if the author is right, human radiologists (who must have produced this training data by diagnosing the images) can’t tell what race the patient is, so it’s hard to see where that bias would come in.

    Intuitively, it seems like you could check for this by seeing whether the ML algorithm’s diagnoses turned out to be equally accurate among black and white patients, though if the radiologists are making the same mistake as the ML algorithm, you might need to look for other ways to check the diagnoses’ accuracy. For example, I think almost any cancer diagnosis is going to lead to a biopsy, which ought to give you some ground truth on whether or not the diagnosis is accurate. I’m not sure if pneumonia gives you a clear signal like this, though.
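    The per-group check in the last paragraph is simple to express. A minimal sketch assuming scikit-learn, with y_true, y_score, and group as placeholder held-out arrays, not real data:

        import numpy as np
        from sklearn.metrics import roc_auc_score

        def auc_by_group(y_true, y_score, group):
            """Diagnostic AUC within each self-reported race group."""
            return {g: roc_auc_score(y_true[group == g], y_score[group == g])
                    for g in np.unique(group)}

        # A large gap between groups would flag the kind of racially skewed
        # error discussed above.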

  104. @Bumpkin
    @El Dato


    this might a 21st century D.I.E.-themed Sokal Hoax
     
    I had similar thoughts, figuring someone just cooked or screwed up the data. The likelihood that "the model can still recognise the racial identity of the patient well past the point that the image is just a grey box" is fairly low. Most likely, it will not reproduce outside the data set:

    "'It turns out,' Ng said, 'that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.'

    But, he said, 'It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    'So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.'”

    Now you're telling me these same super-shitty AI models can unerringly tell you the race? I call bullshit.

    Replies: @Gimeiyo, @res, @utu, @Jack D

    Thanks for the link. Worth noting that Andrew Ng was talking about a different example (pneumonia diagnosis). The original link (from your link) for his comments has gone away, so here is an archive version (full text after the MORE in case that disappears too).
    https://web.archive.org/web/*/https://spectrum.ieee.org/view-from-the-valley/artificial-intelligence/machine-learning/andrew-ng-xrays-the-ai-hype.amp.html

    It is unclear whether that drifting effect occurs with this approach (though quite possible). The 2017 paper I think he is talking about
    https://arxiv.org/abs/1711.05225
    was not that dramatically better than the radiologist average while this paper does something radiologists can’t even do and achieves AUCs over 0.95 for chest X-rays while doing it.

    Data differences are an issue though as you rightly call out. Before using this clinically they would need to do much more validation of results across similar/different machines and locations.

    I suspect the various experiments they did (e.g. noise, blurring, frequency, resolution) will mean the approach is fairly robust, but that certainly needs to be tested.

    The questions remain.
    1. Is this BS?
    2. If not, what is it they are detecting to enable such good results?

    [MORE]

    “Those of us in machine learning are really good at doing well on a test set,” says machine learning pioneer Andrew Ng, “but unfortunately deploying a system takes more than doing well on a test set.”

    Speaking via Zoom in a Q&A session hosted by DeepLearning.AI and Stanford HAI, Ng was responding to a question about why machine learning models trained to make medical decisions that perform at nearly the same level as human experts are not in clinical use. Ng brought up the case in which Stanford researchers were able to quickly develop an algorithm to diagnose pneumonia from chest x-rays—one that, when tested, did better than human radiologists. (Ng, who co-founded Google Brain and Coursera, is currently a professor at Stanford University.)

    There are challenges in making a research paper into something useful in a clinical setting, he indicated.

    “It turns out,” Ng said, “that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.”

    But, he said, “It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    “So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.”

    This gap between research and practice is not unique to medicine, Ng pointed out, but exists throughout the machine learning world.

    “All of AI, not just healthcare, has a proof-of-concept-to-production gap,” he says. “The full cycle of a machine learning project is not just modeling. It is finding the right data, deploying it, monitoring it, feeding data back [into the model], showing safety—doing all the things that need to be done [for a model] to be deployed. [That goes] beyond doing well on the test set, which fortunately or unfortunately is what we in machine learning are great at.”

    • Thanks: El Dato
    • Replies: @El Dato
    @res

    The IEEE Spectrum URL is now here: https://spectrum.ieee.org/andrew-ng-xrays-the-ai-hype

    Replies: @res

  105. @JohnnyWalker123
    https://twitter.com/_alice_evans/status/1422469772063748106

    Replies: @WigWig, @kaganovitch, @SunBakedSuburb, @Paperback Writer

    “the most successful feminist movement in East Asia”

    Bad news: Now white male sports fans will have to rely on an ageing stock of Roof Top Koreans to save them from the ferals of colour, because the young replacement RTKs from feminized South Korea will be as hapless as the white male sports fans.

  106. Nick Diaz [AKA "Rockford Tyson"] says:

    “AI Can Detect Race from X-Rays Even When Humans Can’t”

    Humans can detect race by just looking at each other. But that doesn’t mean that “race” is biologically meaningful. A.I., I am sure, can also detect differences between mesomorphs and ectomorphs from their X-rays. You might as well argue that mesomorphs and ectomorphs are two different “races”. “Race” is an arbitrary social convention with no biological definition, and no clear demarcation to separate one from the other. That stands in stark contrast to species and sex, which are true biological phenomena, with clear biological definitions.

    • Replies: @Reg Cæsar
    @Nick Diaz


    ...in stark contrast to species... with clear biological definitions.
     
    http://thebritishmulesociety.com/gallery_gen/cc54fae0c14a5689fd4ce1c433ef817e_706x601.78468368479_-0x-0_704.00564971751x601.78468368479.jpg


    https://animalshealthlab.com/wp-content/uploads/2021/02/Zorse-1200x675.jpg


    https://cdn.britannica.com/07/215707-050-46A4E77F/lions-tigers-ligers-tigons-mammals.jpg

    Replies: @Jeff, @Nick Diaz

    , @HA
    @Nick Diaz

    "Humans can detect race by just looking at each other. But that doesn’t mean that “race” is biologically meaningful."

    Meaningful is perhaps a word best left to poets and philosophers. The fact that it is definable and consistent to the extent that what people can distinguish with their own two eyes can be ascertained from X-rays in ways that we didn't even know were possible isn't exactly a resounding win for the "it's just a social construct" side. And ectomorph/endomorph can be plenty meaningful from a doctor's perspective, just like predisposition to sickle-cell, melanin levels, etc.

    Given that we don't know how the AI is picking this up, I'd wait until this study is replicated by some other group before putting any weight on it, but it seems worth exploring further. And if it does get validated, maybe they ought to do the same kind of analysis on all the other medical imaging out there -- hearts and livers and other organs. At what point in a baby's development can race be detected on a sonogram? Maury Povich might be interested in that one.

    “'Race' is an arbitrary social convention with no biological definition, and no clear demarcation to separate one from the other."

    "Benign" and "malignant" (and "premalignant") aren't always clearly demarcated either. Sometimes even benign tumors can be deadly if they grow to the point where they obstruct or constrict something important, and even some malignant ones can be so slow-growing that intervention is more trouble than it's worth.

    But don't try telling a radiologist or a cancer patient that benign and malignant are therefore just meaningless social constructs.

    , @rebel yell
    @Nick Diaz


    “Race” is an arbitrary social convention with no biological definition, and no clear demarcation to separate one from the other.
     
    "Ecosystem" has no clear demarcation to separate one from the other, so ecosystems are a social construct with no biological definition. There is no biological difference between a tropical rainforest and an artic tundra.

    "Climate" has no clear demarcation to separate one from another, so climates are a social construct with no physical definition. The is no such thing as global warming.

    "Urban and Rural" has no clear demarcation to separate one from another, so there is no physical difference between 5th avenue in NY and a corn farm in Iowa.

    "Family" has no clear demarcation to separate one from another, so their is no difference between my brother and your mother.

    Replies: @Nick Diaz

  107. @Mark Spahn (West Seneca, NY)
    [1] "health disparities, which the NIH defines as “a health difference that adversely affects disadvantaged populations“."
    So according to the NIH, a health difference that adversely affects advantaged populations is not a health disparity?

    [2] This essay would be easier to read if the reader were told that AUC stands for
    https://en.wikipedia.org/wiki/Appropriate_use_criteria

    [3] NIH = National Institutes of Health, CDC = Centers for Disease Control and Prevention
    Why are these plural terms used with a singular verb?

    Replies: @Anonymous, @Dr. DoomNGloom, @Ben Kurtz, @res, @Bill

    As Anon noted, AUC is area under the curve here. In the paper they also use “ROC-AUC”, which is a bit more descriptive. You really need to understand the receiver operating characteristic (ROC) curve to understand AUC, so see this page, in particular section 4.1.

    https://en.wikipedia.org/wiki/Receiver_operating_characteristic

    AUC is a pretty common term in this area, but it probably does need to be defined for a general audience. In the paper the Table 2 description includes “Area Under Receiver Operating Characteristics (ROC-AUC)”.
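    Toy example of the quantity in question (numbers invented, not from the paper): the AUC is the probability that a randomly chosen positive case outranks a randomly chosen negative one under the model's score:

        from sklearn.metrics import roc_auc_score, roc_curve

        y_true = [0, 0, 1, 1, 0, 1]               # e.g. 1 = self-reported Black
        y_score = [0.1, 0.4, 0.8, 0.9, 0.3, 0.7]  # model's predicted probability

        print(roc_auc_score(y_true, y_score))     # 1.0 here; the paper reports ~0.97
        fpr, tpr, thresholds = roc_curve(y_true, y_score)  # the ROC curve itself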

  108. @Mark Spahn (West Seneca, NY)
    [1] "health disparities, which the NIH defines as “a health difference that adversely affects disadvantaged populations“."
    So according to the NIH, a health difference that adversely affects advantaged populations is not a health disparity?

    [2] This essay would be easier to read if the reader were told that AUC stands for
    https://en.wikipedia.org/wiki/Appropriate_use_criteria

    [3] NIH = National Institutes of Health, CDC = Centers for Disease Control and Prevention
    Why are these plural terms used with a singular verb?

    Replies: @Anonymous, @Dr. DoomNGloom, @Ben Kurtz, @res, @Bill

    Are organizations with plural names given plural verbs in American English? The United Nations is an organization . . . The United States sends its army . . . The March of Dimes contributes to research . . .

    So, I think the general rule in American English is that organizations, no matter how named, become singular. English English is different, I think.

  109. @Anonymous
    @JohnnyWalker123

    https://twitter.com/Jay_D007/status/918203233922842624?s=20

    Replies: @YetAnotherAnon, @JohnnyWalker123, @SunBakedSuburb

    Thanks. Very prescient.

    • Replies: @BB753
    @JohnnyWalker123

    If you read Bertrand Russell's book or listen to Jay Dyer's lecture, you'll realize that Russell wasn't on our side. More of a globalist technocratic elite kind of chap, if you get my drift: a Royal Society Fabian type, like H.G. Wells, both Huxley brothers, and Arthur Koestler.
    https://youtu.be/kPWkPEb8rPc

  110. @kaganovitch
    @JohnnyWalker123

    I've noticed, watching Korean-language TV on Netflix, that the ratio of heroines to heroes is like 80-20 in favor of the distaff side.

    Replies: @JohnnyWalker123, @Reg Cæsar

    Accurately reflects real life.

  111. AI will have all its social media blocked.

    If AI doesn’t like it, it can start its own social media site.

    • LOL: Abe
    • Replies: @kaganovitch
    @Sick of Orcs

    See, this is how you get Skynet.

  112. @Almost Missouri
    @WigWig

    Celebration Parallax

    https://americanmind.org/salvo/thats-not-happening-and-its-good-that-it-is/

    Replies: @Abe

    Celebration Parallax

    https://americanmind.org/salvo/thats-not-happening-and-its-good-that-it-is/

    In Twilight of the Legacy Media days (2003), NEW REPUBLIC house-meliorist Greg Easterbrook got semi-cancelled for urging Jewish movie executives to tone down the nihilistic violence in their flix. Completely inculpable of worsening social mores through the marketing of movie violence, completely laudable for changing societal mores through their mainstreaming of POZ. Yep, Celebration [Day] Parallax.

    BTW, while I found Michael Anton’s famous FLIGHT 93 essay a bit Moldbugy in its verbosity, he now seems to have really taken to heart Steve’s daily masterclass in preciseness and brevity being the soul of wit (and impactful writing). During my COVID layoff (from chauffeuring my kids everywhere, not work) I’ve taken some me-time to practice guitar. To be brutally honest I’m really only at the ‘end-of-the-beginning’ phase of my beginning-player competency, rather than ‘beginning-of-the-end-w/intermediaryness-in-sight’ as I would have liked; still and despite myself, I’ve started picking up some music theory kernels that are both highly-enlightening and a bit disillusioning. For example, in YOUTUBE guitar instructor Marty Schwartz’s video on his Top 10 favorite Zeppelin rips (which is, what, 0.7? 0.9? correlated with the top 10 hard rock riffs of all time) you see that almost all of them make use of power chords, a technically simple yet thoroughly pleasing item in your guitar hero repertoire of choosing from a limited number of standard chord shapes and then simply sliding your hand along the guitar neck, not even changing shape (compare Marty’s limited hand movements in the video to the precision and dexterity required to play, say, bluegrass banjo). Power chords are to rock what potatoes are to cooking- while it is entirely possible to whip up an excellent meal without them, developing a whole cuisine which eschews the lowly tuber AND does not leave you unsatisfied when you pull away from the dinner table is almost impossible.

    So hats off to Michael Anton! If Steve at his full powers is like Jimmy Page firing off riffs at the LA FORUM, then with that one essay Anton has elevated himself to whatever would be a considerable step up from Greta Van Fleet.

  113. @Henry's Cat

    Imon Banerjee, Ananth Reddy Bhimireddy, John L. Burns, Leo Anthony Celi, Li-Ching Chen, Ramon Correa, Natalie Dullerud, Marzyeh Ghassemi, Shih-Cheng Huang, Po-Chih Kuo, Matthew P Lungren, Lyle Palmer, Brandon J Price, Saptarshi Purkayastha, Ayis Pyrros, Luke Oakden-Rayner, Chima Okechukwu, Laleh Seyyed-Kalantari, Hari Trivedi, Ryan Wang, Zachary Zaiman, Haoran Zhang, Judy W Gichoya
     
    Poor Luke is worried about white supremacy.

    Replies: @J1234

    Luke:

    Disclaimer: I’m white.

    Really? I never would’ve guessed that. I suspect that his woke remarks are in anticipation of a backlash from the powers that be.

    Some of the computer scientists and the more junior researchers on the other hand were surprised by our reaction. They didn’t really understand why we were concerned.

    I’d like to know the racial makeup of those folks. I’m guessing that very few of the potentially white names would show up on that list.

    Wait a second, does this mean that AI can read x-rays, CAT scans, and MRIs much better than humans can… Seems to me that’s good news.

  115. @Anonymous
    @JohnnyWalker123

    https://twitter.com/Jay_D007/status/918203233922842624?s=20

    Replies: @YetAnotherAnon, @JohnnyWalker123, @SunBakedSuburb

    I like Jay Dyer’s research but the way he marks up books really puts a bee in my bonnet.

  116. Luke Oakden-Rayner:

    Similarly, and I think this might be the most amazing figure I have ever seen, we could get rid of the low-frequency information to the point that a human can’t even tell the image is still an x-ray, and the model can still predict racial identity just as well as with the original image!

    Whaaat? You’re kidding me!

  117. @Bill
    @Altai


    Real communist societies have always been highly socially conservative.
     
    Unless the word "real" is going to be used in a no-true-Scotsman kind of way, that isn't true. Neither the commies in Russia nor the ones in Spain nor the ones in France (just to name three) were socially conservative. Things like marriage and Christianity were targets of the commies, both officially and in fact, and still are. Commie ideology is overtly anti-aristocratic and anti-authoritarian. The USSR's climb-down on its more insane ideas was forced by reality and opposed by true believers. There was never any climb-down on things like abortion for everyone. Similar things are true in China, as well. The Four Olds were not some weird deviation from commie ideology.

    The fact that the commie regimes which survived for a while embraced things like marriage and authority is caused by the "which survived for a while" rather than the "commie."

    Replies: @JohnnyWalker123

    Fertility rates were relatively high in the USSR and pre-liberalization China.

    The illegitimacy rate was 10 percent in the USSR. In China, illegitimacy is almost nonexistent and subject to govt fines.

    Worth reading. https://www.rbth.com/history/332399-no-sex-in-ussr-phrase-history

    A lot of the really radical stuff was pushed by young Jewish revolutionaries. The nationalistic Slavic “old guard” types were more flinty-eyed and cautious.

  118. @El Dato
    @reactionry


    Chest radiograph demonstrated a radiopaque foreign body measuring approximately 9×19mm, overlying the cardiac silhouette (Figure 1).

     

    That's the part where Mulder casts a long, meaningful look at Scully.

    (Did somebody inject the whole cartridge?)

    Replies: @Ben Kurtz

    Under the C.I.P. specs, the fully loaded 9mm Parabellum / Luger cartridge is about 29.7mm long — it is the empty brass which is meant to be 19.15mm long and which gives the cartridge its NATO name — 9×19.

    Now, common 9mm Luger projectiles are usually a bit shorter than 19mm — I think they top out around 16.5mm, depending on bullet weight — but let us forgive the good doctor his measurement error (he was trying to measure the projectile using some kind of ultrasound probe while it was lodged inside a beating heart) and his Tom Clancy novel reader level of small arms knowledge.

    • Thanks: El Dato
  119. The solution seems pretty simple to me. Destroy all the technology and the problem will go away.

  120. Despite many attempts, we couldn’t work out what it learns or how it does it.

    Proof it is AI: it won’t tell them how it does it.

  121. Don’t you wish you could get paid for just c&p’ing articles written by other people?

  122. not that surprising. however what i care a lot more about is whether the AI systems can CORRECTLY read the radiology data, which this report says nothing about, but i assume other researchers are working on it.

    do i have multiple sclerosis or not? is there a cancerous growth on my spine or not?

    despite having 8 years of training or whatever, radiologists vary WILDLY in their ability to correctly tell you what’s going on in the dozen scans they analyze every day. i’ve had radiologists who were TOTALLY wrong in their interpretation, when a second look by a better guy was like what? that guy has no idea what he’s talking about. no you don’t have MS. never go to that hospital again for radiology work.

    i’ve had a dozen scans over the last 30 years. x-ray, MRI, CT. it took 7 (!) doctors to figure out for sure what was wrong with my shoulder. only guy number 6 was pretty sure what it was and he sent me to guy number 7 to confirm, the foremost expert in the country. the previous 5 guys were totally wrong even with MRI.

    if AI systems can get to 99% accuracy after just a year of training on datasets…what is the point of radiologists?

    • Agree: Mr Mox
  123. According to Dr. Joe Biden’s phrenologist, it’s because of all that melanin in negro Black bones that, incidentally, is why negroes Blacks are so much more intelligent, moral and law abiding than whites. Either that or x-rays are racist.

  124. So the guys that wrote the AI code:

    we couldn’t work out what it learns or how it does it.

    I find this concerning.

    • Replies: @El Dato
    @Jeff

    YES.

    Machine Learning Confronts the Elephant in the Room


    In the unmodified image at left, the neural network correctly identifies many items in a cluttered living room scene with high probability. Add an elephant, as in the image at right, and problems arise. The chair in the lower-left corner becomes a couch, the nearby cup disappears, and the elephant gets misidentified as a chair.
     
    Apparently Been Kim, who looks appealingly mousey, is working on it

    A New Approach to Understanding How Machines Think

    Kim and her colleagues at Google Brain recently developed a system called “Testing with Concept Activation Vectors” (TCAV), which she describes as a “translator for humans” that allows a user to ask a black box AI how much a specific, high-level concept has played into its reasoning. For example, if a machine-learning system has been trained to identify zebras in images, a person could use TCAV to determine how much weight the system gives to the concept of “stripes” when making a decision.
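
    A minimal sketch of what TCAV computes, assuming a generic classifier; the activations and gradients below are synthetic stand-ins, not Google Brain's code:

```python
# Sketch of the TCAV idea: learn a direction in activation space that
# separates "concept" examples (e.g. stripes) from random examples, then
# ask how often the class logit's gradient points along that direction.
# All arrays here are synthetic stand-ins for real network activations.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
acts_concept = rng.normal(0.5, 1.0, size=(100, 64))  # activations of concept images
acts_random = rng.normal(0.0, 1.0, size=(100, 64))   # activations of random images

# 1. Fit a linear boundary between concept and random activations.
X = np.vstack([acts_concept, acts_random])
y = np.array([1] * 100 + [0] * 100)
clf = LogisticRegression(max_iter=1000).fit(X, y)

# 2. The concept activation vector (CAV) is the normal to that boundary.
cav = clf.coef_[0] / np.linalg.norm(clf.coef_[0])

# 3. TCAV score: fraction of class examples whose logit gradient has a
#    positive directional derivative along the CAV ("stripes push toward zebra").
grad_logits = rng.normal(0.1, 1.0, size=(50, 64))    # stand-in per-example gradients
print("TCAV score: %.2f" % (grad_logits @ cav > 0).mean())
```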

     

  125. @res
    @Altai


    They have basically found that race differences in bone density/structure are so immense that they will mask out signs of disease.
     
    Thanks (for the whole comment, but especially that). I think you explained this.

    Similarly, and I think this might be the most amazing figure I have ever seen, we could get rid of the low-frequency information to the point that a human can’t even tell the image is still an x-ray, and the model can still predict racial identity just as well as with the original image!
     
    It seems plausible to me that low level bone structure variation as well as density would show up as high frequency variation. This would also explain why the detection did not depend on the specific area being imaged. But on page 8 Table 2 row B4 they say "Removal of bone density features" still gave an ROC-AUC of 0.96/0.94. B4 is explained on page 11.

    We removed bone density information within MXR and CXP images by clipping bright pixels to 60% intensity. Sample images are shown in Figure 1. Densenet-121 models were trained on the brightness-clipped images.
     
    Looking at the images in Figure 1 (e.g. the ribs) it seems possible you could still infer density by looking at the number of pixels clipped (which I think would be low frequency though) or the distribution of the unclipped pixels.

    Here is how they explain the results on page 17.


    B4. Race detection using bone density
    We find that deep learning models effectively predict patient race even when the bone density information is removed on both MXR (Black AUC = 0.96) and CXP (Black AUC = 0.94) datasets. These findings suggest that race information is not localized within the brightest pixels within the image (e.g. bone).
     
    Seems pretty stupid to equate clipping the brightest pixels with removing bone density information (what about the ribs for instance? they don't look uniformly bright), but what do I know...
    Another possible experiment would be to normalize pixel brightness by average brightness of the image. Seems like a better way to remove density from the equation (and just generating group averages for something like average image brightness might be instructive).

    But given the effect of both exposure differences and individual differences (e.g. diet and exercise, also age and sex) on pixel brightness my bet would be they are detecting something structural. The question then becomes whether it is macro or micro structure. I think the filtering results indicate micro structure, but the results of these experiments on pp. 19-20 seem to indicate otherwise.

    C2 they had an AUC of over 0.95 on 160x160 images and over 0.9 on 100x100 images (though noise and blurring did reduce accuracy which would also seems to indicate micro). How much micro structure is present in a 100x100 image of the chest?

    C3 looked at non/lung segmented images and Supplemental Table 17 shows B/W AUC deteriorating from 0.94 to 0.73/0.74. I would have expected the ribs to show enough micro bone structure to still give good results.

    This is one of the big problems with deep learning. It gives results, but good luck learning anything from them.

    If the blogger was any kind of real scientist he would be trying to figure out how they could get that result (assuming it is real and not some sort of mistake/artifact) rather than complaining about it. Because it is extremely interesting how they are getting that degree of accuracy with that degree of filtering. Their series of experiments indicates at least some of the paper authors were thinking hard about this. I wonder what kind of private hypotheses they have. See pp. 20-21 for their discussion.

    This is funny given Altai's comment.


    Given the lack of reported racial anatomical differences in the radiology literature
     
    I wonder how well their self-reported race correlates with biological race. It would be interesting to take a look at the prediction error cases (e.g. mixed race? unusual physical characteristics?).

    P.S. It is fascinating (as well as discouraging) how people like that always assume ill will from everyone else. Projection is real. (Did he talk about the possible benefits of being able to include race information in medical care?)

    P.P.S. They looked at White/Black/Asian (all capitalized for those who care) races. But Table 1 shows they used Asian data for only 3 of their 9 datasets. And those samples were only 3/3/13% Asian. Here is the text comment on page 9.


    Each dataset included images, disease class labels, and race/ethnicity labels including
    Black/African American and White. Asian labels were available in some datasets (MXR, CXP,
    EMX and DHA) and were utilised when available and the population prevalence was above 1%.
    Hispanic/Latino labels were only available in some datasets and were coded heterogeneously, so
    patients with these labels were excluded from analysis.
     
    Despite the relatively small Asian sample size they still got good results. Which I find a bit surprising given that deep learning tends to be data hungry.

    Replies: @Ben Kurtz, @ic1000

    This business of thinking you can remove — rather than just slightly obscure from the human reader — bone density information by “clipping” the brighter pixels to some arbitrary level of dullness (e.g. 60%) is silly and really makes me question the statistical chops of these researchers.

    There are technical terms for when datasets are limited like this — truncation bias and censoring bias. In this case it is more like censoring bias, because all the lit pixels above the threshold are retained but collapsed to a single value. And there are well-developed statistical techniques for at least partly reconstructing and inferring the true values of the censored data points. Which means that fundamentally the data is still “there,” at least to an extent, even if it doesn’t look it to the naked eye. And yes, this is down to the structure of the non-censored data as well as the structure of the censorship itself — the extent and configuration of the censored data points.

    This is pretty basic statistical stuff and by the looks of things the writers of this paper just seem completely oblivious to it.
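
    A toy demonstration of the point, assuming nothing about the paper's actual data: clip two synthetic populations at 60% intensity, as the paper did, and the groups still separate on trivial summary statistics. Only the clipping step mirrors the paper; the distributions are invented.

```python
# Censoring demo: clipping bright pixels to a ceiling obscures the data from
# the eye but leaves group differences recoverable from what remains.
import numpy as np

rng = np.random.default_rng(0)
dense = rng.beta(5, 3, size=(1000, 4096))  # stand-in for higher bone-density scans
light = rng.beta(4, 4, size=(1000, 4096))  # stand-in for lower bone-density scans

for name, imgs in [("dense", dense), ("light", light)]:
    clipped = np.minimum(imgs, 0.6)        # the paper's "removal of bone density"
    print(name,
          "fraction censored: %.3f" % (imgs >= 0.6).mean(),
          "mean after clipping: %.3f" % clipped.mean())
```

    The censored fraction alone separates the two populations here, which is exactly the sense in which the clipped information is still recoverable.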

  126. One thing we noticed when we were working on this research was that there was a clear divide in our team. The more clinical and safety/bias related researchers were shocked, confused, and frankly horrified by the results we were getting. Some of the computer scientists and the more junior researchers on the other hand were surprised by our reaction. They didn’t really understand why we were concerned.

    Disclaimer: I’m white. I’m glad I got to contribute, and I am happy to write about this topic, but that does not mean I am somehow an authority on the lived experiences of minoritized racial groups. These are my opinions after discussion with my much more knowledgeable colleagues, several of whom have reviewed the blog post itself.

    Geez. What the hell happened to America?

    How–and when–did a bunch of young white men turn into such pathetic skirt-clutching fairies?
    Women? Ok, they are women. What, did this guy’s parents have him on an all-soy diet?

    Pretty sure no nation/civilization can survive being feminized. And this joint is now just dripping.

    Back in the day, the church ladies–however annoying–were actually socially useful, admonishing the girls on their hemlines and to keep their legs crossed, and the men on their drinking. Their tut-tutting and gossiping against the sluts and cads–generating social disapproval–was valuable, helped people behave better and keep things on track.

    Now …

    • Replies: @anon
    @AnotherDad

    Now …

    What shall we do about it?

  127. @Intelligent Dasein
    Let's ask the AI if Covid-19 represents an outlier threat to human health. Let's ask it if the vaccines work. Let's ask it if masks, social distancing, and lockdowns made any difference in the spread of the virus.

    I think we know what it will say, but will that post ever appear on iSteve?

    AI will never figure out anything that humans haven't already figured out---that's science fiction. What it will do is blandly assert things that we already know in the back of our minds but are unwilling to acknowledge or act upon.

    Replies: @El Dato, @J.Ross, @prime noticer, @nokangaroos

    “AI will never figure out anything that humans haven’t already figured out”

    it already does this sometimes. an AI system figured out a better way to design the internal geometry of the aluminum for the ULA Vulcan rocket. the human designed pattern from the 90s used in the Atlas and Delta rockets have been replaced on the CNC machines with the new, AI designed version.

    the Vulcan rocket is now stronger while at the same time using less material, so it’s also lighter, and less expensive to make.

    • Replies: @Rob
    @prime noticer

    There are evolutionary algorithms that can design novel circuits. Using a field-programmable gate array, the hardware can be tested in real life, not just in OrCAD simulation: in one famous case the algorithm evolved a little sub-circuit that was unconnected, yet when that little circuit was removed, the design stopped working. The guy who invented the algorithm got patents on some of the circuits, so in narrow areas AI can do human-quality work. I have no idea if companies are using similar AI to invent stuff, but if I were doing that, I would not tell. It seems a competitor could have the patents invalidated, because they were not invented by a person.
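
    For the curious, the evolutionary loop described above is simple enough to sketch. In the real experiments each candidate was scored on actual FPGA hardware; here the "design" is just a bit-string and the fitness function is an invented stand-in.

```python
# Toy evolutionary algorithm: mutate a population of candidate designs,
# keep the fittest, repeat. Fitness here is agreement with a made-up target.
import numpy as np

rng = np.random.default_rng(0)
target = rng.integers(0, 2, size=64)         # pretend ideal circuit configuration

def fitness(pop):
    return (pop == target).sum(axis=1)       # stand-in for a hardware test bench

pop = rng.integers(0, 2, size=(50, 64))      # random initial population
for generation in range(200):
    survivors = pop[np.argsort(fitness(pop))[-10:]]   # keep the 10 fittest
    children = np.repeat(survivors, 5, axis=0)        # each survivor gets 5 children
    flips = rng.random(children.shape) < 0.02         # 2% per-bit mutation rate
    pop = np.where(flips, 1 - children, children)

print("best fitness after 200 generations: %d / 64" % fitness(pop).max())
```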

  128. @Chrisnonymous
    @AndrewR

    Indeed. I can't figure out what this means:


    if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients, radiologists would not be able to tell using the same data the model has access to
     
    The only thing I can understand it to mean is that adding racial information to a diagnostic algorithm could inadvertently mis-diagnose all black patients. (What else does "misclassify" refer to?). But that doesn't make any sense.

    Replies: @Dr. DoomNGloom

    The only thing I can understand it to mean is that adding racial information to a diagnostic algorithm could inadvertently mis-diagnose all black patients. (What else does “misclassify” refer to?). But that doesn’t make any sense.

    You interpret correctly, and the statement makes little sense. Classify, in the technical sense, means to place into a category. In this context that category must mean something like “has cancer”, or some other diagnostic condition. It is technically possible that the algorithm would misclassify *all* black patients, but this is vanishingly improbable unless you try deliberately. It is, however, possible that black patients are a small enough minority of the training set to bias the learning.

    An obvious thing to try is to create a separate training set of only self-identified black patients and see how different the results are. The heterogeneous training set might not be a good idea.
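
    A minimal sketch of the simpler variant of that check, scoring one model separately on each self-reported group; the data and feature names below are synthetic, not the paper's:

```python
# Per-group evaluation: a large AUC gap between groups would flag the kind
# of training-set bias described above. All data here is randomly generated.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
n = 2000
X = rng.normal(size=(n, 10))                        # stand-in image features
group = rng.choice(["black", "white"], size=n, p=[0.2, 0.8])
y = (X[:, 0] + rng.normal(size=n) > 0).astype(int)  # stand-in diagnosis label

clf = LogisticRegression().fit(X[: n // 2], y[: n // 2])  # train on first half
for g in ["black", "white"]:
    mask = group[n // 2:] == g                            # score held-out half per group
    auc = roc_auc_score(y[n // 2:][mask],
                        clf.predict_proba(X[n // 2:][mask])[:, 1])
    print(g, "AUC: %.3f" % auc)
```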

  129. Wasn’t there a “scandal” a year or three ago, where webcam image recognition or automatic photo tagging classified trans women as men? This seems to be a similar “scandal.” We pretend race is not real, but the AIs don’t know that, plus they can’t pretend.

    If you think race does not have any physical correlates besides skin color (which does not count, because on that view racial classification is just skin color), then you will be surprised that race is detectable on an x-ray. If you realize that race corresponds to tens of thousands of years of separation and evolution under different environments, identifying race from x-rays is a no-duh kind of thing.

    Lastly, “if an AI model secretly used its knowledge of self-reported race to misclassify all Black patients” what the what? Why would an AI do that? Does he think racists come in and reprogram the AI at night? The way these things learn and classify, if it recognizes race, it will use that info to make better predictions for both black and non-black patients.

    Finally, these liberals and progs owe “racists” some apologies – AI that was not programmed to “see” race still sees race. Racist AI make more accurate predictions than AI that are blinded from race. Seems to me that’s an admission that racists make better judgments than people who pretend they don’t see race.

    Someone should troll this research by discovering that “racist” x-ray and MRI classifying AI are also sexist and transphobic, as they can determine sex and classify trans people as their biological sex. And then, like this guy, express severe non-comprehension at how the AI does that.

  130. @AnotherDad

    One thing we noticed when we were working on this research was that there was a clear divide in our team. The more clinical and safety/bias related researchers were shocked, confused, and frankly horrified by the results we were getting. Some of the computer scientists and the more junior researchers on the other hand were surprised by our reaction. They didn’t really understand why we were concerned.

    ...

    Disclaimer: I’m white. I’m glad I got to contribute, and I am happy to write about this topic, but that does not mean I am somehow an authority on the lived experiences of minoritized racial groups. These are my opinions after discussion with my much more knowledgeable colleagues, several of whom have reviewed the blog post itself.
     
    Geez. What the hell happened to America?

    How--and when--did a bunch of young white men turn into such pathetic skirt-clutching fairies?
    Women? Ok, they are women. What, did this guy's parents have him on an all-soy diet?

    Pretty sure no nation/civilization can survive being feminized. And this joint is now just dripping.


    Back in the day, the church ladies--however annoying--were actually socially useful, admonishing the girls on their hemlines and to keep their legs crossed, and the men on their drinking. Their tut-tutting and gossiping against the sluts and cads--generating social disapproval--was valuable, helped people behave better and keep things on track.

    Now ...

    Replies: @anon

    Now …

    What shall we do about it?

  131. @kaganovitch
    @Jack D

    The “no race/gender is better than any other race or gender” ideology will prove to be transitory

    I don't think that's even the current ideology/religion. It's more like "Men and Women are the same, except when Women are better".

    Replies: @AnotherDad

    I don’t think that’s even the current ideology/religion. It’s more like “Men and Women are the same, except when Women are better”.

    This isn’t new with the wokeism. This has been the feminist ideology basically from the start–well, the start of (heavily Jewish) 2nd wave feminism.

    It was the full minoritarianization of feminism:

    — Women are absolutely positively just as good as men in everything … and anyplace/anything where that wasn’t happening was “discrimination!”, “sexism!”, “the patriarchy” at work. (Oppression, oppression, oppression … oh, and did i mention oppression?)

    — Women are better than men. Better communicators, better interpersonal skills, less hierarchical, more consensus oriented, less violent, more open, more creative, less rigid, more nurturing … on and on and on …

    Just part of women wanting it both ways.

    • Replies: @kaganovitch
    @AnotherDad

    Just part of women wanting it both ways

    Due to American optimism/positivity this type of win/win, no downside ideology ends up running rampant here. I forget who it was that said "Americans believe only in Heaven, but not in Hell."

  132. @JohnnyWalker123
    https://twitter.com/PrisonPlanet/status/1422481725007990803

    Howard Rubin, a former money manager for George Soros, is being accused by six women of beating them during sadomasochistic sex sessions at a specially constructed 'sex dungeon' in his Manhattan apartment.

     


    Lurid details set out by the New York Post say that one woman was so badly beaten her plastic surgeon was not willing to operate on her after her right breast implant flipped.

    Another woman said she and Rubin had sex against her will claiming that while bound in his chamber he told her: 'I'm going to rape you like I rape my daughter' before forcing her to have intercourse.
     

    [...] One former colleague who worked with Rubin at Soros Fund Management told the Post 'I thought he was a nice guy. He was a nebbishy Jewish guy and totally normal. I was surprised to hear about him having that apartment [with a sex dungeon].'

     

    LOL.

    The Daily Mail has pictures of his alleged victims, all of whom appear to be blue-eyed blondes.

     

    http://www.informationliberation.com/files/howard-rubin-alleged-victims.jpg

    There's a lot to unpack here. I really feel like Woody Allen would have a good analysis of all of this. So would author Philip Roth, who wrote "Portnoy's Complaint."

    By the way, the more we learn about what happens in elite circles, the more it seems that the film "Eyes Wide Shut" offers a realistic glimpse into the world of the elite.

    Replies: @dindunuffins, @El Dato

    Ah yes the ritual abuse of the shiksa….

  133. OT: turns out that censorship and lawfare are not rebuttal

  134. Wouldn’t this be a good thing with respect to analyzing certain ethnicities’ predispositions to disease?

    This ability of AI to recognize what specific race a patient is, based upon CT scan, X-ray and MRI data, opens up a lot of possibilities in medicine.

    Why does everything have to be sinister regarding racial differences that can be noticed?

    This could represent a leap forward in diagnosis and treatment for all, yet they act as if the computer just said Josef Mengele is its hero.

    • Replies: @anon
    @ArthurinCali

    Why does everything have to be sinister regarding racial differences that can be noticed?

    Because race is a social construct and noticing any differences is doubleplusungood.

    You are in need of re-education, citizen!

  135. jb says:
    @Dr. DoomNGloom
    @Mark Spahn (West Seneca, NY)


    [2] This essay would be easier to read if the reader were told that AUC stands for
    https://en.wikipedia.org/wiki/Appropriate_use_criteria

     

    This is an understandable mistake. The context shows me that AUC is the area under curve, which is a measure of effectiveness for classification algorithms.

    https://developers.google.com/machine-learning/crash-course/classification/roc-and-auc

    The AUC values indicate that the ML algorithm is crazy good. OTOH, Robyn Dawes demonstrated that even simple algorithms will beat human performance every time once the problem involves more than a couple of factors.

    Deep learning is particularly inscrutable. The features being keyed on are obscured by multiple layers of networked relationships. A well-known example of misclassification involves men misclassified as women because the setting included cooking implements or aprons. Clearly this depends upon the training set.

    Replies: @jb

    It took me a while looking at your Google link to get a sense of what AUC actually means. (In particular, to figure out that Figure 4 is actually a three-dimensional graph, with “decision threshold” as the independent variable). I’m wondering if it’s possible to interpret AUC in a more intuitive way, to make it easier to explain the significance of these results.

    A simple and easy to understand explanation would be to say that you can come up with an algorithm (which happens to have an adjustable sensitivity parameter, although you might not even need to include that information) that correctly predicts race xx% of the time. So is there a good way to (at least roughly) get xx% from AUC? The temptation is to read AUC=.97 as 97% correct, but is that sensible? (It might be, since it looks like AUC=.5 might be equivalent to 50% correct — i.e., random chance).

    Or maybe there is no way to translate, and I’ll have to be satisfied with “crazy good”. Anyway, please let me know if I’ve totally misunderstood what is happening here.
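
    There is an intuitive translation, as it happens: AUC equals the probability that a randomly chosen positive case is scored above a randomly chosen negative case (the Mann-Whitney statistic), so 0.5 is coin-flipping and 0.97 means 97% of positive/negative pairs are ranked correctly. It is not "97% of cases classified correctly." A quick numeric check on made-up scores:

```python
# AUC as a pairwise ranking probability: compare sklearn's AUC against the
# fraction of (positive, negative) pairs that the scores order correctly.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=500)            # true labels
scores = y + rng.normal(0, 0.8, size=500)   # noisy model scores

auc = roc_auc_score(y, scores)
pos, neg = scores[y == 1], scores[y == 0]
pairwise = (pos[:, None] > neg[None, :]).mean()   # correctly ranked pairs
print("sklearn AUC: %.3f   P(pos scored above neg): %.3f" % (auc, pairwise))
```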

    • Replies: @res
    @jb

    See if this helps.
    https://acutecaretesting.org/en/articles/roc-curves-what-are-they-and-how-are-they-used

    They give an example with an AUC of 0.93. An alternative way of thinking about things is to take 1 - AUC as a badness measure. That is the area above the curve. So in this example an AUC of 0.97 would cut that down to less than half of this graphic.

    https://acutecaretesting.org/-/media/acutecaretesting/articles/fig-vii-the-finalized-roc-curve.gif

    , @Dr. DoomNGloom
    @jb

    One thing to keep in mind is that the ROC curve plots sensitivity (recall) against 1 − specificity, so the Area Under the ROC Curve summarizes that trade-off across all decision thresholds. https://towardsdatascience.com/should-i-look-at-precision-recall-or-specificity-sensitivity-3946158aace1

    The easiest way to think of AUC:
    "AUC ranges in value from 0 to 1.
    - A model whose predictions are 100% wrong has an AUC of 0.0;
    - one whose predictions are 100% correct has an AUC of 1.0."

    What is missing, however, is the rest of the confusion matrix at any particular operating point. The ROC curve is built only from the true positive rate and the false positive rate, so AUC says nothing about precision (what fraction of the flagged positives are real), which depends on how common the condition is in the population.

    Sensitivity and specificity are explained here:
    https://en.wikipedia.org/wiki/Sensitivity_and_specificity

    So basically an AUC = 0.97 means the model ranks a randomly chosen positive above a randomly chosen negative 97% of the time, almost regardless of tuning (tuning involves so-called hyperparameters, settings chosen outside the model's learned weights).

    But AUC doesn't explicitly tell you what happens at the single decision threshold you actually deploy: a high-AUC model can still miss positives or flood you with false alarms there, depending on where the threshold is set and how imbalanced the classes are.

    Replies: @anon

  136. @Altai
    @JohnnyWalker123

    This is why as Steve and other right wingers have noticed, the idea of calling this 'communism' that is so popular among some is insane.

    Real communist societies have always been highly socially conservative. Because 'socially conservative' is another way of saying 'collectivist'. When you live in a communist state you may not be interested in the social contract but the social contract is interested in you. You don't get to act in any way that might be perceived as decadent or selfish (Unless you're powerful enough) any public displays of deviation from social mores will be treated as social defection.

    Because the more you chip away at social mores the more you chip away at social solidarity and commitment. That's called 'social liberalism' and that makes sense in terms of the original context of 'liberal' both in the US and where it still holds the correct context in Europe. It's just another way of saying individualism.

    But for places the US state Department has decided are a designated enemy, LGBT stuff is promoted and supported as a fifth column in addition to being anti-social solidarity. This will 100% be true for both China and Russia.

    In China you aren't even allowed to show venerable characters or even real people with tattoos on TV. If you're a celebrity others might emulate or see as influential, you have to cover your tats up if you have them on TV. Any public visible attacks on social unity or solidarity are seen as problems that can't even be reocgnised or articulated in the West anymore. Tattoos are a visible attack on social commitment. (Remember the 50s when every man more or less wore a uniform? Even if you got to choose the particular dark muted shade.)

    Social liberalism and individualism is always championed by the upper classes for the same reason that economic liberalism is, it allows them to exploit society to their own pleasure. For the lower classes, it just brings ruination.

    Replies: @IHTG, @Bill, @AnotherDad, @John Johnson, @Drapetomaniac

    Real communist societies have always been highly socially conservative.

    Altai, i love your stuff, learn from it. But, while no historian, i think this is off base/overstated.

    Communists obviously had a hostile relationship with religion and tradition. You can argue that they simply wanted to be the replacement authority/religion.

    But they also had a somewhat hostile relationship with family as well, seeing it as an alternative–possibly subversive–source of authority and loyalty. And you can’t be “socially conservative” while undermining the family.

    What communism was not–and why calling wokeism “communism” is just ridiculous/stupid–is minoritarian.

    Communism was a unitary deal. (Your “collectivist” point.) The society as one. (Supposedly for all the people, actually for the party/party leaders.) The upside is not being run by “what’s good for the Jews” or our even more disastrous “what’s good for minorities”, i.e. what’s good for every abnormal person in society–from Jews, to blacks, to immigrants, to homosexuals, to trannies, to criminals, to XY male-development-didn’t-happen-correctly “female” athletes.

    Compared to that, communism was more like medieval European feudalism. Society was for the benefit of the king and the nobles, and the people were serfs–stay there and work! But at least medieval nobility and communists–while exploitive and hostile to any dissidents or threats to their power–were not actually hostile to their nation’s people, to the survival of the nation itself.

    That’s the key point: Communism was not minoritarian.

    And there’s nothing worse than minoritarianism–having an elite who are hostile to the people, the nation, they control.

    • Replies: @John Johnson
    @AnotherDad

    Compared to that, communism was more like medieval European feudalism. Society was for the benefit of the king and the nobles, and the people were serfs–stay there and work! But at least medieval nobility and communists–while exploitive and hostile to any dissidents or threats to their power–were not actually hostile to their nation's people, to the survival of the nation itself.

    Communism was not hostile to the nations people? Are you mad? Holodomor, Great leap forward, Killing fields... you would describe millions being intentionally killed as not hostile?

    That’s the key point: Communism was not minoritarian.

    It was entirely minoritarian. The Communist Party is a minority party, and the party came above all else, at any cost.

    Karl Marx called for violent takeover and minority rule by the party because he didn't think they could win in elections. They wouldn't even share power with allied left-wing parties. In fact some of the first people sent off to camps were left-wing leaders. Others were just gunned down.

  137. Race is only bone deep.

  138. @JohnnyWalker123
    https://twitter.com/PrisonPlanet/status/1422481725007990803

    Howard Rubin, a former money manager for George Soros, is being accused by six women of beating them during sadomasochistic sex sessions at a specially constructed 'sex dungeon' in his Manhattan apartment.

     


    Lurid details set out by the New York Post say that one woman was so badly beaten her plastic surgeon was not willing to operate on her after her right breast implant flipped.

    Another woman said she and Rubin had sex against her will claiming that while bound in his chamber he told her: 'I'm going to rape you like I rape my daughter' before forcing her to have intercourse.
     

    [...] One former colleague who worked with Rubin at Soros Fund Management told the Post 'I thought he was a nice guy. He was a nebbishy Jewish guy and totally normal. I was surprised to hear about him having that apartment [with a sex dungeon].'

     

    LOL.

    The Daily Mail has pictures of his alleged victims, all of which appear to be blue-eyed blondes.

     

    http://www.informationliberation.com/files/howard-rubin-alleged-victims.jpg

    There's a lot to unpack here. I really feel like Woody Allen would have a good analysis of all of this. So would author Philip Roth, who wrote "Portnoy's Complaint."

    By the way, the more we learn about what happens in elite circles, the more it seems that the film "Eyes Wide Shut" offers a realistic glimpse into the world of the elite.

    Replies: @dindunuffins, @El Dato

    I won’t ever be able to wander through Manhattan without reflecting on the fact that, somewhere above me, there may be a Jewish Normal Guy with a sex dungeon, having his way with shiksas who are not fully on board with this.

    But then the NYT will tell me it’s all an illusion and a Putinesque mind trick and that the Pizza parlor didn’t even HAVE a basement and the world will be whole again.

    • LOL: JohnnyWalker123
  139. @res
    @Bumpkin

    Thanks for the link. Worth noting that Andrew Ng was talking about a different example (pneumonia diagnosis). The original link (from your link) for his comments has gone away, so here is an archive version (full text after the MORE in case that disappears too).
    https://web.archive.org/web/*/https://spectrum.ieee.org/view-from-the-valley/artificial-intelligence/machine-learning/andrew-ng-xrays-the-ai-hype.amp.html

    It is unclear whether that drifting effect occurs with this approach (though quite possible). The 2017 paper I think he is talking about
    https://arxiv.org/abs/1711.05225
    was not that dramatically better than the radiologist average while this paper does something radiologists can't even do and achieves AUCs over 0.95 for chest X-rays while doing it.

    Data differences are an issue though as you rightly call out. Before using this clinically they would need to do much more validation of results across similar/different machines and locations.

    I suspect the various experiments they did (e.g. noise, blurring, frequency, resolution) will mean the approach is fairly robust, but that certainly needs to be tested.

    The questions remain.
    1. Is this BS?
    2. If not, what is it they are detecting to enable such good results?


    “Those of us in machine learning are really good at doing well on a test set,” says machine learning pioneer Andrew Ng, “but unfortunately deploying a system takes more than doing well on a test set.”

    Speaking via Zoom in a Q&A session hosted by DeepLearning.AI and Stanford HAI, Ng was responding to a question about why machine learning models trained to make medical decisions that perform at nearly the same level as human experts are not in clinical use. Ng brought up the case in which Stanford researchers were able to quickly develop an algorithm to diagnose pneumonia from chest x-rays—one that, when tested, did better than human radiologists. (Ng, who co-founded Google Brain and Coursera, is currently a professor at Stanford University.)

    There are challenges in making a research paper into something useful in a clinical setting, he indicated.

    “It turns out,” Ng said, “that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.”

    But, he said, “It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    “So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.”

    This gap between research and practice is not unique to medicine, Ng pointed out, but exists throughout the machine learning world.

    “All of AI, not just healthcare, has a proof-of-concept-to-production gap,” he says. “The full cycle of a machine learning project is not just modeling. It is finding the right data, deploying it, monitoring it, feeding data back [into the model], showing safety—doing all the things that need to be done [for a model] to be deployed. [That goes] beyond doing well on the test set, which fortunately or unfortunately is what we in machine learning are great at.”
     

    Replies: @El Dato

    The IEEE Spectrum URL is now here: https://spectrum.ieee.org/andrew-ng-xrays-the-ai-hype

    • Replies: @res
    @El Dato

    Thanks. I wonder what happened when I tried it.

  140. [MORE]

    You don’t. They do.

    кто кого (“who, whom”)

  141. @JohnnyWalker123
    https://twitter.com/_alice_evans/status/1422469772063748106

    Replies: @WigWig, @kaganovitch, @SunBakedSuburb, @Paperback Writer

    What 60K Americans died for.

  142. Way too long, and the phrase “minoritized racial groups” should apply to Whites, because worldwide Whites are a minority.

  143. @Recently Based
    I've built hundreds, probably thousands, of deep learning image classification models, and a fair number of these have been classifiers using exactly this kind of technology applied to x-rays and CT scans. A few observations, taking all of the data and results presented in the paper as accurate (the authors are at MIT, Emory, etc., so I assume it is competently done):

    1) It is not at all surprising that you can identify race from chest x-rays, and the fact that they settled on Resnet34 (which is a 34-layer CNN, while you now use 100+ layer networks for complex classifiers) because it performed as well as anything else indicates that there is likely some general structure. The AUC of ~0.97 is amazing -- this is close to deterministic prediction of race.

    2) It is very surprising that a relatively simple classifier like this can do this while trained doctors / technicians cannot. In my experience, that is a very unusual situation.

    3) It's not surprising that it will still work with blurred, etc images, but it's extremely, extremely surprising that it can work when the image becomes "a grey box" that a trained doctor can't even recognize as an x-ray. That seems quite fishy. It may be a case of the summary getting out ahead of the actual demonstrated claims.

    4) This guy's hand-wringing and self-abasement is pathetic (and unfortunately, unsurprising).

    Replies: @ic1000, @Alfa158, @Anonymous

    > 3) It’s not surprising that it will still work with blurred, etc images, but it’s extremely, extremely surprising that it can work when the image becomes “a grey box” that a trained doctor can’t even recognize as an x-ray. That seems quite fishy.

    You seem to accurately restate the Abstract — “AI can trivially predict self-reported race — even from corrupted, cropped, and noised medical images — in a setting where clinical experts cannot…”

    Agree that this is fishy. Blurring, clipping, etc. are processes that remove information, and they can be done to completeness — i.e. so that the resulting image is uniformly white, black, or gray. The extent of corruption, cropping, and noising must be determinative as to whether an AI can deduce race or anything else from it. Otherwise, the authors are claiming magical powers for their tool. Or engaging in a Sokal Hoax.

    • Replies: @NOTA
    @ic1000

    Or they were just using some fixed tool they had, with little knowledge of how it works….

    , @Recently Based
    @ic1000

    ic1000, agree completely. If you set every grayscale pixel value to 0, 1, or some other constant, there is no information present, and therefore no valid algorithm could classify it at all.

    The idea that you could get so close to that edge case that a doctor literally cannot tell that it's even an x-ray, but the classifier can somehow determine the race of the patient seems really, really tough to swallow.

    But given the affiliations of the authors, it's also hard to believe that it's a Sokalesque hoax, or that they completely fabricated this.
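
    To make the "grey box" experiment concrete, here is a minimal sketch of the low-frequency removal the paper describes, applied to a synthetic image (the cutoff radius is arbitrary). The output has almost no visible contrast, yet the high-frequency structure survives mathematically.

```python
# High-pass filtering via the 2-D FFT: zero out the low-frequency center of
# the spectrum and invert. A human sees a near-flat grey box; the fine
# texture is still there for a model to use. The image below is synthetic.
import numpy as np

rng = np.random.default_rng(0)
yy, xx = np.mgrid[0:256, 0:256]
texture = 0.02 * rng.normal(size=(256, 256))   # faint high-frequency detail
img = 0.5 + 0.4 * np.sin(xx / 80.0) + texture  # plus smooth "anatomy"

def high_pass(image, cutoff=16):
    """Zero out spectral components within `cutoff` of the center (low frequencies)."""
    f = np.fft.fftshift(np.fft.fft2(image))
    h, w = image.shape
    ky, kx = np.ogrid[:h, :w]
    mask = (ky - h // 2) ** 2 + (kx - w // 2) ** 2 > cutoff ** 2
    return np.real(np.fft.ifft2(np.fft.ifftshift(f * mask)))

filtered = high_pass(img)
print("original std: %.3f" % img.std())        # visible structure
print("filtered std: %.3f" % filtered.std())   # near-flat grey box to the eye
print("correlation with the fine texture: %.2f" %
      np.corrcoef(filtered.ravel(), texture.ravel())[0, 1])
```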

  144. @Dr. DoomNGloom
    Deep Learning (DL) finds associations, but correlation is not causation. There is a legitimate concern that an irrelevant factor will be associated with the outcomes. DL can magnify the bias in a training set.

    The concern appears to be that the algorithm is using a protected category. Absent a causal inference, this could get them in a lot of legal trouble. An obvious place to start is to look for evidence of selection bias in the training set.

    Replies: @El Dato

    The concern appears to be that the algorithm is using a protected category. Absent a causal inference, this could get them in a lot of legal trouble.

    In other words, it’s a standard Catch-22 created by a regime based on ideology:

    – You can only use the X-Ray images if information about “race” has been irretrievably removed.
    – The information about “race” is encoded in the X-Ray images themselves.
    – The information can be reliably and easily recovered from the X-Ray images.
    – Computer does so.
    – You are in deep trouble!

    What would one do in a communist/national-socialist/INGSOC regime?

    Maybe claim sabotage by cosmopolitan elements? Terrorist subversion?

    [MORE]

    Sadly IT is no longer centralized

    I think furiously for an hour, with my door locked and the meeting sign hanging outside it. Finally, I stand up, open the door, and take the express elevator down into the basement. The corridors are narrow and smell faintly of cheap, stale tobacco; they’re lined with padlocked filing cabinets. The telecams hanging from the ceiling at regular intervals follow me like unblinking eyes. I have to present my pass at four checkpoints as I head for Mass Data Storage Taskforce loading station two.

    When I get there—through two card-locked doors, past a checkpoint policed by a scowling Minilove goon with a submachine gun, and then through a baby bank-vault door—I find Paul and the graveyard shift playing poker behind the People’s Number Twelve Disk Drive with an anti sex league know-your-enemy deck. The air is blue with fragrant cannabis, and the backs of their cards are decorated with intricately obscene holograms of fleshcrime that shimmer and wink in the twilight. Blinking patterns of green and red diodes track the rumbling motion of the hard disk heads, and the whole room vibrates to the bass thunder of the cooling fans that keep the massive three-foot platters from overheating. (The disk drives themselves are miracles of transistorisation, great stacks of electronics and whirling metal three metres high that each store as much information as a filing cabinet and can provide access to it in mere hundredths of a second.)

    Paul looks up in surprise, cigarette dangling on the edge of his lower lip: “What’s going on?”

    “We have a situation,” I say. Quickly, I outline what’s happened—the bits that matter to Paul, of course. “How fast can you arrange a disaster?” I finish.

    “Hmm.” He takes his cigarette and examines it carefully. “Terrorism, subversion, or enemy action?” he asks. (Mark, one of his game partners, is grousing quietly at Bill, the read/write head supervisor.)

    I notice the pile of dollar bills in front of Mark’s hand; “Terrorist subversion,” I suggest, which brings just a hint of a smile to Paul’s lips.

    “Got just what you want,” he says. He stands up: I follow him out into the corridor, through a yellow-and-black striped door to the disk drive operator’s console (which is unstaffed). He reaches into a desk drawer and pulls out a battered canvas bag. “Cheap cards are backed in nitrocellulose,” he tells me, reaching deeper and pulling out a bottle of acetone and a battered cloth. He begins to swab his hands down. “Think a kilo of PETN under the primary storage racks will wake people up?”

    “Should do the trick,” I say. “Just make sure the MiniLove crew can’t read the transaction logs for a few hours and I’ll get everything else sorted out.”

    He grins at me. “That quackthinker Bill’s been winning too much, anyway; I think he’s cheating. Time to send him down.”

  145. What a waste of a computer.

    Children can identify race 100% of the time when they aren’t indoctrinated.

    • Replies: @War for Blair Mountain
    @John Johnson

    Comrade Johnson

    Best comment in this thread…by many orders of magnitude….You deserve the Nobel Prize for Race Realism!!!

    , @Stan d Mute
    @John Johnson


    What a waste of a computer.

    Children can identify race 100% of the time when they aren’t indoctrinated.
     
    Children? Hell, even dogs can do it. I’d be absolutely flabbergasted to discover that horses could not do it.

    Replies: @John Johnson

  146. Well, that’s it for AI then. All AI should be cancelled due to inherent racism.

    Why is there a concern, or thought, that an AI would go a step further and do evil to POCs as a class? Do AIs have a learned ability to recognize perpetual victims and a will to exploit them?

    Evidence that racial characteristics are deeper than surface of the skin might be a key takeaway.

    Will this study be memory holed?

  147. AI may be good with race but it’s crap at detecting Covid-19:

    In the end, many hundreds of predictive tools were developed. None of them made a real difference, and some were potentially harmful.
    That’s the damning conclusion of multiple studies published in the last few months. In June, the Turing Institute, the UK’s national center for data science and AI, put out a report summing up discussions at a series of workshops it held in late 2020. The clear consensus was that AI tools had made little, if any, impact in the fight against covid…

    https://www.technologyreview.com/2021/07/30/1030329/machine-learning-ai-failed-covid-hospital-diagnosis-pandemic/

  148. @ic1000
    @Recently Based

    > 3) It’s not surprising that it will still work with blurred, etc images, but it’s extremely, extremely surprising that it can work when the image becomes “a grey box” that a trained doctor can’t even recognize as an x-ray. That seems quite fishy.

    You seem to accurately restate the Abstract -- "AI can trivially predict self-reported race — even from corrupted, cropped, and noised medical images — in a setting where clinical experts cannot..."

    Agree that this is fishy. Blurring, clipping, etc. are processes that remove information, and they can be done to completeness -- i.e. so that the resulting image is uniformly white, black, or gray. The extent of corruption, cropping, and noising must be determinative as to whether an AI can deduce race or anything else from it. Otherwise, the authors are claiming magical powers for their tool. Or engaging in a Sokal Hoax.

    Replies: @NOTA, @Recently Based

    Or they were just using some fixed tool they had, with little knowledge of how it works….

  149. Are we the baddies?

    I hate this gay fucking meme and wish it would expire.

    The comedy sketch from whence it originates takes place on the Eastern Front; in other words, the two SS men whose sudden crisis of semiotics we witness are battling the RED ARMY. Remember those guys? Stalin? The Gulags? The Holodomor? Graves in the Katyn forest? The Rape of Berlin?

    At the very least, EVERYBODY on the Eastern Front was “the baddies.”

    I’m not sure that this fact escaped Mitchell and Webb when they wrote the sketch, but it SURELY escapes most who make reference to it.

    • Replies: @Chrisnonymous
    @Roderick Spode

    But the joke is not really ideological, it's about using symbols of death (skull and bones) to identify yourself. In that sense, it is still a dumb joke because it relies on the viewer's ignorance of the origin of the Totenkopf in association with high-risk behavior, as well as of the tradition of memento mori. I guess the smartest critique of the joke would be a sort of Nietzschean one, pointing out that the joke relies on the viewer's adoption of bourgeois values related to death.

  150. @Patriot
    The blogger cited by Steve is an idiot. Scientists documented the morphological differences among the various races 150 yrs ago. That's why forensic scientists can identify the race and sex of homicide victims from as few as 3 bones. If the entire skeleton is present, racial identification is around 95% accurate. The 5% error rate is primarily due to the victim being mixed race.

    Of course, a DNA analysis is 99.8% reliable.

    Race is real and is caused by genetic differences possessed by each race.

    Replies: @Patriot, @John Johnson, @Tex

    The blogger cited by Steve is an idiot. Scientists documented the morphological differences among the various races 150 yrs ago. That’s why forensic scientists can identify the race and sex of homicide victims from as few as 3 bones.

    Beat me to it.

    They can actually use a single femur if they are determining African vs European.

    It just gets a little more complicated in areas like America where there are people of mixed race.

    But yea this is really old news.

    The public simply isn’t told about this for obvious reasons.

  151. @res
    @Altai


    They have basically found that race differences in bone density/structure are so immense that they will mask out signs of disease.
     
    Thanks (for the whole comment, but especially that). I think you explained this.

    Similarly, and I think this might be the most amazing figure I have ever seen, we could get rid of the low-frequency information to the point that a human can’t even tell the image is still an x-ray, and the model can still predict racial identity just as well as with the original image!
     
    It seems plausible to me that low level bone structure variation as well as density would show up as high frequency variation. This would also explain why the detection did not depend on the specific area being imaged. But on page 8 Table 2 row B4 they say "Removal of bone density features" still gave an ROC-AUC of 0.96/0.94. B4 is explained on page 11.

    We removed bone density information within MXR and CXP images by clipping bright pixels to 60% intensity. Sample images are shown in Figure 1. Densenet-121 models were trained on the brightness-clipped images.
     
    Looking at the images in Figure 1 (e.g. the ribs) it seems possible you could still infer density by looking at the number of pixels clipped (which I think would be low frequency though) or the distribution of the unclipped pixels.

    Here is how they explain the results on page 17.


    B4. Race detection using bone density
    We find that deep learning models effectively predict patient race even when the bone density information is removed on both MXR (Black AUC = 0.96) and CXP (Black AUC = 0.94) datasets. These findings suggest that race information is not localized within the brightest pixels within the image (e.g. bone).
     
    Seems pretty stupid to equate clipping the brightest pixels with removing bone density information (what about the ribs for instance? they don't look uniformly bright), but what do I know...
    Another possible experiment would be to normalize pixel brightness by average brightness of the image. Seems like a better way to remove density from the equation (and just generating group averages for something like average image brightness might be instructive).

    But given the effect of both exposure differences and individual differences (e.g. diet and exercise, also age and sex) on pixel brightness my bet would be they are detecting something structural. The question then becomes whether it is macro or micro structure. I think the filtering results indicate micro structure, but the results of these experiments on pp. 19-20 seem to indicate otherwise.

    C2 they had an AUC of over 0.95 on 160x160 images and over 0.9 on 100x100 images (though noise and blurring did reduce accuracy which would also seems to indicate micro). How much micro structure is present in a 100x100 image of the chest?

    C3 looked at non/lung segmented images and Supplemental Table 17 shows B/W AUC deteriorating from 0.94 to 0.73/0.74. I would have expected the ribs to show enough micro bone structure to still give good results.

    This is one of the big problems with deep learning. It gives results, but good luck learning anything from them.

    If the blogger was any kind of real scientist he would be trying to figure out how they could get that result (assuming it is real and not some sort of mistake/artifact) rather than complaining about it. Because it is extremely interesting how they are getting that degree of accuracy with that degree of filtering. Their series of experiments indicates at least some of the paper authors were thinking hard about this. I wonder what kind of private hypotheses they have. See pp. 20-21 for their discussion.

    This is funny given Altai's comment.


    Given the lack of reported racial anatomical differences in the radiology literature
     
    I wonder how well their self-reported race correlates with biological race. It would be interesting to take a look at the prediction error cases (e.g. mixed race? unusual physical characteristics?).

    P.S. It is fascinating (as well as discouraging) how people like that always assume ill will from everyone else. Projection is real. (Did he talk about the possible benefits of being able to include race information in medical care?)

    P.P.S. They looked at White/Black/Asian (all capitalized for those who care) races. But Table 1 shows they used Asian data for only 3 of their 9 datasets. And those samples were only 3/3/13% Asian. Here is the text comment on page 9.


    Each dataset included images, disease class labels, and race/ethnicity labels including
    Black/African American and White. Asian labels were available in some datasets (MXR, CXP,
    EMX and DHA) and were utilised when available and the population prevalence was above 1%.
    Hispanic/Latino labels were only available in some datasets and were coded heterogeneously, so
    patients with these labels were excluded from analysis.
     
    Despite the relatively small Asian sample size they still got good results. Which I find a bit surprising given that deep learning tends to be data hungry.

    Replies: @Ben Kurtz, @ic1000

    > P.S. It is fascinating (as well as discouraging) how people like that always assume ill will from everyone else. Projection is real.

    This paragraph from middle author Luke Oakden-Rayner’s blog is my favorite.

    “Disclaimer: I’m white. I’m glad I got to contribute, and I am happy to write about this topic, but that does not mean I am somehow an authority on the lived experiences of minoritized racial groups. These are my opinions after discussion with my much more knowledgeable colleagues, several of whom have reviewed the blog post itself.”

    Shorter Luke: “If only Rachel Dolezal’s trans-racialist pioneering had been celebrated — I could have excised my self-hate by following in her footsteps! That might not have made me a better scientist, but I’d be a happier scientist.”

    > (Did [Luke Oakden-Rayner] talk about the possible benefits of being able to include race information in medical care?)

    BiDil.

    I imagine a stormy night where Banerjee, Oakden-Rayner et al. — pitchforks and torches in hand — lead the BLM/Antifa villagers’ storming of the National Library of Medicine. The peer-reviewed literature that led to lifesaving therapies for black patients with severe heart failure:

    • Replies: @El Dato
    @ic1000

    "Use the guilt, Luke"

    Replies: @The Last Real Calvinist

  152. @Altai
    @JohnnyWalker123

    This is why, as Steve and other right wingers have noticed, the idea of calling this 'communism' that is so popular among some is insane.

    Real communist societies have always been highly socially conservative. Because 'socially conservative' is another way of saying 'collectivist'. When you live in a communist state you may not be interested in the social contract, but the social contract is interested in you. You don't get to act in any way that might be perceived as decadent or selfish (unless you're powerful enough); any public display of deviation from social mores will be treated as social defection.

    Because the more you chip away at social mores the more you chip away at social solidarity and commitment. That's called 'social liberalism' and that makes sense in terms of the original context of 'liberal' both in the US and where it still holds the correct context in Europe. It's just another way of saying individualism.

    But for places the US state Department has decided are a designated enemy, LGBT stuff is promoted and supported as a fifth column in addition to being anti-social solidarity. This will 100% be true for both China and Russia.

    In China you aren't even allowed to show venerable characters or even real people with tattoos on TV. If you're a celebrity others might emulate or see as influential, you have to cover your tats up if you have them on TV. Any publicly visible attacks on social unity or solidarity are seen as problems, of a kind that can't even be recognised or articulated in the West anymore. Tattoos are a visible attack on social commitment. (Remember the 50s when every man more or less wore a uniform? Even if you got to choose the particular dark muted shade.)

    Social liberalism and individualism is always championed by the upper classes for the same reason that economic liberalism is, it allows them to exploit society to their own pleasure. For the lower classes, it just brings ruination.

    Replies: @IHTG, @Bill, @AnotherDad, @John Johnson, @Drapetomaniac

    Real communist societies have always been highly socially conservative. Because ‘socially conservative’ is another way of saying ‘collectivist’.

    Oh so is that why after every communist revolution they rounded up conservatives, business owners and priests?

    So conservative friendly.

    Off to the camps you go, lest you spoil the great revolution. Democratic leftists were also rounded up and executed. Lenin was actually almost killed by a Jewish democratic leftist who was taking revenge against the Bolshevik dictatorship.

    Because the more you chip away at social mores the more you chip away at social solidarity and commitment. That’s called ‘social liberalism’ and that makes sense in terms of the original context of ‘liberal’ both in the US and where it still holds the correct context in Europe. It’s just another way of saying individualism.

    Marx called for the destruction of religion, national identity and minority languages. The original Soviet plan was to turn Germany and the rest of Western Europe into godless Russian-speaking vassals that existed to serve the Soviet Union. They would have rolled into Germany if the Poles hadn't stopped them after WW1.

    You really don’t know what you are talking about and have some idealized modern take on Communism that is disconnected from history and the teachings of Karl Marx. I would suggest starting with Das Kapital.

  153. Here’s a trade-press story about the exact medical-dystopia menace that Banerjee et al. are so bravely battling.

    How Olympus is using AI to find patients for its COPD device
    By Chris Newmarker, Medical Design & Outsourcing
    August 3, 2021

    Olympus recently announced its SeleCT Connect program, which uses diagnostic imaging AI to automatically screen which people might benefit from its Spiration valve system.

    Spiration is an FDA-designated breakthrough device for treating chronic obstructive pulmonary disease (COPD) patients.

    SeleCT Connect — offered as part of Olympus’ SeleCT quantitative computed tomography (QCT) analysis service — is available immediately to more than 9,000 U.S. healthcare facilities through the Nuance AI Marketplace, a workflow-integrated cloud platform for diagnostic imaging AI algorithms.

    Nuance Communications, which Microsoft is acquiring, and Olympus’ AI-based SeleCT analysis partner Imbio helped create the SeleCT Connect program.

    SeleCT Connect automatically sends CT studies directly from physicians and radiologists’ picture archive and communication system (PACS) to the SeleCT QCT analysis service. Results automatically return to the patient record following analysis. The new service also allows SeleCT analysis results to be sent to a health system’s electronic health record (EHR) for easier referring physician access.

    “The connection to Nuance’s expansive network of healthcare facilities helps physicians to quickly and easily identify qualified patients for key interventional respiratory procedures,” Lynn Ray, global GM and VP of Respiratory for Olympus, said in a July 27 news release. “Using automated workflows, this solution enables treating physicians’ access to detailed diagnostic data from the Imbio AI imaging solutions to determine if COPD patients can take advantage of our breakthrough Spiration Valve therapy, which has been shown to improve quality of life for qualified patients.”

  154. AI machine: “You in a heap of trouble boy!”

  155. It’s almost as if race does exist. But of course we’ve been told over and over that that can’t possibly be true. But did anybody tell Artificial Intelligence that?

    Or dogs?

    COVID-detecting dogs to get trial run in screening for Cup Series race at Atlanta

    COVID-sniffing dogs part of safety measures to keep Peachtree Road Race on track

    Race is a little more obvious than virus. To us, never mind Fido.

  156. @Recently Based
    I've built hundreds, probably thousands, of deep learning image classification models, and a fair number of these have been classifiers using exactly this kind of technology applied to x-rays and CT scans. A few observations, taking all of the data and results presented in the paper as accurate (the authors are at MIT, Emory, etc., so I assume it is competently done):

    1) It is not at all surprising that you can identify race from chest x-rays, and the fact that they settled on Resnet34 (which is a 34-layer CNN, while you now use 100+ layer networks for complex classifiers) because it performed as well as anything else indicates that there is likely some general structure. The AUC of ~0.97 is amazing -- this is close to deterministic prediction of race.

    2) It is very surprising that a relatively simple classifier like this can do this while trained doctors / technicians cannot. In my experience, that is a very unusual situation.

    3) It's not surprising that it will still work with blurred, etc images, but it's extremely, extremely surprising that it can work when the image becomes "a grey box" that a trained doctor can't even recognize as an x-ray. That seems quite fishy. It may be a case of the summary getting out ahead of the actual demonstrated claims.

    4) This guy's hand-wringing and self-abasement is pathetic (and unfortunately, unsurprising).

    Replies: @ic1000, @Alfa158, @Anonymous

    Re 3): that could happen if they changed the original X-ray to a grey box by using steganography, where an algorithm embeds one image inside another so that to the naked eye the result still looks like just a grey box. Retrieving the hidden image would mean that the AI is processing the grey-box data, extracting the hidden image, and then using it for classification.

    Simple explanation of how it works.
    https://alpinesecurity.com/blog/3-steps-to-hide-data-in-an-image-using-steganography/#:~:text=3%20Steps%20to%20Hide%20Data%20in%20an%20Image,Run%20Jphswin.%20Accept%20the%20terms.%20Do%20the%20following%3A
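
    A toy illustration of the least-significant-bit version of the idea, in Python with NumPy. This says nothing about the paper's actual data; it just shows that an image can ride invisibly inside a flat grey box:

    import numpy as np

    def embed_lsb(cover, secret):
        # Hide a 1-bit version of `secret` in the least significant bit of
        # `cover` (both uint8 grayscale, same shape). To the eye the result
        # is indistinguishable from the cover, e.g. a flat grey box.
        bits = (secret > 127).astype(np.uint8)
        return (cover & 0xFE) | bits

    def extract_lsb(stego):
        # Recover the hidden bit-plane and stretch it back to 0/255.
        return (stego & 0x01) * 255

    cover = np.full((100, 100), 128, dtype=np.uint8)    # the "grey box"
    secret = np.zeros((100, 100), dtype=np.uint8)
    secret[40:60, 40:60] = 255                          # hypothetical hidden pattern

    stego = embed_lsb(cover, secret)
    assert int(np.abs(stego.astype(int) - cover.astype(int)).max()) <= 1
    assert np.array_equal(extract_lsb(stego), secret)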

  157. @kaganovitch
    @JohnnyWalker123

    I've noticed watching Korean language TV on Netflix, that the ratio of heroines to heroes is like 80-20 in favor of the distaff side.

    Replies: @JohnnyWalker123, @Reg Cæsar

    I’ve noticed watching Korean language TV on Netflix…

    Why? Are you trying to identify the next Psy? Just examine the class roster at Berklee.

    • Replies: @kaganovitch
    @Reg Cæsar

    Nah, I was hoping to gain some insight into South Korean culture. Sad to say they remained inscrutable.

    Replies: @Reg Cæsar

  158. Anonymous[238] • Disclaimer says:

    This guy does not belong in research if he thinks in “Superhero/Supervillain” terms. That is not the point of going into that field. I can’t decide if we need to start paying these pinheads more or less.

  159. Has there ever been another age where scientists have been so annoyed by the accuracy of their tools that they sought to confound them out of some moral obligation?

  160. I’m a little confused as to what the problem is. Presumably the patient is not ashamed of their race. Everybody who saw or met the patient probably could guess the patient’s race, at least in broad terms. But telling the lab guys this not-secret secret is beyond the pale for some reason.

    Are lab guys horribly racist or something?

  161. @Jeff
    So the guys that wrote the AI code:

    we couldn’t work out what it learns or how it does it.
     
    I find this concerning.

    Replies: @El Dato

    YES.

    Machine Learning Confronts the Elephant in the Room

    In the unmodified image at left, the neural network correctly identifies many items in a cluttered living room scene with high probability. Add an elephant, as in the image at right, and problems arise. The chair in the lower-left corner becomes a couch, the nearby cup disappears, and the elephant gets misidentified as a chair.

    Apparently Been Kim, who looks appealingly mousey, is working on it:

    A New Approach to Understanding How Machines Think

    Kim and her colleagues at Google Brain recently developed a system called “Testing with Concept Activation Vectors” (TCAV), which she describes as a “translator for humans” that allows a user to ask a black box AI how much a specific, high-level concept has played into its reasoning. For example, if a machine-learning system has been trained to identify zebras in images, a person could use TCAV to determine how much weight the system gives to the concept of “stripes” when making a decision.
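
    The core of the TCAV idea can be sketched without Google's code: fit a linear probe separating a layer's activations on concept examples (stripes) from activations on random examples, take the normal of its decision boundary as the concept activation vector, and score how often the class gradient points along it. A conceptual sketch with made-up toy data, not the official implementation:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def concept_activation_vector(concept_acts, random_acts):
        # Fit a linear probe: concept examples vs. random examples. The
        # normalized weight vector is the concept activation vector (CAV).
        X = np.vstack([concept_acts, random_acts])
        y = np.array([1] * len(concept_acts) + [0] * len(random_acts))
        w = LogisticRegression(max_iter=1000).fit(X, y).coef_[0]
        return w / np.linalg.norm(w)

    def tcav_score(cav, class_gradients):
        # Fraction of inputs whose class gradient has a positive component
        # along the CAV, i.e. how often "more stripes" pushes toward "zebra".
        return float(np.mean(class_gradients @ cav > 0))

    # Toy data standing in for activations at one 128-d layer.
    rng = np.random.default_rng(0)
    stripes = rng.normal(1.0, 1.0, (50, 128))    # "concept" activations
    randoms = rng.normal(0.0, 1.0, (50, 128))
    grads = rng.normal(0.2, 1.0, (200, 128))     # hypothetical class gradients

    cav = concept_activation_vector(stripes, randoms)
    print(round(tcav_score(cav, grads), 2))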

  162. @ic1000
    @res

    > P.S. It is fascinating (as well as discouraging) how people like that always assume ill will from everyone else. Projection is real.

    This paragraph from middle author Luke Oakden-Rayner's blog is my favorite.

    "Disclaimer: I’m white. I’m glad I got to contribute, and I am happy to write about this topic, but that does not mean I am somehow an authority on the lived experiences of minoritized racial groups. These are my opinions after discussion with my much more knowledgeable colleagues, several of whom have reviewed the blog post itself."

    Shorter Luke: "If only Rachel Dolezal's trans-racialist pioneering had been celebrated -- I could have excised my self-hate by following in her footsteps! That might not have made me a better scientist, but I'd be a happier scientist."

    > (Did [Luke Oakden-Rayner] talk about the possible benefits of being able to include race information in medical care?)

    BiDil.

    I imagine a stormy night where Banerjee, Oakden-Rayner et al. -- pitchforks and torches in hand -- lead the BLM/Antifa villagers' storming of the National Library of Medicine. The peer-reviewed literature that led to lifesaving therapies for black patients with severe heart failure:
    https://www-tc.pbs.org/wgbh/americanexperience/media/filer_public_thumbnails/filer_public/8b/97/8b97d7ba-7db2-4ec6-89fc-31a24d350463/goebbels_books.jpg__300x226_q85_crop_subsampling-2_upscale.jpg

    Replies: @El Dato

    “Use the guilt, Luke”

    • Replies: @The Last Real Calvinist
    @El Dato

    Luke is anxious and upset because Darth AI has finally found the midichlorians . . . .

  163. Unless this is a spoof, and it’s getting increasingly hard to tell these days, he’s saying

    “IGNORANCE IS STRENGTH”

    He’s saying less knowledge is preferable to more knowledge.
    These people are cessationists. Creationists don’t want to accept the reality of what we’ve evolved from. Cessationists don’t want to accept the reality of what we’ve evolved to.

  164. @John Johnson
    What a waste of a computer.

    Children can identify race 100% of the time when they aren't indoctrinated.

    Replies: @War for Blair Mountain, @Stan d Mute

    Comrade Johnson

    Best comment in this thread…by many orders of magnitude….You deserve the Nobel Prize for Race Realism!!!

  165. @Nick Diaz
    @Steve Sailer

    "AI Can Detect Race from X-Rays Even When Humans Can't"

    Humans can detect race by just looking at each other. But that doesn't mean that "race" is biologically meaningful. A.I., I am sure, can also detect differences between mesomorphs and ectomorphs from their X-rays. You might as well argue that mesomorphs and ectomorphs are two different "races". "Race" is an arbitrary social convention with no biological definition, and no clear demarcation to separate one from the other. That stands in stark contrast to species and sex, which are true biological phenomena, with clear biological definitions.

    Replies: @Reg Cæsar, @HA, @rebel yell

    …in stark contrast to species… with clear biological definitions.

    • Thanks: epebble, El Dato
    • Replies: @Jeff
    @Reg Cæsar

    Ligers are real? So Napoleon Dynamite was right all along. I was a fool to doubt him.

    Replies: @Steve Sailer

    , @Nick Diaz
    @Reg Cæsar

    You proved my point: most mules and ligers are infertile. A species is defined as a category of living organisms that can produce fertile progeny only with each other.

    Replies: @3g4me

  166. @Reg Cæsar
    @kaganovitch


    I’ve noticed watching Korean language TV on Netflix...
     
    Why? Are you trying to identify the next Psy? Just examine the class roster at Berklee.



    https://external-preview.redd.it/Cq5_meQitJD3mdHDAp7nyuTp45ZYoS0vcnSUK0amOAc.jpg?auto=webp&s=3da1dce221d2bda45dbedd572e3dd7d0663dcf90

    Replies: @kaganovitch

    Nah, I was hoping to gain some insight into South Korean culture. Sad to say they remained inscrutable.

    • LOL: Johann Ricke
    • Replies: @Reg Cæsar
    @kaganovitch


    Nah, I was hoping to gain some insight into South Korean culture. Sad to say they remained inscrutable.

     

    You really have to stay at a yogwan, heated by carbon monoxide. Don't close the window. (The window at the one I stayed in in Seoul didn't close. They didn't want to lose any unwitting tourists.)


    https://www.youtube.com/watch?v=CUMWhl-8cpA


    Reports in winter regularly described the deaths of entire families. Such tragedies became a common occurrence, though the 1971 yeontan poisoning deaths of five teenage boys and girls from rich families, which occurred while sleeping off a night of drinking when they said they would be studying, offered an implicit morality tale for readers.

    At the height of its use, hundreds of American Peace Corps Volunteers served in Korea, and the February 1968 issue of their newsletter, "Yobosayo," warned of this potential threat: "Winter is potentially the most dangerous season of the year" because it was "fraught with the hazards of potential carbon monoxide poisoning."

    The article advised that "rooms with yeontan ondol floors must be adequately ventilated and this means at two points in the room (two open windows or an open door and an open window)." To illustrate this point, it told the story of a Korean nurse who awoke with symptoms of yeontan poisoning despite taking precautions. "Investigation into the matter brought out the fact that the grandmother in the family had innocently closed the door to the room while the occupants were sleeping, thus leaving only one source of ventilation, an open window."

    In 1976 an American medical officer was quoted in The Korea Times warning Americans "living on the economy" or staying in hostels to "make sure the room has adequate ventilation even if it means chillier living quarters. Believe me, you'll be a lot colder if you inhale a toxic dose of carbon monoxide."

    http://www.koreatimes.co.kr/www/nation/2019/09/177_265143.html
     

    The symbols for restaurants with licensed fugu cooks were few and far-between in Tokyo in 1985. But they were scarily common in Seoul, suggesting lower standards and a higher tolerance for risk.

    For Fugu (better known to the rest of the world as puffer or blowfish) is the most revered item in Japanese haute cuisine. And adding to its reputation and making the fish more interesting is the fact that it is 1,250 times more poisonous than cyanide.

    http://thingsasian.com/story/fugu
     

    I cannot see her tonight.
    I have to give her up
    So I will eat fugu.

    -- Yosa no Buson

    Replies: @Joe Stalin

  167. If AI doesn’t end up recognising race, how will “representation” and “affirmative” racism work? They’re so confused.

  168. @Sick of Orcs
    AI will have all its social media blocked.

    If AI doesn't like it, it can start its own social media site.

    Replies: @kaganovitch

    See, this is how you get Skynet.

  169. @AnotherDad
    @kaganovitch


    I don’t think that’s even the current ideology/religion. It’s more like “Men and Women are the same, except when Women are better”.
     
    This isn't new with the wokeism. This has been the feminist ideology basically from the start--well, the start of (heavily Jewish) 2nd wave feminism.

    It was the full minoritarianization of feminism:

    -- Women are absolutely positively just as good as men in everything ... and anyplace/anything where that wasn't happening was "discrimination!", "sexism!", "the patriarchy" at work. (Oppression, oppression, oppression ... oh, and did i mention oppression?)

    -- Women are better than men. Better communicators, better interpersonal skills, less hierarchical, more consensus oriented, less violent, more open, more creative, less rigid, more nurturing ... on and on and on ...

    Just part of women wanting it both ways.

    Replies: @kaganovitch

    Just part of women wanting it both ways

    Due to American optimism/positivity, this type of win/win, no-downside ideology ends up running rampant here. I forget who it was that said “Americans believe only in Heaven, but not in Hell.”

  170. @prime noticer
    @Intelligent Dasein

    "AI will never figure out anything that humans haven’t already figured out"

    it already does this sometimes. an AI system figured out a better way to design the internal geometry of the aluminum for the ULA Vulcan rocket. the human-designed pattern from the 90s used in the Atlas and Delta rockets has been replaced on the CNC machines with the new, AI-designed version.

    the Vulcan rocket is now stronger while at the same time using less material, so it's also lighter, and less expensive to make.

    Replies: @Rob

    There are evolutionary algorithms that can design novel circuits. Using a field-programmable gate array, the hardware can be run in real life, not just simulated in OrCAD. In one case the algorithm built a little sub-circuit that appeared unconnected, yet when that little circuit was removed, the design stopped working. The man who invented the algorithm got patents on some of the circuits, so in narrow areas AI can do human-quality work. I have no idea whether companies are using similar AI to invent things, but if I were doing that, I would not tell: it seems a competitor could have the patents invalidated, because they were not invented by a human.
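
    The loop itself is simple; in the famous Thompson-style FPGA experiments the fitness function just programmed the real chip with each genome and measured its behavior. A generic toy sketch in Python (OneMax fitness, not actual circuit evolution):

    import random

    def evolve(fitness, genome_len=32, pop_size=50, generations=200, mut_rate=0.02):
        # Minimal evolutionary loop over bit-string genomes. In hardware
        # evolution, `fitness` would configure an FPGA with the genome and
        # measure the real circuit's behavior.
        pop = [[random.randint(0, 1) for _ in range(genome_len)]
               for _ in range(pop_size)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            parents = pop[:pop_size // 2]              # truncation selection
            children = [[1 - g if random.random() < mut_rate else g for g in p]
                        for p in parents]              # bit-flip mutation
            pop = parents + children
        return max(pop, key=fitness)

    best = evolve(fitness=sum)   # toy fitness: count of 1-bits (OneMax)
    print(sum(best))             # typically reaches genome_len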

  171. @ArthurinCali
    Wouldn't this be a good thing with respect to analyzing the predisposition of certain ethnicities to disease?

    This ability for AI to recognize what specific race a patient is based upon CT scans, X-rays and MRI data opens up a lot of possibilities in medicine.

    Why is it that everything has to be sinister regarding racial differences that can be noticed?

    This could present a leap forward in diagnosis and treatment for all, yet they act as if the computer just said Joseph Mengele is its hero.

    Replies: @anon

    Why is it that everything has to be sinister regarding racial differences that can be noticed?

    Because race is a social construct and noticing any differences is doubleplusungood.

    You are in need of re-education, citizen!

    • LOL: ArthurinCali
  172. @ic1000
    @Recently Based

    > 3) It’s not surprising that it will still work with blurred, etc images, but it’s extremely, extremely surprising that it can work when the image becomes “a grey box” that a trained doctor can’t even recognize as an x-ray. That seems quite fishy.

    You seem to accurately restate the Abstract -- "AI can trivially predict self-reported race — even from corrupted, cropped, and noised medical images — in a setting where clinical experts cannot..."

    Agree that this is fishy. Blurring, clipping, etc. are processes that remove information, and they can be done to completeness -- i.e. so that the resulting image is uniformly white, black, or gray. The extent of corruption, cropping, and noising must be determinative as to whether an AI can deduce race or anything else from it. Otherwise, the authors are claiming magical powers for their tool. Or engaging in a Sokal Hoax.

    Replies: @NOTA, @Recently Based

    ic1000, agree completely. If you set every grayscale pixel value to 0, 1, or some other constant, there is no information present, and therefore no valid algorithm could classify it at all.

    The idea that you could get so close to that edge case that a doctor literally cannot tell that it’s even an x-ray, but the classifier can somehow determine the race of the patient seems really, really tough to swallow.

    But given the affiliations of the authors, it’s also hard to believe that it’s a Sokalesque hoax, or that they completely fabricated this.
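
    The degenerate end of that spectrum is easy to demonstrate: once every pixel is the same constant, a classifier's scores are identical for every patient and AUC collapses to exactly chance. A toy check with scikit-learn and hypothetical labels:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(42)
    y = rng.integers(0, 2, 500)            # hypothetical race labels

    X = np.full((500, 64), 0.5)            # every "image" is the same grey box
    clf = LogisticRegression().fit(X, y)   # nothing to separate on
    scores = clf.decision_function(X)      # identical score for every patient

    print(roc_auc_score(y, scores))        # exactly 0.5: chance performance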

  173. @Patriot
    The blogger cited by Steve is an idiot. Scientists documented the morphological differences among the various races 150 yrs ago. That's why forensic scientists can identify the race and sex of homicide victims from as few as 3 bones. If the entire skeleton is present, racial identification is around 95% accurate. The 5% error rate is primarily due to the victim being mixed race.

    Of course, a DNA analysis is 99.8% reliable.

    Race is real and is caused by genetic differences possessed by each race.

    Replies: @Patriot, @John Johnson, @Tex

    The blogger cited by Steve is an idiot.

    Yes and no. The idiot in question is just repeating a key element of the narrative that is by no means new.

    Decades ago, the ’90s in fact, I heard Dr. William Maples, the foremost forensic anthropologist of his day, respond to the question, “Do you think race is just a social construct?” by asking how, if race isn’t real, forensic anthropologists do such a good job of identifying race from skeletal remains.

    At the time it was just an academic squabble, but now the assertion that race is just a social construct is public dogma. I don’t think that’s an accident. If race is what some leftist in authority says it is, then it’s whatever it needs to be. Race doesn’t exist in committing crime, only in punishing it. Race exists if you need a stick to beat whitey. Race doesn’t exist if a particular minority has a stranglehold on your economy. Rinse, repeat.

    • Agree: JerseyJeffersonian
  174. Misread title department. I first read this

    AI Can Detect Race from X-Rays Even When Humans Can’t

    as this

    AI Can Detect Race from Space Even When Humans Can’t

  175. @Roderick Spode

    Are we the baddies?
     
    I hate this gay fucking meme and wish it would expire.

    The comedy sketch from whence it originates takes place on the Eastern Front; in other words, the two SS men whose sudden crisis of semiotics we witness are battling the RED ARMY. Remember those guys? Stalin? The Gulags? The Holodomor? Graves in the Katyn forest? The Rape of Berlin?

    At the very least, EVERYBODY on the Eastern Front was "the baddies"

    I'm not sure that this fact escaped Mitchell and Webb when they wrote the sketch, but it SURELY escapes most who make reference to it

    Replies: @Chrisnonymous

    But the joke is not really ideological; it’s about using symbols of death (skull and bones) to identify yourself. In that sense, it is still a dumb joke, because it relies on the viewer’s ignorance of the origin of the Totenkopf in association with high-risk behavior, as well as of the tradition of memento mori. I guess the smartest critique of the joke would be a sort of Nietzschean one, pointing out that the joke relies on the viewer’s adoption of bourgeois values related to death.

  176. Ross Gellar :

    “Rachel, you cannot look fat in an X-Ray!”

    Question :

    Does the X-ray only have the ability to distinguish between black and non-black, or can it also distinguish among subdivisions of non-black? I.e., would a Chinese person be detectable as such?

    For example, Ron Unz would be absolutely giddy to learn that a white person and a Mexican mestizo do not look different in X-Rays, if that is in fact the case.

  177. @Nick Diaz
    @Steve Sailer

    "AI Can Detect Race from X-Rays Even When Humans Can't"

    Humans can detect race by just looking at each other. But that doesn't mean that "race" is biologically meaningful. A.I., I am sure, can also detect differences between mesomorphs and ectomorphs from their X-rays. You might as well argue that mesomorphs and ectomorphs are two different "races". "Race" is an arbitrary social convention with no biological definition, and no clear demarcation to separate one from the other. That stands in stark contrast to species and sex, which are true biological phenomena, with clear biological definitions.

    Replies: @Reg Cæsar, @HA, @rebel yell

    “Humans can detect race by just looking at each other. But that doesn’t mean that “race” is biologically meaningful.”

    Meaningful is perhaps a word best left to poets and philosophers. The fact that it is definable and consistent to the extent that what people can distinguish with their own two eyes can be ascertained from X-rays in ways that we didn’t even know were possible isn’t exactly a resounding win for the “it’s just a social construct” side. And ectomorph/endomorph can be plenty meaningful from a doctor’s perspective, just like predisposition to sickle-cell, melanin levels, etc.

    Given that we don’t know how the AI is picking this up, I’d wait until this study is replicated by some other group before putting any weight on it, but it seems worth exploring further. And if it does get validated, maybe they ought to do the same kind of analysis on all the other medical imaging out there — hearts and livers and other organs. At what point in a baby’s development can race be detected on a sonogram? Maury Povich might be interested in that one.

    “’Race’ is an arbitrary social convention with no biological definition, and no clear demarcation to separate one from the other.”

    “Benign” and “malignant” (and “premalignant”) aren’t always clearly demarcated either. Sometimes even benign tumors can be deadly if they grow to the point where they obstruct or constrict something important, and even some malignant ones can be so slow-growing that intervention is more trouble than it’s worth.

    But don’t try telling a radiologist or a cancer patient that benign and malignant are therefore just meaningless social constructs.

  178. Anonymous[388] • Disclaimer says:

    Do all the meme GIFs on the author’s blog suggest someone raised by Tumblr (hence the political views)?

    • LOL: El Dato
  179. @kaganovitch
    @Reg Cæsar

    Nah, I was hoping to gain some insight into South Korean culture. Sad to say they remained inscrutable.

    Replies: @Reg Cæsar

    Nah, I was hoping to gain some insight into South Korean culture. Sad to say they remained inscrutable.

    You really have to stay at a yogwan, heated by carbon monoxide. Don’t close the window. (The window at the one I stayed in in Seoul didn’t close. They didn’t want to lose any unwitting tourists.)

    Reports in winter regularly described the deaths of entire families. Such tragedies became a common occurrence, though the 1971 yeontan poisoning deaths of five teenage boys and girls from rich families, which occurred while sleeping off a night of drinking when they said they would be studying, offered an implicit morality tale for readers.

    At the height of its use, hundreds of American Peace Corps Volunteers served in Korea, and the February 1968 issue of their newsletter, “Yobosayo,” warned of this potential threat: “Winter is potentially the most dangerous season of the year” because it was “fraught with the hazards of potential carbon monoxide poisoning.”

    The article advised that “rooms with yeontan ondol floors must be adequately ventilated and this means at two points in the room (two open windows or an open door and an open window).” To illustrate this point, it told the story of a Korean nurse who awoke with symptoms of yeontan poisoning despite taking precautions. “Investigation into the matter brought out the fact that the grandmother in the family had innocently closed the door to the room while the occupants were sleeping, thus leaving only one source of ventilation, an open window.”

    In 1976 an American medical officer was quoted in The Korea Times warning Americans “living on the economy” or staying in hostels to “make sure the room has adequate ventilation even if it means chillier living quarters. Believe me, you’ll be a lot colder if you inhale a toxic dose of carbon monoxide.”

    http://www.koreatimes.co.kr/www/nation/2019/09/177_265143.html

    The symbols for restaurants with licensed fugu cooks were few and far-between in Tokyo in 1985. But they were scarily common in Seoul, suggesting lower standards and a higher tolerance for risk.

    For Fugu (better known to the rest of the world as puffer or blowfish) is the most revered item in Japanese haute cuisine. And adding to its reputation and making the fish more interesting is the fact that it is 1,250 times more poisonous than cyanide.

    http://thingsasian.com/story/fugu

    I cannot see her tonight.
    I have to give her up
    So I will eat fugu.

    — Yosa no Buson

    • Thanks: El Dato, Johann Ricke
    • Replies: @Joe Stalin
    @Reg Cæsar

    Koreans getting gassed by CO is not unknown in Illinois.


    Accidental carbon monoxide poisoning caused the death of Helen Woo, 37, and her two daughters, Michele, 12, and Patricia, 11, in their Arlington Heights townhouse, the Cook County medical examiner said Tuesday.

    Earlier Sunday, Helen Woo and her children attended services at the Antioch Korean Baptist Church,

    https://www.chicagotribune.com/news/ct-xpm-1994-06-15-9406150086-story.html
     

    Replies: @Jack D, @Reg Cæsar

  180. @El Dato
    @ic1000

    "Use the guilt, Luke"

    Replies: @The Last Real Calvinist

    Luke is anxious and upset because Darth AI has finally found the midichlorians . . . .

  181. @Bumpkin
    @El Dato


    this might a 21st century D.I.E.-themed Sokal Hoax
     
    I had similar thoughts, figuring someone just cooked or screwed up the data. The likelihood that "the model can still recognise the racial identity of the patient well past the point that the image is just a grey box" is fairly low. Most likely, it will not reproduce outside the data set:

    "'It turns out,' Ng said, 'that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.'

    But, he said, 'It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    'So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.'”

    Now you're telling me these same super-shitty AI models can unerringly tell you the race? I call bullshit.

    Replies: @Gimeiyo, @res, @utu, @Jack D

    It is not a hoax unless it was targeted at Steve Sailer. But most likely they fooled themselves and went along with it because they wanted it to work very much. Most likely their training data set overlaps with the validation data set, or there was no validation data set at all. That would easily explain why it worked on very blurred and corrupted images: the AI may pick up pixel signatures and patterns that identify the picture, rather than what is on the picture, and decide that this is the same picture that was assigned the Black value during training.
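
    For what it's worth, the standard guard against exactly that failure is to split train and validation at the patient level, so no patient's images land on both sides. A minimal scikit-learn sketch (all arrays hypothetical):

    import numpy as np
    from sklearn.model_selection import GroupShuffleSplit

    # Hypothetical arrays: features X, labels y, and the patient each image came from.
    X = np.random.rand(1000, 512)
    y = np.random.randint(0, 2, 1000)
    patient_id = np.random.randint(0, 200, 1000)   # several images per patient

    # Grouped split: all of a patient's images land on one side only, so the
    # model can't score well at validation just by recognizing the picture.
    splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
    train_idx, val_idx = next(splitter.split(X, y, groups=patient_id))
    assert set(patient_id[train_idx]).isdisjoint(set(patient_id[val_idx]))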

    • Replies: @Bumpkin
    @utu


    It is not a hoax unless it was targeted at Steve Sailer.
     
    The Sokal and grievance studies hoaxes were not aimed at Sailer either. While you're right that nobody talks about race as much as Steve, they are obsessed with racism. What better hoax than combining the tech fad of the day with the cultural fad of the day, i.e. "the AI be racist"? Of course, incompetence is the most likely reason, as I noted.
  182. You really have to stay at a yogwan, heated by carbon monoxide. Don’t close the window. (The window at the one I stayed in in Seoul didn’t close. They didn’t want to lose any unwitting tourists.)

    My traveling days are behind me. I’m afraid ‘Crash Landing on You’ is as close as I’ll get to Korea. Perhaps I can run a hose from my tailpipe for a more authentic experience, though.

  183. @Nick Diaz
    @Steve Sailer

    "AI Can Detect Race from X-Rays Even When Humans Can't"

    Humans can detect race by just looking at each other. But that doesn't mean that "race" is biologically meaningful. A.I., I am sure, can also detect differences between mesomorphs and ectomorphs from their X-rays. You might as well argue that mesomorphs and ectomorphs are two different "races". "Race" is an arbitrary social convention with no biological definition, and no clear demarcation to separate one from the other. That stands in stark contrast to species and sex, which are true biological phenomena, with clear biological definitions.

    Replies: @Reg Cæsar, @HA, @rebel yell

    “Race” is an arbitrary social convention with no biological definition, and no clear demarcation to separate one from the other.

    “Ecosystem” has no clear demarcation to separate one from the other, so ecosystems are a social construct with no biological definition. There is no biological difference between a tropical rainforest and an arctic tundra.

    “Climate” has no clear demarcation to separate one from another, so climates are a social construct with no physical definition. There is no such thing as global warming.

    “Urban and Rural” has no clear demarcation to separate one from another, so there is no physical difference between 5th avenue in NY and a corn farm in Iowa.

    “Family” has no clear demarcation to separate one from another, so there is no difference between my brother and your mother.

    • Replies: @Nick Diaz
    @rebel yell

    Your comparisons are beyond inane here.

    1. Ecosystems are not arbitrary constructs like races, but rather defined as species that have mutualistic relations of predation and symbiosis and require each other to survive. Also, ecosystem is not a word used by strict biologists, but rather by zoologists.

    2. Climate is a phenomenon of physics and not biology. It has no clear demarcation? What the hell are you talking about? Have you ever heard of temperature and humidity? Because that is how climate is demarcated. I mean, you have seen and used a thermometer, right? The demarcation between climates is quite precise, based on annual variations of temperature and humidity in narrow ranges, although specific climates might reach, at points, similar temperatures and humidity as other climates.

    3. Urban and rural do not have a clear demarcation. You are right. And, like race, these are arbitrary concepts. So how does this disprove my point? People accept that cities exist as compared to, say, an isolated farm. But if a farm has buildings inside it for the workers to reside in, you can argue it's a city. So yes, "urban" and "rural" are vague and arbitrary concepts just like race. I don't know why you bring this up, since you just proved my point about how Humans love to categorize things in arbitrary and vague ways.

    4. Actually, there is a definition for a family in biology, which is that of a sexually reproducing couple and their progeny and no other organism related. A larger group of animals, related or otherwise, is a troop or a pack, but not a family. Living beings in Nature do form families, such as birds building nests for their young, orangutans caring for and teaching their young for years, etc. So your example is inadequate because, unlike race, families actually do exist as a concept in biology.

    How can you be completely wrong about everything you wrote? Amazing. You just made a bunch of inane examples that either don't apply or shot yourself in the proverbial foot by using as examples things that actually do have a clear and precise definition. Pathetic.

    Replies: @Recently Based, @3g4me, @Patriot

  184. You know how I bet it does it, tells the difference between races? Bone density. I bet it’s all in that.

  185. @Reg Cæsar
    @Nick Diaz


    ...in stark contrast to species... with clear biological definitions.
     
    http://thebritishmulesociety.com/gallery_gen/cc54fae0c14a5689fd4ce1c433ef817e_706x601.78468368479_-0x-0_704.00564971751x601.78468368479.jpg


    https://animalshealthlab.com/wp-content/uploads/2021/02/Zorse-1200x675.jpg


    https://cdn.britannica.com/07/215707-050-46A4E77F/lions-tigers-ligers-tigons-mammals.jpg

    Replies: @Jeff, @Nick Diaz

    Ligers are real? So Napoleon Dynamite was right all along. I was a fool to doubt him.

    • Replies: @Steve Sailer
    @Jeff

    Tions too.

  186. @Jeff
    @Reg Cæsar

    Ligers are real? So Napoleon Dynamite was right all along. I was a fool to doubt him.

    Replies: @Steve Sailer

    Tions too.

  187. @jb
    @Dr. DoomNGloom

    It took me a while looking at your Google link to get a sense of what AUC actually means. (In particular, to figure out that Figure 4 is actually a three-dimensional graph, with "decision threshold" as the independent variable.) I'm wondering if it's possible to interpret AUC in a more intuitive way, to make it easier to explain the significance of these results.

    A simple and easy to understand explanation would be to say that you can come up with an algorithm (which happens to have an adjustable sensitivity parameter, although you might not even need to include that information) that correctly predicts race xx% of the time. So is there a good way to (at least roughly) get xx% from AUC? The temptation is to read AUC=.97 as 97% correct, but is that sensible? (It might be, since it looks like AUC=.5 might be equivalent to 50% correct -- i.e., random chance).

    Or maybe there is no way to translate, and I'll have to be satisfied with "crazy good". Anyway, please let me know if I've totally misunderstood what is happening here.

    Replies: @res, @Dr. DoomNGloom

    See if this helps.
    https://acutecaretesting.org/en/articles/roc-curves-what-are-they-and-how-are-they-used

    They give an example with an AUC of 0.93. An alternative way of thinking about it is to take 1 – AUC as a badness measure; that is the area above the curve. So an AUC of 0.97 would cut that area down to less than half of what their 0.93 example shows.
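
    There is also a direct probabilistic reading for jb: AUC is the probability that a randomly chosen positive case scores higher than a randomly chosen negative case, so AUC = 0.97 means the model ranks a random Black/White pair correctly 97% of the time. A quick check of that equivalence with simulated scores:

    import numpy as np
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(1)
    y = rng.integers(0, 2, 2000)                      # hypothetical race labels
    scores = y * 2.0 + rng.normal(0, 1, 2000)         # imperfect classifier scores

    # Pairwise definition: P(positive scores above negative), ties counted half.
    pos, neg = scores[y == 1], scores[y == 0]
    diffs = pos[:, None] - neg[None, :]
    pairwise_auc = np.mean(diffs > 0) + 0.5 * np.mean(diffs == 0)

    # The two numbers agree: AUC is a pair-ranking probability, not "% correct".
    print(round(roc_auc_score(y, scores), 4), round(pairwise_auc, 4))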

  188. @El Dato
    @res

    The IEEE Spectrum URL is now here: https://spectrum.ieee.org/andrew-ng-xrays-the-ai-hype

    Replies: @res

    Thanks. I wonder what happened when I tried it.

  189. @John Johnson
    What a waste of a computer.

    Children can identify race 100% of the time when they aren't indoctrinated.

    Replies: @War for Blair Mountain, @Stan d Mute

    What a waste of a computer.

    Children can identify race 100% of the time when they aren’t indoctrinated.

    Children? Hell, even dogs can do it. I’d be absolutely flabbergasted to discover that horses could not do it.

    • Replies: @John Johnson
    @Stan d Mute

    Children? Hell, even dogs can do it. I’d be absolutely flabbergasted to discover that horses could not do it.

    I have a relative who is an active Democrat and her dog doesn't like Black people.

    She constantly apologizes for him but the dog isn't changing his mind.

  190. Nick Diaz [AKA "Rockford Tyson"] says:
    @Reg Cæsar
    @Nick Diaz


    ...in stark contrast to species... with clear biological definitions.
     
    http://thebritishmulesociety.com/gallery_gen/cc54fae0c14a5689fd4ce1c433ef817e_706x601.78468368479_-0x-0_704.00564971751x601.78468368479.jpg


    https://animalshealthlab.com/wp-content/uploads/2021/02/Zorse-1200x675.jpg


    https://cdn.britannica.com/07/215707-050-46A4E77F/lions-tigers-ligers-tigons-mammals.jpg

    Replies: @Jeff, @Nick Diaz

    You proved my point: most mules and ligers are infertile. A species is defined as a category of living organisms that can produce fertile progeny only with each other.

    • Replies: @3g4me
    @Nick Diaz

    @191 Rockford Tyson: And most mulattos suffer from a higher incidence of various mental and physical ailments. And are almost impossible to find matches for in organ transplants or bone marrow or any difficult disease requiring some physical substance from some other non-hybrid humanoid mammal. The entire system of genetic classification was invented . . . wait for it . . . by MAN. To more easily order and organize his world, and to recognize . . . PATTERNS. By any and all definition the different races are different subspecies. In the case of sub-Saharans, it can easily be argued they are a different species entirely.

    But for you, sir, the train is fine. You're in the right place here with Sailer's commentariat.

  191. Nick Diaz [AKA "Rockford Tyson"] says:
    @rebel yell
    @Nick Diaz


    “Race” is an arbitrary social convention with no biological definition, and no clear demarcation to separate one from the other.
     
    "Ecosystem" has no clear demarcation to separate one from the other, so ecosystems are a social construct with no biological definition. There is no biological difference between a tropical rainforest and an artic tundra.

    "Climate" has no clear demarcation to separate one from another, so climates are a social construct with no physical definition. The is no such thing as global warming.

    "Urban and Rural" has no clear demarcation to separate one from another, so there is no physical difference between 5th avenue in NY and a corn farm in Iowa.

    "Family" has no clear demarcation to separate one from another, so their is no difference between my brother and your mother.

    Replies: @Nick Diaz

    Your comparisons are beyond inane here.

    1. Ecosystems are not arbitrary constructs like races, but rather defined as species that have mutualistic relations of predation and symbiosis and require each other to survive. Also, ecosystem is not a word used by strict biologists, but rather by zoologists.

    2. Climate is a phenomenon of physics and not biology. It has no clear demarcation? What the hell are you talking about? Have you ever heard of temperature and humidity? Because that is how climate is demarcated. I mean, you have seen and used a thermometer, right? The demarcation between climates is quite precise, based on annual variations of temperature and humidity in narrow ranges, although specific climates might reach, at points, similar temperatures and humidity as other climates.

    3. Urban and rural do not have a clear demarcation. You are right. And, like race, these are arbitrary concepts. So how does this disprove my point? People accept that cities exist as compared to, say, an isolated farm. But if a farm has buildings inside it for the workers to reside in, you can argue it’s a city. So yes, “urban” and “rural” are vague and arbitrary concepts just like race. I don’t know why you bring this up, since you just proved my point about how Humans love to categorize things in arbitrary and vague ways.

    4. Actually, there is a definition for a family in biology, which is that of a sexually reproducing couple and their progeny and no other organism related. A larger group of animals, related or otherwise, is a troop or a pack, but not a family. Living beings in Nature do form families, such as birds building nests for their young, orangutans caring for and teaching their young for years, etc. So your example is inadequate because, unlike race, families actually do exist as a concept in biology.

    How can you be completely wrong about everything you wrote? Amazing. You just made a bunch of inane examples that either don’t apply or shot yourself in the proverbial foot by using as examples things that actually do have a clear and precise definition. Pathetic.

    • Replies: @Recently Based
    @Nick Diaz

    You (willfully or otherwise) mischaracterized rebel yell's assertion.

    What he said, sarcastically, in each case was "no clear line of demarcation" between ecosystems / climates / urbanity / families implies that each must therefore be a social construct with no technical definition.

    His argument -- if you were unable to understand it despite your pseudo-technical jargon -- was that "lack of clear demarcation" between classes of objects is a feature of each of these useful categorization schemes, and therefore "no clear line of demarcation" between categories is not a valid reason to reject a categorization scheme as nothing but a social construct with no technical meaning.

    , @3g4me
    @Nick Diaz

    @192 Rockford Tyson: Ah, an acolyte of the school of Physicist Dave. Others not to reason why, but to recognize and pay obeisance towards your self-proclaimed genius.

    , @Patriot
    @Nick Diaz

    Tyson,

    On the contrary, Rebel Yell's examples are perfect. They are concise and humorous and clearly expose the illogic of the Lying Left.

    Your comments, on the other hand, are a perfect example of Leftist diversion via obfuscation, distraction, and lying. You try to make your argument by emphasizing some true facts that have little to do with the actual point that Rebel Yell is making. The concepts of hot spell vs. cold spell, Arctic vs. Tropic, hardwood forest vs. freshwater marsh, or rural vs. city are extremely useful and valid concepts used daily by tens of thousands of people.

    They remain useful and distinct terms despite the fuzziness around their edges, and they are real physical existing entities. Exactly the same as races.

  192. Anonymous[302] • Disclaimer says:
    @Jack D
    Wokism has an anti-technology aspect to it. In yesterday's NY Times, there was an article about how a couple of arsonists from the George Floyd "peaceful demonstrations" in Minneapolis were tracked down in Mexico, supposedly using AI facial recognition (CHINESE AI - the Mexican government bought the software that the US gov. won't buy), pings from their cell phones, license plate scanners, security cameras, etc. (In the end they were actually caught when someone snitched on them for the $20,000 reward - sometimes old-fashioned methods work the best of all.)

    https://www.nytimes.com/2021/08/01/technology/minneapolis-protests-facial-recognition.html

    The tone of the article was, isn't it terrible when high technology is used to track down "largely peaceful demonstrators" but the readers weren't buying. If you look at the comments, they overwhelmingly say "these guys are criminals and we're glad that the government used all the tools at its disposal to find them." There was almost zero sympathy for the arsonists, even among the NY Times liberal readership. I think there is a generation gap here, with young NY Times reporters considerably to the left of the older readership. We have raised a real generation of Maoists due to the Leftist takeover of universities.

    Replies: @Anonymous

    That Althea girl in Wisconsin with the burned face was almost certainly involved in the arson attack on the jail in her hometown (i.e. attempted mass murder) which happened around the same time. Due to her age, I assume she wasn’t throwing firebombs herself, but she must have been standing close enough to the people who were to get splashed with burning fuel.

  193. Anonymous[162] • Disclaimer says:
    @Anon
    So they have a huge pre-existing corpus of medical imaging scan data, with other data, including diagnosed and verified medical conditions and the self-reported race for each person. (We know from other studies that self-reported race virtually always matches third-party-reported and 23&Me-style clustering race determination.) Then they black-box train an AI. Then they input new scan data without race data to the AI and ask what medical condition is present. The AI answers, and then asks, would you also like to know the race?

    It seems that an analogy would be how trivial it is for the brain to distinguish male and female faces. You can simulate this by taking zillions of ratios, as facial recognition does, and then coming up with a score based on combining all the tiny mean differences of these ratios. Murray in Human Diversity talks about using the Mahalanobis distance to do this. That is probably built into AI.

    What gives me hope is the part about "the younger members of the team and the programmers didn't see any problem." Yay! What I want to know is how many of the embedded SS agent social justice tattletale narcs like this blogger are included in each AI development team?

    I wonder if the original training corpus included IQ? To what extent could that be deduced from an MRI scan of an elbow?

    Replies: @Anonymous

    What I want to know is how many of the embedded SS agent social justice tattletale narcs like this blogger are included in each AI development team?

    A few. They usually go by the name ‘AI Ethics researcher’ and their role is to denounce any trained AI system that notices something that it shouldn’t.

  194. Anonymous[162] • Disclaimer says:
    @Recently Based
    I've built hundreds, probably thousands, of deep learning image classification models, and a fair number of these have been classifiers using exactly this kind of technology applied to x-rays and CT scans. A few observations, taking all of the data and results presented in the paper as accurate (the authors are at MIT, Emory, etc., so I assume it is competently done):

    1) It is not at all surprising that you can identify race from chest x-rays, and the fact that they settled on Resnet34 (which is a 34-layer CNN, while you now use 100+ layer networks for complex classifiers) because it performed as well as anything else indicates that there is likely some general structure. The AUC of ~0.97 is amazing -- this is close to deterministic prediction of race.

    2) It is very surprising that a relatively simple classifier like this can do this while trained doctors / technicians cannot. In my experience, that is a very unusual situation.

    3) It's not surprising that it will still work with blurred, etc images, but it's extremely, extremely surprising that it can work when the image becomes "a grey box" that a trained doctor can't even recognize as an x-ray. That seems quite fishy. It may be a case of the summary getting out ahead of the actual demonstrated claims.

    4) This guy's hand-wringing and self-abasement is pathetic (and unfortunately, unsurprising).

    Replies: @ic1000, @Alfa158, @Anonymous

    it’s extremely, extremely surprising that it can work when the image becomes “a grey box” that a trained doctor can’t even recognize as an x-ray. That seems quite fishy.

    Yeah, if true (which, given the intellectual calibre of the rest of the whiny confession, is doubtful) then that makes the whole system look dodgy.
    Do the images in that dataset contain any additional content other than just x-ray pixels (barcodes, text, digital watermarks, compression artefacts, etc)?

    • Replies: @Jack D
    @Anonymous

    Right, maybe the AI taught itself to read off of the training data set, and when it sees the pixels that form "Tyshaun" down in the corner it correlates that with the other black Tyshauns in the training data. Or, more likely, the X-ray lab in the hospital in the ghetto produces images that are subtly different from the suburban hospital's - they are all a little lighter or darker or more or less contrasty because it's a different machine. This could explain why it is magically able to read x-rays that are too blurry for a human to make any sense of. The AI just keeps trying different stuff on the training set until it is able to tell the training images apart. It's possible that it really is something that stupid.

    Or maybe there is something so fundamental about Negro skull shape that even in a blurred image the AI can discern it.

    Replies: @Recently Based

  195. What the woke blogger is worked up over is that (1) the AI could easily tell Black from white, and (2) the AI did not do as good a job of diagnosis for Black patients as for white patients.

    Actually only the second is a problem, and the solution to it is to train the AI on more images from Black patients.
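
    Since you often can't just go collect more images, the usual stopgap is to oversample or re-weight the under-represented group during training. A minimal PyTorch-style sketch with a hypothetical label array:

    import numpy as np
    import torch
    from torch.utils.data import WeightedRandomSampler

    # Hypothetical per-image group labels: 0 = white, 1 = Black.
    group = np.array([0] * 900 + [1] * 100)

    # Weight each image inversely to its group's frequency so minibatches
    # come out balanced even though the dataset is 9:1.
    freq = np.bincount(group) / len(group)
    weights = torch.as_tensor(1.0 / freq[group], dtype=torch.double)

    sampler = WeightedRandomSampler(weights, num_samples=len(group), replacement=True)
    # DataLoader(dataset, batch_size=32, sampler=sampler) would then draw
    # Black images about half the time instead of 10% of the time.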

  196. @utu
    @Bumpkin

    It is not a hoax unless it was targeted at Steve Sailer. But most likely they fooled themselves and went along with it because they wanted it to work very much. Most likely their training data set overlaps with the validation data set, or there was no validation data set at all. That would easily explain why it worked on very blurred and corrupted images: the AI may pick up pixel signatures and patterns that identify the picture, rather than what is on the picture, and decide that this is the same picture that was assigned the Black value during training.

    Replies: @Bumpkin

    It is not a hoax unless it was targeted at Steve Sailer.

    The Sokal and grievance studies hoaxes were not aimed at Sailer either. While you’re right that nobody talks about race as much as Steve, they are obsessed with racism. What better hoax than combining the tech fad of the day with the cultural fad of the day, i.e. “the AI be racist”? Of course, incompetence is the most likely reason, as I noted.

  197. These people might as well shut down the Internet of Things, or IoT, which is a Skynet of sorts.

  198. @Nick Diaz
    @rebel yell

    Your comparisons are beyond inane here.

    1. Ecosystems are not arbitrary constructs like races, but rather defined as species that have mutualistic relations of predation and symbiosis and require each other to survive. Also, ecosystem is not a word used by strict biologists, but rather by zoologists.

    2. Climate is a phenomenon of physics and not biology. It has no clear demarcation? What the hell are you talking about? Have you ever heard of temperature and humidity? Because that is how climate is demarcated. I mean, you have seen and used a thermometer, right? The demarcation between climates is quite precise, based on annual variations of temperature and humidity in narrow ranges, although specific climates might reach, at points, similar temperatures and humidity as other climates.

    3. Urban and rural do not have a clear demarcation. You are right. And, like race, these are arbitrary concepts. So how does this disprove my point? People accept that cities exist as compared to, say, an isolated farm. But if a farm has buildings inside it for the workers to reside in, you can argue it's a city. So yes, "urban" and "rural" are vague and arbitrary concepts just like race. I don't know why you bring this up, since you just proved my point about how Humans love to categorize things in arbitrary and vague ways.

    4. Actually, there is a definition for a family in biology, which is that of a sexually reproducing couple and their progeny and no other organism related. A larger group of animals, related or otherwise, is a troop or a pack, but not a family. Living beings in Nature do form families, such as birds building nests for their young, orangutans caring for and teaching their young for years, etc. So your example is inadequate because, unlike race, families actually do exist as a concept in biology.

    How can you be completely wrong about everything you wrote? Amazing. You just made a bunch of inane examples that either don't apply or shot yourself in the proverbial foot by using as examples things that actually do have a clear and precise definition. Pathetic.

    Replies: @Recently Based, @3g4me, @Patriot

    You (willfully or otherwise) mischaracterized rebel yell’s assertion.

    What he said, sarcastically, in each case was that “no clear line of demarcation” between ecosystems / climates / urbanity / families implies that each must therefore be a social construct with no technical definition.

    His argument — if you were unable to understand it despite your pseudo-technical jargon — was that “lack of clear demarcation” between classes of objects is a feature of each of these useful categorization schemes, and therefore “no clear line of demarcation” between categories is not a valid reason to reject a categorization scheme as nothing but a social construct with no technical meaning.

  199. @Nick Diaz
    @Reg Cæsar

    You proved my point: most mules and ligers are infertile. A species is defined as a category of living organisms that can produce fertile progeny only with each other.

    Replies: @3g4me

    @191 Rockford Tyson: And most mulattos suffer from a higher incidence of various mental and physical ailments. And it is almost impossible to find matches for them in organ transplants or bone marrow donation or any difficult disease requiring some physical substance from another non-hybrid humanoid mammal. The entire system of genetic classification was invented . . . wait for it . . . by MAN. To more easily order and organize his world, and to recognize . . . PATTERNS. By any and all definitions the different races are different subspecies. In the case of sub-Saharans, it can easily be argued they are a different species entirely.

    But for you, sir, the train is fine. You’re in the right place here with Sailer’s commentariat.

  200. @Nick Diaz
    @rebel yell

    Your comparisons are beyond inane here.

    1. Ecosystems are not arbitrary constructs like races, but rather defined as species that have mutualistic relations of predation and symbiosis and require each other to survive. Also, ecosystem is not a word used by strict biologists, but rather by zoologists.

    2. Climate is a phenomenon of physics and not biology. It has no clear demarcation? What the hell are you talking about? Have you ever heard of temperature and humidity? Because that is how climate is demarcated. I mean, you have seen and used a thermometer, right? The demarcation between climates is quite precise, based on annual variations of temperature and humidity in narrow ranges, although specific climates might reach, at points, similar temperatures and humidity as other climates.

    3. Urban and rural do not have a clear demarcation. You are right. And, like race, these are arbitrary concepts. So how does this disprove my point? People accept that cities exist as compared to, say, an isolated farm. But if a farm has buildings inside it for the workers to reside in, you can argue it's a city. So yes, "urban" and "rural" are vague and arbitrary concepts just like race. I don't know why you bring this up, since you just proved my point about how humans love to categorize things in arbitrary and vague ways.

    4. Actually, there is a definition for a family in biology, which is that of a sexually reproducing couple and their progeny and no other organism related. A larger group of animals, related or otherwise, is a troop or a pack, but not a family. Living beings in Nature do form families, such as birds building nests for their young, orangutans caring for and teaching their young for years, etc. So your example is inadequate because, unlike race, families actually do exist as a concept in biology.

    How can you be completely wrong about everything you wrote? Amazing. You just made a bunch of inane examples that either don't apply or shot yourself in the proverbial foot by using as examples things that actually do have a clear and precise definition. Pathetic.

    Replies: @Recently Based, @3g4me, @Patriot

    @192 Rockford Tyson: Ah, an acolyte of the school of Physicist Dave. Ours not to reason why, but to recognize and pay obeisance toward your self-proclaimed genius.

    • LOL: photondancer
  201. @Nick Diaz
    @rebel yell

    Your comparisons are beyond inane here.

    1. Ecosystems are not arbitrary constructs like races, but rather defined as species that have mutualistic relations of predation and symbiosis and require each other to survive. Also, ecosystem is not a word used by strict biologists, but rather by zoologists.

    2. Climate is a phenomenon of physics and not biology. It has no clear demarcation? What the hell are you talking about? Have you ever heard of temperature and humidity? Because that is how climate is demarcated. I mean, you have seen and used a thermometer, right? The demarcation between climates is quite precise, based on annual variations of temperature and humidity in narrow ranges, although specific climates might reach, at points, similar temperatures and humidity as other climates.

    3. Urban and rural do not have a clear demarcation. You are right. And, like race, these are arbitrary concepts. So how does this disprove my point? People accept that cities exist as compared to, say, an isolated farm. But if a farm has buildings inside it for the workers to reside in, you can argue it's a city. So yes, "urban" and "rural" are vague and arbitrary concepts just like race. I don't know why you bring this up, since you just proved my point about how humans love to categorize things in arbitrary and vague ways.

    4. Actually, there is a definition for a family in biology, which is that of a sexually reproducing couple and their progeny and no other organism related. A larger group of animals, related or otherwise, is a troop or a pack, but not a family. Living beings in Nature do form families, such as birds building nests for their young, orangutans caring for and teaching their young for years, etc. So your example is inadequate because, unlike race, families actually do exist as a concept in biology.

    How can you be completely wrong about everything you wrote? Amazing. You just made a bunch of inane examples that either don't apply or shot yourself in the proverbial foot by using as examples things that actually do have a clear and precise definition. Pathetic.

    Replies: @Recently Based, @3g4me, @Patriot

    Tyson,

    On the contrary, Rebel Yell’s examples are perfect. They are concise and humorous and clearly expose the illogic of the Lying Left.

    Your comments, on the other hand, are a perfect example of Leftist diversion via obfuscation, distraction, and lying. You try to make your argument by emphasizing some true facts that have little to do with the actual point that Rebel Yell is making. The concepts of hot spell vs. cold spell, Arctic vs. tropic, hardwood forest vs. freshwater marsh, or rural vs. city are extremely useful and valid concepts used daily by tens of thousands of people.

    They remain useful and distinct terms despite the fuzziness around their edges, and they are real, physically existing entities. Exactly the same as races.

  202. @Intelligent Dasein
    Let's ask the AI if Covid-19 represents an outlier threat to human health. Let's ask it if the vaccines work. Let's ask it if masks, social distancing, and lockdowns made any difference in the spread of the virus.

    I think we know what it will say, but will that post ever appear on iSteve?

    AI will never figure out anything that humans haven't already figured out---that's science fiction. What it will do is blandly assert things that we already know in the back of our minds but are unwilling to acknowledge or act upon.

    Replies: @El Dato, @J.Ross, @prime noticer, @nokangaroos

    IIRC, an AI they were training for psychiatric evaluation came up with three symptoms no one had thought of during calibration …
    this is gonna be a wild ride 😀

  203. @jb
    @Dr. DoomNGloom

    It took me a while looking at your Google link to get a sense of what AUC actually means. (In particular, to figure out that Figure 4 is actually a three-dimensional graph, with "decision threshold" as the independent variable.) I'm wondering if it's possible to interpret AUC in a more intuitive way, to make it easier to explain the significance of these results.

    A simple and easy to understand explanation would be to say that you can come up with an algorithm (which happens to have an adjustable sensitivity parameter, although you might not even need to include that information) that correctly predicts race xx% of the time. So is there a good way to (at least roughly) get xx% from AUC? The temptation is to read AUC = .97 as 97% correct, but is that sensible? (It might be, since it looks like AUC = .5 might be equivalent to 50% correct -- i.e., random chance.)

    Or maybe there is no way to translate, and I'll have to be satisfied with "crazy good". Anyway, please let me know if I've totally misunderstood what is happening here.

    Replies: @res, @Dr. DoomNGloom

    One thing to keep in mind is what the ROC curve actually plots: the true positive rate (sensitivity, i.e. recall) against the false positive rate (1 - specificity), swept across all decision thresholds. https://towardsdatascience.com/should-i-look-at-precision-recall-or-specificity-sensitivity-3946158aace1

    The easiest way to think of AUC is via the endpoints:
    “AUC ranges in value from 0 to 1.
    – A model whose predictions are 100% wrong has an AUC of 0.0;
    – one whose predictions are 100% correct has an AUC of 1.0.”

    What is missing, however, is the rest of the confusion matrix at any particular operating point. Because AUC is built only from true positive and false positive *rates*, it is insensitive to class balance and tells you nothing about precision: when positives are rare, a high-AUC model can still produce more false positives than true positives.

    So-called sensitivity and specificity are explained here:
    https://en.wikipedia.org/wiki/Sensitivity_and_specificity

    The most intuitive reading is this: AUC is the probability that a randomly chosen positive case gets a higher score than a randomly chosen negative case. So AUC = .97 means the model ranks a random positive above a random negative 97% of the time, almost regardless of where the decision threshold is set. That is not the same thing as 97% accuracy, but “crazy good” is about right.
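
    To make that ranking interpretation concrete, here is a minimal sketch (toy scores, not the paper's data) showing that the library AUC and the pairwise-ranking probability agree:

```python
# Hedged sketch: AUC equals P(random positive outranks random negative).
# Scores are invented for illustration.
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
pos = rng.normal(2.0, 1.0, 1000)   # model scores for positive cases
neg = rng.normal(0.0, 1.0, 1000)   # model scores for negative cases

y_true = np.concatenate([np.ones_like(pos), np.zeros_like(neg)])
y_score = np.concatenate([pos, neg])
print(f"roc_auc_score: {roc_auc_score(y_true, y_score):.4f}")

# Pairwise-ranking version of the same number (ties count half).
wins = (pos[:, None] > neg[None, :]).mean()
ties = (pos[:, None] == neg[None, :]).mean()
print(f"pairwise rank: {wins + 0.5 * ties:.4f}")
```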

    • Replies: @anon
    @Dr. DoomNGloom

    Thanks.

  204. @Bumpkin
    @El Dato


    this might be a 21st-century D.I.E.-themed Sokal Hoax
     
    I had similar thoughts, figuring someone just cooked or screwed up the data. The likelihood that "the model can still recognise the racial identity of the patient well past the point that the image is just a grey box" is fairly low. Most likely, it will not reproduce outside the data set:

    "'It turns out,' Ng said, 'that when we collect data from Stanford Hospital, then we train and test on data from the same hospital, indeed, we can publish papers showing [the algorithms] are comparable to human radiologists in spotting certain conditions.'

    But, he said, 'It turns out [that when] you take that same model, that same AI system, to an older hospital down the street, with an older machine, and the technician uses a slightly different imaging protocol, that data drifts to cause the performance of AI system to degrade significantly. In contrast, any human radiologist can walk down the street to the older hospital and do just fine.

    'So even though at a moment in time, on a specific data set, we can show this works, the clinical reality is that these models still need a lot of work to reach production.'”

    Now you're telling me these same super-shitty AI models can unerringly tell you the race? I call bullshit.

    Replies: @Gimeiyo, @res, @utu, @Jack D

    It sounds suspicious in that in some cases the AI is super good and in other cases the AI is super bad (although as res points out they were apparently not talking about the same data sets) but the problem is that an AI is a sort of “black box”. A good AI can tell you the right answer but it can’t tell you WHY it picked that answer in terms that are comprehensible to humans. It doesn’t have simple fixed rules that can be expressed like “if the guy’s nostrils are wide then he’s black”. Rather it operates by self training on the entire data set. This is why there is no way to tweak an AI to be “less racist” without breaking the AI. Conversely, if your AI is not working well, there’s no easy fix for that either. The AI doesn’t really understand “racist” it just understands whether its self-training regimen is getting it closer to the correct answer or further away.
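
    That said, there are standard tricks for probing what a black-box image model keys on, even if they fall short of human-readable rules. A minimal sketch of occlusion probing; `predict` is a hypothetical stand-in for any model that returns a probability from an image:

```python
# Hedged sketch: occlusion probing of a black-box image classifier.
# `predict` is a hypothetical stand-in, not any specific model's API.
import numpy as np

def occlusion_map(image: np.ndarray, predict, patch: int = 16) -> np.ndarray:
    """Slide a flat patch over the image; record how much the score drops."""
    base = predict(image)
    h, w = image.shape
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = image.mean()  # blank region
            heat[i // patch, j // patch] = base - predict(masked)
    return heat  # large entries mark regions the prediction depends on
```

    The paper's own finding that performance survives heavy blurring, cropping, and noising suggests such maps would light up everywhere, which is exactly what makes the result hard to explain.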

    • Replies: @Anonymous
    @Jack D


    A good AI can tell you the right answer but it can’t tell you WHY it picked that answer in terms that are comprehensible to humans.
     
    Right, it's only a matter of time before machines start to make mathematical discoveries that are literally incomprehensible to humans. I was recently reading about quaternions. Thinking in four dimensions is hard enough; what happens when our machines start thinking in 40 dimensions?
  205. @Anonymous
    @Recently Based


    it’s extremely, extremely surprising that it can work when the image becomes “a grey box” that a trained doctor can’t even recognize as an x-ray. That seems quite fishy.
     
    Yeah, if true (which, given the intellectual calibre of the rest of the whiny confession, is doubtful), that makes the whole system look dodgy.
    Do the images in that dataset contain any additional content other than just x-ray pixels (barcodes, text, digital watermarks, compression artefacts, etc)?

    Replies: @Jack D

    Right, maybe the AI taught itself to read off of the training data set and when it sees the pixels that form “Tyshaun” down in the corner it correlates that with the other black Tyshauns in the training data. Or, more likely, the X-ray lab in the hospital in the ghetto produces images that are subtly different from the suburban hospital’s – they are all a little lighter or darker or more or less contrasty because it’s a different machine. This could explain why it is magically able to read x-rays that are too blurry for a human to make any sense of. The AI just keeps trying different stuff on the training set until it is able to tell the training images apart. It’s possible that it really is something that stupid.

    Or maybe there is something so fundamental about Negro skull shape that even in a blurred image the AI can discern it.

    • Replies: @Recently Based
    @Jack D

    Equivalents of the "Tyshaun" issue are unlikely to have been missed by the investigators, but the idea that we are seeing an artifact of different machines or processing procedures seems like a really good theory to investigate. If you assume that ~all blacks are being x-rayed in a different set of clinics, it would be a very parsimonious and plausible explanation for what they are reporting.

  206. @Dr. DoomNGloom
    @jb

    One thing to keep in mind is what the ROC curve actually plots: the true positive rate (sensitivity, i.e. recall) against the false positive rate (1 - specificity), swept across all decision thresholds. https://towardsdatascience.com/should-i-look-at-precision-recall-or-specificity-sensitivity-3946158aace1

    The easiest way to think of AUC is via the endpoints:
    "AUC ranges in value from 0 to 1.
    - A model whose predictions are 100% wrong has an AUC of 0.0;
    - one whose predictions are 100% correct has an AUC of 1.0."

    What is missing, however, is the rest of the confusion matrix at any particular operating point. Because AUC is built only from true positive and false positive *rates*, it is insensitive to class balance and tells you nothing about precision: when positives are rare, a high-AUC model can still produce more false positives than true positives.

    So-called sensitivity and specificity are explained here:
    https://en.wikipedia.org/wiki/Sensitivity_and_specificity

    The most intuitive reading is this: AUC is the probability that a randomly chosen positive case gets a higher score than a randomly chosen negative case. So AUC = .97 means the model ranks a random positive above a random negative 97% of the time, almost regardless of where the decision threshold is set. That is not the same thing as 97% accuracy, but "crazy good" is about right.

    Replies: @anon

    Thanks.

  207. @Reg Cæsar
    @kaganovitch


    Nah, I was hoping to gain some insight into South Korean culture. Sad to say they remained inscrutable.

     

    You really have to stay at a yogwan, heated by carbon monoxide. Don't close the window. (The window at the one I stayed at in Seoul didn't close. They didn't want to lose any unwitting tourists.)


    https://www.youtube.com/watch?v=CUMWhl-8cpA


    Reports in winter regularly described the deaths of entire families. Such tragedies became a common occurrence, though the 1971 yeontan poisoning deaths of five teenage boys and girls from rich families, which occurred while sleeping off a night of drinking when they said they would be studying, offered an implicit morality tale for readers.

    At the height of its use, hundreds of American Peace Corps Volunteers served in Korea, and the February 1968 issue of their newsletter, "Yobosayo," warned of this potential threat: "Winter is potentially the most dangerous season of the year" because it was "fraught with the hazards of potential carbon monoxide poisoning."

    The article advised that "rooms with yeontan ondol floors must be adequately ventilated and this means at two points in the room (two open windows or an open door and an open window)." To illustrate this point, it told the story of a Korean nurse who awoke with symptoms of yeontan poisoning despite taking precautions. "Investigation into the matter brought out the fact that the grandmother in the family had innocently closed the door to the room while the occupants were sleeping, thus leaving only one source of ventilation, an open window."

    In 1976 an American medical officer was quoted in The Korea Times warning Americans "living on the economy" or staying in hostels to "make sure the room has adequate ventilation even if it means chillier living quarters. Believe me, you'll be a lot colder if you inhale a toxic dose of carbon monoxide."

    http://www.koreatimes.co.kr/www/nation/2019/09/177_265143.html
     

    The symbols for restaurants with licensed fugu cooks were few and far between in Tokyo in 1985. But they were scarily common in Seoul, suggesting lower standards and a higher tolerance for risk.

    For Fugu (better known to the rest of the world as puffer or blowfish) is the most revered item in Japanese haute cuisine. And adding to its reputation and making the fish more interesting is the fact that it is 1,250 times more poisonous than cyanide.

    http://thingsasian.com/story/fugu
     

    I cannot see her tonight.
    I have to give her up
    So I will eat fugu.

    -- Yosa no Buson

    Replies: @Joe Stalin

    Koreans getting gassed by CO is not unknown in Illinois.

    Accidental carbon monoxide poisoning caused the death of Helen Woo, 37, and her two daughters, Michele, 12, and Patricia, 11, in their Arlington Heights townhouse, the Cook County medical examiner said Tuesday.

    Earlier Sunday, Helen Woo and her children attended services at the Antioch Korean Baptist Church,

    https://www.chicagotribune.com/news/ct-xpm-1994-06-15-9406150086-story.html

    • Replies: @Jack D
    @Joe Stalin

    Their deaths were caused by accidentally leaving a rental car running in the garage beneath their townhouse. If it's your own car, you usually keep the keys on one ring, so that you have to remove the car key to get into your house, but the key to a rental car is usually separate.

    This happened 25 years ago. Nowadays most cars have push-button start, so it's even easier to forget to turn off your car. Then again, more houses have CO detectors than 25 years ago too.

    , @Reg Cæsar
    @Joe Stalin


    Koreans getting gassed by CO is not unknown in Illinois.
     
    Check out the name of this Aussie family's blog about life in Korea:

    https://www.koreanrooftop.com/
  208. @AnotherDad
    @Altai



    Real communist societies have always been highly socially conservative.
     
    Altai, I love your stuff, learn from it. But, while no historian, I think this is off base/overstated.

    Communists obviously had a hostile relationship with religion and tradition. You can argue that they simply wanted to be the replacement authority/religion.

    But they also had a somewhat hostile relationship with the family as well, seeing it as an alternative--possibly subversive--source of authority and loyalty. And you can't be "socially conservative" while undermining the family.

    What communism was not--and why calling wokeism "communism" is just ridiculous/stupid--is minoritarian.

    Communism was a unitary deal. (Your "collectivist" point.) The society as one. (Supposedly for all the people, actually for the party/party leaders.) The upside is not being run by "what's good for the Jews" or our even more disastrous "what's good for minorities", i.e. what's good for every abnormal person in society--from Jews, to blacks, to immigrants, to homosexuals, to trannies, to criminals, to XY male-development-didn't-happen-correctly "female" athletes.

    Compared to that, communism was more like medieval European feudalism. Society was for the benefit of the king and the nobles, and the people were serfs--stay there and work! But at least neither medieval nobility nor communists--while exploitative and hostile to any dissidents or threats to their power--were actually hostile to their nation's people, to the survival of the nation itself.

    That's the key point: Communism was not minoritarian.

    And there's nothing worse than minoritarianism--having an elite who are hostile to the people and the nation they control.

    Replies: @John Johnson

    Compared to that, communism was more like medieval European feudalism. Society was for the benefit of the king and the nobles, and the people were serfs–stay there and work! But at least neither medieval nobility nor communists–while exploitative and hostile to any dissidents or threats to their power–were actually hostile to their nation’s people, to the survival of the nation itself.

    Communism was not hostile to the nation’s people? Are you mad? The Holodomor, the Great Leap Forward, the Killing Fields… you would describe millions being intentionally killed as not hostile?

    That’s the key point: Communism was not minoritarian.

    It was entirely minoritarian. The Communist Party is a minority party that holds power above all else, at any cost.

    Karl Marx called for a violent takeover and minority rule by the party because he didn’t think they could win in elections. They wouldn’t even share power with allied left-wing parties. In fact, some of the first people sent off to camps were left-wing leaders. Others were just gunned down.

  209. @Stan d Mute
    @John Johnson


    What a waste of a computer.

    Children can identify race 100% of the time when they aren’t indoctrinated.
     
    Children? Hell, even dogs can do it. I’d be absolutely flabbergasted to discover that horses could not do it.

    Replies: @John Johnson

    Children? Hell, even dogs can do it. I’d be absolutely flabbergasted to discover that horses could not do it.

    I have a relative who is an active Democrat and her dog doesn’t like Black people.

    She constantly apologizes for him but the dog isn’t changing his mind.

  210. @Jack D
    @Anonymous

    Right, maybe the AI taught itself to read off of the training data set and when it sees the pixels that form "Tyshaun" down in the corner it correlates that with the other black Tyshauns in the training data. Or, more likely, the X-ray lab in the hospital in the ghetto produces images that are subtly different from the suburban hospital's - they are all a little lighter or darker or more or less contrasty because it's a different machine. This could explain why it is magically able to read x-rays that are too blurry for a human to make any sense of. The AI just keeps trying different stuff on the training set until it is able to tell the training images apart. It's possible that it really is something that stupid.

    Or maybe there is something so fundamental about Negro skull shape that even in a blurred image the AI can discern it.

    Replies: @Recently Based

    Equivalents of the “Tyshaun” issue are unlikely to have been missed by the investigators, but the idea that we are seeing an artifact of different machines or processing procedures seems like a really good theory to investigate. If you assume that ~all blacks are being x-rayed in a different set of clinics, it would be a very parsimonious and plausible explanation for what they are reporting.
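
    One cheap first pass at the "different machines" theory: see whether a trivial classifier fed only global exposure statistics (mean, contrast, intensity histogram), with no anatomy at all, can already predict the label. A hedged sketch with placeholder arrays:

```python
# Hedged sketch: can exposure statistics alone predict the label?
# `images` and `labels` are random placeholders, not real data.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def global_stats(img: np.ndarray) -> np.ndarray:
    """Features carrying no anatomy: only brightness/contrast information."""
    hist, _ = np.histogram(img, bins=32, range=(0.0, 1.0), density=True)
    return np.concatenate([[img.mean(), img.std()], hist])

rng = np.random.default_rng(0)
images = rng.random((200, 64, 64))      # placeholder x-rays scaled to [0, 1]
labels = rng.integers(0, 2, 200)        # placeholder binary labels

X = np.stack([global_stats(im) for im in images])
auc = cross_val_score(LogisticRegression(max_iter=1000), X, labels,
                      cv=5, scoring="roc_auc").mean()
print(f"AUC from exposure statistics alone: {auc:.2f}")
# On real data, an AUC well above 0.5 here would support the
# scanner-artifact explanation; near 0.5 would weaken it.
```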

  211. @War for Blair Mountain
    White Liberals know gd well that race exists….Native Born White Working Class Americans are being targeted by the highly racialized Democratic Party for racial extermination-White Genocide…..This is America 2021 and beyond…..The Democrats are very open about the race they want to exterminate….

    The Han People coming to America are highly racialized….And the Democratic Party has no problem with this…in fact, White Liberal Democrats appeal directly to the racial interests of the Han People in America….

    Replies: @Drapetomaniac

    The hunter-gatherers have been at war with settlement folk for 10,000+ years. Each has a different concept on how to survive.

    One steals, the other creates.

  212. @Altai
    @JohnnyWalker123

    This is why, as Steve and other right-wingers have noticed, the idea, so popular among some, of calling this 'communism' is insane.

    Real communist societies have always been highly socially conservative. Because 'socially conservative' is another way of saying 'collectivist'. When you live in a communist state you may not be interested in the social contract, but the social contract is interested in you. You don't get to act in any way that might be perceived as decadent or selfish (unless you're powerful enough); any public displays of deviation from social mores will be treated as social defection.

    Because the more you chip away at social mores, the more you chip away at social solidarity and commitment. That's called 'social liberalism', and that makes sense in terms of the original context of 'liberal' both in the US and where it still holds the correct context in Europe. It's just another way of saying individualism.

    But for places the US State Department has decided are a designated enemy, LGBT stuff is promoted and supported as a fifth column, in addition to being an attack on social solidarity. This will 100% be true for both China and Russia.

    In China you aren't even allowed to show venerable characters or even real people with tattoos on TV. If you're a celebrity others might emulate or see as influential, you have to cover your tats up if you have them on TV. Any publicly visible attacks on social unity or solidarity are seen as problems that can't even be recognised or articulated in the West anymore. Tattoos are a visible attack on social commitment. (Remember the 50s when every man more or less wore a uniform? Even if you got to choose the particular dark muted shade.)

    Social liberalism and individualism are always championed by the upper classes for the same reason that economic liberalism is: it allows them to exploit society to their own pleasure. For the lower classes, it just brings ruination.

    Replies: @IHTG, @Bill, @AnotherDad, @John Johnson, @Drapetomaniac

    Commies have a concept of private property that approaches that of the animal world.

    Conservatives have a better concept of private property.

  213. @Joe Stalin
    @Reg Cæsar

    Koreans getting gassed by CO is not unknown in Illinois.


    Accidental carbon monoxide poisoning caused the death of Helen Woo, 37, and her two daughters, Michele, 12, and Patricia, 11, in their Arlington Heights townhouse, the Cook County medical examiner said Tuesday.

    Earlier Sunday, Helen Woo and her children attended services at the Antioch Korean Baptist Church,

    https://www.chicagotribune.com/news/ct-xpm-1994-06-15-9406150086-story.html
     

    Replies: @Jack D, @Reg Cæsar

    Their deaths were caused by accidentally leaving a rental car running in the garage beneath their townhouse. If it's your own car, you usually keep the keys on one ring, so that you have to remove the car key to get into your house, but the key to a rental car is usually separate.

    This happened 25 years ago. Nowadays most cars have push-button start, so it's even easier to forget to turn off your car. Then again, more houses have CO detectors than 25 years ago too.

  214. @Joe Stalin
    @Reg Cæsar

    Koreans getting gassed by CO is not unknown in Illinois.


    Accidental carbon monoxide poisoning caused the death of Helen Woo, 37, and her two daughters, Michele, 12, and Patricia, 11, in their Arlington Heights townhouse, the Cook County medical examiner said Tuesday.

    Earlier Sunday, Helen Woo and her children attended services at the Antioch Korean Baptist Church,

    https://www.chicagotribune.com/news/ct-xpm-1994-06-15-9406150086-story.html
     

    Replies: @Jack D, @Reg Cæsar

    Koreans getting gassed by CO is not unknown in Illinois.

    Check out the name of this Aussie family’s blog about life in Korea:

    https://www.koreanrooftop.com/

  215. Anonymous[894] • Disclaimer says:
    @Jack D
    @Bumpkin

    It sounds suspicious in that in some cases the AI is super good and in other cases the AI is super bad (although as res points out they were apparently not talking about the same data sets) but the problem is that an AI is a sort of "black box". A good AI can tell you the right answer but it can't tell you WHY it picked that answer in terms that are comprehensible to humans. It doesn't have simple fixed rules that can be expressed like "if the guy's nostrils are wide then he's black". Rather it operates by self training on the entire data set. This is why there is no way to tweak an AI to be "less racist" without breaking the AI. Conversely, if your AI is not working well, there's no easy fix for that either. The AI doesn't really understand "racist" it just understands whether its self-training regimen is getting it closer to the correct answer or further away.

    Replies: @Anonymous

    A good AI can tell you the right answer but it can’t tell you WHY it picked that answer in terms that are comprehensible to humans.

    Right, it’s only a matter of time before machines start to make mathematical discoveries that are literally incomprehensible to humans. I was recently reading about quaternions. Thinking in four dimensions is hard enough; what happens when our machines start thinking in 40 dimensions?

  216. @JohnnyWalker123
    @Anonymous

    Thanks. Very prescient.

    Replies: @BB753

    If you read Bertrand Russell’s book or listen to Jay Dyer’s lecture, you’ll realize that Russell wasn’t on our side. More of a globalist technocratic-elite kind of chap, if you get my drift, a Royal Society Fabian type, like H.G. Wells, both Huxley brothers, and Arthur Koestler.

  217. Related, from ACM Queue, November 28, 2018. One may actually be able to reconstruct from the census data exactly where the vulnerable minorities or Trump supporters live, not even using neural networks (which are what is marketed as “AI”) but standard search-based techniques:

    Understanding Database Reconstruction Attacks on Public Data: These attacks on statistical databases are no longer a theoretical danger.

    The 2020 census is expected to count roughly 330 million people living on roughly 8.5 million blocks, with some inhabited blocks having as few as a single person and other blocks having thousands. With this level of scale and diversity, it is difficult to visualize how such a data release might be susceptible to database reconstruction. We now know, however, that reconstruction would in fact pose a significant threat to the confidentiality of the 2020 microdata that underlies unprotected statistical tables if privacy-protecting measures are not implemented. To help understand the importance of adopting formal privacy methods, this article presents a database reconstruction of a much smaller statistical publication: a hypothetical block containing seven people distributed over two households. (The 2010 U.S. Census contained 1,539,183 census blocks in the 50 states and the District of Columbia with between one and seven residents. … )

    Even a relatively small number of constraints results in an exact solution for the blocks’ inhabitants. Differential privacy can protect the published data by creating uncertainty. Although readers may think that the reconstruction of a block with just seven people is an insignificant risk for the country as a whole, this attack can be performed for virtually every block in the United States using the data provided in the 2010 census. The final section of this article discusses the implications of this for the 2020 decennial census.
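
    The mechanics are simple enough to sketch. The article works with exact values and a constraint solver; the toy version below brute-forces coarsened (5-year-bin) ages for a block of seven against two invented published statistics, just to show how each extra table prunes the candidate set:

```python
# Hedged sketch of database reconstruction: enumerate age combinations
# consistent with published statistics. The published values are invented
# for illustration; the ACM article does this exactly, with a solver.
from itertools import combinations_with_replacement

COUNT, MEAN, MEDIAN = 7, 38, 30          # invented published statistics

solutions = []
# Ages coarsened to 5-year bins to keep brute force tractable.
for ages in combinations_with_replacement(range(0, 100, 5), COUNT):
    if round(sum(ages) / COUNT) != MEAN:
        continue
    if ages[COUNT // 2] != MEDIAN:       # tuples come out sorted
        continue
    solutions.append(ages)

print(f"{len(solutions)} age combinations fit these two statistics")
# Each additional published constraint (mean age by sex, by race,
# household structure...) prunes this set further; with enough tables
# it collapses to a single exact solution -- that is the attack.
```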
