The Unz Review: An Alternative Media Selection
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
 Blogview: James Thompson Archive
Bias Bias: The Inclination to Accuse People of Bias

Early in any psychology course, students are taught to be very cautious about accepting people’s reports. A simple trick is to stage some sort of interruption to the lecture by confederates, and later ask the students to write down what they witnessed. Typically, they will misremember the events, sequences and even the number of people who staged the tableaux. Don’t trust witnesses, is the message.

Another approach is to show visual illusions, such as getting estimates of line lengths in the Muller-Lyer illusion, or studying simple line lengths under social pressure, as in the Asch experiment, or trying to solve the Peter Wason logic problems, or the puzzles set by Kahneman and Tversky. All these appear to show severe limitations of human judgment. Psychology is full of cautionary tales about the foibles of common folk.

As a consequence of this softening up, psychology students come to regard themselves and most people as fallible, malleable, unreliable, biased and generally irrational. No wonder psychologists feel superior to the average citizen, since they understand human limitations and, with their superior training, hope to rise above such lowly superstitions.

However, society still functions, people overcome errors and many things work well most of the time. Have psychologists, for one reason or another, misunderstood people, and been too quick to assume that they are incapable of rational thought?

Gerd Gigerenzer thinks so.

He is particularly interested in the economic consequences of apparent irrationality, and whether our presumed biases really result in us making bad economic decisions. If so, some argue we need a benign force, say a government, to protect us from our lack of capacity. Perhaps we need a tattoo on our forehead: Diminished Responsibility.

The argument leading from cognitive biases to governmental paternalism—in short, the irrationality argument—consists of three assumptions and one conclusion:

1. Lack of rationality. Experiments have shown that people’s intuitions are systematically biased.

2. Stubbornness. Like visual illusions, biases are persistent and hardly corrigible by education.

3. Substantial costs. Biases may incur substantial welfare-relevant costs such as lower wealth, health, or happiness.

4. Biases justify governmental paternalism. To protect people from their biases, governments should “nudge” the public toward better behavior.

The three assumptions—lack of rationality, stubbornness, and costs—imply that there is slim chance that people can ever learn or be educated out of their biases; instead governments need to step in with a policy called libertarian paternalism (Thaler and Sunstein, 2003).

So, are we as hopeless as some psychologists claim we are? In fact, probably not. Not all the initial claims have been substantiated. For example, it seems we are not as loss averse as previously claimed. Does our susceptibility to printed visual illusions show that we lack judgement in real life?

In Shepard’s (1990) words, “to fool a visual system that has a full binocular and freely mobile view of a well-illuminated scene is next to impossible” (p. 122). Thus, in psychology, the visual system is seen more as a genius than a fool in making intelligent inferences, and inferences, after all, are necessary for making sense of the images on the retina.

Most crucially, can people make probability judgements? Let us see. Try solving this one:

A disease has a base rate of .1, and a test is performed that has a hit rate of .9 (the conditional probability of a positive test given disease) and a false positive rate of .1 (the conditional probability of a positive test given no disease). What is the probability that a random person with a positive test result actually has the disease?

Most people fail this test, including 79% of gynaecologists giving breast screening tests. Some researchers have drawn the conclusion that people are fundamentally unable to deal with conditional probabilities. On the contrary, there is a way of laying out the problem such that most people have no difficulty with it. Watch what it looks like when presented as natural frequencies:

Among every 100 people, 10 are expected to have a disease. Among those 10, nine are expected to correctly test positive. Among the 90 people without the disease, nine are expected to falsely test positive. What proportion of those who test positive actually have the disease?

In this format the positive test result gives us 9 people with the disease and 9 people without the disease, so the chance that a positive test result shows a real disease is 50/50. Only 13% of gynaecologists fail this presentation.
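Laid out in code, the natural-frequency version is just counting (a minimal sketch using the figures above; the function name is mine):

```python
# Positive predictive value via natural frequencies (figures from the text:
# base rate 0.1, hit rate 0.9, false-positive rate 0.1).
def positive_predictive_value(base_rate, hit_rate, false_pos_rate, population=100):
    diseased = population * base_rate          # 10 of 100 have the disease
    healthy = population - diseased            # 90 do not
    true_pos = diseased * hit_rate             # 9 correctly test positive
    false_pos = healthy * false_pos_rate       # 9 falsely test positive
    return true_pos / (true_pos + false_pos)

print(positive_predictive_value(0.1, 0.9, 0.1))  # 0.5: a positive test is a coin flip
```

The same function applied to the dismal single-event format gives the same answer; the frequency layout merely makes the counting visible.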

Summing up the virtues of natural frequencies, Gigerenzer says:

When college students were given a 2-hour course in natural frequencies, the number of correct Bayesian inferences increased from 10% to 90%; most important, this 90% rate was maintained 3 months after training (Sedlmeier and Gigerenzer, 2001). Meta-analyses have also documented the “de-biasing” effect, and natural frequencies are now a technical term in evidence-based medicine (Akl et al., 2011; McDowell and Jacobs, 2017). These results are consistent with a long literature on techniques for successfully teaching statistical reasoning (e.g., Fong et al., 1986). In sum, humans can learn Bayesian inference quickly if the information is presented in natural frequencies.

If the problem is set out in a simple format, almost all of us can do conditional probabilities.

I taught my medical students about the base rate screening problem in the late 1970s, based on: Robyn Dawes (1962) “A note on base rates and psychometric efficiency”. Decades later, alarmed by the positive scan detection of an unexplained mass, I confided my fears to a psychiatrist friend. He did a quick differential diagnosis on bowel cancer, showing I had no relevant symptoms, and reminded me I had lectured him as a student on base rates decades before, so I ought to relax. Indeed, it was a false positive.

Here are the relevant figures, set out in terms of natural frequencies:

Every test has a false positive rate (every step is being taken to reduce these), and when screening is used for entire populations many patients have to undergo further investigations, sometimes including surgery.

Setting out frequencies in a logical sequence can often prevent misunderstandings. Say a man on trial for having murdered his spouse has previously physically abused her. Should his previous history of abuse not be raised in Court because only 1 woman in 2500 cases of abuse is murdered by her abuser? Of course, whatever a defence lawyer may argue and a Court may accept, this is back to front. OJ Simpson was not on trial for spousal abuse, but for the murder of his former partner. The relevant question is: what is the probability that a man murdered his partner, given that she has been murdered and that he previously battered her?

Accepting the figures used by the defence lawyer, if 1 in 2500 women are murdered every year by their abusive male partners, how many women are murdered by men who did not previously abuse them? Using government figures that 5 women in 100,000 are murdered every year then putting everything onto the same 100,000 population, the frequencies look like this:

So, 40 to 5, it is 8 times more probable that abused women are murdered by their abuser. A relevant issue to raise in Court about the past history of an accused man.
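As a check, the same natural-frequency bookkeeping in code (a sketch using the figures in the text, per 100,000 battered women):

```python
# Per 100,000 battered women (figures from the text): 1 in 2,500 are murdered
# by their abuser each year; the general rate of 5 murders per 100,000 women
# gives the number murdered by someone else.
battered = 100_000
murdered_by_abuser = battered // 2_500      # 40
murdered_by_others = 5                      # 5 per 100,000, by anyone else

odds = murdered_by_abuser / murdered_by_others
p_abuser = murdered_by_abuser / (murdered_by_abuser + murdered_by_others)
print(odds)                 # 8.0: eight times more probable
print(round(p_abuser, 2))   # 0.89
```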

Are people’s presumed biases costly, in the sense of making them vulnerable to exploitation, such that they can be turned into a money pump, or is it a case of “once bitten, twice shy”? In fact, there is no evidence that these apparently persistent logical errors actually result in people continually making costly errors. That presumption turns out to be a bias bias.

Gigerenzer goes on to show that people are in fact correct in their understanding of the randomness of short sequences of coin tosses, and Kahneman and Tversky wrong. Elegantly, he also shows that the “hot hand” of successful players in basketball is a real phenomenon, and not a stubborn illusion as claimed.

With equal elegance he disposes of a result I had depended upon since Slovic (1982), which is that people over-estimate the frequency of rare risks and under-estimate the frequency of common risks. This finding has led to the belief that people are no good at estimating risk. Who could doubt that a TV series about Chernobyl will lead citizens to have an exaggerated fear of nuclear power stations?

The original Slovic study was based on 39 college students, not exactly a fair sample of humanity. The conceit of psychologists knows no bounds. Gigerenzer looks at the data and shows that it is yet another example of regression to the mean. This apparent effect arises whenever the predictor is less than perfect (the most common case): it is an unsystematic error effect, already evident when you calculate the correlation coefficient. Parental heights and their children’s heights are positively but not perfectly correlated, at about r = 0.5. Predictions made in either direction will under-predict extreme values, simply because they are not perfect and do not capture all the variation. Try drawing the correlation as an ellipse to see the effect of regression, compared with the perfect case of the straight line of r = 1.0.

What diminishes in the presence of noise is the variability of the estimates, both the estimates of the height of the sons based on that of their fathers, and vice versa. Regression toward the mean is a result of unsystematic, not systematic error (Stigler, 1999).
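A quick simulation shows the effect arising from unsystematic noise alone (a sketch with invented heights; the mean, standard deviation and sample size are my assumptions, with r = 0.5 as in the text):

```python
import random, statistics

# Regression to the mean from unsystematic error alone: simulated heights
# with correlation r = 0.5 (mu and sd are made-up illustrative values).
random.seed(0)
r, mu, sd, n = 0.5, 175.0, 7.0, 100_000
fathers = [random.gauss(mu, sd) for _ in range(n)]
# Son = mean + r * (father's deviation) + independent noise, giving correlation r.
sons = [mu + r * (f - mu) + random.gauss(0, sd * (1 - r**2) ** 0.5) for f in fathers]

# Sons of tall fathers are above average, but only by about half as much...
tall_f = statistics.mean(s for f, s in zip(fathers, sons) if f > mu + sd) - mu
# ...and fathers of tall sons regress too: the effect runs in either direction.
tall_s = statistics.mean(f for f, s in zip(fathers, sons) if s > mu + sd) - mu
print(round(tall_f, 1), round(tall_s, 1))  # both roughly half the selected group's excess
```

No systematic bias is coded anywhere; the shrinkage in both directions comes purely from the imperfect correlation.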

Gigerenzer also looks at the supposed finding that people are over-confident in their predictions, and finds that it is another regression-to-the-mean problem.

Gigerenzer then goes on to consider that old favourite, that most people think they are better than average, which supposedly cannot be the case, because average people are average.

Consider the finding that most drivers think they drive better than average. If better driving is interpreted as meaning fewer accidents, then most drivers’ beliefs are actually true. The number of accidents per person has a skewed distribution, and an analysis of U.S. accident statistics showed that some 80% of drivers have fewer accidents than the average number of accidents (Mousavi and Gigerenzer, 2011)
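The skew is easy to exhibit with a toy distribution (counts invented purely for illustration, not U.S. data):

```python
import statistics

# Made-up accident counts for 100 drivers: a small risky minority drags the mean up.
accidents = [0] * 60 + [1] * 25 + [2] * 5 + [5] * 5 + [10] * 5
mean = statistics.mean(accidents)                  # 1.1, pulled up by the tail
below_average = sum(a < mean for a in accidents)   # drivers beating the average
print(mean, below_average)  # 1.1 85 -> 85% of drivers are "better than average"
```

With a skewed distribution the mean sits well above the median, so "most are better than average" is simple arithmetic, not self-delusion.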

Then he looks at the classical demonstration of framing, that is to say, the way people appear to be easily swayed by how the same facts are “framed” or presented to the person who has to make a decision.

A patient suffering from a serious heart disease considers high-risk surgery and asks a doctor about its prospects.

The doctor can frame the answer in two ways:

Positive Frame: Five years after surgery, 90% of patients are alive.
Negative Frame: Five years after surgery, 10% of patients are dead.

Should the patient listen to how the doctor frames the answer? Behavioral economists say no, because both frames are logically equivalent (Kahneman, 2011). Nevertheless, people do listen. More are willing to agree to a medical procedure if the doctor uses positive framing (90% alive) than if negative framing is used (10% dead) (Moxey et al., 2003). Framing effects challenge the assumption of stable preferences, leading to preference reversals. Thaler and Sunstein (2008), who presented the above surgery problem, concluded that “framing works because people tend to be somewhat mindless, passive decision makers” (p. 40).

Gigerenzer points out that in this particular example, subjects are having to make their judgements without knowing a key fact: how many survive without surgery. If you know that, you have a datum which is more influential. These are the sorts of questions patients will often ask about, and discuss with other patients, or with several doctors. Furthermore, you don’t have to spin a statistic. You could simply say: “Five years after surgery, 90% of patients are alive and 10% are dead”.

Gigerenzer gives an explanation which is very relevant to current discussions about the meaning of intelligence, and about the power of intelligence tests:

In sum, the principle of logical equivalence or “description invariance” is a poor guide to understanding how human intelligence deals with an uncertain world where not everything is stated explicitly. It misses the very nature of intelligence, the ability to go beyond the information given (Bruner, 1973)

The key is to take uncertainty seriously, take heuristics seriously, and beware of the bias bias.

One important conclusion I draw from this entire paper is that the logical puzzles enjoyed by Kahneman, Tversky, Stanovich and others are rightly rejected by psychometricians as usually being poor indicators of real ability. They fail because they are designed to lead people up the garden path, and depend on idiosyncratic interpretations.

For more detail:

Critics of examinations of either intellectual ability or scholastic attainment are fond of claiming that the items are “arbitrary”. Not really. Scholastic tests have to be close to the curriculum in question, but still need to have question formats which are simple to understand, so that the stress lies in how students formulate the answer, not in how they decipher the structure of the question.

Intellectual tests have to avoid particular curricula and restrict themselves to the common ground of what most people in a community understand. Questions have to be super-simple, so that the correct answer follows easily from the question, with minimal ambiguity. Furthermore, in the case of national scholastic tests, and particularly in the case of intelligence tests, legal authorities will pore over the test, looking at each item for suspected biases of a sexual, racial or socio-economic nature. Designing an intelligence test is a difficult and expensive matter. Many putative new tests of intelligence never even get to the legal hurdle, because they flounder on matters of reliability and validity, and reveal themselves to be little better than the current range of assessments.

In conclusion, both in psychology and behavioural economics, some researchers have probably been too keen to allege bias in cases where there are unsystematic errors, or no errors at all. The corrective is to learn about base rates, and to use natural frequencies as a guide to good decision-making.

Don’t bother boosting your IQ. Boost your understanding of natural frequencies.

  1. res says:

    Good concrete advice. Perhaps even more useful for those who need to explain things like this to others than for those seeking to understand for themselves.

  2. “intelligence deals with an uncertain world where not everything is stated explicitly. It misses the very nature of intelligence, the ability to go beyond the information given (Bruner, 1973)”

    “The key is to take uncertainty seriously, take heuristics seriously, and beware of the bias bias.”

    Why I come to Unz.

  3. @ThreeCranes

    I agree that’s why I come to Unz (especially Steve Sailer and James Thompson), and: That’s why I’m angry at times too: About the twisters – Kahnemann, Tversky, Taleb, the (wo)man at my local bank, journalists, TED-talkers…Michael Lewis, who wrote a terrible piece of misleading “criticism” about Gerd Gigerenzer … the surgeons at the Freiburg University Clinic, who held a conference once, when asked what would happen, if I rejected surgery – and in the end asked, whether I’m a colleague, and I said that I wasn’t but that I’d read Gigerenzer, … – ahh, that’s more than ten years now, since I rejected this highly appreciated surgery – and I’m quite fine to this day… – When asked, whether my chances would be worse if I declined surgery now and asked for it five years later, they said: From what they know – no. I said fine, that’s all I need to know, thank you.

  4. Cortes says:

    “Many putative new tests of intelligence never even get to the legal hurdle, because they flounder on matters of reliability and validity”


  5. @Cortes

    struggle or stagger clumsily in mud or water.

  6. utu says:

    OT but about probabilities, being fooled by them or faking them. Possibly the greatest scandal in science but it will be swept under the carpet.

    Exclusive: Grave doubts over LIGO’s discovery of gravitational waves

    The Danish group’s independent checks, published in three peer-reviewed papers, found there was little evidence for the presence of gravitational waves in the September 2015 signal. On a scale from certain at 1 to definitely not there at 0, Jackson says the analysis puts the probability of the first detection being from an event involving black holes with the properties claimed by LIGO at 0.000004. That is roughly the same as the odds that your eventual cause of death will be a comet or asteroid strike – or, as Jackson puts it, “consistent with zero”. The probability of the signal being due to a merger of any sort of black holes is not huge either. Jackson and his colleagues calculate it as 0.008.

    And there are legitimate questions about that trust. New Scientist has learned, for instance, that the collaboration decided to publish data plots that were not derived from actual analysis. The paper on the first detection in Physical Review Letters used a data plot that was more “illustrative” than precise, says Cornish. Some of the results presented in that paper were not found using analysis algorithms, but were done “by eye”.

    Brown, part of the LIGO collaboration at the time, explains this as an attempt to provide a visual aid. “It was hand-tuned for pedagogical purposes.” He says he regrets that the figure wasn’t labelled to point this out.

    This presentation of “hand-tuned” data in a peer-reviewed, scientific report like this is certainly unusual. New Scientist asked the editor who handled the paper, Robert Garisto, whether he was aware that the published data plots weren’t derived directly from LIGO’s data, but were “pedagogical” and done “by eye”, and whether the journal generally accepts illustrative figures. Garisto declined to comment.

    And there is still the issue of blind injection:

    The problem is, however, that the researchers were by no means as convinced of the authenticity of GW150914 as it was communicated. At first, the wave seemed too perfect for anyone to believe. Because in earlier years, artificially generated dummy signals, so-called blind injections, were used to test whether the collaboration would be able to detect a signal.

    After the blind injection test, the whole team voted to publish the results, and only then was it revealed that the data were fake.

    And there is the false claim that LIGO measured the signal before the gamma burst was measured by Fermi.

    For many, therefore, the strongest evidence for gravitational waves is based on the August 2017 GW170817 signal discovered by LIGO and then confirmed by the Fermi (NASA) and Integral (ESA) gamma-ray telescopes, but with a very weak signal. At any rate, that is how it was presented at the press conference.

    In truth, it was the other way around: Fermi had sent the notification email first, and LIGO needed four hours to “predict” the sky position – which matched the coordinates already known. The false impression that LIGO was the first one arose simply from the fact that after an explicit request by LIGO the subject line of the alert mail had been modified (see picture).

  7. @James Thompson

    flounder ~= struggle. founder ~= fail. Founder is the better word in the given context.

  8. Apart from that, though, good article.

    • Agree: Cortes
  9. Half-Jap says:
    @James Thompson

    It’s a wonderful expressive term, which simultaneously reminds me that I never liked those flatfish.

  10. Bruno says:

    You made the assumption that murdered partnered women had an abusive partner (when it could be a one-off) and that non-partnered women didn’t (when the killer could be a neighbor, a boss, a supervisor, etc.)

    So to just make a little sense – because most women are in a partner relationship – your « partners » predicate should be « already abused by their killer-partner » and the negation should be the rest « either not in a partnership or not abused by their partner »

    The inconsistency is that the number of women murdered should include both, or it doesn’t make any sense! The % can be higher with the absolute number lower if the reference base is a fraction of the total. Don’t mix up absolute value and %.

    Let’s take your total number of murdered women at
    50 out of 1 000 000

    To make it round, change your ratio of 8 to 1 for abused to non abused among murdered women to 9 to 1.

    45 out 1 000 000 were killed by an abusive partner
    5 out of 1 000 000 were not abused by a partner (either with a non abusive partner or no partner)

    If the ratio of killed to abused is 1 in 2500, it means that
    112 500 women are abused by their partner out of 1 000 000 women, which would give you a ratio of 11.25% of women who are in an abusive partnership.

    You don’t need to know how many women are in a relationship because the variable is independent. But
    you need to know how many women are killed by a non-partner (the stranger who comes from nowhere).
    Let’s say it’s 1 in 250 000.

    4 out of 1 000 000 were killed by a stranger (non-partner, non-abusive)
    1 out of 1 000 000 were killed by a non-abusive partner.

    So you’ve got then
    45 out of 112 500 killed by an abusive partner
    4 out of 1 000 000 killed by a stranger
    1 out of 1 000 000 killed by a partner who wasn’t abusive

    Thus the probability, for a woman who is killed by her partner, that her partner was not abusive is 1 to 400. And the probability that a non-abused woman would have been killed by a stranger is 80%.

    So the trial lawyer could demonstrate that it’s extremely unlikely that a woman in a relationship who wasn’t abused was killed by her partner. So the killer of this woman should be either a hidden lover who abused her (400 to 1) or, in case she had no lover, a total stranger (4 to 1).

  11. Tom Welsh says:

    Sounds fishy to me.

    Actually I think this is an example of an increasingly common genre of malapropism, where the writer gropes for the right word, finds one that is similar, and settles for that.

    The worst of it is that readers intuitively understand what was intended, and then adopt the marginally incorrect usage themselves.

    That’s perhaps how the world and his dog came to say “literally” when they mean “figuratively”.

    Maybe a topic for a future article?

  12. Biff says:

    In 2009 Google finished engineering a reverse search engine to find out what kind of searches people did most often.
    Seth Stephens-Davidowitz wrote a very fascinating/entertaining book using the tool, Everybody Lies (with a foreword by Steven Pinker)

    Everybody Lies offers fascinating, surprising, and sometimes laugh-out-loud insights into everything from economics to ethics to sports to race to sex, gender, and more, all drawn from the world of big data. What percentage of white voters didn’t vote for Barack Obama because he’s black? Does where you go to school affect how successful you are in life? Do parents secretly favor boy children over girls? Do violent films affect the crime rate? Can you beat the stock market? How regularly do we lie about our sex lives, and who’s more self-conscious about sex, men or women?

    Investigating these questions and a host of others, Seth Stephens-Davidowitz offers revelations that can help us understand ourselves and our lives better. Drawing on studies and experiments on how we really live and think, he demonstrates in fascinating and often funny ways the extent to which all the world is indeed a lab. With conclusions ranging from strange-but-true to thought-provoking to disturbing, he explores the power of this digital truth serum and its deeper potential – revealing biases deeply embedded within us, information we can use to change our culture, and the questions we’re afraid to ask that might be essential to our health – both emotional and physical. All of us are touched by big data every day, and its influence is multiplying. Everybody Lies challenges us to think differently about how we see it and the world.

  13. dearieme says:

    I shall treat this posting (for which many thanks, doc) as an invitation to sing a much-loved song: everybody should read Gigerenzer’s Reckoning with Risk. With great clarity it teaches what everyone ought to know about probability.

    (It could also serve as a model for writing in English about technical subjects. Americans and Britons should study the English of this German – he knows how, you know.)

    Inspired by “The original Slovic study was based on 39 college students” I shall also sing another favourite song. Much of Psychology is based on what small numbers of American undergraduates report they think they think.

    • Agree: utu, atlantis_dweller
  14. @Bruno

    Base rate in this case is 100,000 not 1,000,000.
    The assumption of 5 per 100,000 murdered by others (non-abusive partners and passersby) is a bit too generous to that category (because, as is evident, it includes those murdered by abusive partners) but it is a small difference, and favours the defence case that “it could have been anyone else” who committed the murder.

  15. Anon[410] says:

    “…Gigerenzer points out that in this particular example, subjects are having to make their judgements without knowing a key fact: how many survive without surgery. …”

    This one reminds of the false dichotomy. The patient has additional options! Like changing diet, and behaviours such as exercise, elimination of occupational stress , etc.

    The statistical outcomes for a person change when the person changes their circumstances/conditions.

  16. Cortes says:
    @Tom Welsh

    A disposition (conveyance) of an awkwardly shaped chunk out of a vast estate contained reference to “the slither of ground bounded on or towards the north east and extending two hundred and twenty four metres or thereby along a chain link fence…” Not poor clients (either side) nor cheap lawyers. And who never erred?

    Better than deliberately inserting “errors” to guarantee a stream of tidy up work (not unknown in the “professional” world) in future.

  17. Tom Fix says:

    Good article. 79% of gynaecologists fail a simple conditional probability test?! Many if not most medical research papers use advanced statistics. Medical doctors must read these papers to fully understand their field. So, if medical doctors don’t fully understand them, they are not properly doing their job. Those papers use mathematical expressions, not English. Converting them to another form of English, instead of using the mathematical expressions isn’t a solution.

  18. I think I have Bias Bias Bias.

  19. SafeNow says:

    Regarding witnesses: When that jet crashed into Rockaway several years ago, a high percentage of witnesses said that they saw smoke before the crash. But there was actually no smoke. The witnesses were adjusting what they saw to conform to their past experience of seeing movie and newsreel footage of planes smoking in the air before a crash. Children actually make very good witnesses.

    Regarding the chart. Missing, up there in the vicinity of cancer and heart disease. The third-leading cause of death. 250,000 per year, according to a 2016 Hopkins study. Medical negligence.

  20. @Cortes

    This makes me very flustrated.

  21. iffen says:

    OJ Simpson was not on trial for spousal abuse, but for the murder of his former partner.

    Not really, as the case was presented to the jury, the trial was on the question of whether or not, out of the hundreds of LA police officers, there were some who could be described as racially biased. Probability being what it is, the jury made the “correct” decision.

  22. Anon[724] says:

    1. Lack of rationality. Experiments have shown that people’s intuitions are systematically biased.

    2. Stubbornness. Like visual illusions, biases are persistent and hardly corrigible by education.

    3. Substantial costs. Biases may incur substantial welfare-relevant costs such as lower wealth, health, or happiness.

    4. Biases justify governmental paternalism. To protect people from theirbiases, governments should “nudge” the public toward better behavior.

    Well… the sad fact is that there’s nobody in the position to protect “governments” from their own biases, and “scientists” from theirs.

    So, behind the smoke of all words and rationalisations, the law is unchanged: everyone strives to gain and exert as much power as possible over as many others as possible.
    Most do that without writing papers to say it is right, others write papers, others books.
    Anyway, the fundamental law would stay as it is even if all this writing labour was spared, wouldn’t it?
    But then another fundamental law, the law of framing all one’s drives as moral and beneffective comes into play… the papers and the books are useful, after all.

  23. @Tom Welsh

    Actually I think this is an example of an increasingly common genre of malapropism, where the writer gropes for the right word, finds one that is similar, and settles for that.

    Isn’t that exactly what the original Mrs Malaprop in Sheridan’s play did?

  24. @utu

    If doubts about LIGO are being discussed in public by specialists in the field, and even reported in the popular science press, it is unlikely that they will be swept under the carpet. The story will run and run, with analysis of multiple “astronomical events”, until the results are understood.

    I am not a specialist in the field, but to me the most striking thing about LIGO is not the detection of gravity waves or even their astrophysical interpretation, but the claim that LIGO can detect a change in the length of the interferometer arms as small as 1/10,000 of the diameter of the proton – a claimed sensitivity of 10^-19 m, or a billionth of the diameter of a hydrogen atom; a ten-trillionth of the wavelength of the light being used. It is an engineering tour de force that is little short of a miracle – if it is true. It is a good thing that another research group is doing a sanity check on the whole of LIGO.

    • Agree: utu
  25. An interesting article. However, I think that the only thing we need to know about how illogical psychiatry is, is this:

    In 1973, the American Psychiatric Association (APA) asked all members attending its convention to vote on whether they believed homosexuality to be a mental disorder. 5,854 psychiatrists voted to remove homosexuality from the DSM, and 3,810 to retain it.

    The APA then compromised, removing homosexuality from the DSM but replacing it, in effect, with “sexual orientation disturbance” for people “in conflict with” their sexual orientation. Not until 1987 did homosexuality completely fall out of the DSM.


    The article makes no mention of the fact that no “new science” was brought to support the resolution.
    It appears that the psychiatrists were voting based on feelings rather than science. Since that time, the now 50+ genders have been accepted as “normal” by the APA. My family has had members in multiple generations suffering from mental illness. None were “cured”. I know others in the same circumstances. How does one conclude that being repulsed by the prime directive of every living organism – reproduce yourself – is “normal”? That is not to say these people are horrible or evil, just not normal. How can someone who thinks (s)he is a cat be mentally ill, while a grown man who thinks he is a female child is not?

    Long ago a lawyer acquaintance, referring to a specific judge, told me that the judge seemed to “make shit up as he was going along”. I have long held psychiatry fits that statement very well.

    • Replies: @Dieter Kief
    , @Republic
  26. utu says:
    @James N. Kennett

    but the claim that LIGO can detect a change in the length of the interferometer arms as small as 1/10,000 of the diameter of the proton – a claimed sensitivity of 10^-19 m

    I have difficulty comprehending this level of sensitivity. While I worked on and built Michelson interferometers in my previous incarnation, I have not read papers on the LIGO interferometer and the tricks they have used. One way to increase sensitivity is to use multiple reflections, which has been done in precision measurements in the past. Here is one possibility:

    However, I just do not see how one gets to 13 orders of magnitude below the wavelength of the light used. It certainly cannot be fringe counting, but rather intensity measurements of minute changes in the slope of the cosine function describing the intensity of the two combined beams. For that, noise must be extremely low. Another possibility is frequency modulation, if the laser light used could be modulated.

    Anyway, as I said, I haven’t read papers on the LIGO design and its detection system, so all I can do is speculate vaguely, which does not count for much. But the displacement sensitivity they claim is certainly astounding.
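A quick back-of-envelope check of the figures quoted from the earlier comment (my own sketch; the laser wavelength and proton diameter are approximate textbook values, not numbers taken from the LIGO papers):

```python
import math

# Approximate reference values (assumptions, not from the thread):
wavelength = 1064e-9       # LIGO laser wavelength in metres (Nd:YAG, 1064 nm)
sensitivity = 1e-19        # claimed displacement sensitivity in metres
proton_diameter = 1.7e-15  # approximate proton diameter in metres

# How many orders of magnitude below the laser wavelength is the claimed sensitivity?
orders = math.log10(wavelength / sensitivity)
print(f"~{orders:.1f} orders of magnitude below the wavelength")

# What fraction of a proton diameter is being resolved?
print(f"~1/{proton_diameter / sensitivity:,.0f} of a proton diameter")
```

With these round numbers the claimed sensitivity does come out roughly 13 orders of magnitude below the wavelength, and in the neighbourhood of the "1/10,000 of a proton diameter" figure.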

    • Replies: @Parisian Guy
  27. Paul2 says:

    Thank you for this article. I find the information about the interpretation of statistical data very interesting.

    My take on the background of the article is this:

    Here we have a real scientist fighting the nonsense spreading from (neoclassical) economics into other realms of science/academia.

    Behavioural economics is a sideline by-product of neoclassical micro-economic theory. It tries to cope with experimental data that is inconsistent with that theory.

    Everything in neoclassical economics is a travesty. “Rational choice theory” and its application in microeconomics are false from the ground up. The theory basically assumes that people gobble up resources without plan, meaning or regard for circumstances. Neoclassical microeconomic theory is so false and illogical that I would not know where to start in a comment, so I refer instead to a whole book about it:
    Keen, Steve: “Debunking Economics”.

    As the theory is totally wrong, it is really not surprising that countless experiments show that people do not behave the way neoclassical theory predicts. How do economists react to this? Of course, they assume that people are “irrational” because they do not behave according to their stupid theory. (Why would you ever change your basic theory because of some tedious facts?)

    We live in a strange world in which such people have control over university faculties, journals, famous prizes. But at least we have some scientists who defend their area of knowledge against the spreading nonsense produced by economists.

    The title of the 1st ed. of Keen’s book was “Debunking Economics: The Naked Emperor of the Social Sciences” which was simply a perfect title.

    • Replies: @utu
  28. @Curmudgeon

    Could it be that you expect psychiatrists in the past to be as rational as you are now?

    Would the result have been any different if the members of a 1973 convention of physicists or surgeons had been asked?

  29. utu says:

    Good that you wrote this comment.

  30. @utu

    However to get to the point of 13 orders of magnitude lower than the wavelength of light used I just do not see how this is done.

    Basically, it was very easy.
    To get 13 orders of magnitude lower, just add more data.
    Simply multiply the amount of data collected by 26 orders of magnitude. That did the trick.

    That is why, at the end of the experiment, the world had changed a lot. Time travel was now a commodity; one only had to buy a ticket and return to the twenty-first century. This explanation is so simple and obvious that it says a lot about your state of mind that you preferred to indulge in some kind of unrealistic conspiracy theory.
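Sarcasm aside, the square-root scaling the joke leans on is real: averaging N independent noisy samples shrinks the noise on the mean by a factor of sqrt(N), so gaining 13 orders of magnitude purely by averaging would indeed require on the order of 10^26 samples. A minimal sketch (my own toy simulation, not anything from the LIGO pipeline):

```python
import math
import random
import statistics

random.seed(0)

def std_of_mean(n_samples, trials=2000):
    """Empirical standard deviation of the mean of n_samples unit-variance noise draws."""
    means = [statistics.fmean(random.gauss(0, 1) for _ in range(n_samples))
             for _ in range(trials)]
    return statistics.stdev(means)

# Noise on the mean shrinks roughly as 1/sqrt(N):
s1, s100 = std_of_mean(1), std_of_mean(100)
print(s1 / s100)  # empirically near sqrt(100) = 10

# So 13 orders of magnitude of improvement would need (10^13)^2 = 10^26 samples.
assert (10**13) ** 2 == 10**26
```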

  31. Republic says:

    It is highly likely in the future that the DSM will have a category which will classify as a mental disorder people who oppose the LGBT agenda.

    The left will use this new classification as an excuse to harass or imprison any person with an anti-gay position.

    One may recall that psychiatrists in the Soviet Union frequently put political dissidents in mental hospitals.

  32. dearieme says:

    Duty calls: I thought I might at least skim through the paper once. First observation:

    In the book jacket copy of his biography on Kahneman and Tversky, Lewis (2017) states that they “are more responsible than anybody for the powerful trend to mistrust human intuition and defer to algorithms.”

    I suppose the autopilot cars we are all promised might test this trend to destruction.

  33. dearieme says:

    an article by the Deutsche Bank Research “Homo economicus – or more like Homer Simpson?” attributed the financial crisis to a list of 17 cognitive biases rather than the reckless practices and excessive fragility of banks and the financial system

    Aha – crooks and damned fools can shuck off all responsibility by saying “Not my fault, it wuz them infernal biases. No moral agency here; no siree!”

  34. dearieme says:

    Risk Savvy Citizens: in addition to deprecating the omission of a necessary hyphen, I deplore the use of “savvy”. The word is far too vague – sometimes it seems to mean “shrewd”, and therefore refers to the exercise of a cognitive ability. At other times it seems to mean “well informed” and therefore refers to having received instruction in something or other. Such ambiguity is best avoided.

  35. dearieme says:

    On first reading I found Section 2.1 persuasive. Need I read it again?

    Section 2.2 reveals a genuine bias: I can never make myself pay attention to an example couched in terms of a sport that I find terminally boring.

    two widespread methodological shortcomings in the bias studies. First, the heuristics (availability, representativeness, affect) are never specified; since the 1970s, when they were first proposed, they have remained common-sense labels that lack formal models. Second, the heuristic is used to “explain” a bias after the fact, which is almost always possible given the vagueness of the label.

    That’s them hit for six, where “them” = da bad guys.

    “One of the most significant and irrefutable findings of behavioral psychologists is that people are overconfident in their judgments”

    Pp 325-328 on framing seemed conclusive to me in supporting the diagnosis of overconfidence in the diagnosis of overconfidence. Good stuff, this: not just hit for six, but out of the ground and over the river.

    Lastly, uncertainty contrasted with risk: Maynard Keynes was a very clever bugger; if he thought the distinction crucial then I suspect that it needs keen attention.

  36. Super piece. It is interesting to observe that British Primary School children are taught to think about probability and outcomes of decision processes using tree diagrams. As early as Year Three, I seem to remember my children doing exercises about putting socks in a washing machine and estimating what colour came out next. I don’t know if such excellent background teaching shows up by the time they attend university. Some will still just think about socks.
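Those tree-diagram exercises reduce to counting outcomes with exact fractions. As an illustrative toy of my own (the drawer contents are invented, not from any curriculum):

```python
from fractions import Fraction
from math import comb

# Hypothetical drawer: 3 red socks and 2 blue socks.
red, blue = 3, 2
total = red + blue

# Probability that the next sock drawn is red:
p_red = Fraction(red, total)
print(p_red)  # 3/5

# Probability that two socks drawn together form a matching pair:
p_pair = Fraction(comb(red, 2) + comb(blue, 2), comb(total, 2))
print(p_pair)  # 2/5
```

The second calculation is exactly the kind of branch-counting a probability tree encodes: favourable pairings over all possible pairings.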

  37. Pericles says:
    @Random Anonymous

    Flounder works fine, and as you have seen it was the word intended by the author.

  38. Pericles says:
    @Tom Welsh

    While we’re at it, let’s clear up this stubborn reign/rein foolishness once and for all.

  39. Pericles says:
    @Tom Fix

    Well, consider that the writers probably p-hacked away with some PC program they didn’t understand in the first place. Or perhaps the paper doesn’t replicate.

    (Don’t get me started on statistics. It’s an awful field, made even worse by publish-or-perish.)

  40. Factorize says:

    Ever have an idea so good that you wonder why no one else had thought of it? I have.

    Here it is. Most nations have political parties that can best be described as Spenders and Savers. The Spenders create terrible fiscal messes that must then be corrected by the responsible adults, the Savers. Why would we expect any other result? The game is structured around short-term thinking, with few consequences for irresponsible behavior. Typically, the party that displays fiscal prudence must make difficult cost-cutting choices due to the moral and financial bankruptcy of the other party. Might there be another way to structure the game of government finance, to avoid the severe fiscal imbalances that have occurred from time to time? I think the answer is yes!

    Why not keep a set of fiscal books for each party instead of only one for the entire nation? Rules would be put in place to limit the future spending ability of parties based upon their own fiscal balances. For example, if a government ran up a large deficit, corrected for the business cycle, then its spending ability would be limited by statute after an election. Under such rules, a political party that created a large deficit could become essentially unelectable. Voters would know that voting for the Spending party would result in forced fiscal constraint.

    Please comment about this idea!
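One way to make the proposal concrete is to keep a ledger per party and derive a statutory spending cap from that party's own cumulative balance. Everything below (the class, the 10% cap rule, the numbers) is a hypothetical sketch of my own, not anything that exists in law:

```python
class PartyLedger:
    """Hypothetical per-party fiscal ledger whose balance drives a spending cap."""

    def __init__(self, name, baseline_budget):
        self.name = name
        self.baseline_budget = baseline_budget
        self.balance = 0.0  # cumulative surplus (+) or deficit (-) while in office

    def record_year(self, revenue, spending):
        self.balance += revenue - spending

    def spending_cap(self):
        # Illustrative rule: a cumulative deficit shrinks next-term spending
        # authority by 10% of the shortfall, floored at 80% of baseline.
        if self.balance >= 0:
            return self.baseline_budget
        return max(0.8 * self.baseline_budget,
                   self.baseline_budget + 0.1 * self.balance)

spenders = PartyLedger("Spenders", baseline_budget=100.0)
spenders.record_year(revenue=90.0, spending=130.0)  # runs a 40-unit deficit
print(spenders.spending_cap())  # 96.0 under this toy rule
```

The point of the sketch is only that the rule is mechanical: once the per-party books exist, the cap follows from arithmetic rather than from post-election negotiation.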

    • Replies: @res
  41. res says:

    It would result in endless arguing about exceptions. For example, which parts of the TARP bailouts should count against Obama or Bush? And who defines the business cycle?

    In the US there is the additional issue of the House/Senate/Presidency all being decided separately. How to assign parties when the control is split?

    An appealing idea, but I can’t see how to make it work in practice.

  42. Factorize says:

    res, thank you for replying!

    Yes, I realized that there would be complications. Nonetheless, the potential benefits are so substantial that some governments might do their due diligence, though, as you noted, the multi-layered nature of the American political structure might make the idea more difficult to enact. Even now, there is a non-partisan component of government accounting (related to measuring deficits, etc.), so it is not entirely far-fetched that the idea might find real-world application.

    Capital markets would give a rapid positive endorsement to those choosing this policy. The debt rating of an entire nation could be upgraded. Nations could have immediate windfalls in the billions of dollars. As it is now, one never knows which way the political wind might blow from moment to moment. One party might win an election by 1% and then the entire fiscal landscape could shift. This introduces an overwhelming amount of uncertainty that capital markets ultimately demand payment for. With the per-party double-entry accounting system I propose, it would no longer matter as much who won the election: everyone would be held accountable for their fiscal choices. As I noted in my first post, one could easily identify political parties on the world stage today that would become unelectable under this plan. The tiresome game of bipolar government financing could finally end.

  43. Factorize says:

    Considering all the financial turbulence of the last few years, I think it is best not to mention all the nation-scale financial bailouts that have been necessary. So I wouldn’t mention Greece (this one cost more than 300 billion euros), nor Iceland, nor even America and its housing bubble (startlingly, this cost every American $70,000, for a total of more than $2 trillion), and there are so many others.

    Creating a built-in fiscal safety valve would likely receive substantial public support. Perhaps it could be tried first when the IMF is called in to help a nation in trouble. The double accounting method could be part of the package and might be imposed for 30 to 40 years after the crisis had resolved. Some nations might consider such a measure so restrictive that they would decide it to be in their self-interest not to allow financial meltdowns to occur. Yeah!

  44. Aft says:
    @Dieter Kief

    Great article.

    I was once wowed by Kahneman et al., but then realized, one by one, that all these biases are highly adaptive. A 10% chance of one thing falling through isn’t 10% less utility; it is a massive shift in all the other things that risk and uncertainty affect: monitoring costs, contingency planning, etc. So we humans are quite right to overweight the significance of rare risks on the order of 1–10%. Reliability matters.

    While there are some useful implications of their work (decisions about life-and-death risks from cancers, nuclear power, etc. are better handled by knowledgeable, fact-based analyses than by our intuitions), those are largely obvious already.

    Besides the idea that peak intensity and the ending matter more than duration (very useful for designing an experience for oneself or others), nothing else from their work really remains useful.
