
Michael Lewis has been the gold standard author of frequent flier books since the end of the 1980s. He has a new book coming out in December, *The Undoing Project*, about the Israeli psychologists Daniel Kahneman and Amos Tversky, who studied why people make bad decisions.

*Vanity Fair* has one chapter from the book. I didn’t find too much of interest in that chapter, although I did like this question devised by Kahneman:

The mean I.Q. of the population of eighth-graders in a city is known to be 100. You have selected a random sample of 50 children for a study of educational achievement. The first child tested has an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?

I think I know what answer Kahneman wants.

But, by the way, if the first kid tested scores a 150, how sure should you be that the average IQ of the city “is known” to be 100?

Maybe you are using a 20-year-old Raven's test and the average is now 106 due to the Flynn Effect. (From the 1940s onward, the Flynn Effect kept being discovered by psychologists, but then getting undiscovered because it Is Known that it should not be happening. It took James R. Flynn to make sure the Flynn Effect, like Columbus with America, stayed discovered.)

Or maybe there has been a massive misnorming screw-up, like with the military’s enlistment test from 1976-1980 that let in a whole bunch of dopes during the *Stripes* era. In 1978, Senator Sam Nunn started asking Pentagon officials why sergeants and chief petty officers kept complaining to him about the intelligence of new recruits. The Pentagon replied that it is known that new recruits were scoring higher than ever. Then in 1980, they admitted to Nunn that the test scores had been incorrectly inflated since 1976.

Or if the first child who takes the test scores 3.33 standard deviations above the expected mean, are you quite sure you have a random sample?

Kahneman would find all these real world quibbles of yours to be examples of bad decision making. Just as Emily is stipulated to be a bank teller who was active in feminism in college, this question stipulates certain conditions, and who are you to question their plausibility?

One of Kahneman’s standard shticks is to write all sorts of Red Flags into his questions — “Emily led a feminist commune in college that was infiltrated by the FBI on suspicion of anti-male terrorism” — and then ding you for noticing his Red Flags.

If you want to de-Red Flag this IQ question, you could make the first kid score 125, or if you want to keep the arithmetic super simple, 130 with a sample size of 30. But a 150 is a Red Flag.

One secret to scoring well on Kahneman’s questions is to take them extremely literally. He’s kind of like Hymie the Robot in *Get Smart*.

Ironically, I have a vague hunch that part of the Flynn Effect is that people over the last century have learned to take things more literally from having to deal ever more with machine logic, which makes them better at taking IQ tests. (But it makes them worse at understanding their new President. Hence, the rage of the more Aspergery intellects toward Trump’s vaguely stated stances.)

Interestingly, Lewis positions his new book as explaining the science behind his *Moneyball* rather than his *The Big Short*. That seems pretty reasonable, in that financial bubbles tend to be historically contingent: the big money boys at least tend to learn from the mistakes of the recent past (while forgetting older analogies). A science of bad decisions works better when people keep making the same bad decisions. Baseball is a pretty traditionalist enterprise.

On the other hand, that reminds me that one interesting project for sabermetricians might be a history of fads in baseball decision-making. While baseball doesn’t change all that much, there have been bandwagons, some of which become permanent (e.g., home run hitting), some of which don’t. For example, successful franchises like the Los Angeles Dodgers in the 1960s and 1970s can cause a chain reaction of imitations around baseball: The Dodgers of my childhood, for instance, seem to have set off fads for having your aces like Don Drysdale and Sandy Koufax pitch over 300 innings per year; putting a low on-base average base stealer like Maury Wills in as your leadoff hitter; converting outfielders to middle infielders in the high minor leagues (like Bill Russell); and teaching minor leaguers to switch hit (i.e., bat left handed against right handed pitchers and right handed against left handed pitchers). Most of these ideas are now out of fashion, but they seemed pretty cool a half century ago when the Dodgers were drawing huge crowds.

Likewise, it would be interesting to see which ideas of the early Moneyball era are now discredited. My guess is that early sabermetricians undervalued defensive skill in ballplayers because they had poorer quality data on defense than on hitting. This led to a lot of Dr. Strangeglove-type players, big clumsy oafs who could hit homers and get walks but not much else, being in demand. Nowadays, however, defensive statistics have improved so much that baseball has edged back toward the all-around athletes who look good in a uniform that the sabermetricians were making fun of fifteen years ago. My guess is that poor fielding is more psychologically destructive to teams than poor hitting, but I wouldn’t know how to measure that.

Have you checked out Tom Wolfe’s new book yet? It’d be interesting to get your take on it, since you’re a fan and have followed his career over the years, and since as you’ve noted before, he seemed to share a biologically inclined viewpoint similar to your own.

http://www.baseball-reference.com/leaders/WAR_def_active.shtml — this would be what you’re looking for.

Replies: @Kyle a

Well maybe not. Ha.

“What do you expect the mean I.Q. to be for the whole sample?”

If we did not sample one with 150, the answer would be 100, but since we did sample one with 150, the answer will be 101. This is my first approach, without using conditional probabilities P(A|B), just computing: (100*49+150)/50 = 101.

Who has the correct answer?

Replies: @415 reasons

I assume the answer Kahneman wants is that the expected mean of the sample is now 150(.02) + 100(.98) = 101, since he assumes we "know" with certainty that the expected mean of the remaining 49 draws is 100. But this is flawed because for actual humans, absolute faith that we "know" the population mean is irrational.

To be picky, even Kahneman's answer of 101 is wrong (as Hoots points out). We are sampling without replacement, so even if the population mean is 100, we have removed one student with 150, so the mean of the remaining sample is expected to be LESS than 100. How much less depends on the population size relative to the sample size. In the extreme, if the population is only 50, then we expect the mean of the entire sample of 50 (including the first student with 150) to be 100, not 101.

Average IQ: 101

(49*(100*p-150) / (p-1) + 150) / 50

where p is the population size
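That finite-population formula is easy to sanity-check in a few lines of Python (a sketch; the function name is mine, and it just evaluates the commenter's expression):

```python
def expected_sample_mean(p):
    """Expected mean of the 50-child sample, given that the first child
    scored 150 and the population of size p has a mean IQ of exactly 100.
    (Sampling without replacement; this evaluates the formula above.)"""
    rest = (100 * p - 150) / (p - 1)   # expected mean of the other p - 1 people
    return (49 * rest + 150) / 50

# If the "population" is just the 50 kids sampled, the mean stays 100;
# for a city-sized population the answer approaches 101 from below.
print(expected_sample_mean(50))
print(expected_sample_mean(1_000_000))
```

For any finite population the result is strictly below 101, which is the point being made.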

By the way, if the first kid tested scores a 150, how sure should you be that the average IQ of the city “is known” to be 100?

Maybe you are using a 20 year old Ravens test and the average is now 106 due to the Flynn Effect. Maybe there has been a massive misnorming screw-up, like with the military’s enlistment test from 1976-1980.


With Flynn, like with Columbus, it stayed discovered, however.

Kahneman says the average IQ is "known" to be 100, but he doesn't define what he means by "known". He wants us to assume "known" means "known with 100% certainty", but he doesn't actually say that. To the contrary, in common usage, "known" is usually understood to mean "highly likely" or "highly probable".

So, until he defines what he means by "known", we don't know what "known" means so we don't know the answer. At best, we can rationally say that "it is possible the answer is 101, but we don't really know the correct answer."

So, Kahneman's "correct" answer is irrational because it is too certain.

The Dodgers’ 1960s success, I think, had something to do with their pioneering of the farm system. Branch Rickey, the legendary GM of the Dodgers who brought Jackie Robinson in, also pioneered the farm system idea—first with the Cardinals (where he was first employed) and then with the Dodgers.

Before Rickey, teams didn’t have a full-fledged developmental system, relying mostly on tryouts and scouts scattered throughout the country watching various high school and independent minor league teams for a good young guy.

Those old scouts could tell if a big galoot (1) was more athletic than the other guys; (2) threw very hard; and (3) was smashing more homeruns than his compadres. So they were quite good at evaluating raw talent and athleticism. But they were harder pressed to tell you if a guy could handle the road or develop any better than what he now was.

Rickey’s full-fledged minor league system lent itself to training guys in the smaller tools of baseball. Base-stealing, bunting, taking the extra base, backing up throws, painting the corners in pitching, shifting defensively based on tendencies—these are skills that require a bit more than pure athleticism (e.g. good base-stealing isn’t about pure wheels; being a good one means knowing how to judge when a pitcher is about to pitch, studying his pickoff moves, noting when he throws breaking pitches, coordinating with the batter, etc.). Those various skills required a lot of hours of training and dedication, something only a farm system could instill in a player. But it also meant that you’d get a player more major-league ready than most other teams could produce.

It also meant the Dodgers didn’t have to compete with other teams during this era for the bonus-baby athletic wunderkinds. Instead, they could pickup some less-athletic but more intelligent players and train them through the system on how to pick up the extra base when the throw went home. (Sort of like how Bill Belichick of the Patriots has made a career out of picking up more intelligent but less flashy football players and training them in his more complicated systems and in the subtleties of getting an extra half yard per play, leading every year to sportswriters “marveling” at how the Patriots pick up guys “out of nowhere” who then become successful.)

The Dodgers were sort of Bizarro *Moneyball*: while everyone else was looking for the next Bob Feller and Ted Williams, guys who would be stars from the start, the Dodgers sought out the Sandy Koufaxes and Maury Wills, guys who would improve with age and with a great developmental system (and having no free agency at the time helped them tremendously, as they could keep the fruits of their development dirt cheap).

A possible successful strategy in baseball is to do the opposite of the rest of the league. If the big guys are going for homeruns and strikeouts, you’ll have your pick of the bunters, groundball pitchers, and great fielders. Find something the market is undervaluing and grab it before the others realize they need it, and then charge them (via trades) an arm and a leg to acquire it. In other words, the old Wall Street adage: buy cheap, sell dear.

Maybe you are using a 20 year old Ravens test and the average is now 106 due to the Flynn Effect. Maybe there has been a massive misnorming screw-up, like with the military's enlistment test from 1976-1980.

Yet more evidence that post-McCarthyism America really did see a mass infiltration by foreign agents into our government and media. I look forward to the future uncovering of Soviet/East German/Cuban/Chicom records detailing just how many “trusted” folks in this era really were on the payrolls of foreign powers.

Maybe you are using a 20 year old Ravens test and the average is now 106 due to the Flynn Effect. Maybe there has been a massive misnorming screw-up, like with the military's enlistment test from 1976-1980.

Well, the probability of a member of a 100 IQ population having a 150 IQ is 2.1453%.

The binomial distribution says that the chance of success of 1 (or more) trial(s) out of 50 at 2.1453% each is 70.9%.

Replies: @EH

No, it's 1 in 2330.67, or 0.0429%.

And the question was whether the very first child in the sample of 50 had that score, not whether one out of the 50 did, which is a different question. Given that the inverse rarity (2330) is much higher than the sample size (50), the odds of having one person in the group score that high or higher are better estimated by dividing the inverse rarity by the size of the sample, or about 1 in 47, given a population large enough; with smaller populations the gain in precision from using a fancier method to calculate will be spurious. Quite likely the s.d. isn't exactly the same size as assumed, so for an s.d. between 14 and 16 over the whole interval from 100 to 150 (generally not true, since the distribution of equal-interval measures of intelligence is closer to log-normal, compressing a wide range of ability into the top normal-curve IQ scores), the error bounds on the inverse rarity would be between 1125 and 5633, so trying for more than one digit of precision is a waste of time.

So what's the right answer? If you assume the other 49 have a mean of 100, them plus the 150 would put the mean of the 50 at 101. But maybe, given that you already have an outlier with 150, the other 49 are slightly below average? Or maybe it's just 100, since it's supposed to be a random subset of a set with mean 100?
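EH's corrected figures can be checked with nothing but the standard library. This is a quick sketch, assuming the usual mean-100, s.d.-15 normal model; the standard-normal tail probability comes from the complementary error function:

```python
import math

def normal_tail(z):
    # P(Z >= z) for a standard normal, via the complementary error function
    return 0.5 * math.erfc(z / math.sqrt(2))

p_one = normal_tail((150 - 100) / 15)  # chance one given child scores 150+
p_any = 1 - (1 - p_one) ** 50          # chance at least one of 50 does

print(p_one)      # about 0.000429, i.e. roughly 1 in 2330
print(p_any)      # about 0.021, i.e. roughly 1 in 47
```

So a 150 on the very first test is a roughly 1-in-2330 event under the stipulated distribution, while seeing at least one such score somewhere in the 50 is a far milder 1-in-47 event.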


Maybe you are using a 20 year old Ravens test and the average is now 106 due to the Flynn Effect. Maybe there has been a massive misnorming screw-up, like with the military's enlistment test from 1976-1980.

There was a long history from the 1940s onward of psychologists discovering the Flynn Effect of rising raw IQ test scores but then the field of psychology losing track of it.

With Flynn, like with Columbus, it stayed discovered, however.


Do you have a good link to the Stripes era/incident? Nothing comes up quickly in my searches.

Why do people make bad decisions?

How about sleep deprivation.

Donald Trump: sleep-deprived maniac.

Trump actually used this sentence construction in the NYT interview: “We’ve had the storms always.”

This guy is firing on seven cylinders. Uneven.


As a busy 70-year-old, he's learned his time is valuable. He never had a Shakespearean tongue, but found that developing one would have hindered his ability (i.e. made him waste more time) in trying to convince many diverse groups of people to hop on board his projects. Instead of learning how to convince anyone via a honeyed tongue, he learned the art of sizing someone up to see if they were worth talking to and if they would listen to him (hence his mega-fast dismissal of the Ali G clown):

https://www.youtube.com/watch?v=sP5ElraFHHE

I see the Left is going to go the whole Bush 43/Dan Quayle route and make fun of Trump's speaking abilities to claim that he's stupid. That's a grand plan, I hope they continue it; I never want to interrupt my opponents while they're making a mistake.

Perhaps the last "great" (i.e. traditionally stereotypical) Republican speaker was St. Reagan. Bush 41 was middling; the Dana Carvey SNL impression captured his style perfectly. Cheney was a good speaker, but the Left painted everything he did in a Sith Emperor light. W. gave really good prepared speeches (his post-9/11 speech was awesome) but off the cuff his style was easy to insult. Quayle's potato gaffe haunted him, but his *Murphy Brown* speech was quite excellent, if you ever listen to it. And what they did to Palin... just disgusting.

Having watched Trump live at rallies, the man is an excellent speaker: funny, clear, bright, confident, and in total command. He speaks like an old-time union boss addressing his fellow workers at a rally, which isn't surprising, given his business was in the construction industry. The Left is determined to convince us he's a buffoon and idiot, and they're going to fail, because he's taken fighting with the media to a whole other level, which is awesome to watch, and their classist nature is coming through with each put-down.

Who else can say 'bigly' and make it cool?

It makes me giggly.

There is something of the Bill the Butcher about The Donald. He has blood stains all over him, so if he gets soiled a bit, it's no big deal.

Romney, in contrast, was a starched-clean-white-shirt candidate, so any blemish really stood out.

Some people have the style to mess up and make it seem like it's no big deal, or even funny.

Others just don't have this quality.

It's like the scene in A SEPARATE PEACE.

https://youtu.be/wdq7YxGhMhs?t=17m44s

101, a weighted mean of 150 and 100. That’s the answer that Tversky and Kahneman provide. And it works for their purposes of illustrating that N=50 is still a small number.

But of course upon encountering on the first test from a sample of 50 a score that has expected frequency of observation ~ 1 in 2000, a Bayesian’s first thought would be “hmm, this does not sound like the mean of the true distribution is likely to be 100”. So the 101 answer is only correct if one pays attention to the given of “is known to be 100”. Seems like another example of Kahneman’s propensity to ask gotcha questions where the key to the answer is in some minor detail rather than what seems like something worth asking about.
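One way to put a number on that Bayesian first thought is a simple likelihood ratio: how much more probable is a first draw of 150 if the true mean were higher than stipulated? A sketch, where the alternative mean of 130 is purely an illustrative choice, not anything from Kahneman's problem:

```python
import math

def normal_pdf(x, mu, sigma=15.0):
    # Density of a normal distribution with mean mu and s.d. sigma
    z = (x - mu) / sigma
    return math.exp(-0.5 * z * z) / (sigma * math.sqrt(2.0 * math.pi))

# Likelihood ratio for the single observation of 150:
# how much more probable is it under a (hypothetical) true mean of 130
# than under the stipulated mean of 100?
likelihood_ratio = normal_pdf(150, 130) / normal_pdf(150, 100)
print(likelihood_ratio)   # roughly 106-to-1 in favor of the higher mean
```

A single observation shifting the odds by two orders of magnitude is exactly why "is known to be 100" is doing so much work in the question.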

Replies: @Opinionator

(49*(100*p-150) / (p-1) + 150) / 50

which is always less than 101. Notice that for a population of 50, meaning that the entire population is tested, the expected mean is exactly the initial population mean.

Poor fielding is really demoralizing to pitchers, I can say that for sure.


OT: The Washington Post notices The White Death:

Trade with China literally kills Americans

I have yet to dig out my copy, but unfortunately, as the A's found out, while Moneyball works out well in the regular season, in the post-season (especially in seven-game series) you can pretty much dominate with two aces, and they cost bux.

Or that the sample really is random.

If we did not sample one with 150 the answer would be 100 but since we sample one with 150 the answer will be 101. But this is my first approach w/o using conditional probabilities P(A/B) by doing this: (100*49+150)/50=101

Who has the correct answer?

I thought it was more like… the odds of a single child of IQ 150 being sampled from a population with mean 100 and SD 15 is ~4 in 1000, so low as to be extremely unlikely to happen with the first child by chance. This might reflect that a more likely explanation for sampling one with an IQ of 150 is that the sample mean is actually higher, for instance if it were 130 there’d be about a 1/10 chance of getting such a high observation on the first try. But I also thought of your answer as well. I feel like it depends on whether there is a plausible rationale for why the supposed known mean of IQ of 100 might be wrong. Also you’d have much more information about whether it was a pure fluke after you sampled a few more.

Replies: @Je Suis Charlie Martel

I was actually surprised in business how many great plans relied on data from, or implementation by, unknown outsourced entities... an intern, an email marketing company that had a 22-year-old product manager, etc., and execution ended up bungled.

4 in 1000 would approximate the expected Ashkenazi ratio, though.

Furthermore, you and others here start speculating about probabilities having one kid with IQ=150, and somebody even starts spouting about the IQ of Ashkenazis. This is pointless and irrelevant and, most importantly, wrong. Approaching this problem you do not even know what the distribution of the random variable is. Who says it must be Gaussian? What if the distribution is binary? 50% has IQ=150 and 50% has IQ=50, with a mean of 100. This is a mathematical problem that does not have to have anything to do with any reality that you know. It is a mathematical reality in which clearly you do not seem to feel very comfortable.

If it’s random selection, then the other 49 samples are conditionally independent of the first sample. So the expected mean of the remaining kids is 100, which when averaged with the 150 sample point gives 101.

(Technically this isn’t *exactly* true, because we’re sampling without replacement: you can’t pick the same kid twice. However, the population of a city is so much larger than the sample size that the impact of the approximation is insignificant.)
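The conditional-independence argument above can be checked numerically. A small Monte Carlo sketch (illustrative only): hold the first draw at 150 and draw the other 49 from the stipulated N(100, 15) population, which is effectively sampling with replacement from a huge city:

```python
import random

random.seed(42)

trials = 20_000
total = 0.0
for _ in range(trials):
    # the 49 remaining kids, drawn from the stipulated N(100, 15) population
    rest = [random.gauss(100, 15) for _ in range(49)]
    total += (150 + sum(rest)) / 50   # mean of the full 50-child sample

avg = total / trials
print(avg)   # about 101
```

The simulated average of the sample means comes out very close to 101, matching the (150 + 49×100)/50 arithmetic.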

To better grasp that, suppose we recharacterize this exercise as picking a random 50-person sample and then removing the highest score. Shouldn't we expect the mean of the remaining 49 to be below 100?

As you say, the issue of replacement can mostly be ignored as a trivial detail.

(Technically this isn't *exactly* true, because we're sampling without replacement. (Can't pick the same kid twice). However the population of a city is so much larger than the sample size, the impact of the approximation is insignificant.)

Not trusting the stipulations of a question is an old Sailerist trope, if you haven’t been hanging out here long.

I know Nassim Taleb has a similar semi-joke. “Say a fair coin lands on heads 50 times in a row…”

With Flynn, like with Columbus, it stayed discovered, however.

The “losing track” of it was more like sweeping it under the carpet. Their measuring stick turned out not to be constant, perhaps shrinking. The whole empirical foundation of IQ research was undermined. They did not know what to do with it, so they preferred not to acknowledge it. Flynn proposed that the effect is deterministic, thus correctable, and furthermore proposed that it is a linear function of time with a rate of change of circa 0.3 IQ points/year. Did he save the empirical foundation of IQ research, or just provide a band-aid solution that hides a real flaw of the methodology?

Imagine that the French physicists discovered that, when using the 1-meter standard held in Sèvres, the average length of fish caught in the Seine increases by about 0.3% per year. Would they conclude that the fish get longer, or that the 1-meter standard gets shorter? They would use it to measure, say, the height of the Eiffel Tower, and if its height remained constant to within significantly less than 0.3%/year, they would conclude that the fish indeed are getting longer. If the lengths of all objects measured with the 1-meter standard were getting larger by about 0.3%/year, they would most likely conclude that their standard sucks, that something is wrong with it. The IQ researchers have no luxury of using the IQ test, i.e., their 1-meter standard, on anything but the fish. So from the strictly epistemological point of view, the so-called Flynn effect is just a more elegant tautology hiding the shaky foundation on which IQ research stands.

Replies: @I, Libertine

This is surely true.

I was a small-town high school pitcher, and there’s a play that I remember with the most perfect clarity. We were on the road; up 2-1; bottom of the 7th (7-inning games); two on, two outs; I’ve pitched the entire game and thrown a 1-hitter to this point. I was utterly gassed, but got the batter to loft a weak fly ball to dead center field. Our regular center fielder was at camp or something, but his replacement didn’t even have to move — of course he dropped it, and we lost 3-2. Demoralizing doesn’t begin to describe it!

Conversely, a great defense that takes away a hit or turns an unexpected double play is like water in the desert for the man on the hill . . . .


“Donald Trump: sleep-deprived maniac.”

Look out, ‘Anonymous’! He’ll put you in a work camp. Boo!!


Bad fielding can demoralize a hitter. The Dodgers moved Pedro Guerrero from the outfield to third base in 1983, where he was a klutz. His hitting stayed good for two years but he kept asking to be allowed to go back to the outfield. In 1985 he was hitting poorly for the first two months of the season. Finally, they moved him to left field and he tied the National League record for homers in June.

Replies: @The Last Real Calvinist

Detroit Tiger Justin Upton.

Literally couldn't hit anything until late August. Absolutely terrible in left field. The biggest free-agent bust of 2016 - six years for $133 million.

And then?

Between August 22 and the end of the season, Upton hit 18 home runs in seven weeks to finish with a career-high 31.

His left-field play also went from unwatchable to competent.

His batting average went from the .190s to finish at .245.

Seven weeks is not a season. But it happened, and now his value is high enough to be tradeable again. Some other team might take on his contract because he improved so dramatically and finished so strongly.

I paraphrase closely: "First, I pray that they don't hit the ball to me. Then I pray they don't hit it to Sax."

Why does Donald J. Trump keep giving interviews to The New York Times? You don’t see Barack Hussein Obama giving interviews to The National Review. If I were The Donald, I would tell The New York Times to go kick rocks. He’s never going to convince any of their readers and staff to vote to re-elect him in 2020. No need for him to kiss the pinky ring of The New York Times.


The Times has just as much contempt for lesser, corporate media as it does for regular Americans. No reason a New Yorker like Trump can't put that contempt to work.

And outside of the obsolete political prejudices of the class it writes for, the Times is still an outstanding paper.

Trump is the most unconventional person elected president. There are no normal or typical comparisons to make--he will surprise and disappoint as no one before him. But projecting personal prejudices about the NYT onto Trump is a wasted effort. The NYT needs access to Trump more than Trump needs the NYT. Assume Trump knows this. Four years is a long time.

Besides, even the enemies can do some good for our side.

We saw this with Glenn Beck, who has turned full retard PC. (It began with his idea of handing out soccer balls to illegal aliens a few years back.)

Beck is out to hurt us, but there is the Poo Boy Appleseed Factor. It may do us some good.

It's like a bear. It will eat apples just to eat apples. It is only to please itself, and it cares not for the fate of the apples it devours. And the apples are devoured in the stomach of the bear... but the seeds pass out of the other end with the poo, and the poo fertilizes the apple seeds, which sprout and grow into more apple trees.

So, it's not always a bad thing that the media are covering our side (even if with a lot of shi*), especially when certain people and ideas on our side are not yet household names.

Take Sailer. Most Americans haven't heard of him. So, even though Glenn Beck mentioned him negatively on Anderson Cooper, the word got out to a lot of people. Beck mentioned Sailer to devour and destroy him, but Sailer seeds pass out of the other end and are fertilized by Beck's poop.

The whole MSM are trying to devour our side and trying to make Trump look more extreme. And it does it by attacking certain individuals and ideas, but the Poo Boy Factor only spreads the seeds of those ideas far and wide.

The bear doesn't know that it is actually helping the spread the apple seeds around to grow more apple trees.

We should welcome the Poo Boys of the media.

https://www.youtube.com/watch?v=EMcShE4VEPs

Indeed. It goes to show the degree to which baseball is a ‘head’ game, i.e. you need physical talent and skills, but a negative mental and emotional outlook can be your undoing. This is no doubt at least one reason baseball’s history is so rich in weird characters, bizarre superstitions, and tedious player rituals.

Replies: @Forbes

Billy Beane tells the story of teammate Lenny Dykstra (who rose with Beane in the Mets minor league system), who just outright competed--not caring who he was facing. Meanwhile, Beane over-thought every situation, and intimidated himself into mediocrity. After batting practice prior to a game against the Phillies, Dykstra asked Beane who they were facing on the mound. When Beane answered, "Lefty, Steve Carlton," Dykstra asked, "What's he throw?" Beane was agog that Dykstra appeared unfamiliar with and unflustered by the most dominating left-handed pitcher then in baseball. Beane knew everything about Carlton. Dykstra went to the plate to collect hits.

When it came to the "head" game, it was the 5-tool star Beane who didn't excel.

Gary Johnson says he is never running for POTUS again.

http://m.reviewjournal.com/news/politics-and-government/libertarian-gary-johnson-says-he-will-not-seek-public-office-again

Now that marijuana is being legalized in several states there is no more need for a president Gary Johnson.

Can confirm.

I know Nassim Taleb has a similar semi-joke. "Say a fair coin lands on heads 50 times in a row..."

He had a very funny elaboration of this idea with a stereotypical SWPL college professor and a gangster. “Whaddaya talkin’ about? No way that coin’s fair!”

He still wants to show he’s made it in the big city? Sounds funny I know but he was always considered something of a C-lister real estate wise; he didn’t actually own lots of the buildings but rented his name out. His son-in-law owned more than he did, which was probably why he saw the match as a good idea–a way to set up Ivanka for life.

He wound up getting a good future presidential campaign manager, which shows how much luck plays in everything. Makes me wonder if he should have done this 4 or 8 years ago. Of course at that point he’d have to fight Barack Obama, who has a lot more charisma than Hillary, or defend the Republican brand after W’s disastrous mishandling.

Seriously, this will be his undoing. The NYT hates any Republican president, most of all one who isn't anti-white. Beware, God Emperor--Horus is near.

Replies: @Je Suis Charlie Martel

Whether true or not, it leaked that he lit up the other news guys.

Divide and conquer? Isolate the NYT from the others, toy with them, crush Carlos Slim?

Let's see if Trump can leverage that.

The answer should be 100. The random sample is expected to reflect the population mean of 100; therefore, 100 for the 50-person sample. But once you remove the 150 score from the 50-person sample, the mean of the remaining 49 would be expected to drop below 100. This lower mean would be expected to offset the 150 score as you continue to compute the mean for the overall 50. You still expect to end at 100. After you remove the first of the 50, the remaining sample is no longer random.

Replies: @JeremiahJohnbalaya

After you remove the first of the 50, the remaining sample is no longer random.

Sorry to pick on you, but this is exactly the kind of follow-your-intuition approach to probability that is wrongheaded.

The assumptions here are that the IQ scores are independent events, that the original population is large enough, and that the 50 were chosen randomly. If the first X out of 50 have scores of 150+, the remaining (50 - X) still have an expected value of 100.

Now there are other questions that can be posed, such as: given these assumptions, what is the probability that the first one picked, or the first 49, or all 50 have an IQ of 150? And then you might say that there is a such-and-such chance that the sample really was random (which leads to the probability that the null hypothesis of randomness was actually true).

(Technically this isn't *exactly* true, because we're sampling without replacement. (Can't pick the same kid twice.) However, the population of a city is so much larger than the sample size that the impact of the approximation is insignificant.)

Why would the remaining ones be conditionally independent? After removal, you no longer have a random sample.

To better grasp that, suppose we recharacterize this exercise as picking a random 50-person sample and then removing the highest score. Shouldn't we expect the mean of the remaining 49 to be below 100?

Replies: @Opinionator

Why would the remaining ones be conditionally independent?

Because that's implicit in the problem statement.

After removal, you no longer have a random sample.

Yes, you do.

To better grasp that, suppose we recharacterize this exercise as picking a random 50-person sample and then removing the highest score.

But that is NOT the same as the stated problem.

By far and away the most important (if not most difficult) practical thing in basic probability is in stating the problem accurately.

The only information you have is that the remaining 49 samples have an average IQ of 100. By definition.

In fact, the interesting thing about this sort of thing is in analyzing all the assumptions that are implicit in the statement of the problem. Which is what would be of interest in using it as an interviewing device.

And in cricket too. Why I can remember when I ……….

But of course, upon encountering on the first test from a sample of 50 a score with an expected frequency of observation of ~1 in 2000, a Bayesian's first thought would be "hmm, this does not sound like the mean of the true distribution is likely to be 100." So the 101 answer is only correct if one pays attention to the given that the mean "is known to be 100." Seems like another example of Kahneman's propensity to ask gotcha questions where the key to the answer is some minor detail rather than what seems like something worth asking about.

Should be 100.

I would expect to find out that my intern, who was a little lazy and/or untrained in sampling, went to two schools in one area, one private and one public, and it happened to be a high achieving area, possibly with a synagogue (like where I grew up with two Pulitzer Prize winners as neighbors)…

I was actually surprised in business how many great plans relied on data from, or implementation by, unknown outsourced entities… an intern, an email marketing company that had a 22 year-old product manager, etc. and execution ended up bumbled

Replies: @FactsAreImportant

The correct answer is that we don't have enough information to answer the question, because he has left out a lot of important information about how the sample was collected, and he is asking us to infer what he means using our knowledge of how imperfect humans actually talk and think.

For instance, Kahneman doesn't say whether the sample was collected from among all the students in the city, or from some subset of students in the city (he only says it was random). Indeed, he doesn't even say the students are from the same city as the city with a mean IQ of 100. He doesn't even say the sample is from the same country or even the same planet.

Kahneman is being hypocritical. He wants us to accept with mathematical literalness a highly stylized world where we can "know" with certainty the population mean IQ, but, when he leaves out details, he wants us to casually fill in the blanks based on our human understanding of how other humans speak when speaking casually.

To better grasp that, suppose we recharacterize this exercise as picking a random 50-person sample and then removing the highest score. Shouldn't we expect the mean of the remaining 49 to be below 100?

Or recharacterize the exercise as removing one score that is above the mean. The mean of the remaining sample should fall.

Replies: @JeremiahJohnbalaya

Or recharacterize the exercise as removing one score that is above the mean. Mean of remaining sample should fall.

Different question. Different answer.

If we did not sample one with 150, the answer would be 100; but since we sampled one with 150, the answer will be 101. This is my first approach, without using conditional probabilities P(A|B), done by computing: (100*49+150)/50 = 101.

Who has the correct answer?

101 is correct. What answer the book is expecting, I am not sure.
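The (100*49+150)/50 arithmetic above can be sanity-checked with a quick Monte Carlo sketch. It assumes the city is large enough that the remaining 49 draws are effectively independent scores from N(100, 15):

```python
import random

# Monte Carlo check: expected sample mean of 50 kids, given that the
# first one tested scored 150 and the other 49 are ~ N(100, 15).
random.seed(42)
trials = 200_000
total = 0.0
for _ in range(trials):
    rest = sum(random.gauss(100, 15) for _ in range(49))
    total += (150 + rest) / 50  # sample mean, first score fixed at 150
print(round(total / trials, 2))  # hovers around 101
```

The 49 unconditioned draws contribute an expected 49*100, so the estimate converges to (4900 + 150)/50 = 101, matching the arithmetic.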

He wound up getting a good future presidential campaign manager, which shows how much luck plays in everything. Makes me wonder if he should have done this 4 or 8 years ago. Of course at that point he'd have to fight Barack Obama, who has a lot more charisma than Hillary, or defend the Republican brand after W's disastrous mishandling.

Seriously, this will be his undoing. The NYT hates any Republican president, most of all one who isn't anti-white. Beware, God Emperor--Horus is near.

Well, he did split the NYT off from the other news orgs, correct?

Whether true or not it leaked that he lit up the other news guys.

Divide and conquer? Isolate the NYT from the others, toy with them, crush Carlos Slim?

Replies: @SFG

I think Trump's Achilles heel is he has an enormous sentimental attachment to his hometown. (A very conservative flaw, of course... ashes of our fathers and temples of our gods and all that.) He put pictures of the skyscrapers behind him in ads and actually defended the city to Ted Cruz. The tall buildings and fancy restaurants fit his Louis-XIV ego. He wants to be the king, a properly gold-foiled, regal king. We'll see what happens. He's surprised me before.

Not sure the shift back to well-rounded players from the galoots of the moneyball era discredits it. The point at the time was that OBP was seriously undervalued at the time. The A’s, Red Sox and Yankees all put together juggernaut offenses focusing on obp and power and eliminating small ball tactics. Then, predictably, the market for OBP reached an equilibrium and speedy defenders became the value play.

The other factor to consider is that the run environment changes over time. Whether it's balls, steroids or the changing strike zone, particular skills will rise or fall in value. Bunting makes a lot more sense when teams are averaging three runs per game than when it's five runs per game. Bunts also make a lot more sense when strikeouts are low, as balls in play have greater incremental value. Pitcher value also fluctuates according to the environment. High strikeout pitchers are always high value, but are especially important when the defense behind them is shaky. Then home run hitters become especially valuable when defense is strong...

It’s not really about “right” and “wrong,” but about staying ahead of the value curve.

Replies: @Steve Sailer

Yup. As I put it above, a good strategy for teams might not be Moneyball, but simply Bizarro-ball: whatever the rich teams are focusing on getting/developing, focus on the opposite, hoard it, and then use your hoard to trade for the few pieces you need to compete for a title. Lather, rinse, repeat every 5 years.

Also, I'm really sick of people harping on the A's and Moneyball; it's such a joke. Beane's teams haven't won squat, and were consistently second division for far too long.

Instead, people should have been focusing on another very small-market team that had huge success: the Tampa Bay Devil Rays. Locked in a division with two of the biggest-spending teams (Yankees and Red Sox) and with practically no fan base (i.e. revenue), they put together a powerhouse farm system that won them the division and took them to the World Series in the teeth of the Yankees and Red Sox engaging in bidding wars for every free agent on the market.

The fact that people have been lauding the win-nothing A's for years and ignoring the Devil Rays is like someone lauding Tom from MySpace while ignoring Facebook.

Decimal error– IQ of 150 or greater for (100,15) is .4 per 1000. For instance, a typical large high school of 2,000 students won’t have eight of them breaching 150.

4 in 1000 would approximate the expected Ashkenazi ratio, though.
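The ".4 per 1000" figure can be checked directly from the normal tail (a quick sketch; it assumes a clean N(100, 15) distribution with no ceiling effects):

```python
from math import erfc, sqrt

# Upper-tail probability of N(100, 15) at 150: z = 50/15, about 3.33 SDs.
z = (150 - 100) / 15
p = 0.5 * erfc(z / sqrt(2))  # standard normal upper tail, Q(z)
print(f"{p * 1000:.2f} per 1000")  # about 0.43 per 1000, roughly 1 in 2300
```

So the corrected rate of roughly 0.4 per 1000 is right, and a 2,000-student school would indeed expect about one such student, not eight.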

Divide and conquer.

The Times has just as much contempt for lesser, corporate media as it does for regular Americans. No reason a New Yorker like Trump can’t put that contempt to work.

And outside of the obsolete political prejudices of the class it writes for, the Times is still an outstanding paper.

The other factor to consider is that the run environment changes over time. Whether it's balls, steroids or the changing strike zone, particular skills will rise or fall in value. Bunting makes a lot more sense when teams are averaging three runs per game than when it's five runs per game. Bunts also make a lot more sense when strikeouts are low, as balls in play have greater incremental value. Pitcher value also fluctuates according to the environment. High strikeout pitchers are always high value, but are especially important when the defense behind them is shaky. Then home run hitters become especially valuable when defense is strong...

It's not really about "right" and "wrong," but about staying ahead of the value curve.

How much was Moneyball an intellectual facade for covering up that the success of the 2002 A’s was driven by a roided up shortstop MVP driving in 131 runs?

Replies: @Desiderius

Surely Moneyball had something to do with that.

The PED issue shouldn't be ignored -- use was widespread and it's reasonable to think that some teams were more likely than others to look the other way -- LaRussa's A's and Cards, Theo's Red Sox, late 90s Astros, Bonds era Giants all come to mind. Likely too that plenty of less successful teams also looked the other way with less memorable results.

He wound up getting a good future presidential campaign manager, which shows how much luck plays in everything. Makes me wonder if he should have done this 4 or 8 years ago. Of course at that point he'd have to fight Barack Obama, who has a lot more charisma than Hillary, or defend the Republican brand after W's disastrous mishandling.

Seriously, this will be his undoing. The NYT hates any Republican president, most of all one who isn't anti-white. Beware, God Emperor--Horus is near.

But there are things it hates more. And Trump isn't your typical R president.

Let’s see if Trump can leverage that.

Those 131 RBIs required runners on base to drive in (ask Joey Votto about that).

Surely Moneyball had something to do with that.

An example from the 2016 MLB season:

Detroit Tiger Justin Upton.

Literally couldn’t hit anything until late August. Absolutely terrible in left field. The biggest free-agent bust of 2016 – six years for $133 million.

And then?

Between August 22 and the end of the season, Upton hit 18 home runs in seven weeks to finish with a career-high 31.

His left-field play also went from unwatchable to competent.

His batting average went from .190’s to finish at .245.

Seven weeks is not a season. But it happened, and now his value is high enough to be tradeable again. Some other team might take on his contract because he improved so dramatically and finished so strongly.

Replies: @Steve Sailer

I Want You!: The Evolution of the All-Volunteer Force

By Bernard D. Rostker, K. C. Yeh

p. 382 onward

With Flynn, like with Columbus, it stayed discovered, however.

Steve

Do you have a good link to the Stripes era/incident? Nothing comes up quickly in my searches.

Replies: @Whoever

OT:

Facebook has developed censorship software in an effort to get China to lift its seven-year ban on the world’s largest social network, according to reports.

Another confirmation of the convergence of interests between the Chinese regime and US sphere media and political elites. I’m sure this kind of software will go a long way to giving Zuckerberg a smoother ride in all kinds of court cases, tax investigations and other harassments in a number of US sphere countries.

If smart people are leaving a buncha money on the table, there’s usually a reason for it; social opprobrium more likely than Kahneman-style heuristics and biases.

I was thinking about this in the case of Michael Tracey. A year ago, he was a little-known freelance journalist, now he’s famous for the simple reason of being a reasonably respectable liberal-leaning guy who bothered to cover the Wikileaks and other Clinton/DNC-damning stories in a way that made use of the ample available evidence rather than insisting everything was a nothingburger. The retaliation of other journalists would have been more stinging if Hillary had won, but it’s still remarkable how many well known people now hate his guts for the simple reason that he took the money that was on the table and everyone else had agreed to pretend wasn’t there, the reverse of the Emperor’s New Clothes.

Replies: @Dieter Kief

"the reverse of the emperor's new clothes"

I know a truck driver who once was in the same situation. He had a truck full of kitchen stuff (roughly eight tons), and when he started to unload it at the customer, he was told that the delivery of this stuff had already taken place.

He drove back to the wholesaler and was informed there that nobody had yet been sent out from there to this very customer.

He thought it over and decided to unload all the pots, knives, scissors, glasses, forks and bowls at home - just about eight tons of them - and store them in his basement.

The only difference is: in this case, nobody ever bothered.

"the reverse of the emperor's new clothes"I know a truck-driver, who once was in the same situation. He had a truck full of kitchen-stuff (roughly eight tons), and when he started to unload it at the customer, he was told, that the delivery of this stuff had already taken place.

He drove back to the wholesaler and was informed there, that nobody had been send out yet from there to this very customer.

He thougth it over and decided to unload all the pots, cutlery, table linen, scissors, glasses, and bowls at home - just about eight tons of them, - and store them in his basement.

The only difference is: In this case, nobody ever bothered.

Could be I'll meet him this weekend. He's an old Rhine-Palatinian now. If I do meet him, I'll ask him what he thinks about Donald Trump.

There are three reasons for the success of the 2002 A’s: Hudson, Zito and Mulder. And all three skipped town to get paid as soon as they hit free agency.

Replies: @Opinionator

Does anybody have an explanation for why Zito fell off a cliff in his post-A's career?

What about Giambi, Tejada, and their closer?

Does anybody have an explanation for why Zito fell off a cliff in his post-A’s career?

Replies: @antipater_1

"Does anybody have an explanation for why Zito fell off a cliff in his post-A's career?"

Yes - Barry Zito was notorious for his laziness and lack of hard work. A physically talented player can survive on his athletic skills for a few seasons, but without doing the hard work that player will quickly decline.

How about sleep deprivation?

Donald Trump: sleep-deprived maniac.

Trump actually used this sentence construction in the NYT interview: "We've had the storms always."

This guy is firing on seven cylinders. Uneven.

Or Trump has just learned to cut closer to the point he wants to get across in speaking.

As a busy 70 year old, he’s learned his time is valuable. He never had a Shakespearean tongue, but found that developing it would have hindered his ability (i.e. made him waste more time) trying to convince many diverse groups of people to hop on board his projects. Instead of learning how to convince anyone via a honey tongue, he learned the art of sizing someone up to see if they were worth talking to and if they would listen to him (hence his mega-fast dismissal of the Ali G clown):

I see the Left is going to go the whole Bush 43/Dan Quayle route and make fun of Trump’s speaking abilities to claim that he’s stupid. That’s a grand plan, I hope they continue it; I never want to interrupt my opponents while they’re making a mistake.

Perhaps the last "great" (i.e. traditionally stereotypical) Republican speaker was St. Reagan. Bush 41 was middling; the Dana Carvey SNL impression captured his style perfectly. Cheney was a good speaker, but the Left painted everything he did in a Sith Emperor light. W. gave really good prepared speeches (his post-9/11 speech was awesome), but off the cuff his style was easy to insult. Quayle's potato gaffe haunted him, but his Murphy Brown speech was quite excellent, if you ever listen to it. And what they did to Palin... just disgusting.

Having watched Trump live at rallies, the man is an excellent speaker: funny, clear, bright, confident, and in total command. He speaks like an old-time union boss to his fellow workers during a rally, which isn't surprising, given his business was in the construction industry. The Left is determined to convince us he's a buffoon and idiot, and they're going to fail, because he's taken fighting with the media to a whole other level, which is awesome to watch, and their classist nature is coming through with each put-down.

Agree: utu

Replies: @Jack Hanson

When I saw him in person he would take a thread, and someone would yell out a theme, and he would start running with that theme before tying it into his original talking point and using it as a springboard into his next talking point.

For example, at Fountain Hills I remember him talking about the Iran deal, and someone yells out "Build the Wall!" and he starts running with it, before talking about Islamic terrorists using the southern border to get into the US, and then seguing into vetting refugees. His entire speech had that tempo.

But the anonymous eeyore contingent around here thinks he's an idiot. Lawl.

I would use "awesome" in the adverbial sense, to modify quite another adjective or two.

I think it is best to use Bayesian analysis to answer the IQ question: Because there is only one observed data point, the variance of the data is 0. The posterior mean equals the prior mean, and the mean IQ of the sample is still 100.

Agree: CK

The other factor to consider is that the run environment changes over time. Whether it's balls, steroids or the changing strike zone, particular skills will rise or fall in value. Bunting makes a lot more sense when teams are averaging three runs per game than when it's five runs per game. Bunts also make a lot more sense when strikeouts are low, as balls in play have greater incremental value. Pitcher value also fluctuates according to the environment. High strikeout pitchers are always high value, but are especially important when the defense behind them is shaky. Then home run hitters become especially valuable when defense is strong...

It's not really about "right" and "wrong," but about staying ahead of the value curve.

Yup. As I put it above, a good strategy for teams might not be Moneyball, but simply Bizarro-ball: whatever the rich teams are focusing on getting/developing, focus on the opposite, hoard it, and then use your hoard to trade for the few pieces you need to compete for a title. Lather, rinse, repeat every 5 years.

Also, I'm really sick of people harping on the A's and Moneyball; it's such a joke. Beane's teams haven't won squat, and were consistently second division for far too long.

Instead, people should have been focusing on another very small-market team that had huge success: the Tampa Bay Devil Rays. Locked in a division with two of the biggest-spending teams (Yankees and Red Sox) and with practically no fan base (i.e. revenue), they put together a powerhouse farm system that won them the division and took them to the World Series in the teeth of the Yankees and Red Sox engaging in bidding wars for every free agent on the market.

The fact that people have been lauding the win-nothing A's for years and ignoring the Devil Rays is like someone lauding Tom from MySpace while ignoring Facebook.

But of course, upon encountering on the first test from a sample of 50 a score with an expected frequency of observation of ~1 in 2000, a Bayesian's first thought would be "hmm, this does not sound like the mean of the true distribution is likely to be 100." So the 101 answer is only correct if one pays attention to the given that the mean "is known to be 100." Seems like another example of Kahneman's propensity to ask gotcha questions where the key to the answer is some minor detail rather than what seems like something worth asking about.

If Tversky and Kahneman give 101 as the answer, then they’re being somewhat lazy (assuming they aren’t just bad at analysis). 101 is only correct if the population is sampled WITH replacement, meaning that each draw has no impact on the mean IQ of the population for the next draw. But that’s a totally inferior way of sampling, since you might be testing subjects multiple times, reducing the power of the experiment. That’s why you sample WITHOUT replacement. Since the subject tested at 150 is not returned to the pool, the mean IQ of the remaining population is reduced. As I said, for population p, the new expectation for the entire experiment is:

(49*(100*p-150) / (p-1) + 150) / 50

which is always less than 101. Notice that for a population of 50, meaning that the entire population is tested, the expected mean is exactly the initial population mean.
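The commenter's expression can be evaluated directly. This is a sketch of the stated without-replacement setup: a population of size p with mean 100, one member known to score 150, and 49 more drawn from the rest:

```python
def expected_sample_mean(p):
    # The p-1 remaining members average (100*p - 150)/(p - 1); 49 of
    # them join the known 150 score to fill out the 50-person sample.
    return (49 * (100 * p - 150) / (p - 1) + 150) / 50

print(expected_sample_mean(50))               # 100.0: whole population tested
print(round(expected_sample_mean(1000), 2))   # 100.95
print(round(expected_sample_mean(10**6), 3))  # approaches 101 as p grows
```

For any realistic city-sized p the result is indistinguishable from 101, which is why the with-replacement shortcut is usually treated as harmless.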

Replies: @hoots

Notice that for a population of 50, meaning that the entire population is tested, the expected mean is exactly the initial population mean [100].

And for a not-even-very-large original sample space of 1000, the answer is 100.95.

Correcting for sampling with replacement is spurious precision under every real circumstance where one has enough data to have a decent hope of getting even p less than 0.05 for effect sizes of the magnitude seen in psychology. It's also spurious precision even in this ideal case, because the population size is presumed to be much, much larger than the sample size.

The belief that each person "has an IQ" is wrong anyway -- the likelihood that the person who scored 150 in the sample also scored that high in the norming sample is going to be fairly low, perhaps 1 in 3, even without ceiling effects. Likely a retest would be in the low-to-mid 140s (depending on the test-retest correlation, which would likely be 0.9 - 0.95). Not only are there uncertainties, but even the magnitude of the uncertainties is uncertain.

Or, if you flip a coin 10 times and the first two come up heads, you would expect a total of 6 heads and 4 tails. But life is loaded a lot more often than coins are.

I don’t think you can answer the mathematical question at all without knowing the ethnic composition of the random sample and how it was randomized, and whether the kid with the 150 IQ had done some prep work on the type of questions used.

Three kids with an IQ of 85 from the other side of the railroad tracks could wipe out the super intelligent kid. The random sample might also include one or more very low IQ mentally retarded children. A sample size of 50 is just too small.

(49*(100*p-150) / (p-1) + 150) / 50

which is always less than 101. Notice that for a population of 50, meaning that the entire population is tested, the expected mean is exactly the initial population mean.

I’ll add that the phrase “a random sample of 50 children” implies sampling WITHOUT replacement, so shame on Tversky and Kahneman if they said the answer was 101.

Replies: @ben tillman

To better grasp that, suppose we recharacterize this exercise as picking a random 50-person sample and then removing the highest score. Shouldn't we expect the mean of the remaining 49 to be below 100?

Why would the remaining ones be conditionally independent?

Because that's implicit in the problem statement.

After removal, you no longer have a random sample.

Yes, you do.

To better grasp that, suppose we recharacterize this exercise as picking a random 50-person sample and then removing the highest score.

But that is NOT the same as the stated problem.

By far and away the most important (if not most difficult) practical thing in basic probability is in stating the problem accurately.

The only information you have is that the remaining 49 samples have an average IQ of 100. By definition.

In fact, the interesting thing about this sort of thing is in analyzing all the assumptions that are implicit in the statement of the problem. Which is what would be of interest in using it as an interviewing device.

Replies: @utu

Or recharacterize the exercise as removing one score that is above the mean. Mean of remaining sample should fall.

Different question. Different answer.

While it’s fair to criticize Lewis for over-simplification and ignoring Zito/Hudson/Mulder, it’s unfair to ignore the low budget offense he put together that was driven by high OBP.

The PED issue shouldn’t be ignored — use was widespread and it’s reasonable to think that some teams were more likely than others to look the other way — LaRussa’s A’s and Cards, Theo’s Red Sox, late 90s Astros, Bonds era Giants all come to mind. Likely too that plenty of less successful teams also looked the other way with less memorable results.

Replies: @Steve Sailer

After you remove the first of the 50, the remaining sample is no longer random.

Sorry to pick on you, but this is exactly the kind of follow-your-intuition approach to probability that is wrongheaded.

The assumptions here are that the IQ scores are independent events, that the original population is large enough, and that the 50 were chosen randomly. If the first X out of 50 have scores of 150+, the remaining (50 - X) still have an expected value of 100.

Now there are other questions that can be posed, such as: given these assumptions, what is the probability that the first one picked, or the first 49, or all 50 have an IQ of 150? And then you might say that there is a such-and-such chance that the sample really was random (which leads to the probability that the null hypothesis of randomness was actually true).

Replies: @hoots

The assumptions here are that the IQ scores are independent events, that the original population is large enough, and that the 50 were chosen randomly. If the first X out of 50 have scores of 150+, the remaining (50 - X) still have an expected value of 100.

No. There is no sample size large enough for your statement to be true. Only if the first tested subject has an IQ of 100 does the expected value for the experiment remain equal to 100. See my previous comments.

Gaylord Perry told his teammates that if they made any errors when he pitched, he would have them taken out of the lineup when he pitched.

Replies: @Marty

My answer is as follows:

As the sample of 50 is somewhat small, I would expect the average for the sample to fall somewhere in a range between (say) 97 and 103. Increasing the sample size to 500 might reduce this range to 99.5 to 100.5.

You need an IQ of 70 to be able to complete an IQ test.

So, using a rule of thumb of 3 intervals for estimating the sd on one side of the distribution, this is a normal distribution with a mean of 100 and an sd of 10.

150 is 5 sd above the mean.

This is an outlier with a probability of 1 in 3.5 million.
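The "1 in 3.5 million" figure does match the standard normal tail at 5 SDs; here is a quick check of that arithmetic (independent of whether sd = 10 is the right model):

```python
from math import erfc, sqrt

# Upper-tail probability 5 standard deviations above the mean of a normal.
p = 0.5 * erfc(5 / sqrt(2))
print(round(1 / p))  # about 3.5 million to one
```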

Ah, Baseball.

One measure for judging a team is its defensive skill at CRUCIAL positions. And cruciality is determined by how often the position comes into contact with the ball. Therefore, the Pitcher is defensively most important, followed by the Catcher, and then -- wait and consider -- the First Baseman in third. Following in descending order would be the Shortstop, the other two infield positions, and the outfielders. I don't have the stats on Third Base vs. Second Base, nor on the outfielders, but probably the Second Base and Center Field positions prevail.

The #3 First Baseman is obvious once you look into it. And that is where traditionally so many clumsy fielding but power hitters were stationed. But I remember Keith Hernandez taking over a game with his play when the opposition had a man on second with no outs. His defensive and attacking skills would force the opposing manager to forego the obvious sacrifice bunt.

Replies: @Steve Sailer

Steve Garvey had been a bad third baseman because he had a terrible arm, but he was a valuable defensive first baseman because he almost never failed to scoop up a throw in the dirt. Cey, Russell, and Lopes were told to aim low and Garvey would scoop it out. This vacuum-cleaner knack of Garvey's solidified the longest-running infield in history: eight years, about twice any other foursome.

The other factor to consider is that the run environment changes over time. Whether it's balls, steroids or the changing strike zone, particular skills will rise or fall in value. Bunting makes a lot more sense when teams are averaging three runs per game than when it's five runs per game. Bunts also make a lot more sense when strikeouts are low, as balls in play have greater incremental value. Pitcher value also fluctuates according to the environment. High strikeout pitchers are always high value, but are especially important when the defense behind them is shaky. Then home run hitters become especially valuable when defense is strong...

It's not really about "right" and "wrong," but about staying ahead of the value curve.

As for Moneyball, one should total up the World Series wins for the A’s since Billy Beane came on board….

Replies: @Brutusale
One of the peculiar things about probability is how sensitive it is to exact conceptual framing of a problem, and this question about the estimated IQ of a sample may be a case of it. One really needs to accept that the mean in the population really is 100, that the sample of 50 really is random, and the implicit assumption that the “first one” in that sample has itself been selected from the sample at random–or, equivalently, that the numbering of the sample is independent of the IQs. If, for example, the “first one” came to be picked out because it was, say, the highest value in the sample, the calculation of the mean of the 50 would be different.
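The dependence on how the "first one" gets singled out shows up in a quick sketch (in Python, assuming IQ is distributed Normal(100, 15) as in the standard scale; the sample size of 50 is from the puzzle). For a genuinely random child, 150 is a roughly 1-in-2300 event; but if the "first" score you hear about is actually the best one in the sample, a number near 150 is far less remarkable:

```python
import random
import statistics

random.seed(1)

def draw_sample(n=50, mean=100.0, sd=15.0):
    """One classroom sample, assuming IQ ~ Normal(100, 15)."""
    return [random.gauss(mean, sd) for _ in range(n)]

# The expected maximum of 50 draws is already in the low 130s, so a
# "first reported" score of 150 is only mildly lucky if the reporting
# rule favors the top scorer -- versus ~1 in 2300 for a random child.
avg_max = statistics.mean(max(draw_sample()) for _ in range(20_000))
print(round(avg_max, 1))  # about 133-134
```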

This peculiarity of probability models has really dogged probability theory from the beginning, I think, rendering its development quite a bit slower than it would have been otherwise.

One example of this very subtle sort of conceptual problem became evident when Marilyn vos Savant raised the Monty Hall Problem:

https://en.wikipedia.org/wiki/Monty_Hall_problem

A number of experts in statistics and mathematics at first criticized her solution as incorrect, though in time it was granted that her solution was in fact correct.

I don’t know of another area of mathematics or hard science that seems to display the same level of epistemological confusion over what seems like the quite basic conceptual framework appropriate for a given real problem.

Agree: Abe
Replies: @EdwardM
They teach you in statistics class that a statistic -- such as the observed difference in sample selection rates for a given job between whites and blacks -- is only really useful in drawing conclusions if it's based on a hypothesis (and good sample design). I.e., you make a hypothesis that you have reason to suspect, and then look at data to try to refute it. In contrast, disparate impact litigation usually just mines the data, absent any real a priori hypothesis, and looks for patterns to emerge. This leads to false positives.

This phenomenon is related to the Monty Hall problem, in which the key dynamic that leads to the correct conclusion is an observed behavior, namely, that Monty Hall's offer of a door is not completely random, but rather follows a defined pattern as discussed in the Wikipedia article.

The simplest way I can think to conceptualize the problem is that the key is understanding that Monty Hall has to operate under certain rules, so that if you understand the rules, his action in revealing one of the two doors containing the goat does reveal information about one of the remaining doors, the door that the contestant might switch to.

Monty Hall has to open a door containing a goat, but can never open the door originally selected by the contestant, whether or not it contains a goat.

He does not reveal anything additional about the probability that the door first selected by the contestant contains the prize, which remains one third. He can never open that door regardless. But the chance that the door he doesn't open, out of the set of two he can select from, contains the prize goes from one third (half of two thirds) to two thirds (all of two thirds), the set of two the contestant doesn't originally select being different from the set of three the contestant could originally select from.
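The rules described above can be checked with a short simulation (a sketch, assuming the standard version of the game: Monty always opens a goat door and never the contestant's door):

```python
import random

random.seed(0)

def play(switch, trials=100_000):
    """Win rate of staying vs. switching under the standard Monty rules."""
    wins = 0
    for _ in range(trials):
        prize = random.randrange(3)
        pick = random.randrange(3)
        # Monty opens a goat door that is neither the pick nor the prize.
        opened = random.choice(
            [d for d in (0, 1, 2) if d != pick and d != prize])
        if switch:
            pick = next(d for d in (0, 1, 2) if d not in (pick, opened))
        wins += (pick == prize)
    return wins / trials

stay, swap = play(switch=False), play(switch=True)
print(round(stay, 2), round(swap, 2))  # about 0.33 and 0.67
```

Staying keeps the original one-third; switching collects the two-thirds, just as vos Savant said.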

Coming up with genuinely novel models of the world -- such as was required by, say, Galileo or Newton, or by Darwin, or by the founders of quantitative genetics -- seems to require a very special kind of mind, not a "mere" mathematician.

Probability is a measure of ignorance.

Students from grade school through college are taught probability badly and are misguided from the beginning. They think a statement of probability is a statement about the world--like the probability that a die comes up 5 being 1/6. They think this 1/6 is some truth about our physical universe.

But in fact, the statement that the probability a 6-sided die comes up 5 is 1/6 really means: "Given that we have no knowledge of the physical system, we have no reason to expect one outcome more than any other, so all outcomes are equally likely to occur. There are six possible outcomes, so a given one (in this case 5) comes up in 1 of those 6. We call that 1/6."

All of that is predicated on our ignorance of the physical system. There's nothing inherently random about flipping a coin. There CAN'T BE, it's a completely Newtonian physical system. Specify the inputs (position and momentum of the coin), and the forces you put on it, and the outputs are completely determined. BUT, in practice, most of us don't carefully control the inputs to the flipped coin, and small perturbations affect the outcome. If we KNOW how to toss a coin perfectly, then that "FAIR COIN" can still always come out heads.

That fundamental misunderstanding is why even profs who should know better get confused about whether an already-tossed coin has a probability of 1/2 of being heads ("but it already happened! the outcome is fixed! how can the probability not be H or T???" "Because probability is a statement of OUR knowledge of the system--we knew nothing about the tossing, so we have no preference for H or T. Probability is 1/2.")

That's why economists (like those at GMU) using Bayesian reasoning make Really Stupid mistakes: they think these prior probabilities EXIST in the world like a platonic solid, when in fact they are a function of our ignorance.

It's also why people get confused by seemingly absurd statements like "DJT has a 25% chance of winning the election"--that's really a statement measuring the ignorance someone has about the outcome. Silver's knowledge, say, of the electoral system gave him the known number of electoral votes on the DJT side and on the HRC side, and the ones they didn't know (X county in PA, Y county in NC) could be totalled up to account for some number of scenarios... and about a quarter of those scenarios had DJT winning.

A statement about sample means is, again, a statement of ignorance. The whole point of the mean is that you DON'T know the underlying reasons for the thing you're measuring--if you did, you wouldn't need to sample! You'd predict it!

The idea that prior probabilities --like the mean IQ of all 8th graders is "known to be" 100--are real is absurd. Again, the probability is a statement of ignorance. To claim you KNOW the mean to be something is absurd, because the expected value is the expectation GIVEN YOUR ignorance.

Expected values ("mean" is another word for it here) can be determined because you actually HAVE a process you're measuring (i.e. you gained knowledge and decreased your ignorance), or because you actually sampled a population. And the law of large numbers tells you how far your sample value is likely to be off from the whole-population value (not very, overall).
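That "not very, overall" can be made concrete with a quick sketch (assuming, as in Kahneman's setup, IQ ~ Normal(100, 15) and samples of 50):

```python
import random
import statistics

random.seed(2)

# Spread of the sample mean across many samples of 50: the standard
# error is sd/sqrt(n) = 15/sqrt(50), about 2.12 IQ points, so a sample
# mean more than 4-5 points from the population mean is rare.
means = [statistics.fmean(random.gauss(100, 15) for _ in range(50))
         for _ in range(10_000)]
spread = statistics.stdev(means)
print(round(spread, 2))  # about 2.1
```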

A good way to deal with the 150 IQ as first sample problem is with log likelihood. I may post that later tonight. It might be elucidating...

People really need to calm down. It’s two weeks since the election and two months before he’s sworn in, yet these silly analogies are made, as if he’s thrown his victory away.

Trump is the most unconventional person elected president. There are no normal or typical comparisons to make–he will surprise and disappoint as no one before him. But projecting personal prejudices about the NYT onto Trump is a wasted effort. The NYT needs access to Trump more than Trump needs the NYT. Assume Trump knows this. Four years is a long time.

Replies: @Jack Hanson
Oh no. Time to climb back into the gimp suit because *insert this week's non-issue here*.

Good point.

To sharpen the point, change the question a little to “The first FORTY NINE children tested have an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?”

This rewording makes it clearer that people are inherently Bayesian and much more subtly rational than Kahneman assumes. When he says “the population of eighth-graders in a city is known to be 100,” he really means that you have a strong prior that the mean is 100 because humans don’t really “know” anything with utter certainty. But a strong prior is not absolute certainty. So, real humans will begin to question their strong priors if presented with enough contrary evidence because real humans are aware that we never really “know” anything with certainty.

Faced with 49 observations that undermine the assumption that the mean is 100, a rational person should question the assumption that the mean is 100. Indeed, it would be irrational to not question the assumption.

So, as you have pointed out repeatedly, Kahneman’s little puzzles assume people are non-human machines endowed with god-like information, and when he claims to find that people are “irrational,” he is just showing that he is willfully misunderstanding how subtly rational people actually are in the real world which is filled with uncertainty.
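One way to make "questioning the prior" concrete is a normal-normal Bayesian update (a sketch; the prior sd of 2 is an arbitrary stand-in for a "strong" prior, and the observation sd of 15 is the usual IQ-scale assumption):

```python
def posterior_mean(prior_mean, prior_sd, obs_mean, n_obs, obs_sd=15.0):
    """Normal-normal conjugate update: how a Bayesian would revise the
    city's mean IQ. Assumes a known observation sd (here 15)."""
    w_prior = 1.0 / prior_sd ** 2   # precision of the prior belief
    w_data = n_obs / obs_sd ** 2    # precision contributed by the data
    return (w_prior * prior_mean + w_data * obs_mean) / (w_prior + w_data)

# One child at 150 barely moves a strong prior centered at 100...
print(round(posterior_mean(100, 2.0, 150, 1), 1))   # -> 100.9
# ...but forty-nine of them drag the estimate far off 100.
print(round(posterior_mean(100, 2.0, 150, 49), 1))  # -> 123.3
```

Exactly the point: a rational agent with anything short of an infinitely strong prior starts abandoning "the mean is known to be 100" as the contrary evidence piles up.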

Replies: @JeremiahJohnbalaya
To sharpen the point, change the question a little to "The first FORTY NINE children tested have an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?"

This is a very different question. It's not even close to the original; the answer would be expected to go up.

But, then I read some other comments of yours and realized that you are likely mathematically illiterate. Or maybe some kind of contrarian.

For instance, Kahneman doesn’t say whether ... blahblahblah ... Kahneman is being hypocritical. He wants us to accept with mathematical literalness a highly stylized world where we can “know” with certainty the population mean IQ, but, when he leaves out details, he wants us to casually fill in the blanks based on our human understanding of how other humans speak when speaking casually.

No, he just wants to know if you know how to correctly reason about probability from a basic set of assumptions.

You could read up on the aforementioned Monty Hall Problem, or the related Bertrand Coin Box problem, to try to get an idea of how badly your intuition will lead you astray if you don't establish the exact problem and reason from those assumptions.

That baseball is a “head” game is what (partly) convinced Billy Beane of the unsuitability of the “5-tool” model of scouting, of which he was a star product.

Billy Beane tells the story of teammate Lenny Dykstra (who rose with Beane in the Mets minor league system) who just outright competed–not caring who he was facing. Meanwhile, Beane over-thought every situation, and intimidated himself into mediocrity. After batting practice prior to a game against the Phillies, Dykstra asked Beane who were they facing on the mound. When Beane answered, “Lefty, Steve Carlton,” Dykstra asked, “What’s he throw?” Beane was agog that Dykstra appeared unfamiliar with and unflustered by the most dominating left handed pitcher then in baseball. Beane knew everything about Carlton. Dykstra went to the plate to collect hits.

When it came to the “head” game, it was the 5-tool star Beane who didn’t excel.

Replies: @whorefinder
But the Beane/Dykstra situation reminds me of the old tale about Jimmy Stewart, later in life, on a small private plane that was hitting turbulence in a big storm. The pilot was scared, but Stewart was terrified beyond belief, and with good reason: Stewart was a legitimate war hero as a pilot, flying many missions into enemy territory, winning medals, and he wound up retiring as a Brigadier General. So while the pilot (who wasn't that good) knew half of the things that could possibly go wrong, Stewart knew everything that could go wrong.

In short, ignorance is bliss.

Who has the correct answer?

In reality, the correct answer requires information about the strength of the prior probability that the mean is 100. A rational Bayesian human would update the prior that the mean is 100 based on the new information; the weaker the prior, the larger the change in the updated estimate of the mean.

I assume the answer Kahneman wants is that the expected mean of the sample is now 150(.02) + 100(.98) = 101, since he assumes we “know” with certainty that the expected mean of the remaining 49 draws is 100. But this is flawed because for actual humans, absolute faith that we “know” the population mean is irrational.

To be picky, even Kahneman’s answer of 101 is wrong (as Hoots points out). We are sampling without replacement, so even if the population mean is 100, we have removed one student with 150, so the mean of the remaining sample is expected to be LESS than 100. How much less depends on the population size relative to the sample size. In the extreme, if the population is only 50, then we expect the mean of the entire sample of 50 (including the first student with 150) to be 100, not 101.
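The without-replacement correction described above is easy to compute exactly (a sketch; the population size is the free parameter):

```python
def expected_sample_mean(pop_size, pop_mean=100.0, first=150.0, n=50):
    """Expected mean of a sample of n, given that the first draw scored
    `first`, sampling WITHOUT replacement from a finite population
    whose overall mean is pop_mean."""
    # Once the 150 is removed, the remaining population averages
    # slightly below 100, and the other n-1 draws inherit that mean.
    rest = (pop_mean * pop_size - first) / (pop_size - 1)
    return (first + (n - 1) * rest) / n

print(round(expected_sample_mean(10_000), 3))  # -> 100.995 (close to Kahneman's 101)
print(round(expected_sample_mean(50), 3))      # -> 100.0 (the sample IS the city)
```

For a big city the correction is negligible and Kahneman's 101 is essentially right; in the degenerate case where the city has only 50 eighth-graders, the expected sample mean is exactly 100, as the comment says.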

Replies: @ben tillman
Sure, you have to finish by multiplying that average by 49, adding 150, and then dividing that sum by 50, but Kahneman is testing our ability to determine the average of the other 49 in the sample.

Whether true or not, it leaked that he lit up the other news guys.

Divide and conquer? Isolate the NYT from the others, toy with them, crush Carlos Slim?

I wish. But if I was going to pick a news org to court, the NYT wouldn’t be my first pick. They’re the most left-elitist-bubbly of the bunch, except maybe WaPo.

I think Trump’s Achilles heel is he has an enormous sentimental attachment to his hometown. (A very conservative flaw, of course…ashes of our fathers and temples of our gods and all that.) He put pictures of the skyscrapers behind him in ads and actually defended the city to Ted Cruz. The tall buildings and fancy restaurants fit his Louis-XIV ego. He wants to be the king, a properly gold-foiled, regal king. We’ll see what happens. He’s surprised me before.

Agree: Je Suis Charlie Martel
Replies: @Je Suis Charlie Martel
This peculiarity of probability models has really dogged probability theory from the beginning, I think, rendering its development quite a bit slower than it would have been otherwise.

One example of this very subtle sort of conceptual problem became evident when Marilyn vos Savant raised the Monty Hall Problem:

https://en.wikipedia.org/wiki/Monty_Hall_problem

A number of experts in statistics and mathematics at first criticized her solution as incorrect, though in time it was granted that her solution was in fact correct.

I don't know of another area of mathematics or hard science that seems to display the same level of epistemological confusion over what seems like the quite basic conceptual framework appropriate for a given real problem.

Agreed, and this extends to a major epistemological flaw surrounding probabilities that Steve writes frequently about: disparate impact.

They teach you in statistics class that a statistic — such as the observed difference in sample selection rates for a given job between whites and blacks — is only really useful in drawing conclusions if it's based on a hypothesis (and good sample design). I.e., you make a hypothesis that you have reason to suspect, and then look at data to try to refute it. In contrast, disparate impact litigation usually just mines the data, absent any real a priori hypothesis, and looks for patterns to emerge. This leads to false positives.

This phenomenon is related to the Monty Hall problem, in which the key dynamic that leads to the correct conclusion is an observed behavior, namely, that Monty Hall's offer of a door is not completely random, but rather follows a defined pattern as discussed in the Wikipedia article.

The Koufax/Drysdale Dodgers didn’t pioneer the 300-inning ace pitcher, not by a long shot. Granted, from 1957 to 1961 no major league pitcher went 300 innings (Drysdale ended that streak with 314 in ’62), but before that there was at least one 300-inning pitcher almost every year, and usually more. Just to name one example, Robin Roberts had 6 such seasons, more than either Drysdale (4) or Koufax (3). And of course, in the dead ball era 400 innings wasn’t unusual.

What ended the era of the 300-inning pitcher was the switch from 4- to 5-man rotations. I believe the Dodgers were among the first teams to make that change.

101. But grabbing a 150 IQ in a group of only 50 is pretty good luck (well north of 1 in 1000, I'd guess).
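The "1 in 1000" guess can be pinned down (a sketch assuming IQ ~ Normal(100, 15); no stats library needed, just the error function). The answer depends on the framing: ~1 in 2300 for the first child specifically, but only ~1 in 47 for anyone in the sample hitting 150:

```python
from math import erf, sqrt

def upper_tail(z):
    """P(Z >= z) for a standard normal, via the error function."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

# A 150 is (150 - 100) / 15 = 3.33 standard deviations up.
p_one = upper_tail((150 - 100) / 15)   # a specific child scores >= 150
p_any = 1 - (1 - p_one) ** 50          # at least one of the 50 does
print(round(1 / p_one))   # about 1 in 2300 -- "the FIRST child" hitting 150
print(round(1 / p_any))   # about 1 in 47  -- merely SOMEONE in 50 hitting it
```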

Wildly OT: anyone else wondering if the Kardashian Klan is having money troubles, or maybe a blackmailer (though I’m having trouble imagining what could shame them into paying up)? First Kim has her own jewelry stolen so she can collect the insurance money and sell the jewels on the sly, now Kanye rants about his support for Trump before “having a nervous breakdown” so he can collect the insurance money?

Replies: @Lot
"Nervous breakdown" was not the explanation, but "exhaustion," which here probably means prescription drug addiction.

It would be mine, they’ll be the Judas goat.

I was actually surprised in business by how many great plans relied on data from, or implementation by, unknown outsourced entities... an intern, an email marketing company that had a 22-year-old product manager, etc. ... and execution ended up bungled.

Good point.

The correct answer is that we don’t have enough information to answer the question because he has left out a lot of important information about how the sample was collected, and he is asking us to infer what he means using our knowledge of how imperfect humans actually talk and think.

For instance, Kahneman doesn’t say whether the sample was collected from among all the students in the city, or from some subset of students in the city (he only says it was random). Indeed, he doesn’t even say the students are from the same city as the city with a mean IQ of 100. He doesn’t even say the sample is from the same country or even the same planet.

Kahneman is being hypocritical. He wants us to accept with mathematical literalness a highly stylized world where we can “know” with certainty the population mean IQ, but, when he leaves out details, he wants us to casually fill in the blanks based on our human understanding of how other humans speak when speaking casually.

Agree: Coemgen
I know a first-class irrational thinker and drawer-of-conclusions: Jill Stein. It’s a boneheaded thing to try to get a recount hoping it goes in favor of one of your opponents, since she’s obviously trying to win it for Hillary. If Stein wanted Hillary to win, why did Stein choose to run for the presidency in the first place?

Oh, how a fat, crazy ego blinds a person.

Stein’s current Michigan total is 51,000 votes. The vote gap there between Trump and Clinton is only 10,700 votes. It’s pretty obvious that Stein flipped the state to Trump, since most of her votes would have gone to the other leftist female on the ballot if Stein hadn’t chosen to run. In Wisconsin, The Trump-Clinton gap is 27,000 votes, and Stein got 31,000 there. She may have cost Clinton the state of Wisconsin, too.

But the most recent totals have Trump ahead 70,000 votes in Pennsylvania. There’s no way the Democratic machine will be able to manufacture enough votes to win the state on a recount. The gap is too big, and I’m sure the Democratic machine was cheating with all their might to manufacture votes for Clinton during the election anyway.

To flip the electoral college to Clinton, Trump would have to lose all three states. If he wins only one of them on a recount, even the state with the lowest number of electoral votes, Wisconsin, Trump’s still got 270 votes, and he wins the presidency.

Still, I’m pleased to see math-impaired, irrational and indignant Democrats spending their money on a recount instead of using that cash to pay their mortgages/rent, medical bills, kids’ college education, or building up a nest egg to tide themselves over hard times. Letting your fury spend your cash is always a bad idea and helps you on your way to winning a Darwin Award, but Democrats are ruled by emotions, not reason.

Replies: @map
That is either a bot or a lot of foreign donors.

The Election was Stolen – Here’s How…

http://www.gregpalast.com/election-stolen-heres/

Why Jill Stein is going along with it is a really good question. She already collected more for the recount in 24 hours than she had for her own election.

Jill Stein, Working for NEOCON Team Hillary, is Pushing for Recount in 3 Key States to Make Killary Our President (or to kick off civil war)

https://willyloman.wordpress.com/2016/11/24/jill-stein-working-for-neocon-team-hillary-is-pushing-for-recount-in-3-key-states-to-make-killary-our-president-or-to-kick-off-civil-war/

This peculiarity of probability models has really dogged probability theory from the beginning, I think, rendering its development quite a bit slower than it would have been otherwise.

One example of this very subtle sort of conceptual problem became evident when Marilyn vos Savant raised the Monty Hall Problem:

https://en.wikipedia.org/wiki/Monty_Hall_problem

A number of experts in statistics and mathematics at first criticized her solution as incorrect, though in time it was granted that her solution was in fact correct.

I don't know of another area of mathematics or hard science that seems to display the same level of epistemological confusion over what seems like the quite basic conceptual framework appropriate for a given real problem.

“This peculiarity of probability models has really dogged probability theory from the beginning, I think”: it certainly dogged the teaching of probability when I was an undergraduate. So much so that many of the problems we were set were really an exercise in “guess what I mean”.

Replies: @candid_observer
I can't even guess how much of the time I spend "learning" something is wasted pursuing approaches to questions that simply miss the point the author is getting at, and which, I'm convinced, would be avoided entirely if either he could be clearer in his exposition or I could ask him a few pointed questions.

You can be sure your random kid with the 150 was not Steve Balboni!

Altho in 1988 his on-base percentage was about 150 (.156).

How about sleep deprivation.

Donald Trump: sleep-deprived maniac.

Trump actually used this sentence construction in the NYT interview: "We've had the storms always."

This guy is firing on seven cylinders. Uneven.

Trump can weather the storms.

Who else can say ‘bigly’ and make it cool?

It makes me giggly.

There is something of the Bill the Butcher about The Donald. He has blood stains all over him, so if he gets soiled a bit, it’s no big deal.

Romney, in contrast, was a starched-clean-white-shirt candidate, so any blemish really stood out.

Some people have the style to mess up and make it seem like it’s no big deal, or even funny.

Others just don’t have this quality.

It’s like the scene in A SEPARATE PEACE.

The more you dig into Kahneman’s question, the more it falls apart.

Kahneman says the average IQ is “known” to be 100, but he doesn’t define what he means by “known”. He wants us to assume “known” means “known with 100% certainty”, but he doesn’t actually say that. To the contrary, in common usage, “known” is usually understood to mean “highly likely” or “highly probable”.

So, until he defines what he means by “known”, we don’t know what “known” means, so we don’t know the answer. At best, we can rationally say that “it is possible the answer is 101, but we don’t really know the correct answer.”

So, Kahneman’s “correct” answer is irrational because it is too certain.

I was thinking about this in the case of Michael Tracey. A year ago, he was a little-known freelance journalist, now he's famous for the simple reason of being a reasonably respectable liberal-leaning guy who bothered to cover the Wikileaks and other Clinton/DNC-damning stories in a way that made use of the ample available evidence rather than insisting everything was a nothingburger. The retaliation of other journalists would have been more stinging if Hillary had won, but it's still remarkable how many well known people now hate his guts for the simple reason that he took the money that was on the table and everyone else had agreed to pretend wasn't there, the reverse of the Emperor's New Clothes.

“the reverse of the emperor’s new clothes”

I know a truck driver who once was in the same situation. He had a truck full of kitchen stuff (roughly eight tons), and when he started to unload it at the customer, he was told that the delivery of this stuff had already taken place.

He drove back to the wholesaler and was informed there that nobody had been sent out yet to this very customer.

He thought it over and decided to unload all the pots, knives, scissors, glasses, forks and bowls at home – just about eight tons of them – and store them in his basement.

The only difference is: In this case, nobody ever bothered.

My Liberal family members are asking me sincerely this Thanksgiving why Fox News is trying to destroy Trump. I answered as follows. There is a brutal three-way war going on in America.

» Liberal view: Start no wars. Allow millions of impoverished people to immigrate.
» NeoCon view: Start many wars. Allow millions of impoverished people to immigrate.
» PaleoCon view: Start no wars. Allow no impoverished people to immigrate.

Fox is the NeoCon network.

I was thinking about this in the case of Michael Tracey. A year ago, he was a little-known freelance journalist, now he's famous for the simple reason of being a reasonably respectable liberal-leaning guy who bothered to cover the Wikileaks and other Clinton/DNC-damning stories in a way that made use of the ample available evidence rather than insisting everything was a nothingburger. The retaliation of other journalists would have been more stinging if Hillary had won, but it's still remarkable how many well known people now hate his guts for the simple reason that he took the money that was on the table and everyone else had agreed to pretend wasn't there, the reverse of the Emperor's New Clothes.

“the reverse of the emperor’s new clothes”

I know a truck driver who once was in the same situation. He had a truck full of kitchen stuff (roughly eight tons), and when he started to unload it at the customer, he was told that the delivery of this stuff had already taken place.

He thought it over and decided to unload all the pots, cutlery, table linen, scissors, glasses, and bowls at home – just about eight tons of them – and store them in his basement.

The only difference is: In this case, nobody ever bothered.

Could be I’ll meet him this weekend. He’s an old Rhine-Palatinian now. If I do meet him, I’ll ask him what he thinks about Donald Trump.

“He has a new book coming out in December about the Israeli psychologists Daniel Kahneman and Amos Tversky, The Undoing Project, who studied why people make bad decisions.”

I don’t know about the authors’ theories, but I think most bad decisions are based on either instinct or ideology, though if ideology takes over a society, it can almost function as a kind of second-nature instinct, an ideolonct or ideonct.

Instinct-driven bad decisions are usually connected to the pleasure principle or strong emotions. We can see this with people who eat too much sugary food even though they know it’s bad for them. Or people who won’t quit smoking even though tobacco is slowly killing them. Too much addictive pleasure. And too many guys go with bad women cuz of sexual attraction, and good girls often go with bad boys for the same reason.

But the reasons could be emotional too. If you have a personal animus toward someone, you might rather do things to ruin him (even if it ruins you too) than do things that are mutually beneficial to both of you. The Revenge Principle is a strong emotion.

Also, as humans are social animals, they instinctively want to be approved and liked. So, even if they think a certain proposal is bad, they may go along just to be part of the tribe.

Also, most humans are followers rather than leaders, would-be employees rather than employers. So, they tend to follow whoever is considered the Best or Top Dog.

The instinct for approval and authority plays into Ideology-driven reasons for bad decisions. Most people follow whatever ideology happens to be dominant.

And all ideologies have their sacred truths and taboos.

Under PC, what is called ‘hate speech’ is just Critical Speech.

Even if you don’t call homos ‘f–s’, you will be called a ‘homophobe’ and hater if you are very critical of homo power and agenda. And even if you don’t wave the Nazi flag, don’t reject the Holocaust narrative, and don’t call Jews ‘k—s’, you will be denounced as a hateful ‘anti-Semite’ if you’re critical of Jewish power and Zionist agenda.

While I support all free speech, I can see why some speech is considered especially ‘hateful’. Some people get a kick out of saying willfully offensive stuff about various peoples. But what is often called ‘hateful’ is merely critical without being mindlessly hostile and deranged. If anything, the most hateful passions are seen among the Progs who claim to oppose ‘hate’.

If only mindlessly hateful speech were banned, it wouldn’t do much harm to the cause of truth. Even though I’m for total freedom of speech, I might concede the world might be a better place if people didn’t say such horrible stuff as you often find on the political fringe.

But when critical speech is banned, there will be much social, moral, and intellectual damage since everyone has to tip-toe and skirt around the obvious and true, the very facts that are essential for better personal decisions and social policy.

And who can deny that PC bans a whole host of critical speech regarding certain favored groups such as Jews, homos, and blacks? This ban on critical speech once almost destroyed Daniel Patrick Moynihan for stating the obvious about welfare and black families. And BLM is the result of such corruption. Since we can’t be honestly critical of black pathologies and realities, we have this surreal situation where we have to treat black bullies and thugs as the main victims of society.

And Jewish Power, that used to be nearly synonymous with critical speech(against Wasp power), is now so utterly corrupt because it won’t tolerate critical speech about Jews and their influence.

And when ideology is drummed into kids from a young age, a kind of epi-instinct or epinstinct develops within them. A lot of kids raised on PC have an almost knee-jerk quasi-instinctive reaction to any fact or truth that pricks their precious PC bubble. They bark like crazed dogs cuz they can’t handle the truth or they leap into the PC pond like frogs in fear of the truth that is seen as bogeyman.

What they call ‘safe spaces’ isn’t for their physical safety. It’s to be protected from intellectual truth and critical thinking. They are like the Boy in the Plastic Mental Bubble. At least Travolta’s character wanted to come out of the bubble. Millennial snowflakes wanna hide.

I think we need to have an understanding among all sides that it is possible to have different positions(based on emotions) but nevertheless acknowledge certain facts and truths.

Positions are not entirely about facts. They are about emotions.

The Zionist position is emotionally pro-Jewish regardless of facts that undermine Zionism.

The Palestinian position is emotionally pro-Palestinians regardless of facts that undermine Palestinianism.

Such positions are about tribal identity, sense of belonging, us and them, etc. While they may be rooted in facts of history — Jews and Arabs have existed over centuries and have developed unique cultures — one’s loyalty to a particular position is often personal and emotion-laden.

People hold those positions rooted in emotions and passion, a sense of belonging and commitment (that not everyone in the tribe shares but many do).

In contrast, facts exist regardless of one’s position. So, even a diehard Zionist should accept the fact that Palestinians were ethnically expelled and still live under Occupation in the West Bank. That is a fact.

And even a diehard Palestinian should acknowledge the fact that Palestinian terrorists have killed Jewish civilians just minding their daily business.

But according to PC, positions determine facts and inconvenient facts must be dismissed. They are ‘hate facts’.

So, since the PC position is ‘blacks are noble’, black thuggery is whitewashed with obfuscating terms like ‘teens’ and ‘youths’.

And since PC says we must regard Jews as victims forever, even what is clearly Jewish power and privilege in places like Hollywood is just referred to as ‘white privilege’.

Some people pay attention to the Alt Right not necessarily for its positions. Some may not agree with, or may even be appalled by, the white identity movement or the Alt Right’s particular political or racial obsessions. But even non-Alt-Rightists sometimes realize that the Alt Right is speaking far more honestly about the BARE FACTS of race, crime, intelligence, problems of diversity, homosexuality, sexual differences, and so on.

It’s like how I used to read The Nation not for its positions but for the facts it dug up about certain dark aspects of capitalism. Since every position tends to prefer facts that back up and justify that position, it’s good to survey rival positions that dig up the facts one’s own position overlooks.

The failure of MSM, especially since the end of the Cold War, has been its failure to expose and discuss the most consequential facts of power in America and the World. PC turns a blind eye to the sheer destructive power of black thuggery. It also turns a blind eye to all the reckless financial and foreign policy of globalism that is largely shaped by Zionists.

Suppose someone is anti-Marxist, but the capitalist news he reads is filled with lies and lies. In contrast, suppose the Marxist news, though loathsome from a positional perspective, is more accurate in the reporting of economic facts. He may then read the Marxist news for the facts if not for its ideology.

This is a valuable asset of Alt Right, and it shouldn’t turn it into its own brand of PC.

Research on how the human mind works indicates that it either became or has always been rigged to view the world in the way that would accumulate the most social capital for the human, not to view reality with any accuracy or objectivity. This happened for complex reasons that are nevertheless fairly obvious once you find out about what has been going on. Of course, the strength of this trait varies across individuals. And no, it’s not necessarily correlated with IQ.

A couple of dessert thoughts after Thanksgiving dinner: #1 – Kaepernick defends Castro, attacks the Carceral State. #2 – Smiling Down syndrome kids banned in France because they might make women who had abortions feel bad.

https://www.washingtonpost.com/news/early-lead/wp/2016/11/24/colin-kaepernick-grilled-by-miami-dolphins-reporter-over-fidel-castro-shirt/

http://www.huffingtonpost.com/entry/c-e-n-s-o-r-e-d-video-dear-future-mom_us_582f8e6fe4b0d28e55214ef6

This just about sums it up:

“The mean I.Q. of the population of eighth-graders in a city is known to be 100. You have selected a random sample of 50 children for a study of educational achievement. The first child tested has an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?”

I must be reading the question differently than the other commentators except for Opinionator. Or maybe I read these things too much like Hymie the Robot.

The answer to the question is stated in the question. The question is “what do you expect the mean IQ to be of the whole population”. Earlier, you are given the information “the mean IQ of the population (of eighth graders in a city) is known to be 100.” So the question itself states earlier that the mean IQ is 100. The mean IQ of the population is, in fact, 100, just as stated.

It’s possible to quibble that the question asks for the mean IQ of “the whole population”, not of eighth graders, but in that case of course you are given no information whatsoever to even guess at the mean IQ of the whole population. It’s reasonable that you are asked to infer the mean IQ of the whole population of eighth graders.

What is not possible is to infer that you are asked for the mean IQ of the whole population of eighth graders, less the one eighth grader whose IQ is 150. “Whole population except for one person” stretches the definition of “whole population” too much. And it’s perfectly possible, of course, to find the answer to the mean IQ for the whole population of eighth graders, which is 100. It’s interesting, at least to me, that so many commentators kept trying to answer a question they made up about the mean IQ of the whole population of eighth graders excluding one eighth grader, instead of, well, the actual question.

And of course, in a population where the mean IQ is 100, it’s perfectly possible and even likely that one particular individual within that population will deviate from the mean, even by fifty points. But it’s surprising how many otherwise intelligent people mentally translate “mean” to mean “all”; it’s almost as bad as the mean/median confusion.

I think Trump's Achilles heel is he has an enormous sentimental attachment to his hometown. (A very conservative flaw, of course...ashes of our fathers and temples of our gods and all that.) He put pictures of the skyscrapers behind him in ads and actually defended the city to Ted Cruz. The tall buildings and fancy restaurants fit his Louis-XIV ego. He wants to be the king, a properly gold-foiled, regal king. We'll see what happens. He's surprised me before.

Yeah, he keeps surprising me!

Gambler’s fallacy

Trump picks Mike Pompeo, another Southern California-raised conservative, to head the CIA:

Here’s an article noting he was first in his class at West Point:

http://articles.latimes.com/1986-05-31/local/me-8260_1_west-point

I don’t know about West Point, but in normal colleges it is basically unheard of for an engineering major to graduate first; usually it is someone in a soft science or the humanities.

Coincidentally, Pompeo’s ex-wife’s father was also president of the CIA … Christians in Action, a Long Island charitable organization.

Pompeo last year shared a stage with Steve King and Geert Wilders at a Defeat Jihad Summit organized by Frank Gaffney.

One measure of judging a team is their defensive skills at CRUCIAL positions. And a cruciality is determined by how often the position comes into contact with the ball. Therefore, the Pitcher is defensively most important, followed by the Catcher, and followed by, wait and consider, the First Baseman position is third. Following in descending order would be the Shortstop, the other two infield members, and the outfielders. I don't have the stats on Third Base vs. Second Base, nor the outfielders, but probably the Second Base and the Center Field positions prevail.

The #3 First Baseman is obvious once you look into it. And that is where traditionally so many clumsy fielding but power hitters were stationed. But I remember Keith Hernandez taking over a game with his play when the opposition had a man on second with no outs. His defensive and attacking skills would force the opposing manager to forego the obvious sacrifice bunt.

Keith Hernandez was a great third baseman who played first base because he was left handed.

Steve Garvey had been a bad third baseman because he had a terrible arm, but he was a valuable defensive first baseman because he almost never failed to scoop up a throw in the dirt. Cey, Russell, and Lopes were told to aim low and Garvey would scoop it out. This vacuum cleaner knack of Garvey’s solidified the longest running infield in history: eight years, about twice any other foursome.

Wildly OT: anyone else wondering if the Kardashian Klan is having money troubles, or maybe a blackmailer (though I'm having trouble imagining what could shame them into paying up)? First Kim has her own jewelry stolen so she can collect the insurance money and sell the jewels on the sly, now Kanye rants about his support for Trump before "having a nervous breakdown" so he can collect the insurance money?

Can you actually insure the full value of jewelry? Seems like a pretty bad business model.

He’s losing a lot of money doing this. Due to piracy and cheap streaming royalties, most big music acts make far more money touring, and when tour dates are cancelled, tickets are refunded.

“Nervous breakdown” was not the explanation, but “exhaustion” which here probably means prescription drug addiction.

The PED issue shouldn't be ignored -- use was widespread and it's reasonable to think that some teams were more likely than others to look the other way -- LaRussa's A's and Cards, Theo's Red Sox, late 90s Astros, Bonds era Giants all come to mind. Likely too that plenty of less successful teams also looked the other way with less memorable results.

I’m not aware of any franchises that did anything to cut down on the number of PED users on their teams. However, it’s pretty clear now that Oakland, under Beane’s mentor Sandy Alderson, was a locus of PED use from the time Jose Canseco arrived in 1986.

https://en.wikipedia.org/wiki/Monty_Hall_problem

A number of experts in statistics and mathematics at first criticized her solution as incorrect, though in time it was granted that her solution was in fact correct.

I don't know of another area of mathematics or hard science that seems to display the same level of epistemological confusion over what seems like the quite basic conceptual framework appropriate for a given real problem.

The Monty Hall problem and its solution really is difficult to conceptualize, and it is to vos Savant’s credit that she came up with that solution.

The simplest way I can think to conceptualize the problem is that the key is understanding that Monty Hall has to operate under certain rules, so that if you understand the rules, his action in revealing one of the two doors containing the goat does reveal information about one of the remaining doors, the door that the contestant might switch to.

Monty Hall has to open a door containing a goat, but can never open the door originally selected by the contestant, whether or not it contains a goat.

He does not reveal anything additional about the probability that the door first selected by the contestant contains the prize, which remains one third; he can never open that door regardless. But the chance that the door he doesn’t open, out of the set of two he can choose from, contains the prize goes from one third (half of two thirds) to two thirds (all of two thirds), the set of two doors the contestant didn’t originally select being different from the set of three the contestant could originally select from.
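
A quick simulation makes the two-thirds figure concrete. This is a Python sketch of my own (the trial count and seed are arbitrary), encoding exactly the rules described above: Monty always opens a goat door other than the contestant's pick.

```python
import random

def monty_hall_trial(switch, rng=random):
    doors = [0, 1, 2]
    prize = rng.choice(doors)
    pick = rng.choice(doors)
    # Monty must open a goat door that is not the contestant's pick.
    opened = rng.choice([d for d in doors if d != pick and d != prize])
    if switch:
        # Switch to the one remaining unopened door.
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == prize

random.seed(0)
trials = 100_000
stay_wins = sum(monty_hall_trial(switch=False) for _ in range(trials)) / trials
switch_wins = sum(monty_hall_trial(switch=True) for _ in range(trials)) / trials
print(f"stay ~ {stay_wins:.3f}, switch ~ {switch_wins:.3f}")
```

Run enough trials and staying wins about a third of the time while switching wins about two thirds, matching vos Savant's solution.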

I took this to be to do with the limits of individual mental capacity (and their inherent stretch-ableness with effort, at least up to a point). I also see no reason to believe that there aren't truths that will be forever beyond any human capacity to stretch far enough to accommodate them. We might be able to build artificial intelligences to grasp them for us, but they won't ever be able to explain most of them to any of us. In effect, they will be able (given means to act) to do magic.

You can only be “sure” if you have the opportunity of evaluating the entire population. In terms of drawing a sample that has one major outlier with an IQ of 150, you have something akin to the uncertainty principle in operation, in that the more certainty you have that there is someone with an IQ of 150 in the overall population, the less certainty you will have in knowing where he or she sits in that population or whether there might be one or more others.

Except when Frank Robinson was managing him.

Detroit Tiger Justin Upton.

Literally couldn't hit anything until late August. Absolutely terrible in left field. The biggest free-agent bust of 2016 - six years for $133 million.

And then?

Between August 22 and the end of the season, Upton hit 18 home runs in seven weeks to finish with a career-high 31.

His left-field play also went from unwatchable to competent.

His batting average went from the .190s to finish at .245.

Seven weeks is not a season. But it happened, and now his value is high enough to be tradeable again. Some other team might take on his contract because he improved so dramatically and finished so strongly.

For the reader who asked about the 1976-1980 misnorming Pentagon disaster, see this book mostly readable on Google:

I Want You!: The Evolution of the All-Volunteer Force

By Bernard D. Rostker, K. C. Yeh

p. 382 onward

The misnormed-ASVAB passage starts on 392 in this edition.

To sharpen the point, change the question a little to "The first FORTY NINE children tested have an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?"

This rewording makes it clearer that people are inherently Bayesian and much more subtly rational than Kahneman assumes. When he says "the population of eighth-graders in a city is known to be 100," he really means that you have a strong prior that the mean is 100 because humans don't really "know" anything with utter certainty. But a strong prior is not absolute certainty. So, real humans will begin to question their strong priors if presented with enough contrary evidence because real humans are aware that we never really "know" anything with certainty.

Faced with 49 observations that undermine the assumption that the mean is 100, a rational person should question the assumption that the mean is 100. Indeed, it would be irrational to not question the assumption.

So, as you have pointed out repeatedly, Kahneman's little puzzles assume people are non-human machines endowed with god-like information, and when he claims to find that people are "irrational," he is just showing that he is willfully misunderstanding how subtly rational people actually are in the real world which is filled with uncertainty.
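
The "strong prior, not absolute certainty" point can be made quantitative with a standard normal-normal Bayesian update. This is a sketch of mine, not anything from the original comment: the prior standard deviation of 2 (how strongly the mean "is known" to be 100) and the test standard deviation of 15 are illustrative assumptions.

```python
# Normal-normal conjugate update: treat "known to be 100" as a strong but
# not infinitely strong prior.  The prior SD of 2 and test SD of 15 are
# illustrative assumptions, not numbers from the question.
prior_mean, prior_sd = 100.0, 2.0
test_sd = 15.0
n, xbar = 49, 150.0                      # the hypothetical 49 scores of 150

prior_prec = 1 / prior_sd**2             # precision = 1 / variance
data_prec = n / test_sd**2
post_mean = (prior_prec * prior_mean + data_prec * xbar) / (prior_prec + data_prec)
print(f"posterior mean: {post_mean:.1f}")
```

Even with a prior that tight, 49 scores of 150 drag the posterior mean well past 120: a rational agent stops "knowing" the mean is 100 long before the 49th test.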

“To sharpen the point, change the question a little to ‘The first FORTY NINE children tested have an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?’”

This is a very different question. It’s not even close to the original. The answer would be expected to go up.

But, then I read some other comments of yours and realized that you are likely mathematically illiterate. Or maybe some kind of contrarian.

For instance, Kahneman doesn’t say whether … blahblahblah … Kahneman is being hypocritical. He wants us to accept with mathematical literalness a highly stylized world where we can “know” with certainty the population mean IQ, but, when he leaves out details, he wants us to casually fill in the blanks based on our human understanding of how other humans speak when speaking casually.

No, he just wants to know if you know how to correctly reason about probability from a basic set of assumptions.

You could read up on the aforementioned Monty Hall Problem, or the related Bertrand Coin Box problem, to try to get an idea of how badly your intuition will lead you astray if you don’t establish the exact problem and reason from those assumptions.

The mean IQ of the remaining 49 should still be “expected” to be 100. That’s like the fact that even if you toss heads with a coin the odds of the next toss being a head are still 50%. Even if you toss 10 heads in a row the next toss is still 50% heads or tails (assuming there is no unknown bias).

The IQ of the first kid has no bearing on the IQs of the remaining 49, except to change the expected standard deviation because of the smaller sample size.
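
The coin-toss independence claim is easy to sanity-check numerically. A minimal Python sketch of mine (the trial count and seed are arbitrary):

```python
import random

# Coin-toss independence: conditioning on the first toss being heads does
# not move the odds for the next toss.
random.seed(1)
trials = 200_000
count = next_heads = 0
for _ in range(trials):
    first_heads = random.random() < 0.5
    if first_heads:                       # condition on the first toss
        count += 1
        next_heads += random.random() < 0.5
print(f"P(next heads | first heads) ~ {next_heads / count:.3f}")
```

Conditioning on the first toss being heads leaves the next toss at about 50 percent, which is the same logic that leaves the remaining 49 kids at an expected mean of 100.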

“This is a very different question. It’s not even close to the original. The answer would be expected to go up.”

Edit didn’t help me here; I was going to say that a correct application of this change to the problem would reinforce the original answer, i.e. the expected average would be even higher at that point.

“If the first child who takes the test scores 3.33 standard deviations above the expected mean, are you quite sure you have a random sample?”

Exactly my thinking. Goes along the lines of nerd Dexter asking dumb Bruce what odds he’d bet on a 21st coin toss after a string of 20 heads in a row with a fair coin. Bruce bets tails, reckoning its time is ripe. Smarmy Dexter explains the gambler’s fallacy and maintains even odds. Vinny breaks in and bets heads: ‘Heads 20 times in a row? Geddaf*ckoudda here – no way is that a fair coin.’
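
Vinny's bet is just Bayes' rule in action. Here is the arithmetic as a sketch, with an assumed one-in-a-thousand prior (my number, purely illustrative) that the coin is rigged to always land heads:

```python
# Vinny's bet as Bayes' rule.  The 1-in-1000 prior that the coin always
# lands heads is an illustrative assumption.
prior_rigged = 0.001
p_20_heads_if_rigged = 1.0
p_20_heads_if_fair = 0.5 ** 20           # roughly one in a million

posterior = (prior_rigged * p_20_heads_if_rigged) / (
    prior_rigged * p_20_heads_if_rigged
    + (1 - prior_rigged) * p_20_heads_if_fair
)
print(f"P(rigged | 20 heads) = {posterior:.4f}")
```

Twenty straight heads from a fair coin has probability about one in a million, so even a tiny prior suspicion of rigging ends up dominating the posterior.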

Trump’s commerce secretary pick Wilbur Ross is a major Magritte collector. Good taste!

Wiki says he owns The Pilgrim:

http://www.renemagritte.org/the-pilgrim.jsp

https://en.wikipedia.org/wiki/Monty_Hall_problem

“the same level of epistemological confusion” – I do not think “epistemological confusion” applies to the theories of probability, statistics and stochastic processes. Everything there is well defined. The problems start in applications. The most sophisticated statistical analyses are not used by physicists or chemists, who are well versed in mathematics. In physics and chemistry one designs experiments with fairly well isolated variables that are controlled, so the process is very deterministic and the main concern of probability is just the errors. The most sophisticated statistical tools are used in medical science, sociology and psychology, i.e., by people who are not known for mathematical fluency. The reason they apply very sophisticated tools is that they want to put some numbers to very random and often under-sampled phenomena, and those who fund their research often demand quantification and numbers.

This is a very different question. It's not even close to the original. The answer would be expected to go up.

But, then I read some other comments of yours and realized that you are likely mathematically illiterate. Or maybe some kind of contrarian.

For instance, Kahneman doesn’t say whether ... blahblahblah ... Kahneman is being hypocritical. He wants us to accept with mathematical literalness a highly stylized world where we can “know” with certainty the population mean IQ, but, when he leaves out details, he wants us to casually fill in the blanks based on our human understanding of how other humans speak when speaking casually.

No, he just wants to know if you know how to correctly reason about probability from a basic set of assumptions.

You could read up on the aforementioned Monty Hall Problem, or the related Bertrand Coin Box problem, to try to get an idea of how badly your intuition will lead you astray if you don't establish the exact problem and reason from those assumptions.

I agree. “mathematically illiterate” with a strong tendency for pontification.

It’s like a Trump rally. Does he know that these are the people that elected him? That Sailer really had very little to do with it? Think what you want, but they have been steadfast while we found our way to them.

Do you have a good link to the Stripes era/incident? Nothing comes up quickly in my searches.

Here ya go: ASVAB Miscalibration. Scroll down to page 70 to begin reading the details of the fiasco.

Why are you getting sidetracked with info that one kid was 150? It is irrelevant for the solution of this problem. Say they posed the problem that the first kid’s IQ=X and asked you to give a formula for the best estimate of the mean of the sample of 50 kids that includes the kid.

Furthermore, you and others here start speculating about probabilities given one kid with IQ=150, and somebody even starts spouting about the IQ of Ashkenazis. This is pointless and irrelevant and, most importantly, wrong. Approaching this problem you do not even know what the distribution of the random variable is. Who says it must be Gaussian? What if the distribution is binary? 50% has IQ=150 and 50% has IQ=50, with a mean of 100. This is a mathematical problem that does not have to have anything to do with any reality that you know. It is a mathematical reality in which you clearly do not seem to feel very comfortable.
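
The distribution-independence point can be checked with the binary example itself. A sketch of mine, assuming sampling with replacement (an effectively infinite population) and arbitrary trial counts:

```python
import random

# Binary population: half the kids score 150, half score 50, so the
# population mean is 100 but nobody actually scores 100.  Conditioning on
# the first sampled kid scoring 150, the expected mean of the 50 should
# still be (150 + 49*100)/50 = 101, whatever the distribution.
random.seed(3)
total, kept = 0.0, 0
for _ in range(100_000):
    sample = [random.choice((50, 150)) for _ in range(50)]
    if sample[0] == 150:                  # condition on the first kid
        total += sum(sample) / 50
        kept += 1
print(f"conditional mean: {total / kept:.2f}")
```

The conditional mean comes out near 101 here too, exactly as it would with a Gaussian: the answer depends only on the population mean, not the shape of the distribution.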

Wait.

If the mean IQ of the whole sample of kids is 100, that means the mean IQ of 49 of the kids is 98.98, or basically 99, and then one kid has an IQ of 150.

If the mean IQ of the whole sample of kids is 101, that means that the mean IQ of 49 of the kids is 100, and then one kid has an IQ of 150.

Aren’t those both plausible scenarios?

If you add to a random sample of 49 one individual with IQ=150, then the best estimate of the arithmetic average of the sample of 50 is IQ=101.

The moment you obtain the information that among the 50 is one individual with IQ=150, your initial estimate of 100 must be replaced with 101, because you gained extra information, so you change the estimate. It would be incorrect to insist that the best estimate is still IQ=100 and then conclude that the best estimate of the average for the remaining 49 is IQ=98.98.
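
The 101 estimate can also be checked by simulation. A sketch, assuming the remaining 49 are independent draws from a mean-100 distribution; the SD of 15 is the usual IQ convention and an assumption here, and it does not affect the expectation:

```python
import random

# Sanity check of the 101 estimate: fix the first kid's score at 150 and
# draw the other 49 independently from a mean-100 distribution.
exact = (150 + 49 * 100) / 50
print("exact:", exact)  # 101.0

random.seed(2)
trials = 50_000
total = 0.0
for _ in range(trials):
    sample = [150.0] + [random.gauss(100, 15) for _ in range(49)]
    total += sum(sample) / 50
print(f"simulated: {total / trials:.2f}")
```

By linearity of expectation the exact answer is (150 + 49·100)/50 = 101; the simulation just confirms that conditioning on the first score changes nothing about the other 49.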

“Why would the remaining ones be conditionally independent?”

Because that’s implicit in the problem statement.

“After removal, you no longer have a random sample.”

Yes, you do.

“To better grasp that, suppose we recharacterize this exercise as picking a random 50-person sample and then removing the highest score.”

But that is NOT the same as the stated problem.

By far and away the most important (if not most difficult) practical thing in basic probability is in stating the problem accurately.

The only information you have is that the remaining 49 samples have an average IQ of 100. By definition.

In fact, the interesting thing about this sort of thing is in analyzing all the assumptions that are implicit in the statement of the problem. Which is what would be of interest in using it as an interviewing device.

“The only information you have is that the remaining 49 samples have an average IQ of 100. By definition.” – Not exactly. The value of 100 is the expected value of the mean. The actual average of 49 samples can be different. But 100 is the best estimate of that average.

http://imgur.com/a/AuAvA

Agreed.

(49*(100*p-150) / (p-1) + 150) / 50

“Notice that for a population of 50, meaning that the entire population is tested, the expected mean is exactly the initial population mean [100].”

And for a not-even-very-large original sample space of 1000, the answer is 100.95.
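
For reference, here is the formula above evaluated at a few population sizes p. This is a direct Python transcription of the quoted expression, nothing added beyond the loop:

```python
# Population size p, population mean 100, one observed score of 150,
# sample of 50 drawn without replacement.
def expected_sample_mean(p):
    return (49 * (100 * p - 150) / (p - 1) + 150) / 50

for p in (50, 1000, 10**6):
    print(p, round(expected_sample_mean(p), 4))
```

At p = 50 the whole population is the sample, so the mean is pinned at exactly 100; as p grows the answer climbs toward the with-replacement value of 101.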

“And for a not-even-very-large original sample space of 1000, the answer is 100.95.”

So 101 is the wrong answer. But worse than getting the answer wrong is getting the reasoning wrong. The expectation for IQ tests #2–50 is not independent of IQ test #1. The dependence is forced by the problem statement.

“After you remove the first of the 50, the remaining sample is no longer random.”

Sorry to pick on you, but this is exactly the kind of follow-your-intuition approach to probability that is wrongheaded.

The assumptions here are that the IQ scores are independent events, that the original sample size is large enough, and that the 50 were chosen randomly. If the first X out of 50 have scores of 150+, the remaining (50 – X) still have an expected value of 100.

Now there are other questions that can be posed, such as, given these assumptions, what is the probability that the first one picked, or the first 49, or all 50 have an IQ of 150. And then you might say that there is a such-and-such chance that the sample really was random (which leads to the probability that the null hypothesis of randomness was actually true).

“The assumptions here are that the IQ scores are independent events, that the original sample size is large enough, and that the 50 were chosen randomly. If the first X out of 50 have scores of 150+, the remaining (50 – X) still have an expected value of 100.”

No. There is no sample size large enough for your statement to be true. Only if the first tested subject has an IQ of 100 does the expected value for the experiment remain equal to 100. See my previous comments.

There is no sample size large enough for your statement to be true. Only if the first tested subject has an IQ of 100 does the expected value for the remaining (untested) population remain equal to 100. This is a condition made necessary by the problem statement that the mean for the whole population is 100. It is not a matter of interpretation.

http://buchanan.org/blog/ben-stein-buchanans-new-book-greatest-comeback-6564

Trump’s was the Greatestest Comeback.

Yeah, I think this is the sharpest account of the problem as it was intended — expressing it by use of a conditional probability. Obviously, if we take out a member of the sample at random, we don’t know more about the 49 remaining. Examining that one member tells us nothing about the other 49. But we do know more about the 50 in the original sample, because we know about the one we have taken out and examined. We should adjust our expectations regarding the 50 in the light of what we know about the one.

As you say, the issue of replacement can mostly be ignored as a trivial detail.

No. There is no sample size large enough for your statement to be true. Only if the first tested subject has an IQ of 100 does the expected value for the experiment remain equal to 100. See my previous comments.

Sorry, I need to do a better job saying what I mean the first time. A revision:

There is no sample size large enough for your statement to be true. Only if the first tested subject has an IQ of 100 does the expected value for the remaining (untested) population remain equal to 100. This is a condition made necessary by the problem statement that the mean for the whole population is 100. It is not a matter of interpretation.

Understanding the simple problems is what allows you to extend that rigor into more difficult problems, and even to see that they are difficult and that their preconditions are stringent.

Does anybody have an explanation for why Zito fell off a cliff in his post-A's career?

“Does anybody have an explanation for why Zito fell off a cliff in his post-A’s career?”

Yes – Barry Zito was notorious for his laziness and lack of hard work. A physically talented player can survive on his athletic skills for a few seasons but without doing the hard work that player will quickly decline.

Steve Garvey had been a bad third baseman because he had a terrible arm, but he was a valuable defensive first baseman because he almost never failed to scoop up a throw in the dirt. Cey, Russell, and Lopes were told to aim low and Garvey would scoop it out. This vacuum cleaner knack of Garvey's solidified the longest running infield in history: eight years, about twice any other foursome.

Read a good story about Hernandez recently. The Cardinals, who already had his older brother in their system, waited till the 42nd round to draft Keith, because he quit his team as a senior.

To this day — and even with this very problem — I find that a very great deal of my time trying to solve problems found in books (typically as exercises) is spent trying to understand what the author intends by the question he asks. Is he getting at something fairly obvious, or something surprising? What is the exact context he is presupposing for the question?

I can’t even guess how much of the time I spend “learning” something that is wasted pursuing approaches to questions that simply miss the point the author is getting at, and which, I’m convinced, would be avoided entirely if either he could be clearer in his exposition or if I could ask him a few pointed questions.

Haven’t read all the comments yet so this may be redundant but: from what group (other than “children”) was the random sample selected?

https://en.wikipedia.org/wiki/Monty_Hall_problem

I’m especially struck by the apparent inability of Erdos, of all people, to see the correctness of vos Savant’s solution. It really does suggest to me something that I’ve wondered about before: whether there isn’t a real difference between the ability to manipulate symbols and work within a pre-existing conceptual framework (which Erdos was stupendously and famously good at) and the ability to impose a conceptual framework on real world phenomena.

Coming up with genuinely novel models of the world — such as was required by, say, Galileo or Newton, or by Darwin, or by the founders of quantitative genetics — seems to require a very special kind of mind, not a “mere” mathematician.

Really, this anecdote is another point against the Kahneman-Tversky view of humans as bad thinkers. One of the tendencies that makes top researchers able to do top work is that when calculations or data disagree with their preconceptions, they neither dismiss nor immediately trust the unexpected finding, but keep probing it in other ways. Erdos did not say that the standard Monty Hall calculation was wrong, but he did require further evidence to satisfy his doubts.

Another thing re Kahneman: there was a recent debunking of the "hot hand" paper as wrongly finding a null result by using underpowered statistics when a significant effect was there in the data. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2450479

“And for a not-even-very-large original sample space of 1000, the answer is 100.95.”

Also, what do you mean “not-even-very-large”? My town had fewer than 50 eighth graders. The experiment wouldn’t have even been possible. Methods that only work for infinitely large populations are useless in the real world.

Media = Info-monopoly or Infonopoly.

I don't know about the authors' theories, but I think most bad decisions are based on either instinct or ideology, though if ideology takes over a society, it can almost function as a kind of second-nature instinct, an ideolonct or ideonct.

Instinct-driven bad decisions are usually connected to the pleasure principle or strong emotions. We can see this with people who eat too much sugary food even though they know it's bad for them. Or people who won't quit smoking even though tobacco is slowly killing them. Too much addictive pleasure. And too many guys go with bad women cuz of sexual attraction, and good girls often go with bad boys for the same reason.

But the reasons could be emotional too. If you have a personal animus toward someone, you might rather do things to ruin him (even if it ruins you too) than do things that are mutually beneficial to both of you. The Revenge Principle is a strong emotion.

Also, as humans are social animals, they instinctively want to be approved and liked. So, even if they think a certain proposal is bad, they may go along just to be part of the tribe.

Also, most humans are followers rather than leaders, would-be employees rather than employers. So, they tend to follow whoever is considered the Best or Top Dog.

The instinct for approval and authority plays into ideology-driven reasons for bad decisions. Most people follow whatever ideology happens to be dominant.

And all ideologies have their sacred truths and taboos.

Under PC, what is called 'hate speech' is just Critical Speech.

Even if you don't call homos 'f--s', you will be called a 'homophobe' and hater if you are very critical of homo power and agenda. And even if you don't wave the Nazi flag, don't reject the Holocaust narrative, and don't call Jews 'k---s', you will be denounced as a hateful 'anti-Semite' if you're critical of Jewish power and Zionist agenda.

While I support all free speech, I can see why some speech is considered especially 'hateful'. Some people get a kick out of saying willfully offensive stuff about various peoples. But what is often called 'hateful' is merely critical without being mindlessly hostile and deranged. If anything, the most hateful passions are seen among the Progs who claim to oppose 'hate'.

If only mindlessly hateful speech were banned, it wouldn't do much harm to the cause of truth. Even though I'm for total freedom of speech, I might concede the world might be a better place if people didn't say such horrible stuff as you often find on the political fringe.

But when critical speech is banned, there will be much social, moral, and intellectual damage since everyone has to tip-toe and skirt around the obvious and true, the very facts that are essential for better personal decisions and social policy.

And who can deny that PC bans a whole host of critical speech regarding certain favored groups such as Jews, homos, and blacks? This ban on critical speech once almost destroyed Daniel Patrick Moynihan for stating the obvious about welfare and black families. And BLM is the result of such corruption. Since we can't be honestly critical of black pathologies and realities, we have this surreal situation where we have to treat black bullies and thugs as the main victims of society.

And Jewish Power, which used to be nearly synonymous with critical speech (against Wasp power), is now so utterly corrupt because it won't tolerate critical speech about Jews and their influence.

And when ideology is drummed into kids from a young age, a kind of epi-instinct or epinstinct develops within them. A lot of kids raised on PC have an almost knee-jerk quasi-instinctive reaction to any fact or truth that pricks their precious PC bubble. They bark like crazed dogs cuz they can't handle the truth or they leap into the PC pond like frogs in fear of the truth that is seen as bogeyman.

What they call 'safe spaces' isn't for their physical safety. It's to be protected from intellectual truth and critical thinking. They are like the Boy in the Plastic Mental Bubble. At least Travolta's character wanted to come out of the bubble. Millennial snowflakes wanna hide.

https://www.youtube.com/watch?v=BgesL8cVgmI

I think we need to have an understanding among all sides that it is possible to have different positions(based on emotions) but nevertheless acknowledge certain facts and truths.

Positions are not entirely about facts. They are about emotions.

The Zionist position is emotionally pro-Jewish regardless of facts that undermine Zionism.

The Palestinian position is emotionally pro-Palestinians regardless of facts that undermine Palestinianism.

Such positions are about tribal identity, sense of belonging, us and them, etc. While they may be rooted in facts of history -- Jews and Arabs have existed over centuries and have developed unique cultures -- one's loyalty to a particular position is often personal and emotion-laden.

People hold those positions rooted in emotions and passion, a sense of belonging and commitment (which not everyone in the tribe shares, but many do).

In contrast, facts exist regardless of one's position. So, even a diehard Zionist should accept the fact that Palestinians were ethnically expelled and still live under Occupation in the West Bank. That is a fact.

And even a diehard Palestinian should acknowledge the fact that Palestinian terrorists have killed Jewish civilians just minding their daily business.

But according to PC, positions determine facts and inconvenient facts must be dismissed. They are 'hate facts'.

So, since the PC position is 'blacks are noble', black thuggery is whitewashed with obfuscating terms like 'teens' and 'youths'.

And since PC says we must regard Jews as victims forever, even what is clearly Jewish power and privilege in places like Hollywood is just referred to as 'white privilege'.

Some people pay attention to the Alt Right not necessarily for its positions. Some may not agree with, or may even be appalled by, the white identity movement or the Alt Right's particular political and racial obsessions. But even non-Alt-Rightists sometimes realize that the Alt Right is speaking far more honestly about the BARE FACTS of race, crime, intelligence, problems of diversity, homosexuality, sexual differences, etc.

It's like I used to read The Nation not for its positions but for the facts it dug up about certain dark aspects of capitalism. Since every position tends to prefer facts that back up and justify that position, it's good to survey other positions that dig up the facts overlooked by one's own.

The failure of MSM, especially since the end of the Cold War, has been its failure to expose and discuss the most consequential facts of power in America and the World. PC turns a blind eye to the sheer destructive power of black thuggery. It also turns a blind eye to the reckless financial and foreign policies of globalism that are largely shaped by Zionists.

Suppose someone is anti-Marxist, but the capitalist news he reads is filled with lies and lies. In contrast, suppose the Marxist news, though loathsome from a positional perspective, is more accurate in the reporting of economic facts. He may then read the Marxist news for the facts if not for its ideology.

This is a valuable asset of the Alt Right, and it shouldn't squander it by turning into its own brand of PC.

Anon # 87 has a long but insightful comment.

Research on how the human mind works indicates that it either became or has always been rigged to view the world in the way that would accumulate the most social capital for the human, not to view reality with any accuracy or objectivity. This happened for complex reasons that are nevertheless fairly obvious once you find out about what has been going on. Of course, the strength of this trait varies across individuals. And no, it's not necessarily correlated with IQ.

When I get to glory I'm a-gonna sing, sing.

If the mean IQ of the whole sample of kids is 100, that means the mean IQ of 49 of the kids is 98.98, or basically 99, and then one kid has an IQ of 150.

If the mean IQ of the whole sample of kids is 101, that means that the mean IQ of 49 of the kids is 100, and then one kid has an IQ of 150.

Aren't those both plausible scenarios?

The best estimate of the arithmetic average of a sample of 50 (or 49) is the population mean, i.e., IQ=100.

If you add to a random sample of 49 one individual with IQ=150, then the best estimate of the arithmetic average of the sample of 50 is IQ=101.

The moment you obtain the information that among the 50 there is one individual with IQ=150, your initial estimate of 100 must be replaced with 101, because you gained extra information and so changed the estimate. It would be incorrect to insist that the best estimate is still IQ=100 and then conclude that the best estimate of the average for the remaining 49 is IQ=98.98.
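utu's estimate of 101 is easy to check by simulation. A minimal sketch follows; the normal distribution with SD 15 is my assumption (the problem states only the mean), though the answer of 101 does not depend on it:

```python
import random

random.seed(0)

def sample_mean_given_first_150(n_trials=50_000):
    """Average, over many trials, of the mean IQ of a sample of 50
    in which the first child scored 150 and the remaining 49 are
    drawn from a population with mean 100 (SD assumed to be 15)."""
    total = 0.0
    for _ in range(n_trials):
        others = sum(random.gauss(100, 15) for _ in range(49))
        total += (150 + others) / 50
    return total / n_trials

print(round(sample_mean_given_first_150(), 1))  # close to 101
```

The simulated value matches the closed-form estimate (49 × 100 + 150) / 50 = 101.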

Replies: @Chebyshev, "The IQ distribution is fine."

OT: “Why We Have Globalization to Thank for Thanksgiving”

There’s more but it’s pretty dumb overall.

http://www.usnews.com/news/national-news/articles/2016-11-23/why-we-have-globalization-to-thank-for-thanksgiving

The problem comes down to semantics.

First, let’s not overthink this in terms of “sampling with replacement” and “sampling without replacement” and the idea of the 150 I.Q. kid being an outlier that somehow calls into question the known expected I.Q. of 100. Let’s accept the reasonable premise that the population is large enough that each sampled child’s I.Q. follows an independent and identically distributed probability distribution having a mean value of 100. The expected value has a rigorous mathematical definition here, and the expected value of any randomly selected child (or collection of children, when adding all the scores and dividing by the number of kids) remains 100.

You test one child and they score an I.Q. of 150. That point is no longer random — you ran the trial of selecting one child and getting their score, and the score has that specific, known value. The remaining 49 kids, however, follow the same, unchanged probability distribution, and the expected I.Q. of the 49 kids is 49 times 100 divided by 49 = 100.

Therefore the expected value of the entire sample of 50 kids, where you “cheated” and looked at the score of the first child, is 1 times 150 plus 49 times 100, which equals 5050, and 5050 divided by 50 gives 101. The first child in the sample no longer has the original random distribution; you “looked”, collapsing that child’s probability distribution to one that remains independent of the other children but is 150 with probability one. So the expected value of the entire sample, taking into account the “prior” that one student’s score is known, is 101.

But the semantic weaseling, the Get Smart “Agent Hymie” effect, interprets the expected value over the entire 50 samples to be just that: 50 times 100 divided by 50 = 100, where the fact that one student just so happened to score 150 on the test doesn’t change anything.

But it depends on how you set up your experiment. If you run trials of picking 50 students, where in one trial the first student scored 150 but in subsequent trials that student scores some different value according to the probability distribution, yeah, the expected score of the sample of 50 remains 100. But if after having observed the first student with a score of 150 you run trials of picking the remaining 49 students, then the expected score of the “entire sample” is 101.

The way the question is posed is ambiguous without context, but the context of how probability is taught in most serious institutions of higher learning is that the first student’s score of 150 is a prior that biases the expected value of the entire sample to 101.
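The two experimental setups described in the comment above can be contrasted directly in code. This is a sketch under my own assumptions (a normal distribution with SD 15, which the problem does not specify):

```python
import random

random.seed(1)

def trial_mean(first=None):
    """Mean IQ of a sample of 50 drawn from N(100, 15); if `first` is
    given, the first child's score is held fixed (conditioned on)
    instead of being redrawn each trial."""
    if first is None:
        first = random.gauss(100, 15)
    return (first + sum(random.gauss(100, 15) for _ in range(49))) / 50

n = 50_000
# Setup 1: the first student's score is re-randomized on every trial.
unconditional = sum(trial_mean() for _ in range(n)) / n
# Setup 2: we condition on the observed first score of 150.
conditioned = sum(trial_mean(first=150) for _ in range(n)) / n
print(round(unconditional), round(conditioned))  # 100 101
```

Unconditionally the sample mean averages 100; conditioned on the observed 150, it averages 101, which is the distinction the comment is drawing.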

Replies: @hoots

The solution to the problem

“The mean I.Q. of the population of eighth-graders in a city is known to be 100. You have selected a random sample of 50 children for a study of educational achievement. The first child tested has an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?”

is Mean(the sample of 50) = 101. But now let us ask a much harder question: how well does the value of 101 estimate the actual mean of the sample? To answer this question one needs to know the standard deviation (or an estimate of it) of IQs in the city. From the fact that in the sample of 50 there is at least one child with IQ=150, can one get some, perhaps crude, estimate of the standard deviation?
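One crude answer to the closing question, under assumptions the problem does not give us (normality, and that 150 is the largest score in the sample, even though in the problem only the first child has actually been tested): the expected maximum of 50 standard normal draws sits about 2.25 SDs above the mean, which would put the city's SD somewhere near 22. A sketch:

```python
import random

random.seed(2)

# Expected maximum of a sample of 50 standard normal draws, by simulation.
n_trials = 20_000
e_max = sum(max(random.gauss(0, 1) for _ in range(50))
            for _ in range(n_trials)) / n_trials
print(round(e_max, 2))  # about 2.25

# If 150 were the sample maximum, a crude estimate of the city's IQ SD:
sigma_hat = (150 - 100) / e_max
print(round(sigma_hat))  # roughly 22
```

This is only a moment-style guess from a single order statistic, so it is very noisy; its main use is showing that an SD of 15 is at least not wildly inconsistent with seeing a 150 in a sample of 50.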

Replies: @CK, "Steve, don’t approve this comment: go enjoy Thanksgiving evening!"


Why does no one seem to notice that assigning an expected value of 100 to untested students violates the very first statement in the problem? If the expected IQ of each of the untested students is 100, then the expected mean for the entire sample of size p is

(100 * (p-1) + 150) / p

Does this equal 100 as stated in the problem? No. Therefore, since the assumption of independence directly contradicts the conditions given, the assumption of independence is false in this problem. It is amazing the lengths to which so many people here are going to prove that they can’t reason with probabilities. There are no semantic issues here.

Agree: ben tillman

Replies: @utu

If you start with a false premise, any conclusion will do. Stopped reading at this point.

Avg(p)=[IQ(1)+...+IQ(p)]/p ≈ m=100

The symbol "≈" stands for approximation or an estimate.

Now if we know that one element in the sample has IQ=150, then your formula

Avg(p)≈(100 * (p-1) + 150) / p (=101 for p=50)

gives the best estimate of the average of that sample. We do not know the individual IQ values of the first p-1 elements, but we know that their average is best estimated by m=100 and that the IQ of the p-th element is 150. This explains why the formula is valid.

You asked "Does this equal 100 as stated in the problem?" and correctly answered that it does not. It equals 101 for p=50. This is so because we utilize the extra information that one element's IQ is known, so m=100 is no longer the best estimate we can come up with for Avg(p).

Then you proceed with the nonsense about independence, etc. This problem has nothing to do with statistical or random independence, though we presume that the sample was randomly selected. Why do we presume it? Because we must if we want to find the answer. Without the assumption of random sampling, anything goes, and the information about the mean of the population is meaningless, pointless. Somebody who has studied a bit of probability and statistics would know this, so it would be expected of him to make the proper assumptions.

You also said "It is amazing the lengths to which so many people here are going to prove that they can’t reason with probabilities," and I agree with it, except that you fall into the category of people who are confused. The bottom line is that it is hard to beat a good rigorous education. Confidence, cockiness, and chutzpah will not replace it, except in the movies. Snap out of it. You are not in a movie.

It’s now official. The post election freakout is the Satanic Panic for this decade.

NYT is heavily partisan, but it is still the most important newspaper in America, and it is not officially a political paper. It’s not like Trump is giving interviews to Mother Jones or The Nation.

Besides, even the enemies can do some good for our side.

We saw this with Glenn Beck, who has turned full retard PC. (It began with his idea of handing out soccer balls to illegal aliens a few years back.)

Beck is out to hurt us, but there is the Poo Boy Appleseed Factor. It may do us some good.

It’s like a bear. It will eat apples just to eat apples. It does so only to please itself and cares not for the fate of the apples it devours. And the apples are devoured in the stomach of the bear… but the seeds pass out of the other end with the poo, and the poo fertilizes the apple seeds, which sprout and grow into more apple trees.

So, it’s not always a bad thing that the media are covering our side(even if with a lot of shi*), especially when certain people and ideas on our side are not yet household names.

Take Sailer. Most Americans haven’t heard of him. So, even though Glenn Beck mentioned him negatively on Anderson Cooper, the word got out to a lot of people. Beck mentioned Sailer to devour and destroy him, but Sailer seeds pass out of the other end and are fertilized by Beck’s poop.

The whole MSM are trying to devour our side and to make Trump look more extreme. They do so by attacking certain individuals and ideas, but the Poo Boy Factor only spreads the seeds of those ideas far and wide.

The bear doesn’t know that it is actually helping spread the apple seeds around to grow more apple trees.

We should welcome the Poo Boys of the media.

Billy Beane tells the story of teammate Lenny Dykstra (who rose with Beane in the Mets minor league system), who just outright competed--not caring whom he was facing. Meanwhile, Beane over-thought every situation and intimidated himself into mediocrity. After batting practice prior to a game against the Phillies, Dykstra asked Beane whom they were facing on the mound. When Beane answered, "Lefty, Steve Carlton," Dykstra asked, "What's he throw?" Beane was agog that Dykstra appeared unfamiliar with and unflustered by the most dominating left-handed pitcher then in baseball. Beane knew everything about Carlton. Dykstra went to the plate and collected hits.

When it came to the "head" game, it was the 5-tool star Beane who didn't excel.

Dykstra was also on ‘roids, cocaine, and likely several other substances at the time. The drugs probably deprived him of the mental ability to contemplate being scared, but the substances gave his body the ability to catch up to the fastball. Sometimes pure physicality is all you need.

But the Beane/Dykstra situation reminds me of the old tale about Jimmy Stewart, later in life, on a small private plane that was hitting turbulence in a big storm. The pilot was scared, but Stewart was terrified beyond belief, and with good reason: Stewart was a legitimate war hero as a pilot, flying many missions into enemy territory, winning medals, and wound up retiring as a Brigadier General. So while the pilot (who wasn’t that good) knew half of the things that could possibly go wrong, Stewart knew everything that could go wrong.

In short, ignorance is bliss.

Replies: @Steve Sailer

“The problem comes down to semantics.”

Semantics is the last refuge of obfuscators, usually obfuscating their own ignorance.

Coming up with genuinely novel models of the world -- such as was required by, say, Galileo or Newton, or by Darwin, or by the founders of quantitative genetics -- seems to require a very special kind of mind, not a "mere" mathematician.

The Erdos anecdote is interesting but not really that revealing of anything. Mathematicians in most fields, including the ones Erdos worked in (including the probabilistic stuff) don’t usually need to think much about decisions based on partial information, or even conditional probabilities. There is no reason his intuition on Monty Hall should have been much better than a random smart person thinking through the problem, or superior to that of a frequent poker player. He was also 77 years old and had been on amphetamines for 20 years at the time he heard the problem, so not necessarily the genius he once was.

Really, this anecdote is another point against the Kahneman-Tversky view of humans as bad thinkers. One of the tendencies that makes top researchers able to do top work is that when calculations or data disagree with their preconceptions, they neither dismiss nor immediately trust the unexpected finding, but keep probing it in other ways. Erdos did not say that the standard Monty Hall calculation was wrong, but he did require further evidence to satisfy his doubts.

Another thing re Kahneman: there was a recent debunking of the “hot hand” paper as wrongly finding a null result by using underpowered statistics when a significant effect was there in the data. http://papers.ssrn.com/sol3/papers.cfm?abstract_id=2450479

https://en.wikipedia.org/wiki/Monty_Hall_problem

candid_observer wrote

That’s because probability is not about the universe. Probability is about our knowledge of the universe–and to this day, physicists, chemists, economists, and most other scientists have no idea there’s a distinction between the two when they use probability in their work, and the mathematicians who do know it don’t realize that it matters.

Probability is a measure of ignorance.

Students are taught probability badly from grade school through college, so they are misguided from the beginning. They think a statement of probability is a statement about the world–like the probability that a die comes up 5 is 1/6. They think this 1/6 is some truth about our physical universe.

But in fact, the statement that the probability that a 6-sided die comes up 5 is 1/6 really means: “Given that we have no knowledge of the physical system, we have no reason to expect one outcome more than any other, so all outcomes are equally likely to occur. There are six possible outcomes, so a given one (in this case 5) comes up in 1 of those 6. We call that 1/6.”

All of that is predicated on our ignorance of the physical system. There’s nothing inherently random about flipping a coin. There CAN’T BE, it’s a completely Newtonian physical system. Specify the inputs (position and momentum of the coin), and the forces you put on it, and the outputs are completely determined. BUT, in practice, most of us don’t carefully control the inputs to the flipped coin, and small perturbations affect the outcome. If we KNOW how to toss a coin perfectly, then that “FAIR COIN” can still always come out heads.
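The claim that a coin toss is a deterministic Newtonian system can be illustrated with a toy Keller-style model. This is my own heavily idealized construction (no air resistance, launch and catch at the same height, outcome set by completed half-turns), not anything from the comment itself:

```python
import math

G = 9.8  # gravitational acceleration, m/s^2

def coin_outcome(v, omega, start="heads"):
    """Deterministic idealized flip: a coin launched upward at v m/s
    while spinning at omega rad/s is airborne for t = 2v/G seconds
    and turns omega*t radians; the face showing depends only on the
    number of completed half-turns."""
    t = 2 * v / G
    half_turns = int(omega * t / math.pi)
    if half_turns % 2 == 0:
        return start
    return "tails" if start == "heads" else "heads"

print(coin_outcome(2.0, 30.0))  # identical inputs, identical outcome, every time
print(coin_outcome(2.0, 31.0))  # a small change in spin flips the face
```

Specify the inputs exactly and the output is fixed; the apparent randomness of real flips comes entirely from our inability to control or measure those inputs, which is the commenter's point.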

That fundamental misunderstanding is why even profs who should know better get confused about whether an already-tossed coin has a probability of 1/2 of being heads (“But it already happened! The outcome is fixed! How can the probability not be H or T???” “Because probability is a statement of OUR knowledge of the system–we knew nothing about the tossing, so we have no preference for H or T. Probability is 1/2.”)

That’s why economists (like those at GMU) using Bayesian reasoning make Really Stupid mistakes: they think these prior probabilities EXIST in the world like a platonic solid, when in fact they are a function of our ignorance.

It’s also why people get confused by seemingly absurd statements like “DJT has a 25% chance of winning the election”–that’s really a statement measuring the ignorance someone has as to the outcome. Silver’s knowledge, say, of the electoral system gave him the known number of electoral votes on the DJT side and on the HRC side, and the ones he didn’t know (X county in PA, Y county in NC) could be totalled up across some number of scenarios… and about a quarter of those scenarios had DJT winning.

A statement about sample means is, again, a statement of ignorance. The whole point of the mean is that you DON’T know the underlying reasons for the thing you’re measuring–if you did, you wouldn’t need to sample! You’d predict it!

The idea that prior probabilities–like the mean IQ of all 8th graders being “known to be” 100–are real is absurd. Again, the probability is a statement of ignorance. To claim you KNOW the mean to be something is absurd, because the expected value is the expectation GIVEN YOUR ignorance.

Expected values (“mean” is another word for the same thing here) can be determined because you actually HAVE a process you’re measuring (i.e. you gained knowledge and decreased your ignorance), or you actually sampled a population. And the law of large numbers helps you know how far off your sample value is likely to be from the whole-population value (not very, overall).

A good way to deal with the 150 IQ as first sample problem is with log likelihood. I may post that later tonight. It might be elucidating…
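The promised log-likelihood post never appears in the thread, but a minimal sketch of the idea (my own construction, assuming a normal distribution with SD 15) is to compare how surprising a first observation of 150 is under different hypothesized city means:

```python
import math

def normal_loglik(x, mu, sigma=15.0):
    """Log density of a Normal(mu, sigma^2) distribution at x."""
    return (-0.5 * math.log(2 * math.pi * sigma ** 2)
            - (x - mu) ** 2 / (2 * sigma ** 2))

# Log-likelihood of a first draw of 150 under a few hypothesized means,
# e.g. 100 (as stipulated) vs. 106 (the stale-norms/Flynn scenario):
for mu in (100, 106, 110):
    print(mu, round(normal_loglik(150, mu), 2))
```

A city mean of 106 makes the observation roughly e^1.25 ≈ 3.5 times as likely as a mean of 100, so a single 150 nudges the likelihood toward the "maybe the norms are stale" hypothesis, but only gently.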

Replies: @Steve Sailer

Back in the 1970s the USGA had a golf-ball-hitting robot, Iron Byron, for testing whether golf ball brands were too bouncy. One interesting thing: they found it hard to build a robot putter that was really consistently accurate, because rolling the ball over the grass caused a lot of random micro-deflections.

Of course, this is not the same as the usual coin toss, where you call heads or tails in the air and let the coin drop on the ground. But Alice does seem to have a very good point about the nature of probability as a measure of ignorance instead of a physical system.

I don't really agree with this statement of yours, at least as I understand it:

In the problem in question, it is merely assumed that we know that the average of all 8th graders in the city is 100. One might imagine any number of scenarios in which this would be well established, and yet in a given random sample we don't know a priori the IQs of those in the sample, or the mean of the sample. But more important than imagining a scenario under which these assumptions would, in fact, be true, is simply to grant them and work from them. From the standpoint of our current problem, the relevant a priori and a posteriori probabilities are: what do we reasonably believe about the mean of the sample of 50 just on the basis of the assumption that the mean of the population is 100 and that the sample is randomly selected, and what do we reasonably believe about the mean of the sample of 50 if we know further that a randomly selected member has IQ 150? Conditional probabilities, and the notion of independence, allow us to frame this problem in a sharp way, and draw reasonable conclusions.

Conditional probabilities, independence, and other basic ideas in probability serve as the effective, simplifying framework in which to analyze problems so that we can call them right. But their application is often subtle and difficult enough that very smart people can get things wrong if they either don't bother to apply them or do so in a sloppy way or are distracted from their correct application by countervailing but ultimately confused intuitions.

I'd say that that's what went on in the Monty Hall case. In understanding it, one needs to restrict oneself to the question: what do we know, and when did we know it? If one follows this rule carefully, the correct answer pops out of the analysis. If a contestant chooses one of the three doors, then, because he knows nothing about them and is doing so at random, his chances of getting the correct door are 1/3. When Monty Hall chooses a door with a goat behind it, he knows of course which door the contestant chose, and chose a goat door in the light of this. Does the contestant now have any further information as to whether his choice of door was correct? No: Monty Hall would open a door with a goat behind it whether or not the door chosen by the contestant had a car behind it. The chances that the chosen door was right cannot improve from 1/3 because the contestant has no further knowledge about it. But how about the remaining door? Before Monty Hall opened a door with a goat behind it, the contestant would give that door, too, a 1/3 chance of having the car behind it. But after Monty Hall opens the goat door, the contestant knows more about the remaining door: the car must be either behind it or behind the contestant's original door, and since the original door's chance stays at 1/3, the remaining door's chance rises to 2/3. 2/3 is bigger than 1/3, so the correct decision is for the contestant to switch doors.

This is basically all there is to the problem. And it's important to grasp that this is all there is to it, because other countervailing intuitions simply muddy the picture. Knowing what's correct in many cases involves not only having the right explanation, but understanding that all other approaches miss the real point, and can be dismissed.
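Simulation is a quick sanity check on any Monty Hall argument. A sketch of the standard game (car placed at random; Monty always opens a goat door that isn't the contestant's pick): staying wins about 1/3 of the time and switching about 2/3.

```python
import random

random.seed(3)

def monty_trial(switch):
    """One round of the standard Monty Hall game; True if the player wins."""
    doors = [0, 1, 2]
    car = random.choice(doors)
    pick = random.choice(doors)
    # Monty opens a door that is neither the player's pick nor the car.
    opened = random.choice([d for d in doors if d != pick and d != car])
    if switch:
        pick = next(d for d in doors if d != pick and d != opened)
    return pick == car

n = 100_000
stay_wins = sum(monty_trial(switch=False) for _ in range(n)) / n
switch_wins = sum(monty_trial(switch=True) for _ in range(n)) / n
print(round(stay_wins, 2), round(switch_wins, 2))  # about 0.33 and 0.67
```

Note that the 2/3 result depends on Monty's behavior being as modeled here (he always opens a goat door and never the contestant's pick); under other hosting rules the answer changes.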

The distinction between 'deterministic' and 'random' is far more obscure in theory than many people realize, and in practice there isn't one.

I Want You!: The Evolution of the All-Volunteer Force

By Bernard D. Rostker, K. C. Yeh

p. 382 onward

The entire book is made available via the RAND Corporation. However, it excludes Yeh’s name.

The misnormed-ASVAB passage starts on 392 in this edition.

As a busy 70-year-old, he's learned his time is valuable. He never had a Shakespearean tongue, and found that developing one would only have made him waste more time trying to convince many diverse groups of people to hop on board his projects. Instead of learning how to convince anyone via a honeyed tongue, he learned the art of sizing someone up to see if they were worth talking to and whether they would listen to him (hence his mega-fast dismissal of the Ali G clown):

https://www.youtube.com/watch?v=sP5ElraFHHE

I see the Left is going to go the whole Bush 43/Dan Quayle route and make fun of Trump's speaking abilities to claim that he's stupid. That's a grand plan, I hope they continue it; I never want to interrupt my opponents while they're making a mistake.

Perhaps the last "great" (i.e. traditionally stereotypical) Republican speaker was St. Reagan. Bush 41 was middling; the Dana Carvey SNL impression captured his style perfectly. Cheney was a good speaker, but the Left painted everything he did in a Sith Emperor light. W. gave really good prepared speeches (his post-9/11 speech was awesome) but off the cuff his style was easy to insult. Quayle's potato gaffe haunted him, but his Murphy Brown speech was quite excellent, if you ever listen to it. And what they did to Palin... just disgusting.

Having watched Trump live at rallies, the man is an excellent speaker: funny, clear, bright, confident, and in total command. He speaks like an old-time union boss to his fellow workers during a rally, which isn't surprising, given his business was in the construction industry. The Left is determined to convince us he's a buffoon and idiot, and they're going to fail, because he's taken fighting with the media to a whole other level, which is awesome to watch, and their classist nature is coming through with each put-down.

I agree. The sound bites they play on the media do nothing to showcase Trump’s verbal agility in person.

When I saw him in person he would take a thread, and someone would yell out a theme, and he would start running with that theme before tying it into his original talking point and using it as a springboard into his next talking point.

For example, at Fountain Hills I remember him talking about the Iran deal, and someone yells out “Build the Wall!” and he starts running with it, before talking about Islamic terrorists using the southern border to get into the US, and then seguing into vetting refugees. His entire speech had that tempo.

But the anonymous eeyore contingent around here thinks he’s an idiot. Lawl.

Trump is the most unconventional person elected president. There are no normal or typical comparisons to make--he will surprise and disappoint as no one before him. But projecting personal prejudices about the NYT onto Trump is a wasted effort. The NYT needs access to Trump more than Trump needs the NYT. Assume Trump knows this. Four years is a long time.

Bro, I’ve been saying the same thing. But the people who have been miserably wrong the last year don’t take time to reflect on how they were wrong.

Oh no. Time to climb back into gimp suit because *insert this week’s non issue here*.

Probability is a measure of ignorance.

Students from grade school through college are taught probability badly and are misguided from the beginning. They think a statement of probability is a statement about the world--like the probability that a die comes up 5 is 1/6. They think this 1/6 is some truth about our physical universe.

But in fact, the statement that the probability that a 6-sided die comes up 5 is 1/6 is really "Given we have no knowledge of the physical system, and therefore no reason to expect one outcome more than any other, all outcomes are equally likely to occur. There are six possible outcomes, so a given one (in this case 5) comes up in 1 of those 6. We call that 1/6."

All of that is predicated on our ignorance of the physical system. There's nothing inherently random about flipping a coin. There CAN'T BE, it's a completely Newtonian physical system. Specify the inputs (position and momentum of the coin), and the forces you put on it, and the outputs are completely determined. BUT, in practice, most of us don't carefully control the inputs to the flipped coin, and small perturbations affect the outcome. If we KNOW how to toss a coin perfectly, then that "FAIR COIN" can still always come out heads.

That fundamental misunderstanding is why even profs who should know better get confused about whether an already-tossed coin has a probability of 1/2 of being heads ("but it already happened! the outcome is fixed! how can the probability not be H or T???" "Because probability is a statement of OUR knowledge of the system--we knew nothing about the tossing, so we have no preference for H or T. Probability is 1/2.")

That's why economists (like those at GMU) using Bayesian reasoning make Really Stupid mistakes: they think these prior probabilities EXIST in the world like a platonic solid, when in fact they are a function of our ignorance.

It's also why people get confused by seemingly absurd statements like "DJT has a 25% chance of winning the election"--that's really a statement measuring someone's ignorance about the outcome. Silver's knowledge, say, of the electoral system gave him the known number of electoral votes on the DJT side and on the HRC side, and the ones they didn't know (X county in PA, Y county in NC) could be totaled up to account for some number of scenarios...and about a quarter of those scenarios had DJT winning.

A statement about sample means is, again, a statement of ignorance. The whole point of the mean is that you DON'T know the underlying reasons for the thing you're measuring--if you did, you wouldn't need to sample! You'd predict it!

The idea that prior probabilities --like the mean IQ of all 8th graders is "known to be" 100--are real is absurd. Again, the probability is a statement of ignorance. To claim you KNOW the mean to be something is absurd, because the expected value is the expectation GIVEN YOUR ignorance.

Expected values ("mean" is another word for the same thing here) can be determined because you actually HAVE a process you're measuring (i.e. you gained knowledge and decreased your ignorance), or because you actually sampled a population. And the law of large numbers helps you to know how far your sample value is likely to be off from the whole-population value (not very, overall).

A good way to deal with the 150 IQ as first sample problem is with log likelihood. I may post that later tonight. It might be elucidating...
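In the meantime, the arithmetic behind the textbook answer is short enough to sketch. This assumes, as the problem stipulates, that the 49 untested children are random draws from a mean-100 population; the Normal(100, 15) used in the simulation is the conventional IQ scale, an assumption of the sketch rather than something stated in the problem:

```python
import random

# The expected sample mean by linearity of expectation: one score is
# fixed at 150, and each of the 49 untested children is still expected
# to score 100.
expected_sample_mean = (150 + 49 * 100) / 50
print(expected_sample_mean)  # 101.0

# The same number via simulation, drawing the untested children
# from Normal(100, 15).
random.seed(0)
n_trials = 20_000
total = 0.0
for _ in range(n_trials):
    rest = [random.gauss(100, 15) for _ in range(49)]
    total += (150 + sum(rest)) / 50
print(round(total / n_trials, 1))  # ≈ 101.0
```

The single high score moves the expected sample mean only from 100 to 101, which is the regression-to-the-mean moral Kahneman is after.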

Has anybody built a coin-flipping machine that can make a fair coin come up heads or tails more than, say 95% of the time? It seems like it would be doable in a vacuum, but air currents would cause knuckleball-like random effects so that it would be hard to do on, say, the 50 yard line of a football stadium.

Back in the 1970s the USGA had a golf-ball hitting robot, Iron Byron, for testing whether golf ball brands were too bouncy. One interesting thing was they found it hard to build a robot putter to be really consistently accurate because rolling the ball over the grass caused a lot of random micro-deflections.

But the Beane/Dykstra situation reminds me of the old tale about Jimmy Stewart, later in life, on a small private plane that was hitting turbulence in a big storm. The pilot was scared, but Stewart was terrified beyond belief, and with good reason: Stewart was a legitimate war hero as a pilot, flying many missions into enemy territory, winning medals, and wound up retiring as a Brigadier General. So while the pilot (who wasn't that good) knew half of the things that could possibly go wrong, Stewart knew everything that could go wrong. In short, ignorance is bliss.

Frank Capra tells that story about Jimmy Stewart in his autobiography: they’ve hired a private plane to fly them across Texas to a promotional event for “It’s a Wonderful Life” in 1946, and fog covers up the ground. Capra initially assumes that, well, the pilot must know some pilot trick for finding an airport in the fog, but then he notices that Colonel Stewart isn’t so confident.

But one thing to keep in mind is that the great anecdotes in movie directors’ memoirs are the products of the best storytellers in the world and they may have been fixed in postproduction.

First off, the whole idea that an experienced pilot would be "terrified beyond belief" when encountering turbulence is absurd. Add on top of that that this person is a combat veteran who exercised command authority. You cannot function in a combat situation if you do not have an absolutely iron grip on your emotions. And you cannot exert effective leadership if you are visibly agitated in any way, let alone exhibiting fear. And the idea that you would display a lack of confidence in a subordinate's ability to carry out the mission without any proof.... It's just nonsense.

In any case, here's a real story about how Stewart behaved during a real emergency aboard a B-52 returning to Guam from a sortie over Viet Nam in 1966: Mr. Stewart Goes to Viet Nam.

The pertinent passage:

As Amos flew into the abort area north of Andersen, the crew started to calculate the flaps-up landing data: airspeed plus-35 knots; landing roll—longer; if drag chute failure—50 percent longer. He then escorted Stewart to the instructor navigator position on the plane’s lower deck. “If I lose control of the aircraft,” Amos said, “I will call out over the intercom ‘bailout’ three times and activate the bailout light. The navigator will be the first to go, creating a large hole by his downward ejection seat.” Amos reassured Stewart that he would do everything he could to regain control of the bomber and would be the last to leave the aircraft. “Do you understand, General Stewart?” Amos asked.

“Yes, Captain Amos, I understand,” Stewart very calmly answered in his familiar granular voice.

(...) the great anecdotes in movie directors’ memoirs are the products of the best storytellers in the world and they may have been fixed in postproduction.

1) As Goethe once remarked, the very nature of storytelling is to play/work with the tension between poetry and truth - that's why he named his autobiography From my Life - Poetry and Truth.

2) There is no storytelling (not even bad storytelling) without its being fixed in postproduction, so to speak, because wording events is by its very nature "post".

3) Movie directors know how to make movies, which is not exactly the same as telling a story. There are many reasons for this; maybe the main difference between the storyteller (writer) and the director is that film is a collective enterprise.

(Some directors are good storytellers - but others are merely technicians, providing free space for the actors, cameramen, the writers etc. - and still succeed (sometimes)).

4) Goethe points out that it makes some sense to let the public fantasize about who is the best poet and who's the worst (= best and worst storyteller). But in the end, in the realm of the arts, the superlative makes less sense than in realms where you can measure the results (sports are a good example - if not a perfect one - the importance of a coach can hardly be completely objectified, I'd guess).

The most important effect of art can't be nailed down/fixed/measured at all, because what's being influenced and formed by art are things like taste, like being impressed/moved, getting used to doubts and so on, which are best articulated in a medium whose most important feature is its imprecise nature - and the name of this part-social, part-individual medium, which makes matters even less explicit, is: Language.

Oh, how a fat, crazy ego blinds a person.

Stein's current Michigan total is 51,000 votes. The vote gap there between Trump and Clinton is only 10,700 votes. It's pretty obvious that Stein flipped the state to Trump, since most of her votes would have gone to the other leftist female on the ballot if Stein hadn't chosen to run. In Wisconsin, The Trump-Clinton gap is 27,000 votes, and Stein got 31,000 there. She may have cost Clinton the state of Wisconsin, too.

But the most recent totals have Trump ahead 70,000 votes in Pennsylvania. There's no way the Democratic machine will be able to manufacture enough votes to win the state on a recount. The gap is too big, and I'm sure the Democratic machine was cheating with all their might to manufacture votes for Clinton during the election anyway.

To flip the electoral college to Clinton, Trump would have to lose all three states. If he wins only one of them on a recount, even the state with the lowest number of electoral votes, Wisconsin, Trump's still got 270 votes, and he wins the presidency.

Still, I'm pleased to see math-impaired, irrational and indignant Democrats spending their money on a recount instead of using that cash to pay their mortgages/rent, medical bills, kids' college education, or building up a nest egg to tide themselves over hard times. Letting your fury spend your cash is always a bad idea and helps you on your way to winning a Darwin Award, but Democrats are ruled by emotions, not reason.

There is no way those are small donors. The donation counts were running up systematically at 3 a.m., when most of America is asleep.

That is either a bot or a lot of foreign donors.

Steve,

The sabermetrics and other moneyball-math seems interesting and a glaring indictment of traditional baseball scouting, but I think what is missing in the understanding is that such math may only work among the Major League teams. The players have been sifted out so thoroughly and are so well-balanced that such mathematics may yield a useful result.

This does not seem to work as well with minor league teams or, earlier, college and high school teams. Let’s say you are observing a high school baseball game. You have a pitcher who is destined to be a future Hall of Famer, facing up against a batter who is a very good athlete, but not destined for professional teams. The future Hall of Famer will easily strike this guy out, but is the pitching statistic generated of any material usefulness to a baseball scout? Such data is likely not a very good indicator because the wheat and the chaff have not been sufficiently separated.

After all, Major League baseball players don’t just appear out of the ether. They have to come from farm teams in the Minor Leagues. And Minor League players don’t just appear out of the ether. They come from college and high school teams. There is such a wide range of professional and unprofessional talent that the gameplay stats alone can’t tell you anything. You have to observe the individual player like you would a race horse.

I suspect that baseball scouts were simply applying these tried and true metrics to the Major Leagues and not simply being overly traditionalist.

Well, the odds of picking a 150+ IQ child from a 100 IQ average population are about 1 in 2000, so if we’re doing the red flag thing, my next question would be, are there 2000 children in this city?
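As a check on that 1-in-2,000 figure, here is a sketch under the conventional Normal(100, 15) IQ model (an assumption of the sketch): the exact tail under that model comes out nearer 1 in 2,300, so "about 1 in 2000" is the right ballpark.

```python
from math import erfc, sqrt

# 150 is (150 - 100) / 15 ≈ 3.33 standard deviations above the mean.
z = (150 - 100) / 15
# Upper tail of the standard normal, via the complementary error function.
tail = 0.5 * erfc(z / sqrt(2))  # P(IQ >= 150) under Normal(100, 15)
print(f"P(IQ >= 150) ≈ {tail:.6f}, i.e. about 1 in {1 / tail:,.0f}")
```

`math.erfc` is used here because it keeps the sketch dependency-free; `statistics.NormalDist(100, 15).cdf(150)` gives the same number.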

Right, sabermetrics doesn’t work (yet) at the high school level, which is from where teams typically draft future Hall of Famers. Everybody in high school who is headed for the majors is a shortstop who bats .500, but then so are other guys. High school statistics, I assume, are too messy to be terribly reliable.

I actually used to play a fun game of coin toss with myself. If I wanted a quarter to come up heads, I would position the quarter on my thumb with the tails side up, spin it in the air with a fixed amount of force for a certain distance, catch it, then slap it on my other hand. It would come up heads every time, or at least often enough for a “streak.”

Of course, this is not the same as the usual coin toss, where you call heads or tails in the air and let the coin drop on the ground. But Alice does seem to have a very good point about the nature of probability as a measure of ignorance instead of a physical system.

OT

“Should Some Knowledge Be Forbidden? The Case of Cognitive Differences Research”

http://www.journals.uchicago.edu/doi/full/10.1086/687863

That is either a bot or a lot of foreign donors.

Anon – could you go into more detail? I presume you’re talking about the fundraising for Jill Stein’s recount-but-only-in-swing-states-Trump-won appeal.

(100 * (p-1) + 150) / p

Does this equal 100 as stated in the problem? No. Therefore, since the assumption of independence directly contradicts the conditions given, the assumption of independence is false in this problem. It is amazing the lengths to which so many people here are going to prove that they can't reason with probabilities. There are no semantic issues here.
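Plugging the problem's sample size into the expression above (a sketch of the arithmetic, with p the sample size and the untested children each expected to score 100):

```python
def expected_sample_mean(p: int) -> float:
    """Expected mean of a p-child sample whose first tested child scored 150,
    if each of the p - 1 untested children has expected IQ 100."""
    return (100 * (p - 1) + 150) / p

print(expected_sample_mean(50))   # 101.0
print(expected_sample_mean(500))  # 100.1 -- approaches 100 as p grows
```

So the expression equals 100 only in the limit of large p, which is the arithmetic the comment is pressing on.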

“If the expected IQ of each of the untested students is 100”

If you start with a false premise any conclusion will do. Stopped reading at this point.

Some claimed that there were discrepancies:

The Election was Stolen – Here’s How…

http://www.gregpalast.com/election-stolen-heres/

Why Jill Stein is going along with it is a really good question. She already collected more for the recount in 24 hours than she had for her own election campaign.

Jill Stein, Working for NEOCON Team Hillary, is Pushing for Recount in 3 Key States to Make Killary Our President (or to kick off civil war)

https://willyloman.wordpress.com/2016/11/24/jill-stein-working-for-neocon-team-hillary-is-pushing-for-recount-in-3-key-states-to-make-killary-our-president-or-to-kick-off-civil-war/

The simplest way I can think to conceptualize the problem is that the key is understanding that Monty Hall has to operate under certain rules, so that if you understand the rules, his action in revealing one of the two doors containing the goat does reveal information about one of the remaining doors, the door that the contestant might switch to.

Monty Hall has to open a door containing a goat, but can never open the door originally selected by the contestant, whether or not it contains a goat.

He does not reveal anything additional about the probability that the door first selected by the contestant contains the prize, which remains one third; he can never open that door regardless. But the chance that the door he doesn't open, out of the set of two he can select from, contains the prize goes from one third (half of two thirds) to two thirds (all of two thirds), the set of two doors the contestant doesn't originally select being different from the set of three the contestant could originally select from.

I found it to be a notably counter-intuitive result, which I found difficult to retain (I nowadays find that a formulation similar to yours allows me to do so). In fact when I first contemplated it (a couple of decades ago now) the effect seemed to me to be similar to my experience with some other philosophical, spiritual and mathematical arguments, where I could grasp them with direct and sustained mental effort, and gain a very strong feeling of their correctness, but later found that this intuitive grasp had gone, and whilst knowing intellectually that I had been convinced of their rightness, I could no longer explain the point to others, and would have to work through it again from first principles to convince myself again.

I took this to be to do with the limits of individual mental capacity (and their inherent stretch-ableness with effort, at least up to a point). I also see no reason to believe that there aren’t truths that will be forever beyond any human capacity to stretch far enough to accommodate them. We might be able to build artificial intelligences to grasp them for us, but they won’t ever be able to explain most of them to any of us. In effect, they will be able (given means to act) to do magic.

I had an experience of the state of knowing without actually knowing, because I was wrong. It was in a dream. A solution to a mathematical problem was revealed to me with all the psychological and emotional ramifications of knowing. But when I woke up and thought about it, the solution was incorrect.

Maybe another formulation, which I think is mathematically identical, would help - call it the Vizzini Wine problem, after Ted Cruz's favorite film. You and Vizzini must each drink a cup of wine: you choose one chalice from a set of three that Vizzini has prepared, knowing he has poisoned 2 of the 3 cups. After you choose, Vizzini must spill one of the two remaining chalices, after which you either keep the chalice you first picked or switch it for the remaining untouched one. Then you and Vizzini both drink fully, at the same time, from the 2 chalices.

I think the underlying psychological intuition helps here: Vizzini would never spill the unpoisoned cup, as that means certain death for him. And while you may have lucked out and picked the safe cup on the first try, you have marginally more "information" about the cup Vizzini chooses not to spill: it is a bit more likely to be safe, since in every possible situation it is never the worse, and sometimes is the better, of the two cups he had to choose from, knowing with perfect information which ones are bad.

Probably the best scene to me from the Google-advert movie THE INTERNSHIP was when Vince Vaughn and Owen Wilson hand-wave their way through a Google interview question by challenging the basic assumptions to come up with a trivial, "correct" solution by taking all the complexity out of it. This is probably the worst thing you can do in such an actual interview, since your interviewers are more interested in your thought processes (even if your solution is ultimately wrong) than in having you short-circuit the question entirely by arguing and nitpicking its basic "reality". So it is funny reading about 30% of the responses here basically doing the same thing, questioning things like whether the true mean of the population can be known, when that is a given of the question.

Trump's was the Greatestest Comeback.

You can only have a comeback if you were actually behind or defeated. At no time was Pres. Trump either.

Who has the correct answer?

Why would you even make a judgement after the first sample?

"The mean I.Q. of the population of eighth-graders in a city is known to be 100. You have selected a random sample of 50 children for a study of educational achievement. The first child tested has an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?"

The answer is Mean(the sample of 50) = 101. But now let us ask a much harder question: how well does the value of 101 estimate the actual mean of the sample? To answer this question one needs to know the standard deviation (or an estimate of it) of the IQs in the city. From the fact that the sample of 50 contains at least one child with IQ = 150, can one get some, perhaps crude, estimate of the standard deviation?
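That closing question can be attacked numerically. A sketch, under the assumption (introduced here purely for illustration) that city IQs are Normal(100, sd): for each candidate sd, compute the chance that the maximum of 50 random draws reaches 150. Small sds make the observed 150 wildly improbable, so the observation itself crudely bounds the sd from below:

```python
from math import erfc, sqrt

def p_max_at_least(threshold: float, n: int, mean: float, sd: float) -> float:
    """P(max of n iid Normal(mean, sd) draws >= threshold)."""
    # Normal CDF at the threshold, via the complementary error function.
    cdf = 1 - 0.5 * erfc((threshold - mean) / (sd * sqrt(2)))
    return 1 - cdf ** n

for sd in (10, 15, 20, 25):
    p = p_max_at_least(150, 50, 100, sd)
    print(f"sd={sd}: P(at least one of 50 >= 150) = {p:.4f}")
```

Under this model a 150 among 50 draws is roughly a 2% event even at the conventional sd of 15, and essentially impossible at sd 10, which echoes the original post's quibble that the observation should shake your confidence in the stated parameters.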

That a population Y has some value whose average is x does not mean that any sample y drawn from that population will have the same average x. If you draw many random samples of size y from the population, the expected value of the average of the sample values will be x, with a standard deviation of some currently unknown value. The value of x for a sample of size 1 does not give you any information about the value of X for the population it came from.

WRONG! First of all, you did not mean to use the term average. You meant to use the term mean.

It is really trivial. It follows from the fact that the expected value, i.e., the mean, is a linear function of the random variable. From the linearity it follows that the mean of a sum of variables is equal to the sum of the means.

The mean is the best estimator of the actual average. So if the actual average of the IQs of John, Susan, ..., Jim_N is unknown, but we know that all of them were randomly drawn from the population with mean IQ = IQ_mean, then IQ_mean is the best estimator of the average of the IQs of John, ...

There is no sample size large enough for your statement to be true. Only if the first tested subject has an IQ of 100 does the expected value for the remaining (untested) population remain equal to 100. This is a condition made necessary by the problem statement that the mean for the whole population is 100. It is not a matter of interpretation.

You are being needlessly pedantic, i.e., too clever by half. The really important thing is to understand how to reason from the assumptions underlying the probability distribution function, the claims of independence, etc.

Understanding the simple problems is what allows you to extend that rigor into more difficult problems, and even to see that they are difficult and that their preconditions are stringent.

In rereading, it seems a bit harsh, but oh well

Yes, someone did, and yes, the fair coin came up heads a very large number of times in a row. Tyler Cowen even blogged about it years ago, but wasn’t smart enough to understand it, and put it under “things that make you go hmmmm”. I’ll try to find the post.

And yes, it’s doable in a vacuum, and it takes very careful conditions. It’s certainly difficult, and that’s why practically speaking, we consider any given coin toss “fair”.

Of course, this is not the same as the usual coin toss, where you call heads or tails in the air and let the coin drop on the ground. But Alice does seem to have a very good point about the nature of probability as a measure of ignorance instead of a physical system.

There’s a terrific textbook that teaches all of this very well. It’s by E.T. Jaynes, a brilliant physicist who knew everything about stat mech and thermo. He invented the model (and knew it was just a model) known as MAXENT, the Maximum Entropy model, to explain physical systems. He discovered what’s known as the Jaynes-Cummings model for two-state atoms.

The book, Probability Theory: The Logic of Science, was published posthumously. It’s a collection of lecture notes and papers over time that actually explains Bayesian inference and statistical inference, and when it’s valid. He is very clear about what probability theory MEANS in a physics way, and what “inference” means.

We usually understand inference in a causal way–certain conditions CAUSE events. But in Bayesian theory, we use the word “inference” to talk about what we can KNOW about a system. The math of Bayesian inference says we’ve got wet streets, so we KNOW it rained. But that doesn’t mean the wet streets CAUSE rain. Most economists and social scientists don’t understand this and confuse us with their ignorance.

I highly recommend it.

Replies: @map

I like what you write, but I think this is actually incorrect. If you look at the Wikipedia entry on Bayes' Theorem, you see that its formulation assumes a great deal of known probabilities: you assume that the prior probability, the likelihood, and the marginal likelihood are largely known. This might make sense in the context of experimental design, where the observed data are generated by a scientist running an experiment, but I don't see how useful that is when you have no control over an existing physical system. That's the situation that economists and social scientists find themselves in.

Physical scientists operate in the lucky universe of systems being described by a small number of independent variables that have a lot of predictive and explanatory power. Economists do not. Applying Bayesian inference in this context looks sloppy.

Imagine that French physicists discovered that, when using the 1-meter standard held in Sèvres, the average length of fish caught in the Seine increased by about 0.3% per year. Would they conclude that the fish are getting longer, or that the 1-meter standard is getting shorter? They would use the standard to measure, say, the height of the Eiffel Tower, and if its height remained constant to within significantly less than 0.3%/year, they would conclude that the fish indeed are getting longer. If the lengths of all objects measured with the 1-meter standard were growing by about 0.3%/year, they would most likely conclude that their standard sucks, that something is wrong with it. IQ researchers have no luxury of using their IQ test, i.e., their 1-meter standard, on anything but the fish. So from a strictly epistemological point of view, the so-called Flynn effect is just a more elegant tautology hiding the shaky foundation on which IQ research stands.

Hmm. Thought-provoking take.

So what’s the “wrong” answer K&T are trying to elicit to show that people are not fully rational? 150? To show that people tend to underweight priors?

Unless I badly misremember what I learned in college statistics courses, if you’re 3.3 SD’s above the mean, you’re maybe one or two out of a thousand, not a hundred. I’d be suspicious.

The binomial distribution says that the chance of success of 1 (or more) trial(s) out of 50 at 2.1453% each is 70.9%.

“Well, the probability of a member of a 100 IQ population having a 150 IQ is 2.1453%.”

No, it’s 1 in 2330.67, 0.0429%

And the question was whether the very first in the sample of 50 had that score, not whether one out of the 50 did, which is a different question. Given that the inverse rarity (2330) is much higher than the sample size (50), the odds of having one person in the group that high or higher are better estimated by dividing the inverse rarity by the size of the sample, or about 1 in 26 or 27, given a population large enough; with smaller populations the gain in precision from using a fancier method to calculate will be spurious. Quite likely the s.d. isn’t exactly the same size as assumed, so for an s.d. between 14 and 16 over the whole interval from 100 to 150 (generally not true, since the distribution of equal-interval measures of intelligence is closer to log-normal, compressing a wide range of ability into the top normal-curve IQ scores), the error bounds on the inverse rarity would be between 1125 and 5633, so trying for more than one digit of precision is a waste of time.

Replies: @Steve Sailer

A. It was a perfectly legit coincidence?

B. You've somehow screwed up (e.g., you are giving the test for small children to large children, or your sample isn't as random as you hoped, or the kids at this school took the same IQ test last week in a different study, or something else).
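As a rough check on the figures being debated (the 2.1453% vs. 1-in-2330 tail, and the chance of at least one such score among 50 draws), here is a sketch assuming a normal distribution with mean 100 and SD 15:

```python
import math

# Upper-tail probability of scoring >= 150 under an assumed normal(100, 15).
z = (150 - 100) / 15                       # 3.33 standard deviations
p_tail = 0.5 * math.erfc(z / math.sqrt(2))  # one-sided upper tail

print(round(1 / p_tail))                   # inverse rarity, roughly 1 in 2300
print(round(1 - (1 - p_tail) ** 50, 3))    # P(at least one of 50 >= 150), about 2%
```

This supports the correction: the single-draw tail is about 0.043%, not 2.1%, and 2.1% is roughly the chance that at least one of the 50 children scores that high.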

This is an interesting way to put it.

One of the unwritten rules of test-taking is to use all the information available (if possible) and it is a red flag to not use all the information. (Students usually consider it a trick when extraneous information is included.)

This rule comes from understanding how humans interact, that if someone tells you something, they mean for you to ascribe some meaning to it. It is a social cue and comes from a deep understanding of how people communicate. Kahneman sometimes exploits this social understanding by tricking test-takers into being “illogical” by trying to ascribe meaning to superfluous information such as in the librarian question.

Interestingly, in the test-taker question, he gives us relevant information, but expects us to ignore it based on unstated social conventions. He tells us we are researchers, but expects us to ignore that research is often flawed: random samples are often not random, researchers are often too quick to rely on prior research, and some schools do better on IQ tests than others. A robotic test-taker would take all this into account because Kahneman doesn’t explicitly say to ignore it. But, Kahneman will label us as irrational if we don’t apply the unstated social convention that we pretend all data is flawless and all research designs well-executed.

So sometimes Kahneman faults us for using all the information given, and sometimes he faults us for failing to use information not provided.

Answers from 101 to 150 are valid, depending on the strength of your prior that the mean population IQ is 100.

Sorry about the tone. I can get carried away.

Steve really doesn’t like me pointing out that the masochist Eeyores in his commentariat continue to bray. News at 11.

(49*(100*p-150) / (p-1) + 150) / 50

The authors are presumably aware of first-week statistics, as well as lots of wrinkles you haven’t considered.

Correcting for sampling with replacement is spurious precision under every real circumstance where one has enough data to have a decent hope of getting even p less than 0.05 for effect sizes of the magnitude seen in psychology. It’s also spurious precision even in this ideal case because the population size is presumed to be much much larger than the sample size.

The belief that each person “has an IQ” is wrong, anyway — the likelihood that the person who scored 150 in the sample also scored that high in the norming sample is going to be fairly low, perhaps 1 in 3, even without ceiling effects. Likely a retest would be in the low-to-mid 140s, (depending on the test-retest correlation, which would likely be 0.9 – 0.95). Not only are there uncertainties, but even the magnitude of the uncertainties is also uncertain.
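The retest claim follows from the standard regression-toward-the-mean formula, expected retest = mean + r × (score − mean); the 0.9-0.95 correlations are the comment's own assumption:

```python
# Expected retest score for someone who scored 150, given an assumed
# test-retest correlation r and a population mean of 100.
mean, score = 100, 150
for r in (0.90, 0.95):
    expected_retest = mean + r * (score - mean)
    print(r, expected_retest)  # 145.0 and 147.5, i.e., the mid-140s
```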

Replies: @hoots

“That a population Y has some value whose average is x, does not mean that any sample y drawn from that population will have the same average x.”

WRONG! First of all you did not mean to use the term average. You meant to use the term mean.

It is really trivial. It follows from the fact that the expected value, i.e., the mean, is a linear function of the random variable. From linearity it follows that the mean of a sum of variables is equal to the sum of the means.

The mean is the best estimator of the actual average. So if the actual average of IQ’s of John, Susan,…, Jim_N is unknown but we know that all of them were randomly drawn from the population with mean IQ=IQ_mean, then the IQ_mean is the best estimator of the average of IQ’s of John,…

Replies: @CK

The population failed to deliver on the sum of the sample means.

Because that's what they ask us to do.

I took this to be to do with the limits of individual mental capacity (and their inherent stretch-ableness with effort, at least up to a point). I also see no reason to believe that there aren't truths that will be forever beyond any human capacity to stretch far enough to accommodate them. We might be able to build artificial intelligences to grasp them for us, but they won't ever be able to explain most of them to any of us. In effect, they will be able (given means to act) to do magic.

I like your insights into the mental states of knowing. I have similar recollections. I'll just add one more.

I had an experience of the state of knowing without actually knowing, because I was wrong. It was in a dream. A solution to a mathematical problem was revealed to me with all the psychological and emotional ramifications of knowing. But when I woke up and thought about it, the solution was incorrect.

Reminds me of something Pedro said when asked what goes through his mind when he’s playing first base in the late innings of a close, important game.

I paraphrase closely: “First, I pray that they don’t hit the ball to me. Then I pray they don’t hit it to Sax.”

I think the story has been “massaged.”

First off, the whole idea that an experienced pilot would be “terrified beyond belief” when encountering turbulence is absurd. Add on top of that that this person is a combat veteran who exercised command authority. You cannot function in a combat situation if you do not have an absolutely iron grip on your emotions. And you cannot exert effective leadership if you are visibly agitated in any way, let alone exhibiting fear. And the idea that you would display a lack of confidence in a subordinate’s ability to carry out the mission without any proof…. It’s just nonsense.

In any case, here’s a real story about how Stewart behaved during a real emergency aboard a B-52 returning to Guam from a sortie over Viet Nam in 1966: Mr. Stewart Goes to Viet Nam.

The pertinent passage:

As Amos flew into the abort area north of Andersen, the crew started to calculate the flaps-up landing data: airspeed plus-35 knots; landing roll—longer; if drag chute failure—50 percent longer. He then escorted Stewart to the instructor navigator position on the plane’s lower deck. “If I lose control of the aircraft,” Amos said, “I will call out over the intercom ‘bailout’ three times and activate the bailout light. The navigator will be the first to go, creating a large hole by his downward ejection seat.” Amos reassured Stewart that he would do everything he could to regain control of the bomber and would be the last to leave the aircraft.

“Do you understand, General Stewart?” Amos asked.

“Yes, Captain Amos, I understand,” Stewart very calmly answered in his familiar granular voice.

Replies: @Harry Baldwin

I took this to be to do with the limits of individual mental capacity (and their inherent stretch-ableness with effort, at least up to a point). I also see no reason to believe that there aren't truths that will be forever beyond any human capacity to stretch far enough to accommodate them. We might be able to build artificial intelligences to grasp them for us, but they won't ever be able to explain most of them to any of us. In effect, they will be able (given means to act) to do magic.

The solution to this (BTW, Wikipedia says Marilyn vos Savant did not first solve this problem, only helped popularize the correct answer to it) was so counter-intuitive to me that it has actually stuck in my head for decades, since the answer was so bizarre.

Maybe another formulation, which I think is mathematically identical, would help. Call it the Vizzini Wine problem, after Ted Cruz’s favorite film. Vizzini and you each have to drink a cup of wine: you choose one chalice from a set of three that Vizzini has prepared, knowing he has poisoned 2 of the 3 cups. After you choose a cup, Vizzini must spill one of the two remaining chalices, after which you either keep the chalice you first picked or switch it for the remaining untouched one. Then you and Vizzini both drink fully, and at the same time, from the 2 chalices.

I think the underlying psychological intuition helps here: Vizzini would never spill the unpoisoned cup, as that means certain death for him. And while you may have lucked out and picked the safe cup on the first try, you have marginally more “information” about the cup Vizzini chooses not to spill being a bit more likely to be safe, since under every possible situation it will never be the worst, and sometimes may be the better, of the two cups he has to choose from, knowing with perfect information which one is bad.

Probably the best scene to me from the Google-advert movie THE INTERNSHIP was when Vince Vaughn and Owen Wilson hand-wave their way through a Google interview question by challenging the basic assumptions to come up with a trivial, “correct” solution by taking all the complexity out of it. This is probably the worst thing you can do in an actual interview, since your interviewers are more interested in your thought processes (even if your solution is ultimately wrong) than in whether you short-circuit the question entirely by arguing and nitpicking its basic “reality”. So it is funny reading about 30% of the responses here basically doing the same thing and questioning things like whether the true mean of the population can be known, when that is a given of the question.

Probability is a measure of ignorance.

It’s certainly true that in most cases probabilities are about what we know, and not about what is the case in reality. One exception to this, though, seems to be certain quantum phenomena. The best theories we have of quantum events give only probabilistic predictions about, say, in the two slit experiment, through which side a given photon will pass. There is no underlying deterministic account of these events.

I don’t really agree with this statement of yours, at least as I understand it:

In the problem in question, it is merely assumed that we know that the average of all 8th graders in the city is 100. One might imagine any number of scenarios in which this would be well established, and yet in a given random sample we don’t know a priori the IQs of those in the sample, or the mean of the sample. But more important than imagining a scenario under which these assumptions would, in fact, be true, is simply to grant them and work from them. From the standpoint of our current problem, the relevant a priori and a posteriori probabilities are: what do we reasonably believe about the mean of the sample of 50 just on the basis of the assumption that the mean of the population is 100, and that the sample is randomly selected, and what do we reasonably believe about the mean of the sample of 50 if we know further that a randomly selected member has IQ 150?

Conditional probabilities, and the notion of independence, allow us to frame this problem in a sharp way, and draw reasonable conclusions.

Conditional probabilities, independence, and other basic ideas in probability serve as the effective, simplifying framework in which to analyze problems so that we can call them right. But their application is often subtle and difficult enough that very smart people can get things wrong if they either don’t bother to apply them or do so in a sloppy way or are distracted from their correct application by countervailing but ultimately confused intuitions.

I’d say that that’s what went on in the Monty Hall case. In understanding it, one needs to restrict oneself to the question: what do we know, and when did we know it? If one follows this rule carefully, the correct answer pops out of the analysis. If a contestant chooses one of the three doors, then, because he knows nothing about them and is doing so at random, his chances of getting the correct door are 1/3. When Monty Hall chooses a door with a goat behind it, he knows of course which door the contestant chose, and chose a door that had a goat in the light of this. Does the contestant now have any further information as to whether his choice of door was correct? No: Monty Hall would open a door with a goat behind it whether or not the door chosen by the contestant had a car behind it. The chances that the chosen door was right cannot improve from 1/3 because the contestant has no further knowledge about it. But how about the remaining door? Before Monty Hall opened a door with the goat behind it, the contestant would also give that door 1/3 chance of having a car behind it. But after Monty Hall opens the door with the goat behind it, the contestant knows more about that door: it is one of only two possible doors with a car behind it. Now the chances that it has a car behind it are 1/2. 1/2 is bigger than 1/3, so the correct decision is for the contestant to switch doors.

This is basically all there is to the problem. And it’s important to grasp that this is all there is to it, because other countervailing intuitions simply muddy the picture. Knowing what’s correct in many cases involves not only having the right explanation, but understanding that all other approaches miss the real point, and can be dismissed.

Replies: @utu

Read Jaynes. Then we can argue quantum probabilities. He argues quite clearly that most physicists don't get it. If you read Bohr's arguments with Einstein very carefully, you see he just keeps making this point: the probabilities aren't telling us about reality. They're just saying what we know about reality.

But it is ABSURD to play with the toy problem given.

Let's phrase it this way:

I take an IQ test. It comes back 150. What's the probability the test is false?

The occurrence of an IQ geq 150 is a .05% event. The occurrence of a test reporting an IQ geq 150 given an IQ of geq 150 is 98%, say. The occurrence of a test reporting an IQ geq 150 given an IQ of less than 150 is 5%.

Prob(I have iq geq 150 given test geq 150) x prob(test geq 150) = prob(test geq 150 given I have geq 150) x prob (I have iq geq 150)

Prob(I have iq geq 150 given test geq 150) = (.98 x .0005)/(prob(test geq 150 given iq geq 150)prob(iq geq 150) + prob(test geq 150 given iq less 150)(prob(iq less 150)))

Bottom equals

.98 x .0005 + .05 x .9995 ≈ .05

Computing the whole shebang,

Altogether, the probability is 1%.

That is, the probability I have a 150 iq given a test says I do, with some very reasonable numbers for accuracy, is 1%.

1%. 99% the test is false.
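This arithmetic can be checked in a few lines; the sketch below just plugs in the rates stipulated above (0.05% base rate, 98% true-positive rate, 5% false-positive rate):

```python
# Bayes check of the figures above, using the comment's stipulated rates.
prior = 0.0005  # P(IQ >= 150)
sens = 0.98     # P(test >= 150 | IQ >= 150)
fpr = 0.05      # P(test >= 150 | IQ < 150)

# Total probability of the test reporting >= 150 (the denominator).
p_test_high = sens * prior + fpr * (1 - prior)

# Posterior probability the subject really has IQ >= 150.
posterior = sens * prior / p_test_high

print(round(p_test_high, 4))  # about .05
print(round(posterior, 3))    # about .01, i.e., 1%
```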

And we're supposed to start this stupid parlor trick with "it's well established that the mean IQ is 100 and the first test is 150".

So either the mean is wrong or the sample is.

Yes, there may be a value in an economist or social scientist Modeling AS IF it were true, in order to establish some other relation. But here it's a parlor trick.

Even in the best case, it's a model. It's not real. The Map is not the territory. Yes, models have to be simplifying or they are the size of the universe, and models can elucidate other relationships. Still, just a model.

But most of the problems in probability come from forgetting it's a model, or misunderstanding the model. Because the Monty Hall problem is exactly the problem of How Much Knowledge Do We Have. We had more than most people realized--he Never Opens the best reward!-- because they kept oversimplifying the model.

Almost all arguments about probability results come from a disagreement about what we know, or assumptions about what's unknown.

Intolerance, bigotry, separatism, and insularity are the key to wellness for many progressives. Good to know.

I wonder what Sweet Home says when you play it backwards.

(…) the great anecdotes in movie directors’ memoirs are the products of the best storytellers in the world and they may have been fixed in postproduction.

1) As Goethe once remarked, the very nature of storytelling is to play/work with the tension between poetry and truth – that’s why he named his autobiography From My Life: Poetry and Truth.

2) There is no storytelling (not even a bad one) without being fixed in postproduction, so to speak, because wording events is by its very nature “post”.

3) Movie directors know how to make movies, which is not exactly the same as telling a story. There are many reasons for this; maybe the main difference between the storyteller (writer) and the director is that film is a collective enterprise.

(Some directors are good storytellers – but others are merely technicians, providing free space for the actors, cameramen, the writers etc. – and still succeed (sometimes)).

4) Goethe points out that it makes some sense to let the public fantasize about who is the best poet and who’s the worst (= best and worst storyteller). But in the end, in the realm of the arts, the superlative makes less sense than, say, in realms where you can measure the results (sports are a good example – if not a perfect one – the importance of a coach can hardly be completely objectified, I’d guess).

The most important effect of art can’t be nailed down / fixed / measured at all, because what’s being influenced and formed by art are things like taste, like being impressed/moved, getting used to doubts and so on, which are best articulated in a medium whose most important feature is its imprecise nature – and the name of this part-social and part-individual medium, which makes matters even less explicit, is: Language.

If you add to a random sample of 49 one individual with IQ=150 then the best estimate of arithmetic average of sample of 50 is IQ=101.

The moment you obtained the information that among the 50 is one individual with IQ=150 your initial estimate of 100 must be replaced with 101 because you gained extra information so you changed the estimate. It would be incorrect to insist that the best estimate is still IQ=100 and then conclude that the best estimate of the average for the remaining 49 is IQ=98.98.
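The two estimates being contrasted are simple arithmetic:

```python
# Best estimate: one known score of 150, the remaining 49 assumed
# to average the population mean of 100.
best_estimate = (49 * 100 + 150) / 50
print(best_estimate)  # 101.0

# The rejected alternative: forcing the 50-sample average to stay at 100
# would require the other 49 to average (50*100 - 150)/49.
forced_rest = (50 * 100 - 150) / 49
print(round(forced_rest, 2))  # 98.98
```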

Yeah

Imagine that instead of taking a random sample of fifty children and throwing away the first element, you’d taken a random sample of forty-nine children. What would you expect to be the mean of your sample if the population it’s taken from has a mean of one-hundred?

You’d expect the mean to be the same as the population, of course.

Replies: @Opinionator

As a busy 70-year-old, he's learned his time is valuable. He never had a Shakespearean tongue, but found that developing it would have hindered his ability (i.e., made him waste more time) in trying to convince many diverse groups of people to hop on board his projects. Instead of learning how to convince anyone via a honey tongue, he learned the art of sizing someone up to see if they were worth talking to and if they would listen to him (hence his mega-fast dismissal of the Ali G clown):

https://www.youtube.com/watch?v=sP5ElraFHHE

I see the Left is going to go the whole Bush 43/Dan Quayle route and make fun of Trump's speaking abilities to claim that he's stupid. That's a grand plan, I hope they continue it; I never want to interrupt my opponents while they're making a mistake.

Perhaps the last "great" (i.e. traditionally stereotypical) Republican speaker was St. Reagan. Bush 41 was middling; the Dana Carvey SNL impression captured his style perfectly. Cheney was a good speaker, but the Left painted everything he did in a Sith Emperor light. W. gave really good prepared speeches (his post-9/11 speech was awesome) but off the cuff his style was easy to insult. Quayle's potato gaffe haunted him, but his

Murphy Brownspeech was quite excellent, if you ever listen to it. And what they did to Palin....just disgusting.Having watched Trump live at rallies, the man is an excellent speaker: funny, clear, bright, confident, and in total command. He speaks like an old-time union boss to his fellow workers during a rally, which isn't surprising, given his business was in the construction industry. The Left is determined to convince us he's a buffoon and idiot, and they're going to fail, because he's taken fighting with the media to a whole other level, which is awesome to watch, and their classist nature is coming through with each put down.

Is that the one in which he said Islam was peaceful, and that we shouldn’t commit assault and battery, which we already knew?

I would use “awesome” in the adverbial sense, to modify quite another adjective or two.

No, it's 1 in 2330.67, 0.0429%

And the question was whether the very first in the sample of 50 had that score, not whether one out of the 50 did, which is a different question. Given that the inverse rarity (2330) is much higher than the sample size (50), the odds of having one person in the group that high or higher are better estimated by dividing the inverse rarity by the size of the sample, or about 1 in 26 or 27, given a population large enough; with smaller populations the gain in precision from using a fancier method to calculate will be spurious. Quite likely the s.d. isn't exactly the same size as assumed, so for an s.d. between 14 and 16 over the whole interval from 100 to 150 (generally not true, since the distribution of equal-interval measures of intelligence is closer to log-normal, compressing a wide range of ability into the top normal-curve IQ scores), the error bounds on the inverse rarity would be between 1125 and 5633, so trying for more than one digit of precision is a waste of time.

If you start IQ testing and the very first subject scores a 150, what are the odds that:

A. It was a perfectly legit coincidence?

B. You’ve somehow screwed up (e.g., you are giving the test for small children to large children, or your sample isn’t as random as you hoped, or the kids at this school took the same IQ test last week in a different study, or something else).

Replies: @Alice

I phrased it as: the subject got an IQ test score of 150. What's the probability the subject has an IQ of 150?

The answer is 1%.

That is, given a standard distribution with mean 100 in which a 150 is a .05% event, and given some reasonable assumptions about the accuracy of the test (98% true positive and 5% false positive), a 150 from that distribution is overwhelmingly a false response.

I'm sorry I didn't know how to format it nicely. I'll email it to you tomorrow if you ask nicely.

I don't really agree with this statement of yours, at least as I understand it:

In the problem in question, it is merely assumed that we know that the average of all 8th graders in the city is 100. One might imagine any number of scenarios in which this would be well established, and yet in a given random sample we don't know a priori the IQs of those in the sample, or the mean of the sample. But more important than imagining a scenario under which these assumptions would, in fact, be true, is simply to grant them and work from them. From the standpoint of our current problem, the relevant a priori and a posteriori probabilities are: what do we reasonably believe about the mean of the sample of 50 just on the basis of the assumption that the mean of the population is 100, and that the sample is randomly selected, and what do we reasonably believe about the mean of the sample of 50 if we know further that a randomly selected member has IQ 150?

Conditional probabilities, and the notion of independence, allow us to frame this problem in a sharp way, and draw reasonable conclusions.

Conditional probabilities, independence, and other basic ideas in probability serve as the effective, simplifying framework in which to analyze problems so that we can call them right. But their application is often subtle and difficult enough that very smart people can get things wrong if they either don't bother to apply them or do so in a sloppy way or are distracted from their correct application by countervailing but ultimately confused intuitions.

I'd say that that's what went on in the Monty Hall case. In understanding it, one needs to restrict oneself to the question: what do we know, and when did we know it? If one follows this rule carefully, the correct answer pops out of the analysis. If a contestant chooses one of the three doors, then, because he knows nothing about them and is doing so at random, his chances of getting the correct door are 1/3. When Monty Hall chooses a door with a goat behind it, he knows of course which door the contestant chose, and chose a door that had a goat in the light of this. Does the contestant now have any further information as to whether his choice of door was correct? No: Monty Hall would open a door with a goat behind it whether or not the door chosen by the contestant had a car behind it. The chances that the chosen door was right cannot improve from 1/3 because the contestant has no further knowledge about it. But how about the remaining door? Before Monty Hall opened a door with the goat behind it, the contestant would also give that door 1/3 chance of having a car behind it. But after Monty Hall opens the door with the goat behind it, the contestant knows more about that door: it is one of only two possible doors with a car behind it. Now the chances that it has a car behind it are 1/2. 1/2 is bigger than 1/3, so the correct decision is for the contestant to switch doors.

This is basically all there is to the problem. And it's important to grasp that this is all there is to it, because other countervailing intuitions simply muddy the picture. Knowing what's correct in many cases involves not only having the right explanation, but understanding that all other approaches miss the real point, and can be dismissed.

“Now the chances that it has a car behind it are 1/2. 1/2 is bigger than 1/3, so the correct decision is for the contestant to switch doors.” – No, the chances are 2/3, which is bigger than 1/3. The sum of the probabilities of all possible events must equal 1. You got the right answer about switching to the other door only because you know what the answer is supposed to be. You still do not understand it.

Replies: @candid_observer

But the more basic point I made was that the opening of the door by Monty Hall tells the contestant nothing about what lies behind the original door, so the chances that that was right remains at 1/3. That's where most people get the wrong answer, because they assume the chances for the two doors not opened by Monty Hall must be the same. Obviously, the remaining chances after removing 1/3 is 2/3, and the only door it can attach to is the door that one can switch to.

First off, the whole idea that an experienced pilot would be "terrified beyond belief" when encountering turbulence is absurd. Add on top of that that this person is a combat veteran who exercised command authority. You cannot function in a combat situation if you do not have an absolutely iron grip on your emotions. And you cannot exert effective leadership if you are visibly agitated in any way, let alone exhibiting fear. And the idea that you would display a lack of confidence in a subordinate's ability to carry out the mission without any proof.... It's just nonsense.

In any case, here's a real story about how Stewart behaved during a real emergency when aboard a B-52 returning to Guam from a sortie over Viet Nam in 1966: Mr. Stewart Goes to Viet Nam.

The pertinent passage:

As Amos flew into the abort area north of Andersen, the crew started to calculate the flaps-up landing data: airspeed plus-35 knots; landing roll—longer; if drag chute failure—50 percent longer. He then escorted Stewart to the instructor navigator position on the plane’s lower deck. “If I lose control of the aircraft,” Amos said, “I will call out over the intercom ‘bailout’ three times and activate the bailout light. The navigator will be the first to go, creating a large hole by his downward ejection seat.” Amos reassured Stewart that he would do everything he could to regain control of the bomber and would be the last to leave the aircraft. “Do you understand, General Stewart?” Amos asked.

“Yes, Captain Amos, I understand,” Stewart very calmly answered in his familiar granular voice.

Speaking of veterans of the Army Air Corps, Joseph Heller, who flew 60 bombing missions from May to October in 1944, was so frightened by the experience he refused to fly commercial for decades after he returned home.

You'd expect the mean to be the same as the population, of course.

Was this in reply to someone?

Well, you’re right about the ultimate odds being 2/3 for the switch — I got a little careless at the end there.

But the more basic point I made was that the opening of the door by Monty Hall tells the contestant nothing about what lies behind the original door, so the chances that that was right remains at 1/3. That’s where most people get the wrong answer, because they assume the chances for the two doors not opened by Monty Hall must be the same. Obviously, the remaining chances after removing 1/3 is 2/3, and the only door it can attach to is the door that one can switch to.
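The 1/3-versus-2/3 split is easy to settle by brute force. A minimal simulation sketch (Python; the seed and trial count are arbitrary choices, not from the thread):

```python
import random

def monty_trial(switch):
    """One round of the Monty Hall game; returns True if the contestant wins."""
    car = random.randrange(3)      # door hiding the car
    choice = random.randrange(3)   # contestant's initial pick
    # Monty opens a door that is neither the pick nor the car (always a goat)
    opened = next(d for d in range(3) if d != choice and d != car)
    if switch:
        # switch to the one remaining closed door
        choice = next(d for d in range(3) if d != choice and d != opened)
    return choice == car

random.seed(0)
trials = 100_000
stay = sum(monty_trial(False) for _ in range(trials)) / trials
switch = sum(monty_trial(True) for _ in range(trials)) / trials
print(stay, switch)  # stay ≈ 1/3, switch ≈ 2/3
```

The staying contestant wins only when his first pick was right, which happens one time in three; switching wins the other two times in three.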

Correcting for sampling without replacement (the finite-population correction) is spurious precision under every real circumstance where one has enough data to have a decent hope of getting even p less than 0.05 for effect sizes of the magnitude seen in psychology. It's also spurious precision even in this ideal case because the population size is presumed to be much much larger than the sample size.

The belief that each person "has an IQ" is wrong, anyway -- the likelihood that the person who scored 150 in the sample also scored that high in the norming sample is going to be fairly low, perhaps 1 in 3, even without ceiling effects. Likely a retest would be in the low-to-mid 140s (depending on the test-retest correlation, which would likely be 0.9 - 0.95). Not only are there uncertainties, but the magnitude of the uncertainties is itself uncertain.
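The retest figure follows from regression toward the mean under classical test theory: expected retest ≈ population mean + reliability × (score − mean). A sketch (Python; the 0.90–0.95 reliabilities are the comment's assumption, not an established value for any particular test):

```python
def expected_retest(score, mean=100.0, reliability=0.90):
    # regression toward the mean: the observed deviation from the mean
    # shrinks by the test-retest correlation on a retest
    return mean + reliability * (score - mean)

print(expected_retest(150, reliability=0.90))  # 145.0
print(expected_retest(150, reliability=0.95))  # 147.5
```

With these assumed reliabilities the point estimate lands in the mid-140s, in line with the comment's ballpark.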

Fine points, but not relevant to solving the simple word problem presented. If researchers can’t get the simple word problems with highly-constrained conditions correct, then they have little chance of making useful inferences in the real world.

If you start with a false premise any conclusion will do. Stopped reading at this point.

Of course it’s false. That was the point. If you had been reading for comprehension instead of disqualification, you would have seen the simple proof that it’s false. Because it’s false, your answer of 101 is incorrect. But feel free to continue solving the problem that’s in your head instead of the one that was presented.

Probability is a measure of ignorance.

Well, no. It’s been known for over a century that there are Newtonian systems that are chaotically unpredictable.

The distinction between ‘deterministic’ and ‘random’ is far more obscure in theory than many people realize, and in practice there isn’t one.


Chaotic systems are different, with different math forms. We can talk about chaotic attractors or other kind of systems with too many variables to keep track of, but these aren't them. These are determinable entirely from accurate Newtonian mechanics.

This conversation has been enlightening.

I learned that when statisticians hear the first “mean” they hear “mean of a distribution.” Strangely, they hear the second “mean” as “arithmetic mean.” The former allows them to assume that the average IQ (arithmetic mean) for the entire population is still uncertain but with a known distribution mean of 100. Therefore, the statistician assumes that the population of eighth-graders can be treated as infinite for sampling purposes (confirmed by many comments here), allowing each IQ measurement to be independent, resulting in an expected arithmetic mean of 101 for the sample.

IMHO the statisticians are being inconsistent as well as imposing an extremely non-physical assumption that isn’t present in the problem statement. No population has infinite size. For the type of experiment described, the population would actually be very small. When a Bayesian like myself reads this problem, both instances of “mean” are automatically interpreted as “arithmetic mean.” This is done because it is the only interpretation that does not lead to absurdities. The phrase “is known to be” rules out “mean of a distribution” since distributions do not have an objective existence and therefore their properties cannot be known. The Bayesian hears something akin to:

“Scored IQ tests for the entire population of eighth-graders in a city are sitting in a stack. The average (arithmetic mean) score is 100. You pick a sample group of 50 tests from the stack at random. The first score examined is 150. What do you expect the mean IQ score to be for the whole sample of 50 tests?”

Hopefully it is obvious that the answer is not 101.

Notice that the Bayesian interpretation is a physically realizable scenario, unlike the setup imagined by the statistician which requires imaginary populations. The constraints corresponding to the more natural interpretation (finite population and non-objective probability) also demand the removal of the independence assumption.

Of course the authors probably had some formal statistics background, and therefore had the non-Bayesian interpretation in mind, especially if they gave 101 as the answer. Boo.
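For what it's worth, the finite-population reading can be computed directly: if a population of size P has mean exactly 100 and one member scores 150, the remaining P-1 members average (100*P - 150)/(P-1), so a random sample of N known to contain the 150 has expected mean slightly below 101. A Python sketch (the population sizes here are illustrative assumptions):

```python
def finite_sample_mean(pop_size, sample_size, pop_mean=100.0, observed=150.0):
    # mean of the remaining pop_size - 1 members, given an exactly-known population mean
    rest_mean = (pop_mean * pop_size - observed) / (pop_size - 1)
    # expected mean of a random sample known to contain the observed member
    return ((sample_size - 1) * rest_mean + observed) / sample_size

print(round(finite_sample_mean(70, 50), 2))         # 100.29: below 101
print(round(finite_sample_mean(1_000_000, 50), 2))  # 101.0: the infinite-population answer
```

As the assumed population size grows, the finite-population estimate converges to the 101 that the independence assumption gives.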

The distinction between 'deterministic' and 'random' is far more obscure in theory than many people realize, and in practice there isn't one.

It’s true that, for practical purposes, we can’t treat many systems, such as the behavior of individual molecules in a large volume of gas, or likely “chaotic” systems such as the weather, as deterministic. But aren’t those simply systems in which we don’t know all the details of the relevant initial conditions, and which, if we did know them, we would, in theory, be able to predict their behavior?

Quantum events, of course, as I mentioned in another comment, are different. So far as I know they are the only types of events which even theoretically are not susceptible to prediction.

1) All classical systems we know of are composed of quantum systems. And the behavior of the world at the quantum level can trivially be made to influence things on the classical level more than enough for an unaided human to perceive. (Schroedinger's Cat is the traditional example, but you can probably look around your home and find more.)

2) We can't know the state of the weather with completeness. Even ignoring the possibility of 'amplified' quantum processes, attempts to measure the weather to the necessary degree of completeness would change its conditions. And even a highly accurate but imperfect model will rapidly diverge from reality; weather is inherently chaotic, as opposed to (for example) projectile movement.

A deterministic system whose precise state is unknowable, and a random system whose future states are undefined, aren't distinguishable from the perspective of limited information, no matter how extensive.

(100 * (p-1) + 150) / p

Does this equal 100 as stated in the problem? No. Therefore, since the assumption of independence directly contradicts the conditions given, the assumption of independence is false in this problem. It is amazing the lengths to which so many people here are going to prove that they can't reason with probabilities. There are no semantic issues here.

“Of course it’s false. That was the point.” Listen, you still do not get it. The statement “If the expected IQ of each of the untested students is 100” does not make sense. There is no such thing as the “expected IQ of each of the untested students”. Each student has a specific IQ, whether it is expected or not. For every sample of p students from the population with known mean m=100 (this is a well-defined concept) one can estimate (not calculate!) the arithmetic average of the IQ’s in that sample. The best estimate is the mean m=100 regardless of the size of the sample, whether p=10, 49, 50 or 1000. This follows from the definitions of the arithmetic average and the mean (expected value) of a random variable. And this is true regardless of the distribution of the variable, whether Gaussian, uniform, binomial, Poisson, etc. Anyway, we can write that

Avg(p)=[IQ(1)+…+IQ(p)]/p ≈ m=100

The symbol “≈” stands for approximation or an estimate.

Now if we know that one element in the sample has IQ=150, then your formula

Avg(p)≈(100 * (p-1) + 150) / p (=101 for p=50)

gives the best estimate of the average of that sample. We do not know the individual IQ values of the first p-1 elements, but we know that their average is best estimated by m=100, and the IQ of the p-th element is 150. This explains why the formula is valid.

You asked “Does this equal 100 as stated in the problem?” and correctly answered that it does not. It equals 101 for p=50. This is so because we utilize the extra information that one element’s IQ is known, so m=100 is no longer the best estimate we can come up with for Avg(p).

Then you proceed with the nonsense about independence, etc. This problem has nothing to do with statistical or random independence, though we presume that the sample was randomly selected. Why do we presume it? Because we must if we want to find the answer. Without the assumption of random sampling, anything goes and the information about the mean of the population is meaningless, pointless. Somebody who had studied a bit of probability and statistics would know this, so he would be expected to make the proper assumptions.

You also said “It is amazing the lengths to which so many people here are going to prove that they can’t reason with probabilities.” and I agree with it, except that you fall into the category of people who are confused. The bottom line is that it is hard to beat a good rigorous education. Confidence, cockiness, and chutzpah will not replace it, except in the reality you probably saw in the movies. Snap out of it. You are not in a movie.
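The 101 estimate is easy to check by simulation under the independence reading; a Python sketch (the normal distribution with SD 15 is an assumption for the simulation only — the estimate itself does not depend on the distribution's shape):

```python
import random

random.seed(1)
mean, sd, n, trials = 100.0, 15.0, 50, 20_000
total = 0.0
for _ in range(trials):
    # 49 independent draws from the population, plus the one known score of 150
    sample = [random.gauss(mean, sd) for _ in range(n - 1)] + [150.0]
    total += sum(sample) / n
print(total / trials)  # ≈ 101
```

The average of the simulated sample means sits at (100 × 49 + 150)/50 = 101, as the formula says.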

-------------------------------------------
function test_estimators
  populationSize = 70; % effect is stronger for smaller populations
  sampleSize = 50;
  avgIQ = 100;         % average IQ for the population
  testedIQ = 150;      % IQ of the single subject tested
  errorDelta = 1e-9;   % for verifying the population average IQ generated
  nruns = 10000;
  utu_totalError = 0;   % initialize
  hoots_totalError = 0; % initialize

  % average untested IQ is fixed if testedIQ is fixed
  avgUntestedIQ = (avgIQ*populationSize - testedIQ) / (populationSize-1);

  for i = 1:nruns
    % generate a random population that has average IQ = avgIQ and one
    % member with IQ == 150, placed at element 1 for convenience
    IQs = [testedIQ, (populationSize-1) * myrandsimplex(avgUntestedIQ, populationSize-1)];

    % verify this set meets the problem conditions
    if IQs(1) ~= testedIQ || abs(sum(IQs)/populationSize - avgIQ) > errorDelta
      disp('IQ set is no good!')
      break;
    end

    % draw a random sample of sampleSize IQs that includes the member with
    % IQ == 150, taking the rest without replacement from the remaining population
    sampleIQs = [testedIQ, IQs(randperm(populationSize-1, sampleSize-1) + 1)];

    % get the average for the sample
    sampleAvg = mean(sampleIQs);

    % apply utu's estimate
    utu_estimate = (avgIQ * (sampleSize-1) + testedIQ) / sampleSize;
    utu_totalError = utu_totalError + (utu_estimate - sampleAvg); % cumulative error

    % apply hoots' estimate
    hoots_estimate = ((sampleSize-1) * (avgIQ*populationSize - testedIQ) / (populationSize-1) + testedIQ) / sampleSize;
    hoots_totalError = hoots_totalError + (hoots_estimate - sampleAvg); % cumulative error
  end

  disp(['utu''s average error: ', num2str(utu_totalError/nruns)]);
  disp(['hoots'' average error: ', num2str(hoots_totalError/nruns)]);
end

% this function gives a uniform distribution of values between 0.5*avg/D and
% 1.5*avg/D, summing to avg; the shape of the distribution isn't important
% for this problem, try any distribution you like
function X = myrandsimplex(avg, D)
  % D: dimension of the (bounded) simplex
  % X: sum(X) = avg
  X = .5 + rand(1, D); % values from .5 to 1.5
  X = avg * bsxfun(@rdivide, X, sum(X, 2)); % normalize samples
end
-------------------------------------------

I don't really agree with this statement of yours, at least as I understand it:

In the problem in question, it is merely assumed that we know that the average of all 8th graders in the city is 100. One might imagine any number of scenarios in which this would be well established, and yet in a given random sample we don't know a priori the IQs of those in the sample, or the mean of the sample. But more important than imagining a scenario under which these assumptions would, in fact, be true, is simply to grant them and work from them. From the standpoint of our current problem, the relevant a priori and a posteriori probabilities are: what do we reasonably believe about the mean of the sample of 50 just on the basis of the assumption that the mean of the population is 100, and that the sample is randomly selected, and what do we reasonably believe about the mean of the sample of 50 if we know further that a randomly selected member has IQ 150?

Conditional probabilities, independence, and other basic ideas in probability serve as the effective, simplifying framework in which to analyze problems so that we can call them right. But their application is often subtle and difficult enough that very smart people can get things wrong if they either don't bother to apply them or do so in a sloppy way or are distracted from their correct application by countervailing but ultimately confused intuitions.

I'd say that that's what went on in the Monty Hall case. In understanding it, one needs to restrict oneself to the question: what do we know, and when did we know it? If one follows this rule carefully, the correct answer pops out of the analysis. If a contestant chooses one of the three doors, then, because he knows nothing about them and is doing so at random, his chances of getting the correct door are 1/3. When Monty Hall chooses a door with a goat behind it, he knows of course which door the contestant chose, and chose a door that had a goat in the light of this. Does the contestant now have any further information as to whether his choice of door was correct? No: Monty Hall would open a door with a goat behind it whether or not the door chosen by the contestant had a car behind it. The chances that the chosen door was right cannot improve from 1/3 because the contestant has no further knowledge about it. But how about the remaining door? Before Monty Hall opened a door with the goat behind it, the contestant would also give that door 1/3 chance of having a car behind it. But after Monty Hall opens the door with the goat behind it, the contestant knows more about that door: it is one of only two possible doors with a car behind it. Now the chances that it has a car behind it are 1/2. 1/2 is bigger than 1/3, so the correct decision is for the contestant to switch doors.

This is basically all there is to the problem. And it's important to grasp that this is all there is to it, because other countervailing intuitions simply muddy the picture. Knowing what's correct in many cases involves not only having the right explanation, but understanding that all other approaches miss the real point, and can be dismissed.

Trying to one up me by citing *quantum mechanics* as an example of ‘real’ probabilities is rich. As Feynman famously said, if we can’t explain quantum mechanics to a high school student, we don’t really understand it. Read Jaynes. Then we can argue quantum probabilities. He argues quite clearly that most physicists don’t get it. If you read Bohr’s arguments with Einstein very carefully you see he just keeps making this point: the probabilities aren’t telling us about reality; they’re just saying what we know about reality.

But it is ABSURD to play with the toy problem given.

Let’s phrase it this way:

I take an IQ test. It comes back 150. What’s the probability the test is false?

The occurrence of an IQ ≥ 150 is a .05% event. The occurrence of a test reporting an IQ ≥ 150 given an IQ ≥ 150 is 98%, say. The occurrence of a test reporting an IQ ≥ 150 given an IQ below 150 is 5%.

P(IQ ≥ 150 | test ≥ 150) × P(test ≥ 150) = P(test ≥ 150 | IQ ≥ 150) × P(IQ ≥ 150)

P(IQ ≥ 150 | test ≥ 150) = (.98 × .0005) / [P(test ≥ 150 | IQ ≥ 150) × P(IQ ≥ 150) + P(test ≥ 150 | IQ < 150) × P(IQ < 150)]

The denominator equals

.98 × .0005 + .05 × .9995 ≈ .05

Computing the whole shebang, the probability is about 1%.

That is, the probability I have a 150 iq given a test says I do, with some very reasonable numbers for accuracy, is 1%.

1%. 99% the test is false.

And we’re supposed to start this stupid parlor trick with a “it’s well established that the mean iq is 100 and the first test is 150”.

So either the mean is wrong or the sample is.

Yes, there may be a value in an economist or social scientist Modeling AS IF it were true, in order to establish some other relation. But here it’s a parlor trick.

Even in the best case, it’s a model. It’s not real. The Map is not the territory. Yes, models have to be simplifying or they are the size of the universe, and models can elucidate other relationships. Still, just a model.

But most of the problems in probability come from forgetting it’s a model, or misunderstanding the model. Because the Monty Hall problem is exactly the problem of How Much Knowledge Do We Have. We had more than most people realized–he Never Opens the best reward!– because they kept oversimplifying the model.

Almost all arguments about probability results come from a disagreement about what we know, or assumptions about what’s unknown.
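The Bayes step in the calculation above can be written out compactly; a Python sketch using the same assumed rates (0.05% base rate, 98% sensitivity, 5% false-positive rate):

```python
def posterior(prior, sensitivity, false_pos):
    # P(IQ >= 150 | test >= 150) via Bayes' rule
    numerator = sensitivity * prior
    denominator = sensitivity * prior + false_pos * (1 - prior)
    return numerator / denominator

p = posterior(prior=0.0005, sensitivity=0.98, false_pos=0.05)
print(round(p, 3))  # 0.01 -- about a 1% chance the score reflects a true IQ >= 150
```

The result is dominated by the false-positive term in the denominator, which is the point being made: with a rare trait, even a fairly accurate test mostly produces false alarms.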

The distinction between 'deterministic' and 'random' is far more obscure in theory than many people realize, and in practice there isn't one.

No one has ever claimed coin tossing or die rolling exhibits chaotic behavior.

Chaotic systems are different, with different math forms. We can talk about chaotic attractors or other kind of systems with too many variables to keep track of, but these aren’t them. These are determinable entirely from accurate Newtonian mechanics.


An 'attractor' is a state or set of states that a system tends towards across time; it's well known that chaotic systems can still possess attractor states.

'Keeping track' of variables isn't the issue. The three-body problem is a well-known example of a purely Newtonian system with no general closed-form solution. A pendulum suspended from another pendulum is another.

For God's sake, you can find videos on YouTube of the chaotic behavior of double pendulums. It's a common example in children's books on science.

A. It was a perfectly legit coincidence?

B. You've somehow screwed up (e.g., you are giving the test for small children to large children, or your sample isn't as random as you hoped, or the kids at this school took the same IQ test last week in a different study, or something else).

Steve, I did the bayesian work below to address just this.

I phrased it as: the subject got an iq test score of 150. What’s the probability the subject has an iq of 150?

The answer is 1%.

That is, given a normal distribution with mean 100 in which a 150 is a .05% event, and given some reasonable assumptions about the accuracy of the test (98% true positive and 5% false positive), a 150 from that distribution is overwhelmingly a false response.

I’m sorry I didn’t know how to format it nicely. I’ll email it to you tomorrow if you ask nicely.

Chaotic systems are different, with different math forms. We can talk about chaotic attractors or other kind of systems with too many variables to keep track of, but these aren't them. These are determinable entirely from accurate Newtonian mechanics.

What is “accurate Newtonian mechanics”? Is there an inaccurate kind? Just curious.

Systems governed by Newtonian mechanics can be chaotic. Chaotic systems are chaotic not because there are too many variables that need to be kept track of.

The coin toss poses a problem because there are many variables that affect the outcome, and the outcome is very sensitive to minute changes in some of these variables.

With this question, “I take an IQ test. It comes back 150. What’s the probability the test is false?”, did you try to show off that you are fluent in Bayesian probability calculation? I am sorry, but you failed.

You must define what “the test is false” means. Is it that instead of 150 it should be 149? Is it like a measurement error? The test output is a number. The test does not really measure anything that could be measured by different means. The test just produces numbers. The question of being false or not false does not really make sense. In IQ research there is no true IQ that you could compare it to.

When you step on the scale and read 150lb, this number will be different from your actual weight. Would you ask then what is the probability that 150lb is false? No. But if you did, the answer in the real world is that the probability is 1. It is certain that 150lb is false. Your weight changes at the atomic level, by attaching or detaching atoms, on a microsecond time scale. The sensible question is about the error, i.e., what is the error of that scale and with what confidence level? Is my weight 150lb within ±1.5lb with a confidence level of, say, 95%, or ±3lb with a confidence level of 99%?

For convenience, the probability density function (PDF) for a real phenomenon is often expressed as a continuous function rather than a discrete one, even if the phenomenon is discrete (has granularity) in nature. The Gaussian function used in IQ research has a continuous PDF. Because of this, stating that the probability that IQ=150 is, say, 0.05 is incorrect. The probability of any single discrete value is always ZERO for a continuous PDF. You could say, however, that P(150≤IQ<151)=0.05.

Is there something else you would like to impress us with?



Oh, more sleight of hand. Gosh, the probability of any individual value from a continuous PDF is 0, so that 150 has probability 0 too.

You can play bayesian or you can play real probability density functions. But don’t mix them. You want a continuous work up? We can do that. But don’t claim you sampled a 150 from the given distribution.

You can play with the Bayesian inference to show how bad our intuition is, but that same claim shows the claimed example isn’t gonna happen.

If you want to argue the practical–that in practice, we can’t know various physical outcomes, then Mirabile dictu! Probability is a measure of ignorance, like I said.

Let me try one more time:

I have a machine that deposits polymers on a wafer. The machine deposits polymers with mean mass of 100g and a standard deviation of 15g.

I decide to sample 50 such processes. The first one I measure has a mass of 150 grams.

Do I then a) spend my time figuring out what the conditional probability is of the mean for the sample given that outcome, or do I b) immediately start looking for the error in my measurements or in my beliefs about the mean and standard deviation of the machine’s process?

I am glad you have a machine, but it should not distract you from the simple problem that was posted:

"The mean I.Q. of the population of eighth-graders in a city is known to be 100. You have selected a random sample of 50 children for a study of educational achievement. The first child tested has an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?"

In this problem there is nothing about measurements or measurement errors, and there is no need to invoke conditional probability. Some people here got sidetracked because the problem is about IQs. Let me rephrase the problem as follows:

The average age in the city is Xavg=27 years old. We selected a random sample of N citizens of the city. Question 1: What is the best estimate of the average age of the sample? Question 2: We know that one person in the sample is X1=67 years old. What is the best estimate of the average age of the sample?

Answer1: Xavg

Answer2: [Xavg*(N-1)+X1]/N

Furthermore, you and others here start speculating about probabilities given one kid with IQ=150, and somebody even starts spouting about the IQ of Ashkenazis. This is pointless, irrelevant and, most importantly, wrong. Approaching this problem you do not even know what the distribution of the random variable is. Who says it must be Gaussian? What if the distribution is binary: 50% have IQ=150 and 50% have IQ=50, with a mean of 100? This is a mathematical problem that does not have to have anything to do with any reality that you know. It is a mathematical reality in which you clearly do not seem to feel very comfortable.

It’s not clear that it’s a pure math problem; it’s from Kahneman, who is a psychologist and behavioral economist, not a mathematician. If it is supposed to have anything to do with real-world I.Q.s, there is a mountain of empirical evidence that they are in fact normally distributed.

However, how well the mean of the population estimates the mean of the sample depends on the sample size N and the type of distribution. As N increases, the differences due to distribution type diminish because of the central limit theorem. But Kahneman did not ask about the error of the estimate.

I found it rather depressing to see quite a few commenters going far off on a tangent and missing the point of a simple math problem. How could one hope for people to come to agreement on beliefs, politics or philosophy, where they do not share common definitions and the language is much less precise than in mathematics?

Individual prowess, and the momentum that these guys’ feats engender, are much more important in a short series. The team with the studs wins a 5- or 7-game series.

Steve Garvey had been a bad third baseman because he had a terrible arm, but he was a valuable defensive first baseman because he almost never failed to scoop up a throw in the dirt. Cey, Russell, and Lopes were told to aim low and Garvey would scoop it out. This vacuum cleaner knack of Garvey's solidified the longest running infield in history: eight years, about twice any other foursome.

The Red Sox’ George Scott was another 3rd-to-1st convert who could really pick it. Strength up the middle is paramount in baseball, but if your infield is iffy you need a good fielder at 1st.

Alice knows more about probability than I do, but I will try to explain what I think her basic point is in simpler terms, using the Monty Hall problem.

In physical reality, or God’s universe, there is a car behind one door and a goat behind the other two. This physical reality doesn’t change regardless of what you do or Monty Hall does, and is not affected by probability. The position of the car is just there and independent of the other actors.

But the player doesn’t know what is behind each door, so probability is a tool to measure the likelihood of a given door being associated with the car. And while the door with the car is always there, the probability of a particular door hiding the car will change as the player gains more information. Monty Hall gives the player more information.

According to the rules, Monty Hall can not give the player any additional information about the door he or she first selected. Without additional information the probability here can’t change. The aggregate probability of the two non-selected doors will therefore not change, there remains a two thirds chance that one of them will have the car, and a one third chance the door the player first selected will have a car. But he does give more information about each of the two non-selected doors, by showing that one of them definitely has a goat (the probability of it having the car going from one third to zero). The trick is understanding that this gives more information about the other non-selected door too and therefore changes its probability.
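That reasoning is easy to verify empirically. Here is a minimal simulation (my sketch, in Python rather than the Octave used later in this thread) that plays the game both ways:

```python
import random

def monty_hall(trials, switch):
    """Play the Monty Hall game `trials` times; return the win rate."""
    wins = 0
    for _ in range(trials):
        car = random.randrange(3)    # door hiding the car
        pick = random.randrange(3)   # player's initial choice
        # Monty opens a door that hides a goat and is not the player's pick
        opened = next(d for d in (0, 1, 2) if d != car and d != pick)
        if switch:
            # switch to the one remaining closed door
            pick = next(d for d in (0, 1, 2) if d != pick and d != opened)
        wins += (pick == car)
    return wins / trials
```

Running `monty_hall(100000, True)` comes out near 2/3, while `monty_hall(100000, False)` stays near 1/3, matching the aggregate-probability argument above.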

And with the student IQ problem, I think that commentators here are getting too into the weeds trying to ascertain the actual IQs, when the problem doesn’t give us much information at all, and some of the information it gives us is not relevant. All it tells us is that the mean of a whole population is known to be 100, so our best guess is that the mean of a random sample drawn from that population will be 100, absent other information, which we don’t have. Since it’s perfectly possible for there to be an outlier of 150 in this sample, finding one such outlier tells us nothing. There is a 2% chance in any given instance of an outlier of 150 occurring in a sample with a mean of 100, as some commentators have pointed out, meaning one in five. According to the laws of probability, this means that in any given sample of fifty with a mean of 100, an outlier of 150 will almost always occur, as will an outlier of 50 (well, depending on the standard deviation, but we are not given that information either). The mean of a sample of fifty out of a population with a mean of one hundred will most likely be one hundred, absent other information, and a lot of people here are busy adding information that has not been given into this problem. This doesn’t mean that that is the actual mean, just our best guess.



There’s no reason to take the original problem to be about the “real” IQ of the population, or of individual members. The problem demonstrates the same intended point if it is posed entirely in terms of “measured” IQs. Considering the question of “real” IQ is a distraction from that point.

The fact that IQ’s are normally distributed has no bearing on the solution of the problem. The solution is the same whether IQ is normal or not.

However, how well the mean of the population estimates the mean of the sample does depend on the sample size N and the type of distribution. As N increases, the differences due to distribution type diminish because of the central limit theorem. But Kahneman did not ask about the error of the estimate.

I found it rather depressing to see quite a few commenters going far off on a tangent and missing the point of a simple math problem. How could one hope for people to come to agreement on beliefs, politics or philosophy, where they do not share common definitions and the language is much less precise than in mathematics?

Quantum events, of course, as I mentioned in another comment, are different. So far as I know they are the only types of events which even theoretically are not susceptible to prediction.

Two points:

1) All classical systems we know of are composed of quantum systems. And the behavior of the world at the quantum level can trivially be made to influence things on the classical level more than enough for an unaided human to perceive. (Schroedinger’s Cat is the traditional example, but you can probably look around your home and find more.)

2) We can’t know the state of the weather with completeness. Even ignoring the possibility of ‘amplified’ quantum processes, attempts to measure the weather to the necessary degree of completeness would change its conditions. And even a highly accurate but imperfect model will rapidly diverge from reality; weather is inherently chaotic, as opposed to (for example) projectile movement.

A deterministic system whose precise state is unknowable, and a random system whose future states are undefined, aren’t distinguishable from the perspective of limited information, no matter how extensive.


Chaotic systems are different, with different math forms. We can talk about chaotic attractors or other kind of systems with too many variables to keep track of, but these aren't them. These are determinable entirely from accurate Newtonian mechanics.

Wrong.

Wrong.

An ‘attractor’ is a state or set of states that a system tends towards across time; it’s well known that chaotic systems can still possess attractor states.

‘Keeping track’ of variables isn’t the issue. The three-body problem is a well-known example of a purely Newtonian system that has no general closed-form solution. A pendulum suspended from another pendulum is another.

For God’s sake, you can find videos on YouTube of the chaotic behavior of dual pendulums. It’s a common example in children’s books on science.
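You don’t even need a pendulum to see it. A one-line chaotic map shows the same sensitive dependence on initial conditions; this Python sketch (mine, using the logistic map as a stand-in for the double pendulum, which would need an ODE solver) iterates two starting points that differ by only 1e-10:

```python
def iterate(x, steps):
    """Iterate the chaotic logistic map x -> 4x(1-x) for `steps` steps."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a = iterate(0.3, 50)
b = iterate(0.3 + 1e-10, 50)
# After 50 steps the two trajectories bear no resemblance to each other,
# even though the update rule is fully deterministic.
```

The initial gap roughly doubles each step, so after 50 steps it has been amplified by about 2^50 and the trajectories are effectively unrelated.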

I have a machine that deposits polymers on a wafer. The machine deposits polymers with a mean mass of 100g and a standard deviation of 15g.

I decide to sample 50 such processes. The first one I measure has a mass of 150 grams.

Do I then a) spend my time figuring out what the conditional probability is of the mean for the sample given that outcome, or do I b) immediately start looking for the error in my measurements or in my beliefs about the mean and standard deviation of the machine's process?

“I have a machine that deposits polymers on a wafer. The machine deposits polymers with mean mass of 100g and a standard deviation of 15g.”

I am glad you have a machine, but it should not distract you from the simple problem that was posted:

“The mean I.Q. of the population of eighth-graders in a city is known to be 100. You have selected a random sample of 50 children for a study of educational achievement. The first child tested has an I.Q. of 150. What do you expect the mean I.Q. to be for the whole sample?”

In this problem there is nothing about measurements or errors of measurement, and there is no need to invoke conditional probability. Some people here got sidetracked because the problem is about IQ’s. Let me rephrase the problem as follows:

The average age in the city is Xavg=27 years old. We selected a random sample of N citizens of the city. Question 1: What is the best estimate of the average age of the sample? Question 2: We know that one person in the sample is X1=67 years old. What is the best estimate of the average age of the sample?

Answer1: Xavg

Answer2: [Xavg*(N-1)+X1]/N
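The two answers are one-liners in code. A quick Python sketch (mine; the function name is just for illustration) with the numbers from the IQ version, N=50, Xavg=100, X1=150:

```python
def best_estimate(pop_mean, n, known=None):
    """Best estimate of a random sample's mean, given the population mean
    and optionally the value of one known member of the sample."""
    if known is None:
        return pop_mean                      # Answer 1: just the population mean
    return (pop_mean * (n - 1) + known) / n  # Answer 2: one member is known

print(best_estimate(100, 50))        # Answer 1: 100
print(best_estimate(100, 50, 150))   # Answer 2: 101.0
```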

What are the odds that “Alice” is in fact a she?

Exactly! Why do you think some people here are so easily distracted and go astray? Is it because they are IQ buffs here at iSteve? If the problem were about apples and their weight, would they get it right? Or is it their innumeracy, that they never had to solve any problem in elementary statistics?


I think you’re putting a lot of weight here on the term “unknowable”. I had originally made a distinction between deterministic systems which are, in theory, perfectly predictable, but not practically so. Is a chaotic system such as the weather whose full initial conditions are unknowable for what seem to be practical, not theoretical, reasons really unpredictable in the same sense as with quantum events? For a system like the weather, its chaotic behavior springs from the major long run consequences of quite trivial differences in initial conditions. But does it importantly differ from our failure to understand fully the behavior of gas molecules in a tank? In the case of the gas molecules, we can’t, for what I would regard as practical reasons, know all of the relevant behavior of the molecules, and so make perfect predictions. This is true even though these molecules may be considered to conform perfectly to simple Newtonian projectile laws. Every once in forever the gas molecules in a tank will all go to a corner of the tank, which we would be able theoretically to predict exactly if we knew all the behavior of the individual gas molecules, but can only think about probabilistically in the absence of that complete detail.

So what’s “unknowable” in this context, theoretically speaking? Doesn’t it make more sense to call these limitations “practical”? If we can make a perfect prediction of a given system only by use of a computer which would be so large it could never be constructed in the lifetime of the universe, or a computer whose circuits must operate faster than the speed of light will permit, is that a theoretical limitation or, more plausibly, a practical one?

As I said, I exclude quantum events, because, at minimum, we don’t at this time have anything resembling a deterministic account of them, rather than a probabilistic one. The only interesting question is whether there is even a possibility of something resembling a deterministic explanation of them. From what I understand, Bell’s Theorem, among other things, seems to present pretty good evidence that we’ll never get a satisfactory deterministic account.

On the other hand, while it’s easy to contrive experiments like Schrodinger’s Cat that allow quantum events directly to affect macro outcomes, in ordinary systems it seems this virtually never happens. I suppose that in some chaotic systems even the trivial effects of quantum events might alter conditions enough that, somewhere far, far down the line the system takes an otherwise unpredicted turn, even with perfect knowledge of its initial conditions.

Avg(p)=[IQ(1)+...+IQ(p)]/p ≈ m=100

The symbol "≈" stands for approximation or an estimate.

Now if we know that one element in the sample has IQ=150, then your formula

Avg(p)≈(100 * (p-1) + 150) / p (=101 for p=50)

gives the best estimate of the average of that sample. We do not know the individual IQ's of the first p-1 elements, but we know that their average is best estimated by m=100, and the IQ of the p-th element is 150. This is why the formula is valid.

You asked "Does this equal 100 as stated in the problem? " and correctly answered that it does not. This equals to 101 for p=50. This is so because we utilize the extra information we had that one element's IQ is known, so no longer m=100 is the best estimate we can come up with for the Avg(p).

Then you proceed with the nonsense about independence, etc. This problem has nothing to do with statistical or random independence, though we presume that the sample was randomly selected. Why do we presume it? Because we must if we want to find the answer. Without the assumption of random sampling, anything goes and the information about the mean of the population is meaningless. Somebody who had studied a bit of probability and statistics would have known this, so it would be expected of him to make the proper assumptions.

You also said "It is amazing the lengths to which so many people here are going to prove that they can’t reason with probabilities." and I agree with it except that you fall in the category of the people who are confused. The bottom line is that it is hard to beat good rigorous education. Confidence, cockiness, chutzpah will not replace it, except for reality you probably saw in the movies. Snap out of it. You are not in the movie.

Please tell me you’re merely an academic or hobbyist with lots of time on your hands and that you are not attempting to inflict any actual engineering on the world, especially anything related to civil, medical, etc. that might have a public safety impact. Also, I’d love to know where you studied, if you don’t mind.


Since the math alone isn’t doing the trick, maybe some data would help. This runs in Octave and should work in Matlab too. Play with the parameters and run it as many times as you need to convince yourself that your estimate is biased and mine is not.

-------------------------------------------

function test_estimators

populationSize = 70; %effect is stronger for smaller populations

sampleSize = 50;

avgIQ = 100; %average IQ for the population

testedIQ = 150; %IQ of the single subject tested

errorDelta = 1e-9; %for verifying the population average IQ generated

nruns = 10000;

utu_totalError = 0; %initialize

hoots_totalError = 0; %initialize

%average untested IQ is fixed if testedIQ is fixed

avgUntestedIQ = (avgIQ*populationSize - testedIQ) / (populationSize-1);

for i = 1:nruns

%generate a random population that has average IQ = avgIQ and one member with IQ == 150

%place the tested IQ at element 1 for convenience

IQs = [testedIQ, (populationSize-1) * myrandsimplex(avgUntestedIQ,populationSize-1)];

%verify this set meets the problem conditions

if IQs(1) ~= testedIQ || abs(sum(IQs)/populationSize - avgIQ) > errorDelta

disp('IQ set is no good!')

break;

end

%draw a random sample of sampleSize IQs that includes the member with IQ == 150

sampleIQs = [testedIQ, IQs(ceil((sampleSize-1)*rand(1,sampleSize-1))+1)];

%get the average for the sample

sampleAvg = mean(sampleIQs);

%apply utu’s estimate

utu_estimate = (avgIQ * (sampleSize-1) + testedIQ) / sampleSize;

utu_totalError = utu_totalError + (utu_estimate - sampleAvg); %cumulative error

%apply hoots’ estimate

hoots_estimate = ((sampleSize-1) * (avgIQ*populationSize-testedIQ) / (populationSize-1) + testedIQ) / sampleSize;

hoots_totalError = hoots_totalError + (hoots_estimate - sampleAvg); %cumulative error

end

disp(['utu''s average error: ', num2str(utu_totalError/nruns)]);

disp(['hoots'' average error: ', num2str(hoots_totalError/nruns)]);

end

%this function gives a uniform distribution of values between 0.5*avg and 1.5*avg, summing to avg

%the shape of the distribution isn't important for this problem, try any distribution you like

function X = myrandsimplex(avg,D)

% D: dimension of the (bounded) simplex

% X: sum(X) = avg

X = .5 + rand(1,D); %values from .5 to 1.5

X = avg*bsxfun(@rdivide,X,sum(X,2)); % Normalize samples

end

-------------------------------------------



Oh, and if you feel tempted to complain about my code, I challenge you to provide your own version that properly models the population and sample described in the problem. The single tested IQ value is a subset of the sample and the sample is a subset of the population. The sample is an otherwise-randomly drawn set from the population.


So much misinformation in this whole thread. For example:

No. Very far off (a factor of 40!). An IQ above 150 is about 1/2000 (0.05%) per http://www.iqcomparisonsite.com/iqtable.aspx

An IQ above 131 is about 2%

Also not sure how you got from 2% to one in five.

Not true per above, but more importantly the problem stated was: “The first child tested has an I.Q. of 150.” That means a probability of 1/2000 for the case they specified (i.e. first child, not “present in the entire sample”).

IQ is typically quoted with an SD of 15 so that is what I am assuming.


Why in the world did you choose a uniform distribution from (approx) 50 to 150 to represent IQ? That is nothing like a true IQ distribution.

Your comment says:

%the shape of the distribution isn’t important for this problem, try any distribution you like

Perhaps, but if that is so, why not use something more realistic (normal being the obvious choice)?


No particular reason. As I mentioned, go ahead and change rand(1,D) to randn(1,D) or any other distribution you like. The mean performance of the estimators is the same regardless. Note that the problem included no statement about the distribution other than its mean. Therefore, the answer cannot depend on the type of distribution chosen.
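For what it’s worth, the distribution-independence claim is easy to check with a quick Monte Carlo under the i.i.d. (infinite-population) reading of the problem. This Python sketch of mine fixes the first member of each 50-child sample at 150 and draws the other 49 from any distribution with mean 100:

```python
import random

def mean_sample_avg(draw, trials=20000, n=50, first=150.0):
    """Average, over many trials, of the mean of a sample whose first
    member is fixed at `first`; the other n-1 members come from `draw`."""
    total = 0.0
    for _ in range(trials):
        rest = sum(draw() for _ in range(n - 1))  # the 49 untested members
        total += (first + rest) / n
    return total / trials

normal = lambda: random.gauss(100, 15)     # normal with mean 100
uniform = lambda: random.uniform(50, 150)  # uniform with mean 100
```

Both `mean_sample_avg(normal)` and `mean_sample_avg(uniform)` come out near (49*100 + 150)/50 = 101: under this reading only the mean of the distribution matters, not its shape.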

Well, except for the problem explicitly referring to I.Q., which in the real world has a known distribution (roughly normal, but perhaps with fat tails). And given that they used a specific value (150), I also think it is reasonable to assume a typical real-world SD like 15.

I pretty much agree with this. But as I mentioned earlier, if you’re going to choose arbitrarily why not choose something which at least resembles the real world? (I was surprised you did that as an engineer. I thought physicists were the ones who preferred spherical cows?)

This type of problem tends to annoy me. iSteve hit the nail on the head with “I think I know what answer Kahneman wants.” (That was my reaction as well, and my first instinct was indeed what Kahneman wanted.) I have spent far too much time trying to make that kind of call when people try to be smart/tricky beyond their (or my ; ) ability. The usual examples involve making very specific (sometimes implicit and/or non-real-world!) assumptions while ignoring important real-world issues, i.e. the kind of assumptions which, if you received them in a spec as an engineer, would prompt a request for clarification, at least if you wanted any chance of your creation working properly in the real world.

A good example of such assumptions can be seen in your code (in addition to using a uniform distribution). To make things work you had to explicitly renormalize the generated random numbers so their mean was exactly the expected mean. (I noticed this yesterday when I modified your code to use a normal distribution to double check in MATLAB. FWIW, I had some character encoding issues with minuses and single quotes in your posted code, maybe caused by the UR posting software or by Octave/MATLAB differences.) The errors introduced by random sampling for small populations were larger than utu’s error (also really relevant only for very small populations) you were complaining about, thus triggering your error checking code.

However, this problem is solved without ANY choice of prior. We don’t need one. It is a mathematical fact that the answer is exactly the same regardless of the distribution chosen.

I went ahead and wrote an improved script that should eliminate any confusion about the sampling method. I even drew from a gaussian distribution to show that we get the same result. This script tabulates the performance of both estimators for any result of the single IQ measurement. The error bars represent a standard error of the mean, which goes as sqrt(variance/n) and thus are larger at the extremes simply because there are fewer occurring cases at those values (smaller n) due to drawing from a gaussian.

As you point out, we have to normalize the population IQs to get the required mean, since that is what the problem states.

-------------------------------------------

function test_estimators_mean_and_varOfMean

populationSize = 100; %effect is stronger for smaller populations

sampleSize = 50;

avgIQ = 100; %average IQ for the population

errorDelta = 1e-12; %for verifying the population average IQ generated

nruns = 10000;

numBins = 200;

utuError_mean = zeros(1,numBins); %initialize

utuError_s2 = zeros(1,numBins); %initialize

hootsError_mean = zeros(1,numBins); %initialize

hootsError_s2 = zeros(1,numBins); %initialize

count_records = zeros(1,numBins);

h = waitbar(0);

for i = 1:nruns

%generate the population IQs meeting problem constraints

popIQs = generateIQset(avgIQ, populationSize);

%verify this set meets the problem conditions given

if length(popIQs) ~= populationSize || abs(mean(popIQs) - avgIQ) > errorDelta

disp('IQ set is no good!')

break;

end

%shuffle the IQ set

popIQs = popIQs(randperm(populationSize));

%elements (1 : sampleSize) of popIQs are the sample

sampleIQs = popIQs(1:sampleSize);

%element 1 is the tested member

testedIQ = sampleIQs(1); % == popIQs(1)

%get the average for the sample

sampleAvg = mean(sampleIQs);

%apply utu's estimate

utu_estimate = (avgIQ * (sampleSize-1) + testedIQ) / sampleSize;

utu_error = utu_estimate - sampleAvg;

%apply hoots' estimate

hoots_estimate = ((sampleSize-1) * (avgIQ*populationSize-testedIQ) / (populationSize-1) + testedIQ) / sampleSize;

hoots_error = hoots_estimate - sampleAvg;

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

%record results in bins corresponding to tested IQ, one bin for each rounded IQ value

bin = round(testedIQ);

count_records(bin) = count_records(bin) + 1;

n = count_records(bin);

%update mean and standard deviation for utu's estimator error

utuPreviousMean = utuError_mean(bin);

utuError_mean(bin) = (utu_error + (n-1)*utuPreviousMean) / n;

if n == 2

utuError_s2(bin) = (1/n) * (utu_error - utuPreviousMean) * (utu_error - utuPreviousMean);

elseif n > 2

utuError_s2(bin) = (n-2)/(n-1) * utuError_s2(bin) + (1/n) * (utu_error - utuPreviousMean) * (utu_error - utuPreviousMean);

end

%update mean and standard deviation for hoots' estimator error

hootsPreviousMean = hootsError_mean(bin);

hootsError_mean(bin) = (hoots_error + (n-1)*hootsPreviousMean) / n;

if n == 2

hootsError_s2(bin) = (1/n) * (hoots_error - hootsPreviousMean) * (hoots_error - hootsPreviousMean);

elseif n > 2

hootsError_s2(bin) = (n-2)/(n-1) * hootsError_s2(bin) + (1/n) * (hoots_error - hootsPreviousMean) * (hoots_error - hootsPreviousMean);

end

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

if ~rem(i,round(nruns/100))

waitbar(i/nruns, h);

end

end

close(h)

figure;

hold on;

h1 = errorbar(utuError_mean, sqrt(utuError_s2 ./ count_records), 'r.');

h2 = errorbar(hootsError_mean, sqrt(hootsError_s2 ./ count_records), '.');

z = axis; axis([40 160 -1.5 1.5]);

legend([h1 h2], 'mean error of utu''s estimator', 'mean error of hoots'' estimator')

xlabel('Tested IQ value')

ylabel('Mean Error')

title(['Results for population size = ', num2str(populationSize), ' sample size = ', num2str(sampleSize), ' average IQ = ', num2str(avgIQ)]);

figure; plot(count_records,'.')

title('number of records')

xlabel('Tested IQ value')

end

% generateIQset(avgIQ, populationSize)

% may draw from any distribution as long as the result meets two conditions:

%

% 1) IQset has size populationSize

% 2) mean(IQset) == avgIQ

%

function IQset = generateIQset(avgIQ, populationSize)

lowIQ = 15;

highIQ = 185;

%gaussian distribution

stdev = 15; %realistic standard deviation

IQset = avgIQ + stdev*randn(1,populationSize);

%prevent out of bounds values

IQset = min(max(IQset,lowIQ), highIQ);

%normalize so that mean(IQset) == avgIQ

IQset = IQset + avgIQ - mean(IQset);

end

Plot of Estimator Performance

The choice of distribution will only affect the size of the error bars, not the mean performance. You can always compensate by just making more runs.

utu, if you're still lurking, please stare at that plot for a minute and then seek help.

Who has the correct answer?

The correct answer is “there’s no way to know”, unless you know the “n” for the “population of eighth-graders in [the] city”.

I assume the answer Kahneman wants is that the expected mean of the sample is now 150(.02) + 100(.98) = 101, since he assumes we "know" with certainty that the expected mean of the remaining 49 draws is 100. But this is flawed because for actual humans, absolute faith that we "know" the population mean is irrational.

To be picky, even Kahneman's answer of 101 is wrong (as Hoots points out). We are sampling without replacement, so even if the population mean is 100, we have removed one student with 150, so the mean of the remaining sample is expected to be LESS than 100. How much less depends on the population size relative to the sample size. In the extreme, if the population is only 50, then we expect the mean of the entire sample of 50 (including the first student with 150) to be 100, not 101.
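That finite-population point is easy to verify with a few lines. Here is a quick sketch (in Python rather than the thread's MATLAB; the function name is mine, the numbers are the problem's) of the expected sample mean for an assumed population size:

```python
def expected_sample_mean(pop_size, pop_mean=100.0, first_iq=150.0, sample_size=50):
    # Sampling without replacement: the remaining 49 children are drawn
    # from the population minus the first tested child.
    remaining_mean = (pop_mean * pop_size - first_iq) / (pop_size - 1)
    return (first_iq + (sample_size - 1) * remaining_mean) / sample_size

print(expected_sample_mean(50))      # population of exactly 50: expect 100, not 101
print(expected_sample_mean(30000))   # big-city population: essentially Kahneman's 101
```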

That’s not being picky at all. Isn’t your “picky” analysis — i.e., determining the average of the other 49 in the sample — the meat of the problem?

Sure you have to finish by multiplying that average by 49, adding 150, and then dividing that sum by 50, but Kahneman is testing our ability to determine the average of the other 49 in the sample.

Right.

https://www.amazon.com/Probability-Theory-E-T-Jaynes/dp/0521592712/ref=mt_hardcover?_encoding=UTF8&me=

The book, Probability Theory: The Logic of Science, was published posthumously. It's a collection of lecture notes and papers over time that actually explains Bayesian inference and statistical inference, and when it's valid. He is very clear about what probability theory MEANS in a physics way, and what "inference" means.

We usually understand inference in a causal way--certain conditions CAUSE events. But in Bayesian theory, we use the word "inference" to talk about what we can KNOW about a system. The math of Bayesian inference says we've got wet streets, so we KNOW it rained. But that doesn't mean the wet streets CAUSE rain. Most economists and social scientists don't understand this and confuse us with their ignorance.

I highly recommend it.
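For concreteness, the wet-streets inference can be put into numbers. This is a toy Bayes'-rule sketch in Python; every probability here is made up purely for illustration:

```python
# Toy numbers, all assumed: inference runs from evidence (wet streets)
# back to the hypothesis (rain), with no claim that wet streets cause rain.
p_rain = 0.2                   # prior probability of rain
p_wet_given_rain = 0.95        # likelihood
p_wet_given_no_rain = 0.10     # sprinklers, street cleaning, etc.

# Marginal likelihood of seeing wet streets at all
p_wet = p_wet_given_rain * p_rain + p_wet_given_no_rain * (1 - p_rain)

# Bayes' rule: what we can KNOW about rain, given wet streets
p_rain_given_wet = p_wet_given_rain * p_rain / p_wet
print(p_rain_given_wet)        # ~0.70 with these assumed numbers
```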

“We usually understand inference in a causal way–certain conditions CAUSE events. But in Bayesian theory, we use the word “inference” to talk about what we can KNOW about a system. The math of Bayesian inference says we’ve got wet streets, so we KNOW it rained. But that doesn’t mean the wet streets CAUSE rain. Most economists and social scientists don’t understand this and confuse us with their ignorance.”

I like what you write, but I think this is actually incorrect. If you look at the wikipedia entry of Bayes’ Theorem, you see that the formulation of Bayes’ Theory assumes a great deal of known probabilities: you assume that the prior probability, the likelihood and the marginal likelihood are largely known. This might make sense in the context of experimental design, where the observed data are generated by a scientist running an experiment, but I don’t see how useful that is when you have no control over an existing physical system. That’s the situation that economists and social scientists find themselves in.

Physical scientists operate in the lucky universe of systems being described by a small number of independent variables that have a lot of predictive and explanatory power. Economists do not. Applying Bayesian inference in this context looks sloppy.


If you bump up nruns, you’ll see something like this:

Plot of Estimator Performance

The choice of distribution will only affect the size of the error bars, not the mean performance. You can always compensate by just making more runs.

utu, if you’re still lurking, please stare at that plot for a minute and then seek help.



Not sure if we’re talking past each other somehow. My comment was that utu’s error (I think it would be more accurate to call it a large population assumption, but perhaps utu can clarify) was only relevant for small population sizes. I don’t see what bumping up nruns has to do with population size.

Regarding population sizes. The problem statement says: “population of eighth-graders in a city”

Chicago (as an example) has >390,000 students

http://www.chicagotribune.com/ct-chicago-public-schools-enrollment-met-20151023-story.html

so assuming this refers to K-12 and is relatively evenly distributed we have about 30,000 students in eighth grade.

This is far from the populations of 70 or 100 you have been using.


Maybe we’re talking past each other. I’ll reemphasize, because it seems under-appreciated, that the distribution of IQs has no bearing on the solution. Yes, utu’s estimate will be close for large populations. I chose a small population for simulation because that makes it quicker to experimentally verify the correctness of the solution versus the error in utu’s formula. It also confirms that the methodology is sound and can be applied to problems where assuming an infinitely large population is inappropriate.


I chose to simulate a low population just to provide quicker confirmation of the solution. For large populations, utu’s formula will be close.

The thing is you don’t need to assume an infinitely large population. Just assume a population large enough that the error from ignoring the effect of removing the first person from the population becomes smaller than other relevant errors (e.g. sampling error). If I run your code with a population of 700 (smaller than most cities’ eighth grade populations IMHO) the effect is almost gone.

It would be interesting to graph how the errors compare with changing population size (perhaps on a log scale) rather than changing initial IQ.
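No simulation is needed for that graph: subtracting utu's formula from hoots' gives a bias of (sampleSize-1)/sampleSize * (testedIQ - mean)/(p-1). A quick Python sketch of that error versus population size (the function name is mine; the numbers are the problem's):

```python
def large_pop_bias(p, pop_mean=100.0, tested_iq=150.0, sample_size=50):
    # How much the large-population (utu) formula overshoots the exact
    # finite-population (hoots) formula, in IQ points, for population size p.
    return (sample_size - 1) / sample_size * (tested_iq - pop_mean) / (p - 1)

for p in (70, 700, 30000):
    print(p, large_pop_bias(p))   # ~0.71, ~0.07, ~0.002
```

At p = 700 the bias is already well under a tenth of an IQ point, i.e. buried in the sampling noise.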

Understood. But surely as an engineer you understand the danger of relying on a single non-realistic (I believe most schools have more than 70 eighth graders!) example to draw conclusions strong enough to justify your comment 215?

I did not rely on any assumptions in my analysis. The solution accounts for any population size, including the large population limit, if you decide it's appropriately applied in your particular real-world test. According to NCES data, the average class size in 2008-2009 looks to be in the 100-150 range. I know they said "city" but my point is that the large population assumption would probably be a bad one in the majority of the country's schools. A method that only works with large populations can't be applied to a host of real-world problems. Fortunately, we can do better.

I don’t understand why you would argue for a less-accurate solution just because its error is very low in limiting cases. Why not just use the solution that gives zero expected error? It’s not like the cost of accuracy is high here. The equation is simple, and was arrived at in seconds. See comment #6.


WRONG! First of all you did not mean to use the term average. You meant to use the term mean.

It is really trivial. It follows from the fact that the expected value, i.e., the mean, is a linear function of the random variable. From linearity it follows that the mean of a sum of variables is equal to the sum of the means.

The mean is the best estimator of the actual average. So if the actual average of IQ's of John, Susan,..., Jim_N is unknown but we know that all of them were randomly drawn from the population with mean IQ=IQ_mean, then the IQ_mean is the best estimator of the average of IQ's of John,...

Not wrong and thank you for pointing out that I am not wrong. Any sample drawn from a population with mean ( average value ) Y will have a value (y) that is not necessarily Y. Sample versus Population. Sample mean does not equal Population Mean. Example: If the sample mean = the population mean; then Hillary Clinton is president. Because every sample taken before the election said that the population was going to give her the electoral college majority.

The population failed to deliver on the sum of the sample means.


Because we weren’t given the population, therefore we can’t use it to solve the problem as posed. Also because in a reasonable size city the large population limit gives a small error (e.g. smaller than the sampling error from choosing the other 49 subjects) and simplifies other calculations like computing the sample variance/SD as I do below.

Except we can’t, because we don’t know p. Your comment #6 analysis did not rely on any assumptions (well, except for needing to know p), but your code most certainly did (admittedly they could be changed, but 70 was a grossly non-realistic value for eighth grade population in a city and using the uniform distribution for IQ was also grossly unrealistic).

I should let this go since you have been civil to me (and thanks for providing code to make the discussion more concrete), but I’m pretty peeved about your comment to utu and don’t see you taking any ownership for the shortcomings in your approach (conflating school and city?! come on, that’s weak, the problem statement clearly said city) after savaging utu’s IMHO reasonable approximation (not saying I agree with all of his comments, and his last paragraph kind of asked for a nasty response now that I have reread it).

Surely at some point in your engineering education you made use of approximations/idealizations to simplify analysis? IMHO knowing how to do that intelligently (and when not to!) is one of the keystones of an engineering education (with another being knowing how to make tradeoffs in multiple dimensions like cost, robustness, manufacturability, etc.).

While we’re getting all anal retentive here, a really good answer would also note the expected variability of our sample. I believe that would be (assuming a typical IQ distribution with mean 100 and SD 15) an SD of 15/sqrt(49) which is 2.1. That’s a useful number to know when considering the magnitude of any other errors/approximations in our solution.

And to step back a moment (see my earliest comments above) I think doing an analysis at this level of detail makes little sense until we have figured out how we got an initial sample subject with an IQ 3.3SD above the mean (i.e. a 1/2000 event). That is what I think a real engineer would/should focus on here.
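Both numbers in the last two paragraphs are easy to check; here is a short Python verification (mean 100 and SD 15 assumed, with the normal tail computed via math.erfc):

```python
import math

sd = 15.0
se_mean = sd / math.sqrt(49)              # SD of the mean of the other 49: ~2.1

z = (150 - 100) / sd                      # the first child is ~3.33 SDs above the mean
tail = 0.5 * math.erfc(z / math.sqrt(2))  # upper-tail probability for a normal
print(se_mean, round(1 / tail))           # roughly a 1-in-2000 event
```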



It’s nice to find critics who can be civil. I guess I see it a little like this:

Imagine being asked to compute the trajectory of a satellite of mass m about a planet with mass M. With some thought, you realize that the volume of the planet is a crucial factor, and assuming that M is a point mass would only be appropriate for very distant orbits. In this case it isn’t wrong to include a term for the planet’s radius in the solution just because the problem didn’t give a radius value. In fact, your instructor might just deduct points if you failed to do so.

In my opinion, those who use the large population solution are the ones applying an overly-restrictive assumption (akin to the point-mass assumption in my example). You’re right that we don’t know the population. Given that we don’t know p, why assume that it’s so big that we can treat it as infinite? Assuming a large population is a very strong assumption. Why is it better than assuming the population is 349? Maybe it’s good enough in this case because “city” really does always imply some minimum (relatively large) size, but that’s yet another assumption.

Furthermore, the formula resulting from the large-population assumption has the additional downside of obscuring the fact that population size is THE critical factor in the problem. The reason I had to put a population choice in the script is because there’s no such thing as an infinite population. (It’s a nice example of how a coding exercise might force one to notice crucial overlooked factors.) Basically it’s just a more general and powerful solution. If we wanted to, we could even build on the p-dependent solution by applying a distribution representing our uncertainty about the population size. Basically, the variable p is inescapable. We are forced to apply some value for any reasonable interpretation of the problem.

Anyway, that’s my take.
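The last idea, putting a distribution over the unknown population size, takes only a few lines. A sketch re-using the p-dependent formula (the uniform range over p is an arbitrary assumption, purely for illustration):

```python
def finite_pop_estimate(p, pop_mean=100.0, tested_iq=150.0, sample_size=50):
    # The p-dependent solution for the expected sample mean
    remaining = (pop_mean * p - tested_iq) / (p - 1)
    return (tested_iq + (sample_size - 1) * remaining) / sample_size

# Assumed uncertainty: p equally likely anywhere from 500 to 5000 eighth-graders
ps = range(500, 5001)
estimate = sum(finite_pop_estimate(p) for p in ps) / len(ps)
print(estimate)   # a touch under the large-population answer of 101
```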


Agreed!

This summarizes the core of my position (a secondary point being that “test questions” should be solved using only the information given, as much as possible). That was why I made the point that an eighth-grade population p of 700 is enough to push the “large population assumption” error into the noise, based on your code. IMHO 700 would be small for a city. As an estimate, I would put the eighth-grade population at around 1/70 of a city’s total population (based on lifespan); the numbers I saw for Chicago give ~1/90, though I’m not sure whether the difference is demographic, a function of private schools, differing catchment areas for different stats, or something else. This implies any city over 50k people is large enough for the assumption to be reasonable (plotting the error vs. p would give a better idea of the range of validity; it may extend to smaller populations).
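Under the same finite-population reading (p children with mean exactly 100, 50 sampled without replacement, first child at 150), the gap between the size-p answer and the large-population answer of 101 even has a closed form, 49/(p − 1). The algebra is mine, but it is just a sketch of the “error into the noise” point:

```python
def error_vs_large_p(p, n=50, pop_mean=100.0, first_score=150.0):
    """Difference between the infinite-population answer and the size-p answer."""
    excess = first_score - pop_mean          # 50 IQ points above the mean
    large_p_answer = pop_mean + excess / n   # the textbook 101
    rest_mean = (p * pop_mean - first_score) / (p - 1)
    finite_answer = (first_score + (n - 1) * rest_mean) / n
    # Algebraically this equals excess * (n - 1) / (n * (p - 1)) = 49 / (p - 1).
    return large_p_answer - finite_answer

for p in (349, 700, 50_000):
    print(p, round(error_vs_large_p(p), 4))
```

At p = 700 the error is about 0.07 IQ points, comfortably below anything a real test could resolve, which supports treating any city over ~50k people as “large enough.”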

This is a great observation. And a good note for me to conclude on. Cheers!