The Unz Review - Mobile
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
From Vox:

Charles Murray is once again peddling junk science about race and IQ

Podcaster and author Sam Harris is the latest to fall for it.

Updated by Eric Turkheimer, Kathryn Paige Harden, and Richard E. Nisbett May 18, 2017, 9:50am EDT

Eric Turkheimer is the Hugh Scott Hamilton Professor of Psychology at the University of Virginia. Twitter: @ent3c. Kathryn Paige Harden (@kph3k) is associate professor in the department of psychology at the University of Texas at Austin. Richard E. Nisbett is the Theodore M. Newcomb Distinguished University Professor at the University of Michigan.

Charles Murray, the conservative scholar who co-authored The Bell Curve with the late Richard Herrnstein, was recently denied a platform at Middlebury College. Students shouted him down, and one of his hosts was hurt in a scuffle. But Murray recently gained a much larger audience: an extensive interview with best-selling author Sam Harris on his popular Waking Up podcast. That is hardly a niche forum: Waking Up is the fifth-most-downloaded podcast in iTunes’s Science and Medicine category.

Getting worked up over Charles Murray being allowed on a podcast seems a little bizarre. (Here’s the podcast.)

Under the faux indignation and clickbait headline, however, this is about as good an attempt as any to shore up the Conventional Wisdom that the racial differences in average intelligence can’t be influenced by genetics at all. So I’ll go through a chunk of it, adding comments.

Interestingly, the article, when read carefully, is also about how Charles Murray is mostly so much more right than the Conventional Wisdom about IQ. But he’s still a Witch! The article is another one of these attempts to fight back against today’s rampant Science Denialism while not being accused of witchcraft yourself.

Here’s an important question: Do these triple bankshot approaches ever work?

They’re kind of like some prisoner of war being put on TV to denounce the Great Satan while blinking T-O-R-T-U-R-E in Morse Code. But what if nobody back home knows Morse Code anymore?

The basic problem is that the zeitgeist is continually dumbing down. We don’t worry anymore about how to apply objective principles to real-world examples of human behavior; we just look for who are the Good Guys and who are the Bad Guys. And how can we tell? Just look at them: the cishet white males are the Bad Guys. What’s so complicated about that?

In this kind of mental atmosphere, will more than three Vox readers come to the end of this carefully coded article and say to themselves: “You know, Charles Murray is still as evil and stupid as I thought, but now I realize that most of what Murray says about IQ is Science and Good!”?

In an episode that runs nearly two and a half hours, Harris, who is best known as the author of The End of Faith, presents Murray as a victim of “a politically correct moral panic” — and goes so far as to say that Murray has no intellectually honest academic critics. Murray’s work on The Bell Curve, Harris insists, merely summarizes the consensus of experts on the subject of intelligence.

The consensus, he says, is that IQ exists; that it is extraordinarily important to life outcomes of all sorts; that it is largely heritable; and that we don’t know of any interventions that can improve the part that is not heritable. The consensus also includes the observation that the IQs of black Americans are lower, on average, than that of whites, and — most contentiously — that this and other differences among racial groups is based at least in part in genetics. …

(In the interview, Murray says he has modified none of his views since the publication of the book, in 1994; if anything, he says, the evidence for his claims has grown stronger. In fact, the field of intelligence has moved far beyond what Murray has been saying for the past 23 years.)

Eh … As I pointed out on the 20th anniversary of The Bell Curve, the world today looks even more like the world Herrnstein and Murray described.

The reality is that there haven’t been all that many revolutionary discoveries since then. The genomic research up through 2016 largely has panned out in the direction Herrnstein and Murray expected, although I’ve been told that a new preprint raises questions about Murray’s guess that the gene variants driving differences between the races are similar to the variants driving differences between individuals. If true, that would suggest that racial differences are in some ways more profound than Murray assumed, which would be ironic.

Turkheimer has gotten a lot of attention for a 2003 paper arguing that in one sample of poor people with lowish IQs, the heritability of IQ was lower than in better-off populations, which is interesting but not hugely galvanizing. Emil Kirkegaard in 2016 asked “Did Turkheimer et al (2003) replicate?” I won’t try to adjudicate a question over my head.

But, anyway, the last big scientific finding to raise major questions about the Jensenist view was the Flynn Effect in the 1970s-1980s, which Herrnstein and Murray didn’t exactly ignore: they named it in The Bell Curve.

Murray’s premises, which proceed in declining order of actual broad acceptance by the scientific community, go like this:

1) Intelligence, as measured by IQ tests, is a meaningful construct that describes differences in cognitive ability among humans.

2) Individual differences in intelligence are moderately heritable.

3) Racial groups differ in their mean scores on IQ tests.

4) Discoveries about genetic ancestry have validated commonly used racial groupings.

5) On the basis of points 1 through 4, it is natural to assume that the reasons for racial differences in IQ scores are themselves at least partly genetic.

Until you get to 5, none of the premises is completely incorrect. However, for each of them Murray’s characterization of the evidence is slanted in a direction that leads first to the social policies he endorses, and ultimately to his conclusions about race and IQ. We, and many other scientific psychologists, believe the evidence supports a different view of intelligence, heritability, and race.

We believe there is a fairly wide consensus among behavioral scientists in favor of our views, but there is undeniably a range of opinions in the scientific community. Some well-informed scientists hold views closer to Murray’s than to ours. …

Let’s take Murray’s principles one at a time.

Intelligence is meaningful. This principle comes closest to being universally accepted by scientific psychologists. …

But observing that some people have greater cognitive ability than others is one thing; assuming that this is because of some biologically based, essential inner quality called g that causes them to be smarter, as Murray claims, is another. There is a vibrant ongoing debate about the biological reality of g, but intelligence tests can be meaningful and useful even if an essential inner g doesn’t exist at all.

Indeed. So what is the relevance of g to this debate?

The question of g is fascinating and also quite difficult. But it’s not all that relevant to this debate, other than that poor Stephen Jay Gould got all hung up on g, fulminating: “The chimerical nature of g is the rotten core of Jensen’s edifice …”

As I’ve pointed out before, for example, Harvard requires applicants to take the SAT or ACT, both of which correlate considerably with IQ. The goal is to supplement the GPA with a measure that gives additional insight into brainpower. Say the g factor doesn’t exist and that there is zero correlation between an SAT math score and an SAT verbal score. Harvard would still favor students who score well on both measures over those who score well on only math or verbal. In the real world, there is a lot of correlation between SAT Math and SAT Verbal scores, just like the g factor theory implies. But, I suspect, we would still be having this IQ and Race debate if there weren’t.

Intelligence is heritable. To say that intelligence is heritable means that, in general, people who are more similar genetically are also more similar in their IQ. Identical twins, who share all their DNA, have more similar IQs than fraternal twins or siblings, who only share half. Half-siblings’ IQs are even less similar than that; cousins, still less.

Heritability is not unique to IQ; in fact, virtually all differences among individual human beings are somewhat heritable. … Heritability is not a special property of certain traits that have turned out to be genetic; it is a description of the human condition, according to which we are born with certain biological realities that play out in complex ways in concert with environmental factors, and are affected by chance events throughout our lives.

Okay!

This is a pretty funny example of the rhetorical strategy of much of this article. It’s designed to get readers to say to themselves: “That nasty moron Murray thinks the heritability of intelligence is partly genetic, when smart people know it’s really a … description of the human condition!”

An awful lot of this article consists of the three professors agreeing with Murray, but phrasing their endorsement of various Bell Curve assertions in such a way that Vox readers will think it’s actually a crushing takedown of Murray. The whole thing is full of these kinds of trick maneuvers.

Do these kinds of Secret Decoder Ring articles ever work? Does anybody ever finish the article and say to themselves, “Yes, Charlie Murray is just as evil and stupid as I previously believed, but now I’m aware that 80% of what Murray says about IQ is Science and Good!”?

I dunno …

The basic problem is that the zeitgeist is just getting dumber and dumber as the dominant way of thinking gets more childish: Good Guys vs. Bad Guys. (And you determine who are the Good Guys and who are the Bad Guys not by something complicated like what they do, but by something simple: who they are.) So this kind of devious triple bankshot approach doesn’t seem all that likely to actually smarten people up. But what do I know?
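For readers who want the nuts and bolts: the classical twin comparison quoted earlier (identical twins' IQs correlating more than fraternal twins') is often summarized by Falconer's formula, h² ≈ 2(r_MZ − r_DZ). A minimal sketch, with invented but plausible-looking correlations:

```python
# Falconer's formula: a back-of-the-envelope heritability estimate from
# the gap between identical-twin and fraternal-twin correlations.
def falconer_h2(r_mz, r_dz):
    """h^2 ~= 2 * (r_MZ - r_DZ). Identical twins share ~100% of their DNA,
    fraternal twins ~50%, so doubling the correlation gap isolates the
    additive genetic contribution under simple assumptions."""
    return 2 * (r_mz - r_dz)

# Correlations below are invented for illustration, not from any study.
h2 = falconer_h2(r_mz=0.75, r_dz=0.45)
print(round(h2, 2))  # 0.6 -- "moderately heritable," as the authors put it
```

The formula itself is standard behavior-genetics textbook material; the specific correlations plugged in above are made up purely to show the mechanics.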

Today we can also study genes and behavior more directly by analyzing people’s DNA. These methods have given scientists a new way to compute heritability: Studies that measure DNA sequence variation directly have shown that pairs of people who are not relatives, but who are slightly more similar genetically

Such as members of the same race?

Much of the brain fog that besets Vox-level discussions of this question is due to Americans forgetting that race is deeply related to the question of who your relatives are. American intellectuals seldom think in terms of family trees, even though biological genealogy is just about the most absolutely real thing there is in the social realm. The simple reality is that people of one race tend to be more closely related in their family trees to people of the same race than they are to people of other races. But almost nobody notices the relations between race and genealogy in modern American thinking.

, also have more similar IQs than other pairs of people who happen to be more different genetically. These “DNA-based” heritability studies don’t tell you much more than the classical twin studies did, but they put to bed many of the lingering suspicions that twin studies were fundamentally flawed in some way. Like the validity of intelligence testing, the heritability of intelligence is no longer scientifically contentious.

In other words, “the heritability of intelligence is no longer scientifically contentious.” Nor is “the validity of intelligence testing.”

The new DNA-based science has also led to an ironic discovery: Virtually none of the complex human qualities that have been shown to be heritable are associated with a single determinative gene!

It’s almost as if the genetics behind the most complex object in the known universe, the human brain, are also complex.

There are no “genes for” IQ in any but the very weakest sense. Murray’s assertion in the podcast that we are only a few years away from a thorough understanding of IQ at the level of individual genes is scientifically unserious. Modern DNA science has found hundreds of genetic variants that each have a very, very tiny association with intelligence, but even if you add them all together they predict only a small fraction of someone’s IQ score.

And that fraction goes up year by year as larger and larger sample sizes are assembled.

The ability to add together genetic variants to predict an IQ score is a useful tool in the social sciences, but it has not produced a purely biological understanding of why some people have more cognitive ability than others.

Indeed, “it has not produced a purely biological understanding.” But the biological understanding is improving annually.

This is the usual debate over whether a glass is part full or part empty. What we can say is that each year, the glass gets fuller.
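The “adding together genetic variants” that Turkheimer et al describe is what’s called a polygenic score. A minimal sketch, with made-up effect sizes and genotypes purely for illustration (real scores use hundreds of thousands of variants with tiny weights estimated from large GWAS samples):

```python
# Toy polygenic score: sum over variants of (allele dosage) * (effect size).
def polygenic_score(dosages, weights):
    """dosages: allele counts (0, 1, or 2) at each variant for one person.
    weights: per-allele effect sizes estimated from a GWAS."""
    return sum(d * w for d, w in zip(dosages, weights))

person = [0, 1, 2, 1, 0]                    # invented genotypes at 5 variants
effects = [0.02, -0.01, 0.03, 0.01, 0.05]   # invented per-allele effects
print(polygenic_score(person, effects))
```

As the Vox authors say, such scores currently predict only a small fraction of IQ variance; the point of the sketch is just the additive mechanics, not any real prediction.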

Most crucially, heritability, whether low or high, implies nothing about modifiability. The classic example is height, which is strongly heritable (80 to 90 percent), yet the average height of 11-year-old boys in Japan has increased by more than 5 inches in the past 50 years.

True. I write about height a fair amount in part because the effects of nurture on height are so clear. Thus, it’s plausible that effects of nurture on intelligence exist too, even though they are hard to document.

As a non-scientist, I’m more of a nurturist when it comes to IQ than most actual scientists in the field. The scientists emphasize that the half or so of the influences on IQ that aren’t nature aren’t what we normally think of as nurture, such as having a lot of books in the house growing up. Instead, what gets lumped under nurture appears to be mostly random bad luck that we don’t really understand.

But I’m more cautious on this than most researchers. I’m not convinced that they’ve figured out what drives the Flynn Effect over time, so I’ll hold open the possibility that more traditional nurture may play a considerable role.

But, please note, the Japanese remain one of the shorter nationalities despite a couple of generations of first world living standards. They’ve been surpassed in average height by the South Koreans, for example. The tallest Europeans on average include the wealthy Dutch and the much less wealthy Serbs, Croats, Bosnians, and Albanians. So, height differences among ancestral groups appear to be part nature, part nurture.

A similar historical change is occurring for intelligence: Average IQ scores are increasing across birth cohorts, such that Americans experienced an 18-point gain in average IQ from 1948 to 2002.

Indeed, the Flynn Effect is extremely interesting, as I’ve often pointed out.

And the most decisive and permanent environmental intervention that an individual can experience, adoption from a poor family into a better-off one, is associated with IQ gains of 12 to 18 points. …

There was a small French study of cross-class adoption with a sample size of 38. Despite the tiny sample, I find its conclusion that nature and nurture are roughly equally influential (with nature a little stronger) quite plausible. (My general presumption before studying any interesting question is that we’ll end up around fifty-fifty.)

Race differences in average IQ score. People who identify as black or Hispanic in the US and elsewhere on average obtain lower IQ scores than people who identify as white or Asian. That is simply a fact, and stating it plainly offers no support in itself for a biological interpretation of the difference. To what extent is the observed difference in cognitive function a reflection of the myriad ways black people in the US experience historical, social, and economic disadvantage — earning less money, suffering more from chronic disease, dying younger, living in more dangerous and chaotic neighborhoods, attending inferior schools?

Okay, but let’s think about African-American height for a moment, since we were just talking about Japanese height. There’s this guy you may have heard of named LeBron James.

He’s really tall.

In fact, there are a lot of tall, healthy African-Americans currently dominating the NBA playoffs. In terms of height, African-Americans don’t appear to be a malnourished, beaten down population like, say, Guatemalan Indians.

Similarly, the last 72 men to qualify for the finals of the Olympic 100 meter dash, from 1984 through 2016, have been at least half black.

Now you could say, like James Flynn, that contemporary African-American culture is detrimental to the full development of African-American cognitive functioning, that black Americans focus too much on basketball and gangsta rap.

I think that’s highly possible.

But, who exactly is responsible for that? Charles Murray?

This is another triple bankshot approach: if we can just punch Charles Murray enough (metaphorically or literally), then inner city blacks will realize they should stop listening to gangsta rap and instead become patent attorneys. Or something.

… Race and genetic ancestry. First, a too-brief interlude about the biological status of race and genetic ancestry. The topic of whether race is a social or biological construct has been as hotly debated as any topic in the human sciences. The answer, by our lights, isn’t that hard: Human evolutionary history is real; the more recent sorting of people into nations and social groups with some degree of ethnic similarity is real; individual and familial ancestry is real. All of these things are correlated with genetics, but they are also all continuous and dynamic, both geographically and historically.

Our lay concept of race is a social construct that has been laid on top of these vastly more complex biological realities. That is not to say that socially defined race is meaningless or useless. (Modern genomics can do a good job of determining where in Central Europe or Western Africa your ancestors resided.)

And since “modern genomics can do a good job of determining where in Central Europe or Western Africa your ancestors resided,” they can, of course, also do the easier job of determining whether the bulk of your relatives were from Europe or sub-Saharan Africa.

However, a willingness to speak casually about modern racial groupings as simplifications of the ancient and turbulent history of human ancestry should not deceive us into conjuring back into existence 19th-century notions of race — Caucasoid, Negroid, Mongoloid, and all that.

Funny how the Obama Administration spent 8 years heartily enforcing policies based on categories called whites (i.e., Caucasoid), blacks (Negroid), and Asians (Mongoloid) and all that. It’s almost as if the Obama Administration believed that such categories are good enough for government work.

Murray talks about advances in population genetics as if they have validated modern racial groups. In reality, the racial groups used in the US — white, black, Hispanic, Asian — are such a poor proxy for underlying genetic ancestry that no self-respecting statistical geneticist would undertake a study based only on self-identified racial category as a proxy for genetic ancestry measured from DNA.

Okay, but the implication of that argument is 180 degrees backward from what Turkheimer et al are rhetorically implying. Isn’t it obvious that IQ studies that use self-identified race, as most do, are going to find a slightly lower correlation between race and IQ than ideal studies that use actual genetic ancestry?

For example, both Barack and Michelle Obama self-identified on the 2010 Census solely as black, but Barack clearly has a higher IQ than Michelle. The Vox authors in effect complain that studies based on self-identification would lump both together as purely black, ignoring Barack’s substantial white ancestry. That’s a reasonable methodological complaint, but its implications are the reverse of what they imply.

Similarly, there is an obvious correlation in the U.S. among Hispanics between white ancestry and educational attainment that gets blurred if you rely purely on self-identification.

Black Harvard professors Henry Louis Gates and Lani Guinier complained in 2004 that a very large fraction of Harvard’s affirmative action spots for blacks go to applicants, like Barack, with a white parent and/or foreign elite ancestry instead of toward genuine descendants of American slaves, like Michelle. (They sort of dropped the topic after the rise of Barack later that year).

Finally, the relationship between self-identification and racial ancestry has been investigated via DNA a lot recently, and the results are pretty much that, for whites and blacks, the government’s categories for self-identification are good enough for government work. In 23andMe studies, people who self-identify as non-Hispanic whites are overwhelmingly over 90% white by ancestry. People who identify as non-Hispanic African-Americans are largely at least 50% black.

By my calculations from 23andMe’s client data: if the average self-identified black American is 73.2% black by ancestry and the average self-identified white is 0.19% black, then the average black in America is 385 times blacker than the average white. That doesn’t seem very murky to me.
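A quick sanity check on that back-of-the-envelope ratio, using the 23andMe figures cited above:

```python
# Ratio of average African ancestry: self-identified blacks vs. whites,
# per the 23andMe client figures cited in the text.
black_ancestry_in_blacks = 0.732    # 73.2% African ancestry
black_ancestry_in_whites = 0.0019   # 0.19% African ancestry
print(round(black_ancestry_in_blacks / black_ancestry_in_whites))  # 385
```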

This was all predictable from the workings of the One Drop System.

Some of this will change in newer generations raised under somewhat different rules, but the basic reality discovered by genome studies is that in America, individuals who self-identify as non-Hispanic whites or as non-Hispanic blacks tend to be quite different by ancestry.

Genetic group differences in IQ. On the basis of the above premises, Murray casually concludes that group differences in IQ are genetically based. But what of the actual evidence on the question? Murray makes a rhetorical move that is commonly deployed by people supporting his point of view: They stake out the claim that at least some of the difference between racial groups is genetic, and challenge us to defend the claim that none, absolutely zero, of it is. They know that science is not designed for proving absolute negatives, but we will go this far: There is currently no reason at all to think that any significant portion of the IQ differences among socially defined racial groups is genetic in origin.

“No reason at all” is pretty silly. A much more reasonable suggestion would be that Occam’s Razor currently favors the hypothesis that some of the IQ gap is genetic in origin, but the subject is extremely complicated and it could turn out to be different.

It’s also possible that there is something we don’t understand at present about this dauntingly complex subject that makes a reasonably final answer not possible, a little bit like how Gödel’s incompleteness theorems came as a big surprise to mathematicians and philosophers such as Bertrand Russell.

In any case, we’ll learn a lot more about this subject over the next couple of decades due to the ongoing advances in genomics.

I had dinner last year with a geneticist who informed me that in his laptop in his backpack under the table was data documenting some gene variants that contribute a part of the racial IQ gap. He asked me if I thought he should publish it.

I asked him how close he was to tenure.

Now, if this scientist chooses to publish, Turkheimer et al could still argue that his results aren’t a “significant portion” of The Gap. This question is very, very complex technically, and giant sample sizes are needed. But those will be eventually forthcoming and we will (probably) eventually see.

But, right now, it sure seems like the wind has mostly been blowing for a long, long time in Murray’s direction and there’s not much reason to expect it to suddenly reverse in the future.

Toward the end of the Vox article:

Liberals need not deny that intelligence is a real thing or that IQ tests measure something real about intelligence, that individuals and groups differ in measured IQ, or that individual differences are heritable in complex ways.

But liberals must deny that racial differences in IQ could possibly be heritable in complex ways.

But isn’t the upshot of this article that Charles Murray is more correct than the Conventional Wisdom about 80% of what’s at issue?

Why isn’t this article entitled, for example: “Charles Murray is mostly right and Stephen Jay Gould was mostly wrong”?

And that leads to a meta-point: Instead of liberals attempting to imply, using all their rhetorical skills, that only horrible people like Charles Murray think there is any evidence at all for a genetic influence on differences in average IQs among races, shouldn’t they be spending more time explaining why, if Murray turns out to be right, that wouldn’t be The End of the World? Right now, we get told over and over about how unthinkable and outrageous this quite plausible scientific finding would be and how only bad people, practically Hitlerites, think there is any evidence for it at all.

This conventional wisdom strikes me as imprudent.

Personally, I think, this seemingly horrifying potential scientific discovery ought to be easily endurable, just as the NBA has survived the rise of the popular suspicion that the reasons LeBron James and other blacks make up most of the best basketball players include genetic differences.

I’ve long argued that The Worst that liberals can imagine about the scientific reality isn’t actually so bad. Murray’s world looks an awful lot like the world we live in, which we manage to live in. But I don’t have the rhetorical chops to reassure liberals that life will go on. I’m an official Horrible Extremist.

But that raises the question: Who does have the rhetorical skills to undermine the increasingly hysterical conventional wisdom and package the mature point of view about genetic diversity in the old soft soap that will go over well with Nice People?

Clearly, even Charles Murray doesn’t have the eloquence to reassure liberals.

Fortunately, there is this guy who is obsessed with genetic diversity in sports, having read David Epstein’s HBD-aware The Sports Gene. And he is really good at public speaking to liberals. And he doesn’t have that much else on his plate at the moment: Barack Obama.

So if Mr. Obama ever reads this, let me ask him to think about taking on the public service of deflating the Science Denialist hysteria over race and genetic diversity.

P.S. This article’s junior co-author, Paige Harden, had some more respectful things to say about Murray back in March.

 
Speaking of Arthur Jensen, Occidentalist has a table listing all 40 academic studies he could find of the white-black gap in average IQ in the U.S. They range from 1918, when it was measured at 17 points, to 2008, when it was found to be 16 points. So, don’t let anybody tell you The Gap hasn’t closed over the last 90 years.

Seriously, is there anything in the human sciences more stable than La Griffe’s Fundamental Constant of American Sociology? It’s really odd when you stop to think about how stable it has been. I suspect that differences in average height have changed significantly more over the generations. For example, when I was a kid, the Dutch weren’t particularly tall, not the way they are now.

Things change.

Except this …

Indeed, I’m wondering whether there isn’t some kind of behavioral feedback at work regarding IQ that somehow keeps The Gap about the same. I don’t have any candidates in mind for what that stabilizing mechanism might be, but it’s worth considering.

(Republished from iSteve by permission of author or representative)
 
From the NYT:

Betty Hart Dies at 85; Studied Disparities in Children’s Vocabulary Growth 

By WILLIAM YARDLEY 

Published: October 25, 2012 

Betty Hart, whose research documenting how poor, working-class and professional parents speak to their young children helped establish the critical role that communicating with babies and toddlers has in their later development, died on Sept. 28 in hospice care in Tucson. She was 85 …. 

Dr. Hart was a graduate student at the University of Kansas in the 1960s when she began trying to help poor preschool children overcome speech and vocabulary deficits. But she and her colleagues later concluded that they had started too late in the children’s lives — that the ones they were trying to help could not simply “catch up” with extra intervention. 

At the time, a prevalent view was that poor children were essentially beyond help, victims of circumstances and genetics. But Dr. Hart and some of her colleagues suspected otherwise and revisited the issue in the early 1980s, beginning research that would continue for a decade. 

“Rather than concede to the unmalleable forces of heredity, we decided that we would undertake research that would allow us to understand the disparate developmental trajectories we saw,” she and her former graduate supervisor, Todd R. Risley, wrote in 1995 in “Meaningful Differences in the Everyday Experience of Young American Children,” a book about their findings, which were reported in 1992. “We realized that if we were to understand how and when differences in developmental trajectories began, we needed to see what was happening to children at home at the very beginning of their vocabulary growth.” 

They began a two-and-a-half-year study of 42 families of various socioeconomic levels who had very young children. Starting when the children were between 7 and 9 months old, they recorded every word and utterance spoken to them and by them, as well as every parent-child interaction, over the course of one hour every month. 

It took many more years to transcribe and analyze the data, and the researchers were astonished by what they eventually found. 

“Simply in words heard, the average child on welfare was having half as much experience per hour (616 words per hour) as the average working-class child (1,251 words per hour) and less than one-third that of the average child in a professional family (2,153 words per hour),” Drs. Hart and Risley wrote. 

“By age 4, the average child in a welfare family might have 13 million fewer words of cumulative experience than the average child in a working-class family,” they added. 

Isn’t there a giant assumption in this famous calculation: that the one hour per month of child-parent interactions that Hart & Risley recorded is representative of the entire month? Don’t some of these non-welfare parents have jobs, during which they can’t be talking to their children?

Let’s try the math. Say the average 0-to-4-year-old is awake 10 hours per day, or 3,600 hours per year, or 14,400 hours in those four years. If the working-class family talks at the child 635 more words per hour than those famously laconic welfare families, that comes out to a differential of 9,144,000 words, not 13,000,000 words. So the working-class family must be talking at its children not just ten hours per day, but more like 14 hours per day, leaving only 10 hours per day for the poor child to sleep (or to talk himself, or to watch TV, or to play with his blocks, or to watch the cat, or to daydream).
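For anyone who wants to check that arithmetic, here it is as a short Python sketch (the 10-waking-hours-a-day figure, rounded to 3,600 hours a year, is the assumption from the paragraph above):

```python
# Re-running the Hart & Risley word-gap arithmetic from the text.
working_class_wph = 1251   # NYT: words per hour, working-class family
welfare_wph = 616          # NYT: words per hour, welfare family
gap_per_hour = working_class_wph - welfare_wph   # 635 words per hour

waking_hours = 3600 * 4    # ~10 waking hours/day over four years
print(gap_per_hour * waking_hours)   # 9144000 -- not 13,000,000

# Daily hours of chatter needed to reach the claimed 13-million-word gap:
hours_needed = 13_000_000 / gap_per_hour   # ~20,472 hours over four years
print(round(hours_needed / (4 * 360)))     # 14 hours per day
```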

Shouldn’t somebody call Child Protective Services and report all the non-welfare families in the country for child abuse due to incessant chatter?

They also found disparities in tone, in positive and negative feedback, and in other areas — and that the disparities in speech and vocabulary acquisition persisted into school years and affected overall educational development. 

So, parents with big vocabularies tended to have children with big vocabularies. (Also, I would imagine, parental skin tone, height, and hair color tended to correlate with their children’s skin tone, height, and hair color.)

“People kept thinking, ‘Oh, we can catch kids up later,’ and her big message was to start young and make sure the environment for young children is really rich in language,” said Dr. Walker, an associate research professor at Kansas who worked with Dr. Hart and followed many of the children into their school years. 

I recommend taking your preschoolers to Tom Stoppard plays. Start with The Real Thing no later than 30 months and work up to Arcadia by at least the fourth birthday. Also, read to them every night from Nabokov. Pnin is an easy start, but they should be finished with Ada by the time they enter kindergarten.

The work has become a touchstone in debates over education policy, including what kind of investments governments should make in early intervention programs. One nonprofit program whose goals are rooted in the findings is Reach Out and Read, which uses pediatric exam rooms to promote literacy for lower-income children beginning at 6 months old. 

Prompted by the success of Reach Out and Read, Dr. Alan L. Mendelsohn, a developmental-behavioral pediatrician at Bellevue Hospital and New York University Langone Medical Center, pushed intervention even further. He created a program through Bellevue in which lower-income parents visiting doctors are filmed interacting and reading with their children and then given suggestions on how they can expand their speaking and interactions. 

“Hart and Risley’s work really informed for me and many others the idea that maybe you could bridge the gap,” Dr. Mendelsohn said, “or in jargon terms — address the disparities.” …

I don’t see any mention here of experimental research, just tracking of existing differences that are compatible with most combinations of nature and nurture theories.

“Today, much of her research is being applied in many different ways,” said Dr. Andrew Garner, the chairman of a work group on early brain and child development for the American Academy of Pediatrics. “I think you could also argue that the current interest in brain development and epigenetics reinforces at almost a molecular level what she had identified 20 years ago.”

Epigenetics!

One obvious but little mentioned implication of this popular line of thought is: White professional mothers who hire semi-literate nannies who have smaller vocabularies in English than in Spanish and smaller vocabularies in Spanish than in Mayan to raise their children for them while they put in the hours to make partner or get tenure are dooming their offspring to only getting into State U. You see, by not personally speaking to their small children for much of the day using their high level vocabularies, Hart & Risley’s logic says their kids are in big, big trouble.

And, indeed, many white mothers behave exactly as if this were true.

For example, one of my early bosses in the marketing research business was Kathie, a hard-charging, funny, foul-mouthed MBA who let nothing stand in the way of our team making the numbers. Then I heard a rumor that she and her boyfriend, an MBA at a big corporation, were going to take a little time off from each other. Then she started going to the gym at lunchtime, lost ten pounds, and then showed up one Monday morning wearing an engagement ring and a big smile: her ex-boyfriend was now going to be her husband. Marriage and a baby ensued, but she was right back on the job a month after giving birth. Then she got pregnant again, and came back to the job a couple of months after giving birth. But within a week of her return, she announced she was permanently retiring to be a housewife. Management tried hard to talk her into part-time work or taking just a couple of years off or whatever she wanted, but she was adamant that she was done with working: she was a full-time mom from now on.

Of course, Kathie’s trajectory was feasible because her husband was making good money. But, her emotions are common.

Of course, this pro-stay-at-home-mom implication of the Hart & Risley conventional wisdom is not played up in the press, which is largely run by women who are not stay-at-home moms: women who frequently feel guilty about it if they do have children, or who resent the women who are full-time mothers and thus try to put them down by emphasizing how glamorous and politically important it is to be a working woman.

What does the research say about stay-at-home mothers vs. working mothers in terms of children’s cognitive development? I haven’t looked in a long time, but my recollection is that it’s inherently uncertain because nobody can run a controlled experiment. Mothers are constantly adapting to what they think is best for their children (e.g., Kathie), trying to optimize a variety of factors that differ for each family and, indeed, for each child.

That moms refuse to follow experimental methodologies when it comes to their own kids is bad for science, but good for children.

(Republished from iSteve by permission of author or representative)
 
• Tags: IQ, Race/IQ 
Probably. 
Chuck at Occidentalist assembles a bunch of test reports, here and here. It’s not as well-studied a subject there as it is in the U.S., so it’s hard to make sense of all the data, but most of it points toward the white-black gap in the U.K. being well under a standard deviation.

I haven’t seen a good meta-analysis by a British researcher who knows the ins and outs of all these acronyms like GCSE. (For example, a few years ago a British researcher slipped up when writing about regional differences in performance on the SAT in the U.S. because he didn’t know that only the most ambitious students in the Midwest take the SAT instead of the ACT — so what pitfalls await American kibitzers among British test scores?) But most of the data seems to suggest a smaller cognitive and/or achievement gap in the U.K. than in the U.S.

It has been apparent for some time now (see this post at Racial Reality) that in Britain, the lads are not all right. In the U.S., we’ve become familiar with gender gaps on school achievement tests favoring black and Hispanic girls over their brothers, but we see less of this among whites and Asians. This is among the better evidence that culture — fear of being put down by your co-ethnics for Acting White, etc. — is depressing NAM performance. 
On a lot of tests in Britain there’s an even bigger gender gap favoring the distaff side, but it seems to run across all ethnicities, even the Chinese. We see odd things like girls whose parents are from Africa outscoring white boys, and maybe even East Asian boys, on some tests.
As I pointed out in a couple of articles in 2005, class, rather than race, is the big divide in Britain. “Class” is a 1,500-year-long project to civilize the Conan the Barbarian warlords who inundated the Roman Empire into acting like “gentlemen.” By the late 20th century, all that politeness, all that studying, all that self-discipline was striking young males of the lower classes as pretty gay. Thus, chavism.
In contrast, there isn’t all that much of an oppositional culture among blacks in Britain, since assimilating into the white working class isn’t terribly hard: You like ‘aving a pint while watching footie on the telly, too? The proportion of mixed-race children appears much larger than in the U.S. As historian David Starkey pointed out during the English looting last summer, blacks were in the lead, but whites were right behind — something you don’t see much at all in the U.S.
Moreover, blacks in Britain are of immigrant origin: West Indian and African, with the Africans typically doing better on tests. A not-insignificant fraction of Africans in Britain were brain-drained from Anglophone ex-colonies to work in the National Health Service as nurses and doctors. In the U.S., too, West Indian and African immigrants tend to outperform native blacks: The Bell Curve found that in the NLSY79 longitudinal study, blacks who were immigrants or the children of immigrants outscored native African-Americans by an average of 5 IQ points.
But, those are just a few speculations. It’s an interesting question that, as far as I know, hasn’t been studied terribly systematically.

Update: lots of good stuff in the comments from people who know more about what they are talking about when it comes to Britain than I know.

The oldest SAT score report on the College Board website is from 1996, right after the “recentering” in 1995 that raised scores about 100 points on a 400 to 1600 scale. Over the last 15 years, the average overall score on the original two-part Verbal + Math SAT (i.e., ignoring the new-fangled Writing section of the test introduced in the last decade) fell a grand total of two points, from 1013 to 1011. (See what I mean about baseball statistics being more volatile?)
1996 v. 2011 College-Bound Seniors Avg SAT Scores

                Total (V+M)         Verbal              Math
                1996  2011   Chg    1996  2011   Chg    1996  2011   Chg
ALL             1013  1011   (2)     505   497   (8)     508   514     6
Female           995   995    0      503   495   (8)     492   500     8
Male            1034  1031   (3)     507   500   (7)     527   531     4
Asian           1054  1112   58      496   517   21      558   595    37
White           1049  1063   14      526   528    2      523   535    12
Black            856   855   (1)     434   428   (6)     422   427     5
AmerIndian       960   972   12      483   484    1      477   488    11
Mexican          914   917    3      455   451   (4)     459   466     7
PR               897   904    7      452   452    0      445   452     7
Other Hisp       931   913  (18)     465   451  (14)     466   462    (4)
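If you want to double-check the table’s Chg columns, they follow from the raw Verbal and Math scores (a few rows copied from the table above; parentheses there denote declines):

```python
# Verify that the Verbal + Math changes in the table sum to the Total change.
# (Verbal, Math) score pairs copied from the table above.
scores_1996 = {"ALL": (505, 508), "Asian": (496, 558),
               "White": (526, 523), "Black": (434, 422)}
scores_2011 = {"ALL": (497, 514), "Asian": (517, 595),
               "White": (528, 535), "Black": (428, 427)}

for group, (v96, m96) in scores_1996.items():
    v11, m11 = scores_2011[group]
    total_change = (v11 + m11) - (v96 + m96)
    print(f"{group}: {total_change:+d}")
# ALL: -2, Asian: +58, White: +14, Black: -1 -- matching the table's Chg column
```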
You’ll note that the average white score went up 14 points, from 1049 to 1063. Did white people get smarter over that period? I don’t know. The SAT changed a lot over those 15 years, with analogies being dropped and some Verbal multiple-choice questions being exiled to the Writing section. Also, kids appear to have cared more about prepping in 2011, although the College Board doesn’t like to talk about this.

Asians went up 58 points, which is pretty striking. Everybody else fell farther behind whites, which wasn’t supposed to happen.

Now, it could be that scores actually did pretty well over this 15-year stretch, because the College Board scraped the bottom of the barrel harder. In 1996, 1,085,000 college-bound seniors took the SAT. In 2011, there were 1,647,000 senior SAT-takers. Just between 2006 and 2011, the College Board let an incremental 150,000 students take the SAT free or at reduced cost.

On the other hand, my impression is that it became a lot more common for students to take both the SAT and the ACT over that 15-year stretch, so some of the increase in the number of test-takers comes from people who would only have taken the ACT in 1996. It used to be that East and West Coasters took the SAT and Midwesterners the ACT, but by 2011 lots of students tried both to see which one they’d do better on, then submitted only their better score. These double-dippers are probably fairly ambitious kids looking to game the system, so they likely scored reasonably well (although, of course, not so 2400 / 36 outstanding that they wouldn’t bother taking any test again).
(Has anybody recently done an authoritative study of the trend in overall SAT scores considering all the factors driving scores up or down?)
Between 1996 and 2011, everybody except Other Hispanics got a little better in Math. (I suspect that Other Hispanics used to be mostly Cubans and random fairly well-to-do South Americans, but now it includes a lot of Central Americans.) But Asians got a lot better: 37 points, from 558 to 595.
Verbal scores stagnated or declined slightly, except for Asians, who went up 21 points from 496 to 517. 
Now, let’s look at scores relative to the white scores in 1996 and 2011. A decade and a half ago, the overall score for everybody was 36 points lower than the white score. Today, it’s 52 points lower. Most of that 16 points of relative decline is due to the demographic composition of America’s SAT-takers changing for the worse.
Difference v. whites, Total (V+M)

                1996  2011   Chg
ALL               36    52  (16)
Female            54    68  (14)
Male              15    32  (17)
Asian              5    49   44
White              0     0    0
Black            193   208  (15)
AmerIndian        89    91   (2)
Mexican          135   146  (11)
PR               152   159   (7)
Other Hisp       118   150  (32)
The Gap got worse for most of the minority groups that the press gets worked up over. Blacks fell from 193 points behind whites to 208 points (a 15 point relative decline, or a point per year). Mexicans fell from 135 lower to 146 lower. Other Hispanics fell the most, from 118 behind to 150 behind.
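These gap figures follow directly from the raw totals quoted earlier; a few lines reproduce them (scores copied from the first table; here a positive gap means points behind whites):

```python
# Recompute each group's gap versus whites in 1996 and 2011 from the raw totals
# (1996 total, 2011 total), copied from the first table above.
totals = {"White": (1049, 1063), "Asian": (1054, 1112), "Black": (856, 855),
          "Mexican": (914, 917), "Other Hisp": (931, 913)}

white_1996, white_2011 = totals["White"]
for group, (t1996, t2011) in totals.items():
    gap_1996 = white_1996 - t1996      # positive = points behind whites
    gap_2011 = white_2011 - t2011
    print(group, gap_1996, gap_2011, gap_2011 - gap_1996)
# Black goes from 193 behind to 208 behind; Asian from 5 ahead to 49 ahead.
```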
These declines are probably mostly due to society (especially the College Board) scraping the bottom of the barrel harder in 2011. What with the recession and all, everybody is convinced that they must go to college, so they try the SAT. The number of people who scored below 400 on Verbal grew from 179,000 to 302,000 and on Math from 172,000 to 251,000.

The number who scored 700 or higher also shot upwards, but that might be due in part to kids taking the SAT more times or taking both the SAT and ACT to see if they can shoot the moon. The number scoring 700 or higher on Verbal went up from 47,000 to 77,000 and on Math from 58,000 to 112,000. High scorers are presumably the most likely to do a lot of test prep and otherwise try to game the system.

In contrast to all other ethnic groups, who fell farther behind whites over the last 15 years, Asians had a 5-point advantage over whites in 1996, which blossomed to a 49-point lead by 2011, a relative change of 44 points. That’s a big change, relative to the near-stasis on everything else.

• Tags: IQ, Race, Race/IQ, SAT, Testing, Tests 
Here’s my new VDARE essay
It’s no surprise that Democrats tend to be so angry at anybody who mentions the race-IQ link, but why do so many Republicans now feel the same way? There are a number of reasons, but one is often overlooked. I explore an aspect of the sociology and psychology of Republican voters.
Read it there.

• Tags: Race/IQ 
About Steve Sailer

Steve Sailer is a journalist, movie critic for Taki's Magazine, VDARE.com columnist, and founder of the Human Biodiversity discussion group for top scientists and public intellectuals.
