The Unz Review - Mobile
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
iSteve Blog


The Flynn Effect of rising raw scores on IQ tests is one of the most interesting phenomena in all the human sciences. It was first noticed in the 1940s, but for a long time little attention was paid to the fact that IQ test publishers had to renorm their tests periodically because people kept doing better on them. This pattern began to be explored by political philosopher James Flynn from around 1979 onward, and the phrase “Flynn Effect” was coined in his honor in 1994's The Bell Curve.

One interesting aspect of the Flynn Effect is that it tends to be larger on the less culturally biased tests, such as the outer space-looking Raven’s Progressive Matrices:

Historically, much effort was put into the obvious challenge of developing IQ tests that are stable across space, from culture to culture. In contrast, nobody until Flynn paid all that much attention to the question of IQ tests being stable across time.

For example, the alien-looking Raven’s Matrices IQ test that was introduced in the 1930s in the hope of being more culture-free than previous IQ tests has seen a huge Flynn Effect of around 3 points per decade, or a standard deviation (15 points) in a half century. A score on the Raven’s that would put you at the 50th percentile a half century ago would only put you at the 16th percentile today.
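The percentile arithmetic behind that claim is just the normal curve: a score one standard deviation (15 points) below the current mean sits at roughly the 16th percentile. A minimal sketch of the calculation, using conventional IQ norms (mean 100, SD 15):

```python
from statistics import NormalDist

# IQ is conventionally normed to mean 100, SD 15.
sd = 15
old_mean, new_mean = 100, 115    # the raw-score mean has risen one SD in ~50 years

score = old_mean                 # the 50th-percentile raw score of a half century ago
z = (score - new_mean) / sd      # z = -1.0 against today's norms
percentile = NormalDist().cdf(z) * 100
print(round(percentile))         # → 16, i.e. roughly the 16th percentile
```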

The more human-seeming Wechsler Intelligence Scale for Children (WISC) saw a still-substantial Flynn Effect of about two points per decade, but that’s less than the Raven’s.

[Table: Flynn Effect gains (WISC, 1947–2002) and Cultural Load (WAIS-III) by Wechsler subtest]

Importantly, the size of the Flynn Effect from 1947-2002 differed sharply amongst the subtests on the WISC as shown above, from only 2 points over the 55 years on the “Information” and “Arithmetic” subtests to 22 points on “Picture Arrangement” and 24 points on “Similarities.” (In the table above, the Flynn Effect column is taken from my 2007 review in VDARE of Flynn’s book What Is Intelligence? )

The kind of cognitive faculties that come up in normal conversation, such as vocabulary, arithmetic and general knowledge, have only seen small Flynn Effects, which is why the Flynn Effect isn’t easily noticeable in much of daily life (although I’ll point out below where it can be seen).

Recently, James Thompson’s Psychological Comments had a table of the “cultural load” of each WISC subtest from a 2013 paper:

Kees-Jan Kan, Jelte M. Wicherts, Conor V. Dolan, and Han L. J. van der Maas. “On the Nature and Nurture of Intelligence and Specific Cognitive Abilities: The More Heritable, the More Culture Dependent.” Psychological Science 24(12) 2420–2428

… Cultural load was operationalized as the average proportion of items that were adjusted in each subtest of the WAIS-III when the scale was adapted for use in 13 countries.

I presume that means adjustments in questions beyond simple translation. IQ test publishers validate new editions of their tests in each country in which they intend to sell them, and that lets them notice proposed questions that don’t work well due to local idiosyncrasies. (In contrast, the PISA international school achievement tests have a “we’ll fix it in post-production” philosophy of dropping poorly designed questions after the PISA test is given. But in either case, it’s important to figure out at some point which questions just don’t work the same across space and which ones work well around the world with just simple translations.)

Wicherts et al have noticed that heritability is strongest on the most culture loaded subtests, which is very important. But I want to focus today upon the potential implications of their data (the Cultural Load column in my table above) for better understanding of the Flynn Effect.

My table above combines the two sets of figures for Wechsler subtests. (Note the oranges to tangerines comparison of WISC [Flynn Effect] to WAIS [Cultural Load] — there are a ton of technical issues here, such as the Digit Span subtest being missing from Flynn’s data, but I’m just going to blunder onward.)

Eyeballing my table, it looks like there’s a moderate negative correlation between the size of the Flynn Effect and the size of the Cultural Load. The correlation is -0.44.

This overall pattern shouldn’t be surprising because it’s in line with the general difference between the Raven’s and the Wechsler’s: the more a Wechsler subtest is like the Raven’s, the higher the Flynn Effect. Conversely, the more culture-dependent a Wechsler subtest is, the lower the Flynn Effect.

For example, “vocabulary” is the most culturally sensitive Wechsler subtest, not surprisingly, and it’s got quite a small Flynn Effect. Interestingly, vocabulary is also a really good measure of overall intelligence. For instance, the ongoing General Social Survey includes a 10-word vocabulary test that has proven to be a surprisingly decent proxy for IQ.

If we leave out the “Similarities” outlier, the correlation is -0.74.
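That leave-one-out comparison is easy to reproduce mechanically. The numbers below are illustrative placeholders, not the actual WISC/WAIS figures from the table above; the point is only the mechanics of dropping an outlier and recomputing Pearson's r:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length sequences."""
    return float(np.corrcoef(x, y)[0, 1])

# Hypothetical (Flynn Effect, Cultural Load) pairs per subtest -- invented
# numbers for illustration, not the real WISC/WAIS values discussed above.
flynn = [2, 4, 8, 12, 18, 22, 24]
load  = [0.9, 0.8, 0.6, 0.5, 0.3, 0.2, 0.8]  # last pair plays the "Similarities" outlier

r_all  = pearson_r(flynn, load)
r_trim = pearson_r(flynn[:-1], load[:-1])    # drop the outlier, as in the text
print(r_all, r_trim)  # dropping the outlier strengthens the negative correlation
```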

My best theory for what’s going on with the Flynn Effect, beyond obvious contributors like better nutrition, is that most cultures around the world have undergone a major cultural / environmental shift, proceeding at a fairly steady pace, that makes young people better at certain subtests, typically the Performance IQ subtests, but doesn’t do them much good on the Verbal IQ subtests, with the exception of “Similarities.”

As I wrote in 2007 about “Similarities:”

Finally, the fastest rising subtest on the WISC, Similarities, rewards abstract scientific thinking, what Flynn calls viewing the world through “scientific spectacles.”

A child gets a maximum score for replying that dogs and rabbits are “mammals.” A kid in 1947 who had never seen a nature documentary on TV would likely have said “They have four legs” or something else more concrete than the Linnaean category “mammals.”

In 1947 a child in the hollers of Kentucky would probably know more concrete things about dogs and rabbits than an urban child today. But IQ tests have tended to anticipate the direction in which global culture has evolved, away from the concrete and toward the abstract and two-dimensional, toward what can be represented on a piece of paper or a screen.

Whatever this change is, it’s reminiscent of Moore’s Law in its endurance and steady pace. As you know, in 1965 Gordon Moore, then of Fairchild Semiconductor (the famous Silicon Valley chip firm descended from Shockley Semiconductor) and later a co-founder of Intel, pointed out that the industry had been able to double the number of transistors on a standard size piece of silicon every year or two throughout the early 1960s, and he believed it would be able to keep up this pace for some time into the future. This more or less proved true for at least four decades, with world changing consequences, such as the coining of the term “Silicon Valley” in 1971 and the rise of Silicon Valley to immense economic importance.
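The compounding implied by that pace is worth spelling out. Assuming the slower end of the range, a fixed doubling every two years (an idealization; the real cadence varied), the arithmetic runs:

```python
# Transistor-count growth under a fixed doubling period (an idealization;
# the actual cadence varied between one and two years).
def growth_factor(years, doubling_period=2):
    return 2 ** (years / doubling_period)

print(growth_factor(10))            # one decade   → 32.0 (a 32-fold increase)
print(f"{growth_factor(40):,.0f}")  # four decades → 1,048,576 (about a million-fold)
```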

I don’t know if Moore’s Law is still in effect (the laptop I bought in 2015 is only trivially faster than the one I bought in 2012, the first time in my personal computer owning career, which goes back to 1984, that a new computer wasn’t tangibly faster). Similarly, I don’t know if the Flynn Effect is still operating everywhere. (I haven’t really been following the data in this decade.)

But Moore’s Law has been kind of like the Flynn Effect in that it has been relatively incremental, decade after decade, rather than erratic, and the effects have been felt globally even though its heartland has been Silicon Valley, kind of like how IQ testing’s heartland has been Silicon Valley ever since Lewis Terman released America’s first IQ test, the Stanford-Binet, a century ago.

Moreover, Moore’s Law (in the sense of higher performance in general) has had multiple causes. For example, when clock speeds on CPU chips topped out, the chip companies were able to regroup and keep Moore’s Law progressing for a number of years by doing other things. Similarly, better nutrition likely contributed to the Flynn Effect in the past (the U.S. added micronutrient supplementation of both iodine and iron to staples between WWI and WWII), but it has been less of a contributor in some countries in recent years as nutrition has gotten about as good as it’s going to get. Other, more mysterious factors apparently stepped in to keep the Flynn Effect going a while longer.

So, Moore’s Law is an informative analogy for the Flynn Effect.

But I would go further and suggest, somewhat hand-wavingly, that one of the driving forces of the Flynn Effect has been Moore’s Law, or, to be both more precise and more vague, some kind of superset of a direction to technological change of which Moore’s Law is a subset.

One of the big changes in daily life over recent centuries has been the growth of what I might call humans having to deal with “machine logic.” People today deal far more often each day than in the past with semi-intelligent machines that can only be dealt with in a certain way according to their own logic. You deal with the ATM rather than with a bank teller, with a gasoline pump rather than with a pump jockey, with elevator buttons rather than with elevator operators. You can’t wave your hands around with these machines until they figure out what you want done. You have to follow a precise logical series of steps.

(This trend may not continue forever. For example, searching the Internet with Google today requires less logic from the user than searching with Alta Vista did in 1998. Understanding the term “Boolean operators” helped you get more out of Alta Vista, while Google is now so smart that you don’t have to be as smart.)
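The “Boolean operators” in question were literal set logic over documents. A toy sketch of an Alta Vista-style AND/OR/NOT query (the index contents here are invented for illustration):

```python
# Toy inverted index: term -> set of document ids (contents invented here).
index = {
    "flynn":  {1, 2, 4},
    "effect": {1, 3, 4},
    "moore":  {3, 4, 5},
}

# Boolean query operators map directly onto set operations.
flynn_and_effect = index["flynn"] & index["effect"]   # flynn AND effect
flynn_or_moore   = index["flynn"] | index["moore"]    # flynn OR moore
effect_not_moore = index["effect"] - index["moore"]   # effect AND NOT moore

print(sorted(flynn_and_effect))  # → [1, 4]
print(sorted(effect_not_moore))  # → [1]
```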

This trend toward people having to interface more each decade with machine logic hasn’t just been happening since the silicon chip was invented. Before the silicon chip was the transistor, perfected by William Shockley, and before that the vacuum tube, which Lee de Forest made significant progress upon in Palo Alto around the time Lewis Terman of Stanford was adopting Binet’s pioneering IQ test for the American market.

Granted, I’m waving my hands around in making this argument in the hope that you’ll grasp what I’m trying to get across. I haven’t reduced it to a precise series of steps that a machine intelligence could understand, but I do think I’m onto something: the high Flynn Effect, low Cultural Load IQ subtests are kind of like tests of mastery at dealing with information technologies, and kids these days get more practice at that than we did, just as we got more practice than our parents did.

In contrast, kids these days likely have less practice dealing with complex 3-d entities, such as repairing automobile engines. Instead, they are used to dealing with 2-d paper and, ever increasingly, 2-d screens. But IQ tests tend to shy away from much in the way of 3-d testing, other than some blocks subtests on the WISC and other children’s IQ tests, largely for reasons of economy. Asking and answering questions in a 2-d format, whether on paper or on a computer screen, is cheap.

But because 2-d is cheap, the real world has also moved in the 2-d direction that IQ tests anticipated.


One thing that seems pretty likely is that in each person’s life, he has a window where it’s easy and fun to learn to communicate logically with a new set of systems, and over time that window closes. For example, when I was in the marketing research industry, I jumped all over the coming of the personal computer in 1984 and the Internet in 1996.

More senior executives at the information company where I worked back then tended to find the new personal information technologies difficult to master. They were used to issuing orders to intelligent human beings, such as their secretaries, who wouldn’t take everything quite so literally. The founders of the company where I worked were superbly intelligent at dealing with human psychology, but they found arbitrary machine logic daunting.

But similar information technology developments in this century have not struck me as fun at all to learn about. On Twitter, for example, I’m basically clueless about whether I’m replying to one person or to thousands. Today, I feel like the Vice Chairman of my employer back in 1984 when he gave me his $9,000 IBM PC XT with the coveted 10-meg hard disk because he was too old to learn to type.

Generation after generation, children grow up in an environment ever denser with the kind of systems logic that the more Flynn-Effected Wechsler subtests ask about. Growing up, kids these days get more practice with the kind of thinking tested on the Raven’s and on some of the Wechsler subtests. And they legitimately are better at it.

The Flynn Effect is a side effect of the developers of the IQ test being on “the right side of history.” We’re used to hearing progressives denounce IQ tests as obsolete pseudoscience on the wrong side of history, but, in reality, IQ testing in the United States has some amusing organic ties to the triumph of Silicon Valley. Lewis Terman’s son Fred Terman (1900-1982), a professor of electrical engineering at Stanford, was perhaps the single most important figure in the rise of Silicon Valley. The mentor of Hewlett and Packard, he largely invented the model of Stanford grad students like Larry Page and Sergey Brin starting up high tech firms like Google.

You are supposed to believe that the Termans were all wrong, but it sure looks like we’re living in the world the Terman family anticipated.

• Category: Science • Tags: Flynn Effect, IQ, Moore's Law, Robots, Silicon Valley 

Back before the 1992 Olympics, Runner’s World executive editor Amby Burfoot published a cover story, “White Men Can’t Run,” pointing out the West African / East African distinction between who wins Olympic sprints versus distance races.

At that point, blacks of West African descent had made up all of the last 16 finalists in the Olympics men’s 100m dash, the race that determines the World’s Fastest Man. A white Scotsman had won the 100m dash at the Moscow 1980 Olympics, but in both the 1984 and 1988 Olympics, all the finalists who had made it through three preliminary rounds were black.

Amazingly, that’s now been true for eight straight Olympics: 64 out of 64 finalists have been black from 1984 through 2012. That’s one of the most astounding statistics in all of human biodiversity studies.

On the other hand, a few non-blacks have had some success in the 100m in recent years. A white Frenchman became the first white man to clearly break the 10 second barrier, getting as low as 9.92 in 2011. And Japanese sprinters regularly make the Olympic semifinals, so this streak no doubt won’t last forever. This spring Bingtian Su of China became the first Asian to run 9.99.

In Beijing, at the current world championships of track & field (one tier below the Olympics), Usain Bolt of Jamaica, 2008-2012 Olympic gold medalist, edged out Justin Gatlin, the American 2004 Olympic gold medalist who was twice subsequently caught for PEDs, in a time of 9.79 to 9.80.

Of note, Bingtian Su of China pleased the home fans in the semifinal by tying his recent Asian record of 9.99. In the expanded 9-man final, he finished last at 10.06.

• Category: Race/Ethnicity, Science • Tags: Human Biodiversity, Sports 

Carl Zimmer reports in the NYT:

DNA Deciphers Roots of Modern Europeans
JUNE 10, 2015

… On Wednesday in the journal Nature, two teams of scientists — one based at the University of Copenhagen and one based at Harvard University — presented the largest studies to date of ancient European DNA, extracted from 170 skeletons found in countries from Spain to Russia. Both studies indicate that today’s Europeans descend from three groups who moved into Europe at different stages of history.

The first were hunter-gatherers who arrived some 45,000 years ago in Europe.

Then came farmers who arrived from the Near East about 8,000 years ago.

Finally, a group of nomadic sheepherders from western Russia called the Yamnaya arrived about 4,500 years ago. The authors of the new studies also suggest that the Yamnaya language may have given rise to many of the languages spoken in Europe today.

In other words, with “the Yamnaya” we’re likely talking about more or less the people also known as the Proto-Indo-Europeans, who used to be called the Aryans.

… Until about 9,000 years ago, Europe was home to a genetically distinct population of hunter-gatherers, the researchers found. Then, between 9,000 and 7,000 years ago, the genetic profiles of the inhabitants in some parts of Europe abruptly changed, acquiring DNA from Near Eastern populations.

Archaeologists have long known that farming practices spread into Europe at the time from Turkey. But the new evidence shows that it wasn’t just the ideas that spread — the farmers did, too.

The hunter-gatherers didn’t disappear, however. They managed to survive in pockets across Europe between the farming communities.

“It’s an amazing cultural process,” said David Reich, a geneticist at Harvard Medical School who led the university’s team. “You have groups which are as genetically distinct as Europeans and East Asians. And they’re living side by side for thousands of years.”

Between 7,000 and 5,000 years ago, however, hunter-gatherer DNA began turning up in the genes of European farmers. “There’s a breakdown of these cultural barriers, and they mix,” said Dr. Reich.

Poussin, 1634

Perhaps like the breakdown of the cultural barriers between the Roman men and the Sabine women?

About 4,500 years ago, the final piece of Europe’s genetic puzzle fell into place. A new infusion of DNA arrived — one that is still very common in living Europeans, especially in central and northern Europe.

The closest match to this new DNA, both teams of scientists found, comes from skeletons found in Yamnaya graves in western Russia and Ukraine.

Archaeologists have long been fascinated by the Yamnaya, who left behind artifacts on the steppes of western Russia and Ukraine dating from 5,300 to 4,600 years ago. The Yamnaya used horses to manage huge herds of sheep, and followed their livestock across the steppes with wagons full of food and water.

It was an immensely successful way of life, allowing the Yamnaya to build huge funeral mounds for their dead, which they filled with jewelry, weapons and even entire chariots.

David W. Anthony, an archaeologist at Hartwick College and a co-author on the Harvard study, said it was likely that the expansion of Yamnaya into Europe was relatively peaceful. “It wasn’t Attila the Hun coming in and killing everybody,” he said.

It’s a stereotype that the Eurasian Steppe tends to be violent, so therefore it can’t be true. The real reason Eastern Europe is called The Bloodlands is because of the beautiful red sunsets. Everybody knows that.

Instead, Dr. Anthony thought the most likely scenario was that the Yamnaya “entered into some kind of stable opposition” with the resident Europeans that lasted for a few centuries. But then gradually the barriers between the cultures eroded.

For a dissenting view of the values and predilections of Eurasian steppe peoples: [embedded video]

On the other hand, Dr. Anthony cogently rebutted:

The Copenhagen team’s study suggests that the Yamnaya didn’t just expand west into Europe, however. The scientists examined DNA from 4,700-year-old skeletons from a Siberian culture called the Afanasievo. It turns out that they inherited Yamnaya DNA, too.

Dr. Anthony was surprised by the possibility that Yamnaya pushed out over a range of about 4,000 miles.

What with them being so peaceful and all.

“I myself have a hard time wrapping my head around explanations for that,” he said.

I bet you do.

The two studies also add new fuel to a debate about how languages spread across Europe and Asia. Most European tongues belong to the Indo-European family, which also includes languages in southern and Central Asia.

For decades, linguists have debated how Indo-European got to Europe. Some favor the idea that the original farmers brought Indo-European into Europe from Turkey. Others think the language came from the Russian steppes thousands of years later.

The new genetic results won’t settle the debate, said Eske Willerslev, an evolutionary biologist at Copenhagen University who led the Danish team. But he did think the results were consistent with the idea that the Yamnaya brought Indo-European from the steppes to Europe. …

“We can just say that the expansion fits very well with the geographical spread of the Indo-European language,” said Dr. Willerslev.

• Category: History, Science • Tags: Anthropology, Aryans, Indo-Europeans, Yamnaya 

I wanted to come back to the popular NYT Magazine article “Why Do Americans Stink at Math?” about how they teach math better in Japan, as you can tell because Japanese students average a higher PISA score than American students. According to the article, the Common Core now offers us another opportunity to teach math better. But, American teachers have consistently failed to exploit the opportunities offered them by educational theorists:

It wasn’t the first time that Americans had dreamed up a better way to teach math and then failed to implement it. The same pattern played out in the 1960s, when schools gripped by a post-Sputnik inferiority complex unveiled an ambitious “new math,” only to find, a few years later, that nothing actually changed. In fact, efforts to introduce a better way of teaching math stretch back to the 1800s. The story is the same every time: a big, excited push, followed by mass confusion and then a return to conventional practices.

You see, it’s not that the math fads of the past failed, it’s that they were never really tried.

In reality, the New Math mostly failed because it was an attempt by math professors to design a curriculum that makes sense to math professors wanting to create new math professors. To students, however, it was repetitious (every September from 1965-1970 I had to study the Number Line in the first chapter of each math textbook), boring, and pointless. The Number Line didn’t do anything to help me think more interesting thoughts about baseball statistics.

The trouble always starts when teachers are told to put innovative ideas into practice without much guidance on how to do it. In the hands of unprepared teachers, the reforms turn to nonsense, perplexing students more than helping them.

The trouble starts earlier when the Powers that Be adopt some smooth-talking salesman’s pitch for a whole new way to teach math without making him test it first on real students. The reason we have the Common Core is not because it aced its Phase I, II, and III experiments involving real students. It was never tested before roll-out.

No, we have the Common Core because David Coleman impressed Bill Gates as significantly less stupid than the typical education theorist, so Gates bribed the educational establishment to get behind Coleman’s baby and make it a fait accompli before anyone had a chance to ask: “Shouldn’t we test this first?” (And keep in mind that I’m relatively positive toward the Common Core versus most of the other junk out there. If our country is going to let one guy control education according to his whims, Bill Gates would be among the less bad choices for that guy.)

Carefully taught, the assignments can help make math more concrete. Students don’t just memorize their times tables and addition facts but also understand how arithmetic works and how to apply it to real-life situations. But in practice, most teachers are unprepared and children are baffled, leaving parents furious.

This paragraph reflects today’s education establishment worldview about the past up until about last week. Until yesterday, children were forced to sit up perfectly straight at their desks and chant the times tables and get rapped on the knuckles with a ruler when they made a mistake. That’s why students “just memorize their times tables and addition facts” instead of developing Critical Thinking Skills and Concern about Social Justice.

In reality, of course, large fractions of students these days fail to memorize their times tables and addition facts.

In other words, liberals are completely amnesiac about how they’ve been running education for a long, long time.

For instance, I went to a Catholic parochial school with nuns, and there was a little knuckle-rapping still going on in the mid-1960s. But by the time I got to St. Francis de Sales’ 7th grade in 1970, the younger teachers had staged a coup and organized a junior high school teaching collective that was more relevant. Most of my schooling in 1970-72, as far as I can remember, consisted of listening in class to album sides from Abbey Road, Deja Vu, Hair, and Jesus Christ Superstar for examples of symbols and metaphors, and sitting in a circle and rapping about how the deaths of Hendrix, Joplin, and Morrison bummed us out.

And this was at a prim parochial school. I went to public Millikan Junior High for summer school those years and it looked like Dazed and Confused. Granted, St. Francis de Sales is just over Coldwater Canyon from the Sunset Strip, so we were probably a year or two ahead of the rest of the country, but your junior high school probably went through the same changes within a half decade.

Let me repeat this NYT explanation of how things will be better if the educational theorists ever get their full funding:

Students don’t just memorize their times tables and addition facts but also understand how arithmetic works and how to apply it to real-life situations.

Look, forcing students to memorize their times tables and addition facts (e.g., 6+7=13) is not something the current liberal-run system is all that great at. It’s boring for teachers. But you sure can’t apply arithmetic to real-life situations without being instantly aware and really confident that 6+7=13.

As for “understand how arithmetic works,” well, that’s a rabbit hole that more than a few of the greatest minds of the late 19th and early 20th Centuries went down:

“From this proposition it will follow, when arithmetical addition has been defined, that 1+1=2.”

That’s on p. 379 of Volume I of Principia Mathematica by Bertrand Russell and Alfred North Whitehead in 1910. (I haven’t actually read the previous 378 pages.)

There’s a difference between how to work with math and how math works. But the article on why Americans stink at math seems oblivious to that:

The new math of the ‘60s, the new new math of the ‘80s and today’s Common Core math all stem from the idea that the traditional way of teaching math simply does not work. As a nation, we suffer from an ailment that John Allen Paulos, a Temple University math professor and an author, calls innumeracy — the mathematical equivalent of not being able to read. On national tests, nearly two-thirds of fourth graders and eighth graders are not proficient in math. More than half of fourth graders taking the 2013 National Assessment of Educational Progress could not accurately read the temperature on a neatly drawn thermometer. (They did not understand that each hash mark represented two degrees rather than one, leading many students to mistake 46 degrees for 43 degrees.)

May I suggest that numeracy and mathematics are not necessarily the same thing. The New Math of the 1960s, for example, was definitely not intended to emphasize the kind of practical numeracy that say, a carpenter needs. It was intended to make students better at the higher, more abstract forms of mathematics that would form the underpinnings of their college and postgrad math courses that would allow the very smartest students to make the theoretical breakthroughs necessary to win the technological competition in the Cold War and/or create better grad students for math professors.

In general, numeracy and abstract higher math skills correlate, just as the ability to harmonize and the ability to read music correlate. But lots of star musicians are bad at reading music. For example, here’s a list of 15 guitarists who couldn’t read sheet music, including John Lennon, Jimi Hendrix, Eric Clapton, and Eddie Van Halen. Similarly, from Wikipedia on the Beatles’ song “Golden Slumbers” on Abbey Road:

“Golden Slumbers” is based on the poem “Cradle Song“, a lullaby by the dramatist Thomas Dekker. The poem appears in Dekker’s 1603 comedy Patient Grissel. McCartney saw sheet music for Dekker’s lullaby at his father’s home in Liverpool, left on a piano by his stepsister Ruth. Unable to read music, he created his own music.

My impression is that while McCartney lacks musical literacy, he’s quite good at numeracy and could probably tell you off the top of his head his annual after-tax royalties on “Golden Slumbers” and how much that bitch Yoko made off his song before Paul wrestled the rights back. (I don’t know specifically about “Golden Slumbers,” but there was a period of years in which 100% of the royalties from Paul’s “Yesterday” went to Yoko, and that sum is no doubt carved in Paul’s soul.)

By the lowly standards of pundits, and even by the higher standards of MBAs, I’m pretty numerate. I can do arithmetical stunts like calculating a weighted average in my head. But I let my wife help my sons with their high school math because all that stuff is over my head. It’s too abstract for me. I don’t like variables that can stand for different things, I like numbers that represent real things. If I didn’t like working with actual numbers so much, I might care more about working with pretend numbers.
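A weighted average, the mental stunt mentioned above, is just each value scaled by its share of the total weight. A minimal sketch:

```python
def weighted_average(values, weights):
    """Sum of value*weight divided by the total weight."""
    total_w = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total_w

# e.g., a class where 20 students average 90 and 10 students average 60:
print(weighted_average([90, 60], [20, 10]))  # → 80.0, not the naive midpoint 75
```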

Unlike most people, however, I don’t advise children to Be Like Me. But, I think people who theorize in the New York Times about education should try at least to be aware of these tradeoffs.

On the same multiple-choice test, three-quarters of fourth graders could not translate a simple word problem about a girl who sold 15 cups of lemonade on Saturday and twice as many on Sunday into the expression “15 + (2×15).” Even in Massachusetts, one of the country’s highest-performing states, math students are more than two years behind their counterparts in Shanghai.

Adulthood does not alleviate our quantitative deficiency. A 2012 study comparing 16-to-65-year-olds in 20 countries found that Americans rank in the bottom five in numeracy. On a scale of 1 to 5, 29 percent of them scored at Level 1 or below, meaning they could do basic arithmetic but not computations requiring two or more steps.

This PIAAC test of adults from the PISA people showed that immigrants and blacks were pulling U.S. scores way down versus other rich countries in Europe and Northeast Asia. From the New York Times last year:

The new study shows that foreign-born adults in the United States have much poorer-than-average skills, but even the native-born scored a bit below the international norms. White Americans fared better than the multicountry average in literacy, but were about average in the math and technology tests.

The NYT Magazine article assumes that numeracy is the same as understanding how math works. For example, in reactionary America in contrast to progressive Japan, according to the article,

Students learn not math but, in the words of one math educator, answer-getting. Instead of trying to convey, say, the essence of what it means to subtract fractions, teachers tell students to draw butterflies and multiply along the diagonal wings, add the antennas and finally reduce and simplify as needed. The answer-getting strategies may serve them well for a class period of practice problems, but after a week, they forget. And students often can’t figure out how to apply the strategy for a particular problem to new problems.

In contrast, street children in Brazil are numerate and understand the essences:

But our innumeracy isn’t inevitable. In the 1970s and the 1980s, cognitive scientists studied a population known as the unschooled, people with little or no formal education. Observing workers at a Baltimore dairy factory in the ‘80s, the psychologist Sylvia Scribner noted that even basic tasks required an extensive amount of math. For instance, many of the workers charged with loading quarts and gallons of milk into crates had no more than a sixth-grade education. But they were able to do math, in order to assemble their loads efficiently, that was “equivalent to shifting between different base systems of numbers.” Throughout these mental calculations, errors were “virtually nonexistent.” And yet when these workers were out sick and the dairy’s better-educated office workers filled in for them, productivity declined.

The unschooled may have been more capable of complex math than people who were specifically taught it, but in the context of school, they were stymied by math they already knew. Studies of children in Brazil, who helped support their families by roaming the streets selling roasted peanuts and coconuts, showed that the children routinely solved complex problems in their heads to calculate a bill or make change. When cognitive scientists presented the children with the very same problem, however, this time with pen and paper, they stumbled. A 12-year-old boy who accurately computed the price of four coconuts at 35 cruzeiros each was later given the problem on paper. Incorrectly using the multiplication method he was taught in school, he came up with the wrong answer. Similarly, when Scribner gave her dairy workers tests using the language of math class, their scores averaged around 64 percent. The cognitive-science research suggested a startling cause of Americans’ innumeracy: school.
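The coconut calculation that boy did in his head is equally simple to verify; a minimal sketch (the 200-cruzeiro payment is a hypothetical figure of mine, not from the studies):

```python
# Street-vendor arithmetic from the Brazil studies quoted above:
# four coconuts at 35 cruzeiros each, then change from a payment.
price_each = 35   # cruzeiros, from the quoted example
quantity = 4
bill = price_each * quantity
paid = 200        # hypothetical payment, for illustration only
change = paid - bill
print(bill, change)  # 140 60
```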

But of course the favela kids making change don’t understand the “essence” of arithmetic, not in the sense that, say, Bertrand Russell understood its essence. They have rules of thumb they follow that work fine for their tasks. Their techniques aren’t necessarily generalizable, however: their change-making methods aren’t going to be much use in getting them through Algebra II, which is now required to graduate from high school in some parts of America.

So, in the real world, inculcating the numeracy to make change and getting all students through Algebra II turn out to be somewhat contradictory goals for the bottom half or so of the population. I don’t know the best way to deal with this partial tradeoff. But certainly the first step is to be able to publicly admit there is a tradeoff.

• Category: Science • Tags: American Media, Education, Math 

Nicholas Wade in the NYT reports:

The Nobel Prize in Physiology or Medicine was awarded this year to three American scientists who solved a problem of cell biology with deep relevance to cancer and aging. The three will receive equal shares of a prize worth around $1.4 million.

The recipients solved a longstanding puzzle involving the ends of chromosomes, the giant molecules of DNA that embody the genetic information. These ends, called telomeres, get shorter each time a cell divides and so serve as a kind of clock that counts off the cell’s allotted span of life.

The three winners are Elizabeth H. Blackburn of the University of California, San Francisco, Carol W. Greider of Johns Hopkins University School of Medicine and Jack W. Szostak of Massachusetts General Hospital.

The two other 2009 hard science Nobels are not out yet, but this announcement reflects an on-going trend in which the top female scientific talent is concentrating in the life sciences and leaving the lifeless sciences, physics and chemistry, to the boys.

Here’s a list of all the female winners (keep in mind that prizes have more often been split in recent years; in other words, it has gotten easier to become a Nobel Laureate):

So, before 1965, women won five Nobels in physics or chemistry vs. only one in medicine. Since then, women have won zero in physics or chemistry (warning: this could change this week) versus nine in medicine.

This strikes me as healthy: women specializing in what they (and I, as a beneficiary of medical science) find most important. Of course, in the wake of the 2005 Larry Summers brouhaha, vast amounts of money are being spent to lure women scientists away from the life sciences and into the inanimate sciences in the name of diversity. Will all that money make humanity better off?

(Republished from iSteve by permission of author or representative)
• Category: Science • Tags: Diversity, Feminism 

A point I want to make more clearly is that one major reason that accurately predicting events that people are particularly interested in is so hard is that many of those events are the result of some kind of tournament.

We are fascinated by tournaments. (Just look at all the complaints that tonight’s college football championship game only represents a quasi-tournament rather than an explicit tournament like the NCAA basketball championships).

So many of the things we most want experts to predict for us are explicit tournaments (e.g., the Super Bowl playoffs) that have been carefully designed to create maximum uncertainty in the later, more climactic rounds by matching the best contestants against each other.

For example, in about 90 or 100 tries, a #16 seeded team in the men’s NCAA basketball tournament has never upset a #1 seeded team in the opening round, so basketball games are actually quite predictable when there is a fair-sized difference in quality between the teams, as determined by their seasonal performance. But subsequent-round games become less predictable as the quality gap narrows, so public interest builds.
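One back-of-the-envelope way to put a number on that predictability is Laplace’s rule of succession; this is my illustration, not anything from the post, and it assumes roughly 100 independent games with zero upsets:

```python
# Rule-of-succession estimate of a #16-over-#1 upset probability,
# given zero upsets in roughly 100 opening-round games (the count cited above).
games = 100
upsets = 0
p_upset = (upsets + 1) / (games + 2)  # Laplace's rule of succession
print(round(p_upset, 4))  # 0.0098
```

In other words, even a crude estimator puts the upset chance around one percent, which is why nobody tunes in for the #1 vs. #16 games.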

Or, the things we are interested in can be semi-explicit tournaments (e.g., the Presidential primary/general election process).

Or, unplanned events take on some of the nature of tournaments.

For example, people in the 19th Century were utterly fascinated by the Battle of Waterloo (June 18, 1815), which determined the basic political arrangements of Europe up through 1914. It was often remarked that the next century of European dominance was determined by the events of a few minutes in the crisis of the battle in which Napoleon’s hitherto-undefeated Imperial Guard nearly broke through the British lines, but were stopped just short. Then, they faltered, broke, and ran.

Waterloo — which Wellington called “a damn nice thing — the nearest run thing you ever saw” — was seen as evidence against large-scale deterministic theories of history, since so much depended upon something so close.

Contributing to Waterloo’s fame were its numerous tournament-like aspects. For example, Bonaparte was the old champion making a stunning comeback. Wellington was the challenger who had never faced Napoleon before, but had worked his way up to the top by defeating his best marshals.

Finally, much that interests us is forged by vaguely tournament-like processes. For example, stock prices are the result of, in effect, competitions between those who think the price is too low and those who think it is too high.

On the other hand, the kind of phenomena that the social sciences (and much of public policy) are concerned with — crime rates, test scores, and the like — tend not to be very tournament-like at all, and thus tend to be fairly predictable.

• Category: Science 

I was wondering what impact Galileo’s conviction had on science in Italy, so I took a look at the database Charles Murray sent me of the 4002 eminent artists and scientists he compiled from leading reference books for his 2003 book Human Accomplishment.

From 1000 AD to Galileo’s conviction in 1632, Italy furnished 34.7% of the world’s scientific eminence. From then up through 1950, it only accounted for 3.46%. Now that’s what I call an order of magnitude!
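The “order of magnitude” quip checks out; a trivial sketch using Murray’s two percentages:

```python
# Italy's share of world scientific eminence (Murray's figures quoted above).
share_before = 34.7  # percent, 1000 AD to Galileo's 1632 conviction
share_after = 3.46   # percent, 1632 through 1950
ratio = share_before / share_after
print(round(ratio, 1))  # 10.0
```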

Italian contributions to science (measured at the scientist’s 40th birthday) continued fairly strong for the rest of the 17th Century, so the Galileo trial’s impact wasn’t immediate. Of course, the 17th Century was like Andy Warhol’s Factory: everybody was a genius! (Except that in the 17th Century there really were geniuses throughout Europe.) But in Italy things sloooowly slowed down, as they sped up elsewhere.

We’re not used to things getting more boring and unproductive, but it has been a common tendency throughout history, and one we may get familiar with again.

• Category: Science • Tags: Political Correctness Makes You Stupid 

Science is in the business of making predictions, but the better it gets at predicting anything, the more boring those predictions are for us. For example, I predict that the sun will set at O’Hare Airport in Chicago today at 7:26 pm CDT. When you think of all the effort that has gone into astronomical observation and prediction over the millennia (for example, Stonehenge), the ability to predict that accurately is an incredible feat of the human race.

It’s also phenomenally boring.

Now, here’s a different prediction: Republican nominee Mike Huckabee will outpoll Democratic nominee Bill Richardson 51%-47% in the November 2008 Presidential election. “What an idiot!” you say, “Don’t you know that the Clintons will stop at nothing to get back to the White House? Richardson and Huckabee? You don’t know anything about the election!” And you’re right. I don’t. I’m not even sure where Huckabee is from. I think it’s that state, you know, the one you drive through to get to that other state.

Now, here are some more predictions. USC will not finish #1 in college football this season. Instead, Rutgers will bring the national title home to Delaware. (Or maybe to Connecticut, depending on where, precisely, Rutgers is located. Assuming it’s located somewhere. Maybe it’s like the DeVry Institute and is located everywhere. But I digress.) On the other hand, USC will win the NCAA basketball championship next spring behind frosh sensation OJ Mayo.

“What a jerk!” you exclaim, “Everybody knows that USC’s linebacking corps is the most devastating in college football since Penn State’s back in 1987.” Well, I don’t know that. In fact, I know barely anything about college football these days.

But the point is that, unlike the sunset forecast, these predictions are interesting, as brainless as they are. The reason that making up nonsense off the top of my head about elections and sports is interesting is that nobody can accurately predict sports and far-off elections with a lot of candidates. Sports, especially, are designed to be hard to predict precisely so that they will keep our interest. The same goes for gambling. Randomness isn’t natural in the world, at least above the subatomic level. It takes a lot of work to develop gambling devices that are close to random, but a roulette wheel is more interesting than betting on when the sun will go down because it’s hard to predict.

You often hear that the social sciences aren’t real sciences like astronomy because they can’t predict anything. But that’s not true. Indeed, I’ll make a social science prediction for 25 years into the future. I predict that in the year 2032, the students at the schools in Beverly Hills will enjoy higher average scores on statewide and nationwide standardized tests than the students at schools in Compton. Anybody want to bet against me?

I’ve got a million more predictions like that. For example, in 2032, the children of today’s unskilled immigrants will be more of a burden on society than the children of today’s skilled immigrants. (That seems like an important use of social science — to make predictions extremely important for choosing the optimal immigration legislation, right?)

“Well, sure,” you say, “Of course. But those predictions are boring. And depressing. In fact, it’s in bad taste to mention things that we all sort of know are true but that we really don’t want to think about. Who wants to hear predictions like that? Tell us something interesting.”

Okay, on December 31, 2032, the Dow Jones Average will stand at 107,391. But just one year later it will have crashed, in the wake of Black Wednesday, all the way to 33,828. But by 2042, during the bubble following a major breakthrough in cold fusion, the Dow will have reached the 201,537 barrier.

“Now that’s better! That’s the kind of prediction we like: specific and exciting. Of course, you’re probably just randomly punching numbers on your keypad, but we forgive you because you’re not boring and depressing us anymore.”

• Category: Science 
About Steve Sailer

Steve Sailer is a journalist, movie critic for Taki's Magazine, columnist, and founder of the Human Biodiversity discussion group for top scientists and public intellectuals.
