The Unz Review - Mobile
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
Email This Page to Someone

 Remember My Information



=>
 TeasersiSteve Blog
/
National Assessment of Educational Progress

Bookmark Toggle AllToCAdd to LibraryRemove from Library • BShow CommentNext New CommentNext New Reply
🔊 Listen RSS

2015 NAEP White Hispanic Gap2

This is taking the average of four 2015 federal NAEP scores: both Math and Reading for both 4th and 8th Grades.

 
🔊 Listen RSS

NAEP 2015 Asian White Gaps

Here are the 2015 National Assessment of Educational Progress scores for Asians (orange) and whites (blue). I took a simple average of four scores: Reading and Math for both 4th and 8th grades. The overall sample size for the whole country is about 280,000, which is a lot, although I wouldn’t put too much faith in any one state’s scores, such as Colorado’s outlier score for Asians.

One observation I’d make is that Hawaii suggests the long term price of importing farm workers: Hawaii brought in a lot of Japanese and Chinese many generations ago, and in 2015 they’re still not scoring impressively.

 
🔊 Listen RSS

Screenshot 2015-10-28 05.27.50

Here are the brand new 2015 federal National Assessment of Educational Progress (NAEP) tests scores sorted in order of the size of the White-Black Gap on 8th grade math. The color reflects whether the state went for Obama (blue) or Romney (red) in 2012.

A few comments:

- Although it’s often assumed that The Gap is due to racism, it tends to be bigger in blue Democratic states.

- Gentrifying Washington DC now has enough white children to get a white NAEP score. Sure enough, The Gap in very liberal Washington DC is bigger than in all the states, due to a very high white score in DC and a slightly below average black score.

- German-Americans and Nordic-Americans don’t seem to know how to deal with African-Americans. As I’ve often pointed out, the biggest Gap is in Wisconsin, but in this table Nebraska, Minnesota, and Pennsylvania have the next widest Gaps. (Any relationship between this and Merkel’s Boner is probably not coincidental.)

- The highest black scores are in Dept. of Defense schools (DODEA), followed by military intensive states like Arizona and Alaska and well-educated liberals states like New Jersey and Massachusetts that also have high white scores.

- The smallest Gap is in West Virginia, which has, by far, the lowest white scores.

 
🔊 Listen RSS

For years, Audacious Epigone and myself have been pointing out that Texas public school kids do surprisingly well on the federal NAEP exam within each ethnic group. Now, the NYT finally figures that out, too:

Surprise: Florida and Texas Excel in Math and Reading Scores
OCT. 26, 2015

David Leonhardt
@DLeonhardt

When the Education Department releases its biennial scorecard of reading and math scores for all 50 states this week, Florida and Texas are likely to look pretty mediocre. In 2013, the last time that scores were released, Florida ranked 30th on the tests, which are given to fourth and eighth graders, and Texas ranked 32nd.

But these raw scores, which receive widespread attention, almost certainly present a misleading picture — and one that gives short shrift to both Florida and Texas. In truth, schools in both states appear to be well above average at teaching their students math and reading. Florida and Texas look worse than they deserve to because they’re educating a more disadvantaged group of students than most states are.

A report released Monday by the Urban Institute has adjusted the raw scores for each state to account for student demographics, including poverty, race, native language and the share of students in special education. The central idea behind the adjustments is that not all students arrive at school equally prepared, and states should not be judged as if students did.

“Making these demographic adjustments,” said Matthew Chingos, a senior fellow at the Urban Institute and the report’s author, “gives us a much better picture of how students are doing.”

With the adjustments, Texas jumps all the way to third in the 2013 state ranking, and Florida to fourth. Massachusetts, which also ranks first with unadjusted scores, remains in the top spot; although the state is relatively affluent, its students perform even better than its demographics would predict. New Jersey ranks second.

Other states with a less extreme version of the Florida and Texas story — that is, their schools are performing better than is often understood — include Arkansas, Georgia, Nevada and New York.

The new results will no doubt offer fodder for the continuing debate over education. Florida and Texas are mostly Republican-run states, where teacher unions are relatively weak and policy makers have tried to introduce more competition and accountability. At the same time, some states with a strong union presence, including New Jersey and New York, also perform well.

The results do seem to offer another vote of confidence for rigorous, common standards — an idea that took off with the Common Core, but has since come under harsh political attack. Massachusetts helped pioneer the idea of such standards in 1993, with ambitious goals, clear assessments and increased school funding.

States with less impressive results in the Urban Institute analysis, where favorable demographics are disguising mediocre performance, include Connecticut, Nebraska, Wyoming, Montana and Iowa. And while New Hampshire, Vermont, Minnesota and Washington are still above average, their scores are not as impressive as the unadjusted numbers suggest.

Many of these states are affluent or predominantly white — if not both. The new analysis suggests that many of their school systems have better reputations than they deserve. They enroll a lot of students who come to school well prepared and thus excel on tests. But the schools themselves are not doing as good a job as their test scores suggest.

This won’t come as a surprise to long-time readers of the Steveosphere.

But, while it’s journalistic custom to refer to the NAEP as “the gold standard” of testing, how much can we really trust the NAEP for making these kind of subtle state by state comparisons?

Specifically, the NAEP are low stakes tests to the kids, and in some states, the adults administering the NAEP treat them as low stakes for them too. On the other hand, there is some evidence that Texas administrators cares about their schools scoring better on the NAEP. For example, Texas excuses 10% of its sample of 4th graders from taking the NAEP while California only excuses 3% of its sample.

In contrast, the SAT and ACT college admissions tests taken by juniors and seniors in high school are clearly high stakes tests on which students have an incentive to try hard.

But comparing SAT and ACT average scores are tricky because in most states not everybody takes even one of the tests because they aren’t interested in applying to a competitive college.

And there are regional differences in whether a state is traditionally an ACT state (e.g, Iowa) or an SAT state (e.g., New Jersey) that influence average scores. For example, a few decades ago, Iowa usually led the county in average SAT scores because the only Iowa students who took the SAT were brainiacs interested in applying to the exclusive coastal colleges.

On the other hand, the regional differences are blurring as, especially, the ACT aggressively pushed into SAT states. Now it’s becoming common for ambitious students to take both tests to see which one they do better upon.

In general, the smarter people are, the more likely they are to take a college entrance exam. So, the lower the percentage of kids taking an exam in a state, the more inflated that state’s average score tends to be relative to the whole population of kids in the state, which is the figure I’d like to roughly estimate.

So, I’m going to present the 2014 SAT and ACT numbers for the two biggest states, Texas and California, both average scores and percent of the cohort taking each exam.

Unfortunately, I don’t remember who provided me with these numbers of SAT and ACT scores from 2006 through 2014, both set on the old-fashioned SAT 400 to 1600 scale (i.e., leaving out the doomed Writing subtest; the ACT is a 3 part exam with a maximum score of 36 but the ACT people publish tables for how to convert ACT scores to SAT scores). So, I don’t know if these data are reliable. But they don’t seem too implausible either. Also, I found the 2010 Census data for 13 year olds by race in each state as a proxy for 17 year olds in 2014. (For some reason, I couldn’t find Asians by age in Texas in 2010, so I’ll just stick to the Big Three racial/ethnic groups.)

Screenshot 2015-10-26 19.30.38

Let’s start off by looking at the white scores. In California, the total number of SAT tests said to be taken by whites in 2014 was equal to 47% of the number of white 13-year-olds in 2010 on the Census. In Texas, the percentage of SAT takers among whites was 49%, so we can compare the average SAT scores pretty directly, with just a reminder that this comparison is slightly biased in favor of California: California white kids score 1099, which is 36 points higher than Texas’s average white SAT score. The standard deviation on the 400 to 1600 scale was supposed to be 200 (although it’s gotten larger over time), so that would suggest California kids score about 0.18 standard deviations higher on the SAT, which is not a large gap, but not vanishingly small either.

On the ACT, California white high school students average 1144 and Texas white kids 1078, for a 64 point or .32 s.d. gap. But only 21% of white kids in California take the ACT, suggesting it’s kind of a boutique test in California for strivers. In contrast, 32% of Texas whites take the test, suggesting there the ACT in Texas falls in between a boutique test and a meat and potatoes test. So, it’s hard to compare the ACT scores for whites directly.

But my general impression is that whites in California, at least among the college curious, score a little better on college admissions tests than whites in Texas.

(One methodological quibble to keep in mind is that I don’t know how the data treats an individual student retaking the same test in one year. Do they enter all the scores or just the highest? And is the likelihood of retaking the same test greater in one state or the other?)

The difference is pretty small, but it’s in the opposite direction of the difference reported by the NAEP.

Among blacks, the California advantage appears to be quite similar to what’s seen among whites: small but not insignificant.

On the other hand, among Hispanics, California’s advantages in test scores are smaller than among among whites and blacks, and Texas Hispanics are somewhat more likely to take both the SAT and ACT. I’m not at all confident that California Hispanics would do better overall than Texas Hispanics if everybody in both states took a college admissions test.

So, my best guess would be: modest advantages for the white and black populations of Californians over white and black Texans, respectively, but Hispanics in Texas overall are no lower scoring and might actually be a few points higher.

By the way, Texas Asians score 16 points higher on the SAT and 17 points higher on the ACT than California Asians. I would include them in the table if I could find the 2010 Census figures for the number of 13-year-old Asians in Texas.

 
🔊 Listen RSS
Screenshot 2015-08-29 20.15.45

Federal NAEP reading scores 12th graders 2013

A general assumption of the moderate conventional wisdom over the last half century is that average black performance is dragged down by specific impediments, such as poverty, crime, culture of poverty, parental taciturnity, lead paint, or whatever. One would therefore expect blacks without those impediments to score equal with whites.

But a close inspection of the social science data suggests that the world doesn’t really look like that. For example, above is the 2013 federal National Assessment of Educational Progress scores for 12th graders in Reading. Blacks who are the children of college graduates average 274, which is the same as whites who are the children of high school dropouts.

The Math Gap is the same:

Screenshot 2015-08-29 20.34.33

At the high school dropout level, The Gap in math is 16 points, but at the college graduate level, The Gap is twice as large: 32 points. That’s the opposite of what the conventional wisdom would imply.

So, basically, there are two theories left to account for this. How do we choose between them?

In the past, Western civilization tried to follow Occam’s Razor, which implies the Bell Curve theory of regression toward different means would be most likely.

But the term “Western civilization” is exclusionary and makes people feel bad. These days, we know that the highest form of thought is not using Occam’s Razor but shouting “Occam’s racist!”

So the only viable explanation is the Conspiracy Theory Theory of Pervasive Racism: people who think they are white are constantly destroying black bodies by saying words like “field” and “swing.” Or something. It doesn’t really matter what the specifics of the Conspiracy Theory Theory are since the more unfalsifiable the better.

Because Science.

 
🔊 Listen RSS

Screenshot 2015-07-01 16.54.40

Paul Krugman argues today that Puerto Rico is kind of like West Virginia, Mississippi, and Alabama:

Put it this way: if a region of the United States turns out to be a relatively bad location for production, we don’t expect the population to maintain itself by competing via ultra-low wages; we expect working-age residents to leave for more favorable places. That’s what you see in poor mainland states like West Virginia, which actually looks a fair bit like Puerto Rico in terms of low labor force participation, albeit not quite so much so. (Mississippi and Alabama also have low participation.) … There is much discussion of what’s wrong with Puerto Rico, but maybe we should, at least some of the time, just think of Puerto Rico as an ordinary region of the U.S. …

Okay, but there’s a huge difference in test scores.

The federal government has been administering a special Puerto Rico-customized version of its National Assessment of Educational Progress (NAEP) exam in Spanish to Puerto Rican public school students, and the results have been jaw-droppingly bad.

For example, among Puerto Rican 8th graders tested in mathematics in 2013, 95% scored Below Basic, 5% scored Basic, and (to the limits of rounding) 0% scored Proficient, and 0% scored Advanced. These results were the same in 2011.

In contrast, among American public school students poor enough to be eligible for subsidized school lunches (“NSLP” in the graph above), only 39% scored Below Basic, 41% scored Basic, 17% scored Proficient, and 3% scored Advanced.

Puerto Rico’s test scores are just shamefully low, suggesting that Puerto Rican schools are completely dropping the ball. By way of contrast, in the U.S., among black 8th graders, 38% score Basic, 13% score Proficient, and 2% score Advanced. In the U.S. among Hispanic 8th graders, 41% reach Basic, 18% Proficient, and 3% Advanced.

In Krugman’s bete noire of West Virginia, 42% are Basic, 20% are Proficient, and 3% are Advanced. In Mississippi, 40% are Basic, 18% Proficient, and 3% are Advanced. In Alabama, 40% are Basic, 16% are Proficient, and 3% are Advanced. (Unmentioned by Krugman, the lowest scores among public school students are in liberal Washington D.C.: 35% Basic, 15% Proficient, and 4% Advanced.)

Let me repeat, in Puerto Rico in Spanish, 5% are Basic, and zero zip zilch are Proficient, much less Advanced.

Am I misinterpreting something? I thought I must be, but here’s a press release from the Feds confirming what I just said:

The 2013 Spanish-language mathematics assessment marks the first time that Puerto Rico has been able to use NAEP results to establish a valid comparison to the last assessment in 2011. Prior to 2011, the assessment was carefully redesigned to ensure an accurate assessment of students in Puerto Rico. Results from assessments in Puerto Rico in 2003, 2005 and 2007 cannot be compared, in part because of the larger-than-expected number of questions that students either didn’t answer or answered incorrectly, making it difficult to precisely measure student knowledge and skills. The National Center for Education Statistics, which conducts NAEP, administered the NAEP mathematics assessment in 2011. But those results have not been available until now, as it was necessary to replicate the assessment in 2013 to ensure that valid comparisons could be made.

“The ability to accurately measure student performance is essential for improving education,” said Terry Mazany, chairman of the National Assessment Governing Board, which oversees NAEP. “With the support and encouragement of education officials in Puerto Rico, this assessment achieves that goal. This is a great accomplishment and an important step forward for Puerto Rico’s schools and students.”

NAEP assessments report performance using average scores and percentages of students at or above three achievement levels: Basic, Proficient and Advanced. The 2013 assessment results showed that 11 percent of fourth-graders in Puerto Rico and 5 percent of eighth-graders in public schools performed at or above the Basic level; conversely, 89 percent of fourth-graders and 95 percent of eighth-graders scored below that level. The Basic level denotes partial mastery of the knowledge and skills needed for grade-appropriate work. One percent or fewer of students in either grade scored at or above the Proficient level, which denotes solid academic performance. Only a few students scored at the Advanced level.

The sample size for 8th graders was 5,200 students at 120 public schools in the Territory.

UPDATE: I’ve now discovered Puerto Rico’s scores on the 2012 international PISA test. Puerto Rico came in behind Jordan in math.

Results this abysmal can’t solely be an HBD problem (although it’s an interesting data point in any discussion of hybrid vigor); this has to also be due to a corrupt and incompetent education system in Puerto Rico.

New York Times’ comments aren’t generally very useful for finding out information, but Krugman’s piece did get this comment:

KO’R New York, NY 4 hours ago

My husband and I have had a house in PR for 24 years. For two of those years we taught English and ESL at Interamericana, the second largest PR university. Our neighbors have children in the public grade schools. In a nutshell: the educational system in PR is a joke!!! Bureaucratic and corrupt. Five examples: (1) In the elementary schools near us if a teacher is sick or absent for any reason, there is no class that day. (2) Trying to get a textbook changed at Interamericana requires about a year or more of bureaucratic shinnanigans (3) A colleague at Interamericana told us that he’d taught in Africa (don’t remember where) for a few years and PR was much worse in terms of bureaucracy and politics. ( (4) The teaching method in PR is for the teacher to stand in front of the class, read from the textbook verbatim, and have the students repeat what he or she read. And I’m not speaking just about English – this goes for all subjects. 5) Interamericana is supposed to be a bi-lingual iniversity. In practice, this means the textbooks are in English, the professor reads the Spanish translation aloud, and the usually minimal discussion is in Spanish. …

Public school spending in Puerto Rico is $7,429 per student versus $10,658 per student in the U.S. Puerto Rico spends more per student than Utah and Idaho and slightly less than Oklahoma.

Puerto Rico spends less than half as much as the U.S. average on Instruction: $3,082 in Puerto Rico vs. $6,520 in America, significantly less than any American state. But Puerto Rico spends more than the U.S. average on Total Support Services ($3,757 vs. $3,700). Puerto Rico is especially lavish when it comes to the shifty-sounding subcategories of General Administration ($699 in PR vs. $212 in America) and Other Support Services ($644 vs. $347). PR spends more per student on General Administration than any state in America, trailing only the notorious District of Columbia school system, and more even than DC and all 50 states on the nebulous Other Support Services.

Being a schoolteacher apparently doesn’t pay well in PR, but it looks like a job cooking the books somewhere in the K-12 bureaucracy could be lucrative.

The NAEP scores for Puerto Rico and the U.S. are for just public school students.

A higher percentage of young people in Puerto Rico attend private schools than in the U.S. The NAEP reported:

In Puerto Rico, about 23 percent of students in kindergarten through 12th grade attended private schools as of the 2011-2012 school year, compared with 10 percent in the United States. Puerto Rico results are not part of the results reported for the NAEP national sample.

So that accounts for part of the gap. But, still, public schools cover 77% of Puerto Ricans v. 90% of Americans, so the overall picture doesn’t change much: the vast majority of Puerto Rican 8th graders are Below Basic in math.

Another contributing factor is likely that quite a few Puerto Ricans summer in America and winter in Puerto Rico and yank their kids back and forth, which is disruptive to their education.

It’s clear that Puerto Ricans consider their own public schools to be terrible and that anybody who can afford private school should get out. The NAEP press release mentions that 100% of Puerto Rican public school students are eligible for subsidized school lunches versus about 50% in the U.S. Heck, Oscar-winner Benicio Del Toro’s lawyer father didn’t just send him to private school, they sent him to a boarding school in Pennsylvania.

Still, these Puerto Rican public school scores are so catastrophic that I also wouldn’t rule out active sabotage by teachers, such as giving students an anti-pep talk, for some local labor reason. For example, a PISA score from Austria was low a couple of tests ago because the teacher’s union told teachers to tell students not to bother working hard on the test. But the diminishment of the Austrian PISA score wasn’t anywhere near this bad. And Puerto Rico students got exactly the same scores in 2011 and 2013.

And here’s Jason Malloy’s meta-analysis of studies of Puerto Rican cognitive performance over the last 90 years.

 
🔊 Listen RSS

From the Baltimore Sun:

Baltimore second in per-pupil spending, Census Bureau says

May 21, 2013|By Erica L. Green, The Baltimore Sun

The Baltimore school system ranked second among the nation’s 100 largest school districts in how much it spent per pupil in fiscal year 2011, according to data released Tuesday by the U.S. Census Bureau.

The city’s $15,483 per-pupil expenditure was second to New York City’s $19,770. Rounding out the top five were Montgomery County, which spent $15,421; Milwaukee public schools at $14,244; and Prince George’s County public schools, which spent $13,775.

Baltimore City, New York, and Milwaukee test scores are broken out separately in the NAEP test’s Trial Urban District Assessment program. (The other two districts are suburban counties in the rich Washington DC area. Three of the top five most expensive districts in the country are in liberal Maryland.) I’ll look at 8th grade math for black students only:

National (public schools): 51% basic or above, 14% proficient or above, 2% advanced

Baltimore City: 44% basic or above, 10% proficient or above, 1% advanced

New York City: 51% basic or above, 13% proficient, 1% advanced

Milwaukee: 31% basic or above, 4% proficient, NA advanced

So, Baltimore gets more for its money than Milwaukee. (Of course, if you’ve been reading iSteve for long, you’ll know of the amazing dismalness of Milwaukee blacks.)

 
🔊 Listen RSS

Long time readers know I’ve been interested in the question of school test scores in the two biggest states, California and Texas. In the federal National Assessment of Educational Progress scores, Texas routinely beats California across all racial groups. But the NAEP is low stakes to students, which makes it easier for state officials to manipulate results at the margins.

However, looking at an unverified table of high-stakes SAT and ACT college admission average test scores for 2014, white, Hispanic, and black California high schoolers outscore their counterparts in Texas (using a weighted average of SAT and ACT scores). But Texas’s Asians outscore California’s Asians.

Race CA CA SAT/ACT TX TX SAT/ACT CA-TX
All 350,655 1,016 295,583 973 43
AmInd 1,814 982 1,501 992 (9)
Asian 66,385 1,108 18,569 1,126 (18)
Black 20,667 888 37,615 854 33
Hispanic 131,723 905 113,395 891 14
Other 28,357 1,065 12,961 1,003 61
White 101,709 1,113 111,542 1,069 44

Both states are moderately majority SAT: in California, SAT takers outnumber ACT takers 2.1 to 1, and in Texas 1.5 to 1. This appears to be putting everything on the traditional 400 to 1600 scale, rather than the 600 to 2400 scale of the last decade, but that is being phased out soon. The mean was rescaled in 1995 to, ideally, be 1000 with a standard deviation of 200, although both have drifted since then.

So, California’s overall average is 97 points, or a little under a half of a standard deviation below it’s white average, while Texas’s overall average is 96 points below it’s white average.

I’m not going to put too much credence in these numbers: even if the data are valid (which I haven’t checked), my weighted average methodology is crude. On the other hand, the results don’t seem too implausible.

I mostly want to put some numbers out there to provoke somebody interested in this long-running problem of how to synthesize SAT and ACT scores reliably to try to come up with a more sophisticated general model.

 
🔊 Listen RSS

One of the older, more nagging conundrums for anybody interested in education and demographics is the lack of readily available meaningful data on how high school students do by state and by race on high stakes tests such as the SAT and ACT college admissions tests.

The federal government invests a lot of money in the NAEP test, but that is a low stakes test for students, so it’s more easily manipulable by those states that care about the results. For example, Texas usually manages to have a larger percentage of its less academically inclined students not take the NAEP than does Iowa, which helps contribute to Texas’s sterling NAEP scores.

Or maybe Texas really has figured out an effective, economical system of educating students of all ethnic groups. It’s hard to say, but it’s an important question that deserves study.

A high stakes test, in contrast, is one in which students have motivations for doing their best, which is why I’ve always wanted to look at SAT and ACT scores by state. After all, the NAEP isn’t important in the big picture, while the SAT and ACT are.

But, the percent of 17-year-olds taking one or both college admissions tests vary by state. This, however, is not an insuperable problem since estimates of what nontakers might have scored can be modeled demographically by looking at the variation in usage rates.

Another difficult problem, but one I believe can be modeled, is that the two tests started out regionally, with the ACT dominating states near its headquarters in Iowa City and the SAT near its headquarters in Princeton and on the West Coast.

In the upper Midwest, traditionally, the only students who took the SAT were ambitious one looking for admission to national universities on the East or West coasts. This led to Iowa and Illinois students taking the SAT averaging much higher scores than in the East and West.

In recent years, both tests have become less regional, with ACT-taking spreading to the coasts.

That evolution should help an ambitious analyst come up with a reasonable model for estimating the best guess for the combined SAT/ACT scores by state by race.

An iSteve reader (whose identity I have lost in the shuffle) kindly posted average SAT and ACT scores and number taking by state by race each year from 2006-2014 here. He converted the ACT scores into SAT score equivalents, although I don’t know which methodology he used.

Combine this trove of data with the 2010 Census data on the number of 17 year olds by race in each state and you have the raw materials for building a model that will get around the traditional problems that have bogged everybody down.

Me, personally, I’m not going to do all this work, but if somebody out there has the skills and is looking for a topic, this is an important one.

I don’t have the sources for this data, but if you are interested in working with this, post questions in the comments and the person who posted the numbers might respond.

 
🔊 Listen RSS

The darker the tint the better the 8th graders

Audacious Epigone has posted his table of white IQ estimates by state, using NAEP scores for 8th graders (public and private), ranging from 108.0 in Washington D.C. (which isn’t a state) and 104.4 in Massachusetts and 103.5 in New Jersey to 97.7 in Oklahoma, 97.5 in Alabama and a hurting 95.1 in West Virginia.

Thus, New Jersey whites (who, Bruce Springsteen songs to the contrary, are a notably intelligent and well-educated white population) scores 0.4 standard deviations higher than Alabama whites. That’s not a huge gap. On the other hand, the 0.86 s.d. gap between Washington D.C. whites and nearby West Virginia whites is substantial, and may subtly color a lot of media discourse.

Moynihan’s Law of the Canadian Border is still vaguely visible, but is much less strong for whites only than for total populations (which is of course most of the joke). Texas of course stands out sharply from the central southern states.

Of course, some of the differences in test scores don’t reflect underlying IQ but are instead reflections of different effectivenesses at educations and, presumably, at how hard different states get their students to try on the NAEP.

You can read his whole table there.

 
🔊 Listen RSS
The federal government’s National Assessment of Educational Progress test results for 12th graders in readin’ and ‘rithmetic are now out for 2013. The feds have a nice website to display the numbers. I’ve been following these kind of test score stats for almost as long as I’ve been following baseball statistics, but I have to admit that seldom if ever do any Mike Trouts come along to add excitement to my peculiar hobby.
Above is a graph of the ten states where the NAEP had big enough sample sizes to break out The Gap (white-black, in this graph on the Math test). Of the ten states, the only one where The Gap is notably smaller than in the nation at large is West Virginia. How has West Virginia accomplished this goal that has obsessed policymakers and pundits for most of my lifetime? By having many of the smart white people in West Virginia move to greener pastures in North Carolina, Virginia, Georgia, and so forth.
(Republished from iSteve by permission of author or representative)
 
🔊 Listen RSS
At Your Lying Eyes, Ziel writes:

Predictions – Anyone Wanna Bet?

In the spirit of the famous Simon/Ehrlich bet, here are some predictions for 2023. Any takers?

1. Real GDP growth will average less than 2.5% per year over the next decade

2. The Gap – as measured by NAEP 8th grade math scores among black and white students nationwide – will be greater than 0.9 standard deviations.

3. California’s performance on the 8th grade math NAEP will not improve relative to the U.S. mean (in standard deviation units) over it’s 2013 performance.

4. The price of oil – despite decreased demand – will be no lower than the average price during 2013.

5. The Social Security revenue estimates of the CBO with regard to the 2013 Comprehensive Immigration Reform act will prove to be too optimistic (as a % of GDP). The CBO estimates of the immigrant population in the U.S. as of 2023 will prove to be too low.

6. The share of total income earned by the bottom 20% of American families (measured in terms of family income) will be lower than it is today; this will also hold true for wealth.

7. Neither Libya nor Egypt will have a functioning democracy.

8. Average global temperature, as measured by the GISS, will not be lower than today.

9. The per capita GDP of Brazil, measured in $PPP, relative to that of Switzerland in terms of dollar difference, will not be improved.

The last one is a bit of a sucker bet in that “dollar difference” implies absolute difference between Switzerland and Brazil, not a relative percentage difference. For example, say that per capita GDP in Switzerland is $50,000 and in Brazil is $20,000. (I’m exaggerating to make the arithmetic easy.) If Brazil goes up 20 percent to $24,000, while Switzerland goes up 10 percent to $55,000, then Ziel wins.
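The sucker-bet arithmetic is easy to check. A minimal sketch using the made-up round numbers from the paragraph above (not real GDP figures):

```python
# Hypothetical round numbers from the example above, not real GDP figures.
swiss_now, brazil_now = 50_000, 20_000
swiss_later = round(swiss_now * 1.10)    # Switzerland grows 10% -> 55,000
brazil_later = round(brazil_now * 1.20)  # Brazil grows 20% -> 24,000

# Relative terms: Brazil gains ground (20% growth beats 10%).
brazil_growth = brazil_later / brazil_now - 1   # 0.20
swiss_growth = swiss_later / swiss_now - 1      # 0.10

# Absolute "dollar difference" terms: the gap still widens.
gap_now = swiss_now - brazil_now        # 30,000
gap_later = swiss_later - brazil_later  # 31,000 -> Ziel wins the bet
```

Brazil can outgrow Switzerland in percentage terms every year and still fall further behind in absolute dollars, because Switzerland's percentage gains apply to a much larger base.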
I’d be worried about losing on California’s NAEP scores relative to the rest of the country. I suspect the long term trend is that California will become increasingly Asian while shedding blacks and Hispanics to places with lousier weather. But how soon (if ever) that mechanism will impact NAEP scores is something I’d have to look at recent numbers closely before I’d want to bet on it.
(Republished from iSteve by permission of author or representative)
 
Over the years, I’ve given Michael Bloomberg a hard time. Why? Well, the billionaire New York City mayor who likes to claim that he has “the seventh-largest army in the world” seems like a worthy foe.
One of Bloomberg’s boasts has been that, based on rising test scores, he had fixed the New York City public schools: a few years ago, 82% of NYC students scored proficient or advanced in math!
This braggadocio contributed to his political foes in Albany deciding to toughen the tests, with predictable results. From the NYT:

At their peak, in 2009, 69 percent of city students were deemed proficient in English, and 82 percent in math, under less stringent exams. After concluding the tests had become too easy, the state made them harder to pass in 2010, resulting in score drops statewide. … Last year, … 47 percent of city students passed in English, and 60 percent in math.

This year, New York State revamped the tests even more radically. …

In New York City, 26 percent of students in third through eighth grade passed the state exams in English, and 30 percent passed in math, according to the New York State Education Department.

Kevin Drum points out that on the federal NAEP test, NYC is down slightly relative to the average big city over the last few years.

Statewide, 16 percent of black students and 18 percent of Hispanic students passed English exams, compared with 40 percent of white students and 50 percent of Asians.

There must be something uniquely peculiar about New York, since the test score hierarchy turned out to be Asian > white > Hispanic > black. Who has ever seen that ranking before?

The exams were some of the first in the nation to be aligned with a more rigorous set of standards known as Common Core, which emphasize deep analysis and creative problem-solving. …

By the way, does anybody have an informed opinion on Common Core tests, which are currently slated to go into operation in another 44 states?
(Republished from iSteve by permission of author or representative)
 
Jennifer Rubin, who scribes the pro-immigration “Right Turn” column in the Washington Post, denounces Jason Richwine for the high crime of Noticing Things:

Heritage stumbles, again and again

Posted by Jennifer Rubin on May 8, 2013 at 4:23 pm

It’s been a tough go of it for Heritage ever since it released its study asserting immigration reform would cost trillions. It was roundly criticized by both liberal and conservative analysts. Then today the dam really broke.

The Post reports that the dissertation of the study’s co-author, Jason Richwine, asserted, “The average IQ of immigrants in the United States is substantially lower than that of the white native population, and the difference is likely to persist over several generations. The consequences are a lack of socioeconomic assimilation among low-IQ immigrant groups, more underclass behavior, less social trust, and an increase in the proportion of unskilled workers in the American labor market.” No wonder he came up with such a study; his dissertation adviser was George Borjas, a Harvard professor infamous for his crusade against immigration (legal or not).

Jennifer Korn, executive director of the pro-immigration-reform conservative Hispanic Leadership Network, responds: “If you start with the off-base premise that Hispanic immigrants have a lower IQ, it’s no surprise how they came up with such a flawed study.” She continued: “Richwine’s comments are bigoted and ignorant. America is a nation of immigrants; to impugn the intelligence of immigrants is to offend each and every American and the foundation of our country. The American Hispanic community is entrepreneurial, and we strive to better our lives through hard work and determination. This is not a community hampered by low intelligence but a community consistently moving forward to better themselves and our country.”

Heritage scrambled to distance itself from the author’s IQ views, with a spokesperson insisting that they did not relate to the viability of its study. But for the reasons Korn gives it most certainly does. No wonder the study postulates that legalized immigrants will be poor and become a drain on society.

Moreover, that Heritage engaged such a person to author its immigration study suggests that the “fix” was in from the get-go. It also raises the question of whether Heritage is now hiring fringe characters to generate its partisan studies of questionable scholarship. I expect that will be about all we hear from Heritage on the study for a while.

It certainly undermines the cause of all immigration opponents to have their prized work authored by such a character. It’s an unpleasant reminder that sincere opponents of reform should distance themselves from the collection of extremists and bigots who populate certain anti-immigrant groups. One can certainly be anti-immigration-reform and not be anti-Hispanic, but it doesn’t help to be rallying around a report by someone convinced that “the totality of the evidence suggests a genetic component to group differences in IQ.”

The facts won’t calm Ms. Rubin down, because, obviously, the facts are hatestats, but here’s a meta-analysis of the enormous amount of data available on the subject:
Roth, P. L., Bevier, C. A., Bobko, P., Switzer III, F. S. & Tyler, P. (2001) “Ethnic group differences in cognitive ability in employment and educational settings: a meta-analysis.” Personnel Psychology 54, 297–330.
As I wrote in 2005:
This 2001 meta-analysis of 39 studies covering a total of 5,696,519 individuals in America (aged 14 and above) came up with an overall difference of 0.72 standard deviations in g (the “general factor” in cognitive ability) between “Anglo” whites and Hispanics. The 95% confidence range of the studies ran from .60 to .88 standard deviations, so there’s not a huge amount of disagreement among the studies.
One standard deviation equals 15 IQ points, so that’s a gap of 10.8 IQ points, or an IQ of 89 on the Lynn-Vanhanen scale where white Americans equal 100. That would imply the average Hispanic would fall at the 24th percentile of the white IQ distribution. This inequality gets worse at higher IQs. Assuming a normal distribution, 4.8% of whites would fall above 125 IQ versus only 0.9% of Hispanics, which explains why Hispanics are given ethnic preferences in prestige college admissions.
In contrast, 105 studies of 6,246,729 individuals found an overall white-black gap of 1.10 standard deviations, or 16.5 points. (I typically round this down to 1.0 standard deviation and 15 points.) So, the white-Hispanic gap appears to be about 65% as large as the notoriously depressing white-black gap. (Warning: this 65% number does not come from a perfect apples-to-apples comparison because more studies are used in calculating the white-black difference than the white-Hispanic difference.) For screen shots of data tables from Roth et al., click here.
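The percentile and tail figures above follow from the standard normal distribution. A sketch using Python's `math.erf` (the 0.72 SD gap is the Roth et al. figure; the rest is just normal-curve arithmetic):

```python
import math

def norm_cdf(z):
    """Standard normal cumulative distribution via the error function."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

sd_gap = 0.72                     # white-Hispanic gap in SD units (Roth et al.)
iq_points = sd_gap * 15           # 10.8 points on an SD-15 IQ scale
hispanic_mean = 100 - iq_points   # ~89 where white Americans = 100

# Percentile of the white distribution at the Hispanic mean: ~24th.
pctile = norm_cdf(-sd_gap)

# Share above IQ 125 in each group, assuming normality:
white_above_125 = 1 - norm_cdf((125 - 100) / 15)               # ~4.8%
hispanic_above_125 = 1 - norm_cdf((125 - hispanic_mean) / 15)  # just under 1%
```

The high-end divergence is the key point: a modest mean gap produces a much larger ratio in the tails of the two distributions.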

This fits well with lots of other data. For example, Hispanics generally do almost as badly on the National Assessment of Educational Progress school achievement tests as blacks, but that average is dragged down by immigrant kids who have problems adjusting to English. The last time the NAEP asked about where the child was born was 1992, and Dr. Stephan Thernstrom of Harvard kindly provided me with the data from that examination. For foreign-born Hispanics, the typical gap versus non-Hispanic whites was 1.14 times as large as the black-white gap. But for American-born Hispanics, the gap between non-Hispanic whites and American-born Hispanics was 0.67 times as large as the gap between non-Hispanic whites and blacks, very similar to the 0.65 difference seen in the meta-analysis of IQs. For more on Mexican-American educational attainment, see the landmark “Generations of Exclusion” study by Telles & Ortiz.

(Republished from iSteve by permission of author or representative)
 
Texas public school students usually score pretty well in the federal government’s NAEP school achievement tests, at least when adjusted for ethnicity. I’ve always wondered how they do it. It would seem like the kind of thing worth checking into.
One way, it turns out, is by excluding more students from having to take the NAEP than other states do. Texas excuses 10% of its 4th graders versus 4% nationwide and only 3% in California. (See p. 5 of this new report on the NAEP performance of the 5 biggest states.) So, Texas has simply made a large fraction of Below Basic scorers vanish. That’s a nice little running start for Texas.
If Texas has figured out how to fiddle with that parameter, I wonder what else they’ve figured out?
(Republished from iSteve by permission of author or representative)
 
The feds’ National Assessment of Educational Progress has a table of 4th and 8th grade vocabulary and reading comprehension scores by state. Sample size issues are of concern for smaller states which tend to bounce around, but we can state with a high degree of statistical confidence that the future of the state of California, the traditional State of the Future, looks dumb. Out of the 50 states, the Golden State ranks 48th, 47th, 48th, and 49th on various measures. Here’s the bottom six of 52 in the four different tests:

In contrast, Massachusetts is 1st, 1st, 1st, and 1st, while the District of Columbia was 52nd, 52nd, 52nd, and 52nd (in case you are wondering why D.C. is the 52nd state, Department of Defense schools rank 2nd, 5th, 2nd, 6th). Obviously, the problem is all those Republicans in California and D.C. If only D.C. would develop enlightened political opinions like Massachusetts, its test scores would soar.

Perhaps more relevantly, Texas is 37th, 36th, 37th, and 36th. Texas always beats California on the NAEP. Has anybody studied this to make sure this is not just a test artifact (e.g., Texas cares about the NAEP and California doesn’t)? If it isn’t, why the consistent difference? Texas is pretty bad, but it’s not as bad as California, and beggars can’t be choosers, so somebody ought to be investigating why Texas beats California.

One obvious objection is that the future isn’t as bad as it looks because Hispanics, as new immigrants, are just being held back by the inevitable biases of testing skills in English.

Indeed, this effect does exist, but how big is it? Here’s national 8th grade vocabulary. The first number is the score at the 10th percentile, then the 25th, 50th, 75th, and 90th.

Let’s first compare whites and Asians. At the 10th percentile, Asians lag whites by 8 points. Presumably, a fair number of these Asian 8th graders just got off the plane from China, so their English vocabulary is limited. At the 25th percentile, the White-Asian gap is down to 5 points. At the median, it’s 3, at the 75th percentile it’s 0, and at the 90th percentile, Asians are out in the lead by a point.
Now, compare Hispanics to blacks, most of whom grow up speaking English, but as we all know from hundreds of articles, African-Americans grow up in conditions that would drive a Trappist monk crazy for lack of speech. In black homes, nobody ever talks, watches TV, or listens to rap music. So, black scores on language are bad, with unfortunate long-term consequences.
At the 10th percentile, where many of the Hispanics are newcomers, blacks lead by 2 points. At the 25th percentile, however, Hispanics are out in front by 1 point, by 2 at the median, 3 at the 75th percentile, and 4 at the 90th.
So, clearly, Hispanics who have all the advantages are, on average, a little smarter than blacks who have all the advantages. In other words, if immigration were shut off for a generation or two, Mexicans would appear, on average, perceptibly more on the ball academically than blacks. Indeed, that was my perception back in the 1970s in L.A., where the Chicanos had mostly been a stable population since WWII.
But, nationally, Hispanics only pick up 6 points on blacks going from the 10th to the 90th percentiles, while Asians pick up 9 points on whites, who are, to be frank, a lot more competition.
Being a little smarter than blacks is, well, good. Or, you could say with equal justice, less bad. On the other hand, Hispanics at the 90th percentile among Hispanics, typically those with all the advantages, are simply not playing in the same league as Asians and whites with all the advantages. They’re down there beating out blacks for third place, not being nationally competitive. There’s not a lot of high end in the Hispanic population.
However you look at it, it’s still not very encouraging considering that our leadership kind of bet the country on Hispanics.
(Republished from iSteve by permission of author or representative)
 
Psychometrics is a relatively mature field of science, and a politically unpopular one. So you might think there isn’t much money to be made in making up brand new standardized tests. Yet, there is.
From the NYT:

U.S. Asks Educators to Reinvent Student Tests, and How They Are Given

By SAM DILLON

Standardized exams — the multiple-choice, bubble tests in math and reading that have played a growing role in American public education in recent years — are being overhauled.

Over the next four years, two groups of states, 44 in all, will get $330 million to work with hundreds of university professors and testing experts to design a series of new assessments that officials say will look very different from those in use today.

The new tests, which Secretary of Education Arne Duncan described in a speech in Virginia on Thursday, are to be ready for the 2014-15 school year.

They will be computer-based, Mr. Duncan said, and will measure higher-order skills ignored by the multiple-choice exams used in nearly every state, including students’ ability to read complex texts, synthesize information and do research projects.

“The use of smarter technology in assessments,” Mr. Duncan said, “makes it possible to assess students by asking them to design products of experiments, to manipulate parameters, run tests and record data.”

I don’t know what the phrase “design products of experiments” even means, so I suspect that the schoolchildren of 2014-15 won’t be doing much of it.

Okay, I looked up Duncan’s speech, “Beyond the Bubble Tests,” and what he actually said was “design products or experiments,” which almost makes sense, until you stop and think about it. Who is going to assess the products the students design? George Foreman? Donald Trump? (The Donald would be good at grading these tests: tough, but fair. Here’s a video of Ali G pitching the product he designed — the “ice cream glove” — to Trump.)

Because the new tests will be computerized and will be administered several times throughout the school year, they are expected to provide faster feedback to teachers than the current tests about what students are learning and what might need to be retaught.

Both groups will produce tests that rely heavily on technology in their classroom administration and in their scoring, she noted.

Both will provide not only end-of-year tests similar to those in use now but also formative tests that teachers will administer several times a year to help guide instruction, she said.

And both groups’ tests will include so-called performance-based tasks, designed to mirror complex, real-world situations.

In performance-based tasks, which are increasingly common in tests administered by the military and in other fields, students are given a problem — they could be told, for example, to pretend they are a mayor who needs to reduce a city’s pollution — and must sift through a portfolio of tools and write analytically about how they would use them to solve the problem.

Oh, boy …

There is some good stuff here — adaptive tests are a good idea (both the military’s AFQT and the GRE have gone over to them). But there’s obvious trouble, too.

Okay, so these new tests are going to be much more complex, much more subjective, and get graded much faster than fill-in-the-bubble tests? They’ll be a dessert topping and a floor wax!

These sound a lot like the Advanced Placement tests offered to high school students, which usually include lengthy essays. But AP tests take two months to grade, and are only offered once per year (in May, with scores coming back in July), because they use high school teachers on their summer vacations to grade them.

There’s no good reason why fill-in-the-bubble tests can’t be scored quickly. A lot of public school bubble tests are graded slothfully, but they don’t have to be. My son took the ERB’s Independent School Entrance Exam on a Saturday morning and his score arrived at our house in the U.S. Mail the following Friday, six days later.

The only legitimate reason for slow grading is if there are also essays to be read, but in my experience, essay results tend to be dubious, at least below the level of Advanced Placement tests, where there is specific subject matter in common. The Writing test that was added to the SAT in 2005 has largely been a bust, with many colleges refusing to use it in the admissions process.

One often overlooked problem with any kind of writing test, for example, is that graders have a hard time reading kids’ handwriting. You can’t demand that kids type because millions of them can’t. Indeed, writing test results tend to correlate with number of words written, which is often more of a test of handwriting speed than of anything else. Multiple choice tests have obvious weaknesses, but at least they minimize the variance introduced by small motor skills.

And the reference to “performance-based tasks” in which people are supposed to “write analytically” is naive. I suspect that Duncan and the NYT man are confused by all the talk during the Ricci case about the wonders of “assessment centers” in which candidates for promotion are supposed to sort through an in-basket and talk out loud about how they would handle problems. In other words, those are hugely expensive oral tests. The city of New Haven brought in 30 senior fire department officials from out of state to be the judges on the oral part of the test.

And the main point of spending all this money on an oral test is that an oral test can’t be blind-graded. In New Haven, 19 of the 30 oral test judges were minorities, which isn’t something that happens by randomly recruiting senior fire department officials from across the country.

But nobody can afford to rig the testing of 35,000,000 students annually.

Here are some excerpts from Duncan’s speech:

President Obama called on the nation’s governors and state education chiefs “to develop standards and assessments that don’t simply measure whether students can fill in a bubble on a test, but whether they possess 21st century skills like problem-solving and critical thinking and entrepreneurship and creativity.”

You know your chain is being yanked when you hear that schoolteachers are supposed to teach “21st century skills” like “entrepreneurship.” So, schoolteachers are going to teach kids how to be Steve Jobs?

Look, there are a lot of good things to say about teachers, but, generally speaking, people who strive for union jobs with lifetime tenure and summers off are not the world’s leading role models on entrepreneurship.

Further, whenever you hear teachers talk about how they teach “critical thinking,” you can more or less translate that into “I hate drilling brats on their times tables. It’s so boring.” On the whole, teachers aren’t very good critical thinkers. If they were, Ed School would drive them batty. (Here is an essay about Ed School by one teacher who is a good critical thinker.)

And last but not least, for the first time, the new assessments will better measure the higher-order thinking skills so vital to success in the global economy of the 21st century and the future of American prosperity. To be on track today for college and careers, students need to show that they can analyze and solve complex problems, communicate clearly, synthesize information, apply knowledge, and generalize learning to other settings. …

Over the past 19 months, I have visited 42 states to talk to teachers, parents, students, school leaders, and lawmakers about our nation’s public schools. Almost everywhere I went, I heard people express concern that the curriculum had narrowed as more educators “taught to the test,” especially in schools with large numbers of disadvantaged students.

Two words: Disparate Impact.

The higher the intellectual skills that are tested, the larger the gaps between the races will turn out to be. Consider the AP Physics C exam, the harder of the two AP physics tests: In 2008, 5,705 white males earned 5s (the top score) versus six black females.

In contrast, tests of rote memorization, such as having third graders chant the multiplication tables, will have smaller disparate impact than tests of whether students “can analyze and solve complex problems, communicate clearly, synthesize information, apply knowledge, and generalize learning to other settings.” That’s a pretty decent description of what IQ tests measure.

Duncan says that the new tests could replace existing high school exit exams that students must pass to graduate.

Many educators have lamented for years the persistent disconnect between what high schools expect from their students and the skills that colleges expect from incoming freshman. Yet both of the state consortia that won awards in the Race to the Top assessment competition pursued and got a remarkable level of buy-in from colleges and universities.

… In those MOUs, 188 public colleges and universities and 16 private ones agreed that they would work with the consortium to define what it means to be college-ready on the new high school assessments.

The fact that you can currently graduate from high school without being smart enough for college is not a bug, it’s a feature. Look, this isn’t Lake Wobegon. Half the people in America are below average in intelligence. They aren’t really college material. But they shouldn’t all have to go through life branded as a high school dropout instead of high school graduate because they weren’t lucky enough in the genetic lottery to be college material.

The Gates Foundation and the U. of California ganged up on the LA public schools to get the school board to pass a rule that nobody will be allowed to graduate who hasn’t passed three years of math, including Algebra II. That’s great for UC, not so great for an 85 IQ kid who just wants a high school diploma so employers won’t treat him like (uh oh) a high school dropout. But, nobody gets that.

Another benefit of Duncan’s new high stakes tests will be Smaller Sample Sizes of Questions:

With the benefit of technology, assessment questions can incorporate audio and video. Problems can be situated in real-world environments, where students perform tasks or include multi-stage scenarios and extended essays.

By way of example, the NAEP has experimented with asking eighth-graders to use a hot-air balloon simulation to design and conduct an experiment to determine the relationship between payload mass and balloon altitude. As the balloon rises in the flight box, the student notes the changes in altitude, balloon volume, and time to final altitude. Unlike filling in the bubble on a score sheet, this complex simulation task takes 60 minutes to complete.

So, the NAEP has experimented with this kind of question. How did the experiment work out?

You’ll notice that the problem with using up 60 minutes of valuable testing time on a single multipart problem instead of, say, 60 separate problems is that it radically reduces the sample size. A lot of kids will get off track right away and get a zero for the whole one hour segment. Other kids will have seen a hot air balloon problem the week before and nail the whole thing and get a perfect score for the hour.

That kind of thing is fine for the low stakes NAEP where results are only reported by groups with huge sample sizes (for example, the NAEP reports scores for whites, blacks, and Hispanics, but not for Asians). But for high stakes testing of individual students and of their teachers, it’s too random. AP tests have large problems on them, but they are only given to the top quarter or so of high school students in the country, not the bottom half of grade school students.

It’s absurd to think that it’s all that crucial that all American schoolchildren must be able to “analyze and solve complex problems, communicate clearly, synthesize information, apply knowledge, and generalize learning to other settings.” You can be a success in life without being able to do any of that terribly well.

Look, for example, at the Secretary of Education. Arne Duncan has spent 19 months traveling to 42 states, talking about testing with teachers, parents, school leaders, and lawmakers. Yet, has he been able to synthesize information about testing terribly well at all? Has his failure to apply knowledge and generalize learning about testing gotten him fired from the Cabinet?

(Republished from iSteve by permission of author or representative)
 

Americans have devoted an enormous amount of effort over the centuries to devising useful baseball statistics. In recent years, Americans have talked a lot about devising useful educational statistics.

For example, I’ve pointed out a million times over the last decade that it doesn’t make much sense to judge teachers, schools, or colleges by their students’ test scores. Most of the time, all you are doing is determining which kids were smarter to start with. Logically, it makes more sense to judge their “value added” by comparing how the students score now to how they scored in the past before the people or institutions being measured got their mitts on the students.

Over the last few years, everybody who is anybody in education — Bill Gates, Arne Duncan, you name it — has come around to this perspective (although they won’t use the word “smarter”).

A big problem, however, is that this value added idea remains almost wholly theoretical because almost none of the prominent educational statistics are published in value added form.

In contrast, when Bill James was pointing out 30 years ago that Batting Average, traditionally the most prestigious hitting statistic (the guy with the highest BA was crowned “Batting Champion”), wasn’t as good a measure of hitting contribution as Slugging Average plus On-Base Percentage, he could show you what he meant using real numbers that were available to everybody, even if you had to calculate them yourself from other, more widely published statistics.

Readers would say, “Yeah, he’s right. For example, Matty Alou (career batting average .307, but slugging average .381 and on-base percentage .345) wasn’t anywhere near as good as Mickey Mantle (career batting average only .298, but slugging average .557 and on-base percentage .421). If you add on-base percentage and slugging average together to get “OPS,” then Mickey had a .977 while Matty only had .726. And that sounds about right. Mickey was awesome, but it didn’t always show up in his traditional statistics. Now, we’ve finally got a statistic that matches up with what we all could see from watching lots of Yankee games.”
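James’s point is a one-line calculation. Using the career rates quoted above (note that summing the rounded components gives Mantle .978, a rounding hair off the .977 computed from unrounded career figures):

```python
# OPS = on-base percentage + slugging average, using the career
# rates quoted above (rounded to three decimals).
players = {
    "Mickey Mantle": (0.421, 0.557),  # (OBP, SLG)
    "Matty Alou":    (0.345, 0.381),
}
ops = {name: round(obp + slg, 3) for name, (obp, slg) in players.items()}
# Mantle comes out ~.978, Alou .726: a huge gap that batting
# average (.298 vs. .307) not only misses but actually reverses.
```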

On the other hand, other innovative baseball statistics from that era have faded because they didn’t seem to work as well in practice as in theory. Readers would be rightly skeptical that Glenn Hubbard and Roy Smalley Jr. really were all time greats, as these complicated formulas said they were.

A couple of years ago, Audacious Epigone and I stumbled upon a potentially promising quirk in the federal National Assessment of Educational Progress test scores by state. Since these tests are given every two years to representative samples of fourth and eighth graders, you ought to be able to roughly estimate how much value the public schools in each state have added from 4th grade to 8th grade by comparing, say, a state’s 2009 8th grade scores to that state’s 2005 4th grade scores.

Granted, people move in and out of states, but if you just look at the scores for non-Hispanic whites, you can cut down the effect of demographic change to what might be a manageable level.

So, how to display this data in a semi-usable form? In the following table, I’ve put the Rank of each state. For example, in NAEP 4th Grade Reading scores in 2005, white public school students in Alabama ranked 48th (out of 52 — the 50 states plus D.C. and the Department of Defense schools for the children of military personnel). By 2009, this cohort of Alabamans was up to 47th in 8th Grade Reading. That’s a Change in Rank of +1. Woo-hoo!

In contrast, in Math, Alabama’s 4th Graders were 50th in 2005 and the state’s 8th Graders were 50th in 2009, so that’s a Change in Rank for Math of zero.

There are measures that are better for some purposes than Rank, but, admit it, ranking all the states is more interesting than using standard deviations or whatever.

A new idea is embodied in the last column, which reports the Difference in Rank between Math and Reading scores for 8th Graders in 2009. Because Alabama was 47th in Reading in 2009, but only 50th in Math in 2009, it gets a Difference in Rank of -3. Boo-hoo …
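As a sketch, both measures reduce to simple subtractions of ranks (Alabama figures and sign conventions as described above):

```python
def change_in_rank(rank_4th_2005, rank_8th_2009):
    # Positive = the cohort climbed the state rankings between grades.
    return rank_4th_2005 - rank_8th_2009

def diff_in_rank(reading_8th_2009, math_8th_2009):
    # Positive = the state ranks better (i.e., lower) in Math than Reading.
    return reading_8th_2009 - math_8th_2009

# Alabama: Reading 48th (2005) -> 47th (2009); Math 50th -> 50th.
reading_change = change_in_rank(48, 47)  # +1
math_change = change_in_rank(50, 50)     # 0
math_vs_reading = diff_in_rank(47, 50)   # -3
```

The same subtraction gives Texas its +7 Difference in Rank cited later (11th in Reading, 4th in Math).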

What’s the point of this last measure?

There’s a fair amount of evidence that schools have more impact on Math performance than Reading performance. For example, math scores on a variety of tests have gone up some since hitting rock bottom during the Seventies (in most of America outside of Berkeley, the Seventies were when the Sixties actually happened). In contrast, reading and verbal scores have staggered around despite a huge amount of effort to raise them.

Why have math scores proven more open to improvement by schools than reading scores? One reason probably is that kids only spend about 1/5th of their waking hours in school. And almost nobody does math outside of school, but some kids read outside of school. So, if you, say, double the amount of time spent in school on math, then you are increasing the total amount of time kids spend doing math by about 98%. But if you double the amount of time spent on reading in school, there are some rotten stinker kids who read for fun in their free time, and thus you aren’t doing much for them in terms of total hours devoted to reading.

Not surprisingly, a decade of the No Child Left Behind act, which tells states to hammer on math and reading and not worry about that arty stuff like history and science, has seen continued slow improvements in math, but not much in reading — except at the bottom (i.e., among the kids who don’t read outside school).

So, by 8th grade, Reading scores are likely a rough measure of IQ crossed with bookishness (personality and culture). In contrast, 8th Grade Math scores are more amenable to alteration by schools, since kids aren’t waiting in line to buy Harry Potter and the Lowest Common Denominator. So, the idea behind the final column is to compare rank on 8th Grade Math to rank on 8th Grade Reading. A positive number means your state has a better (lower) rank on Math than on Reading, which might reflect relatively well on your public schools given the raw materials they have to work with relative to other states.

For example, on the NAEP, Texas ranks 11th among white 8th graders in Reading, which is pretty good for such a huge state. But, it ranks a very impressive 4th among white 8th graders in Math, for a Difference in Ranking score of +7. This suggests Texas is doing something with math that’s worth checking into. Maybe they are just teaching to the test, but this is the NAEP, which isn’t a high-stakes test. And there are worse things than teaching to the test. (Whatever they are doing, they are starting young, because Texas ranks 2nd in Math for white 4th Graders.)

So, here is this huge table:

NAEP Ranks, White Public School Students
              --- Reading ---            ---- Math ----
         4th    8th    Chg in    4th    8th    Chg in    Dif in
         2005   2009   Rnk       2005   2009   Rnk       Rnk
         Rank   Rank   09-05     Rank   Rank   09-05     8th 09-09
Alabama 48 47 +1 50 50 +0 -3
Alaska 37 31 +6 31 21 +10 +10
Arizona 41 29 +12 36 27 +9 +2
Arkansas 34 46 -12 37 44 -7 +2
California 32 33 -1 25 36 -11 -3
Colorado 9 9 0 13 6 +7 +3
Connecticut 4 2 +2 8 7 +1 -5
Delaware 3 14 -11 11 17 -6 -3
DC 1 +1 1 +1 0
DoDEA 8 5 +3 21 16 +5 -11
Florida 16 21 -5 14 37 -23 -16
Georgia 27 38 -11 33 34 -1 +4
Hawaii 40 45 -5 40 48 -8 -3
Idaho 30 35 -5 29 26 +3 +9
Illinois 13 10 +3 28 18 +10 -8
Indiana 43 34 +9 26 29 -3 +5
Iowa 42 41 +1 39 41 -2 0
Kansas 33 19 +14 10 15 -5 +4
Kentucky 46 37 +9 51 49 +2 -12
Louisiana 45 51 -6 41 45 -4 +6
Maine 36 39 -3 42 39 +3 0
Maryland 7 3 +4 7 2 +5 +1
Massachusetts 2 4 -2 3 1 +2 +3
Michigan 28 40 -12 22 42 -20 -2
Minnesota 12 7 +5 4 5 -1 +2
Mississippi 49 48 +1 48 51 -3 -3
Missouri 26 27 -1 45 32 +13 -5
Montana 21 16 +5 35 10 +25 +6
Nebraska 18 20 -2 30 28 +2 -8
Nevada 51 49 +2 44 40 +4 +9
New Hampshire 19 24 -5 20 23 -3 +1
New Jersey 6 1 +5 5 3 +2 -2
New Mexico 35 25 +10 49 38 +11 -13
New York 10 8 +2 16 19 -3 -11
North Carolina 22 28 -6 6 8 -2 +20
North Dakota 20 22 -2 24 9 +15 +13
Ohio 14 12 +2 12 30 -18 -18
Oklahoma 50 50 0 46 46 0 +4
Oregon 44 36 +8 34 31 +3 +5
Pennsylvania 15 6 +9 17 14 +3 -8
Rhode Island 39 43 -4 43 43 0 0
South Carolina 38 44 -6 9 24 -15 +20
South Dakota 29 13 +16 23 12 +11 +1
Tennessee 47 42 +5 47 47 +0 -5
Texas 11 11 0 2 4 -2 +7
Utah 31 30 +1 38 33 +5 -3
Vermont 24 18 +6 32 22 +10 -4
Virginia 5 17 -12 15 20 -5 -3
Washington 17 15 +2 19 11 +8 +4
West Virginia 52 52 0 52 52 0 0
Wisconsin 23 26 -3 18 13 +5 +13
Wyoming 25 32 -7 27 35 -8 -3

As J.K. Simmons asks at the end of Burn After Reading, “What did we learn?”

I’m not terribly sure, either. Who knows enough about what goes on within the educational establishments of all the states to know whether these numbers make sense?

But at least we have some value-added numbers, and aren’t just still talking about how valuable they’d be if we ever got around to getting any.

(Republished from iSteve by permission of author or representative)

Almost a decade ago, President Bush and Senator Kennedy got together and pushed through the No Child Left Behind act, which mandated that every single child in America would score “Proficient” or “Advanced” on reading and math by 2013-2014, and told the states to concoct, administer, and grade their own tests to demonstrate this (nudge, nudge, wink, wink).

Some states got the hint, such as Mississippi, which soon reported that, even with a couple of years left on its Five Year Plan for Educational Awesomeness, 89% of Mississippi 4th grade readers were already Proficient/Advanced. Whether the governor of Mississippi also invited President Bush and Senator Kennedy to float in state down the Mississippi and see all the thriving new schools that he had erected on the banks of that mighty river is lost in the mists of history.

Unfortunately, while Bush and Kennedy were at it, they forgot to abolish the federal National Assessment of Educational Progress test, which has gone on reporting that reading test scores have just kept on keeping on. From today’s Washington Post:

Reading scores stalled under ‘no child’ law, report finds

… progress nationwide has stalled despite huge instructional efforts launched under the No Child Left Behind law.

The 2009 National Assessment of Educational Progress showed that fourth-grade scores for the nation’s public schools stagnated after the law took effect in 2002, rose modestly in 2007, then flatlined. …

The national picture for eighth-grade reading was largely the same: a slight uptick in performance since 2007 but no gain in the seven years when President George W. Bush’s program for school reform was in high gear. …

When Bush signed the law, hopes were high for a revolution in reading. Billions of dollars were spent, especially in early grades, to build fluency, decoding skills, vocabulary, comprehension and a love of books that would propel students in all subjects. The goal was to eliminate racial and ethnic achievement gaps. But Wednesday’s report showed no great leaps for the nation and stubborn disparities in performance between white and black students, among others.

Another way to look at it is that we’re actually doing pretty well. With the demographic riptide running in the wrong direction, just staying in the same place is a tribute to a lot of hard work.

Other notes: the white-black gap in 4th grade reading scores is by far the largest in the most liberal jurisdiction, the District of Columbia. Nationwide, it’s 25 points, but in DC it’s 60 points. The next biggest white-black gaps for 4th graders are in Minnesota (35 points) and Wisconsin (35). The smallest white-black gaps are in West Virginia (12 points — dumb whites), New Hampshire and Vermont (few blacks), and Pentagon-run schools (need a 92 IQ to enlist).

Indeed, DC has by far the highest scoring white kids (15 points ahead of Massachusetts). Its black students are no longer the lowest scoring, being four points ahead of Wisconsin. (The worst scoring black 4th graders are in the socially liberal Old Northwest: Wisconsin, Michigan, and Minnesota. This is probably due in part to high welfare payments and easy eligibility requirements in the 1960s attracting the most feckless Southern blacks.)

Unfortunately, there aren’t enough white 8th graders in DC public schools for the NAEP to come up with an adequate sample size.

(Republished from iSteve by permission of author or representative)

From the Washington Post, here are the scores by state on the Preliminary SAT (PSAT) required to make the first cut in the National Merit Scholarship program. (To convert from the three part PSAT score to the traditional two-part SAT Math plus Verbal scores, divide by 3 and multiply by 20: e.g., Arizona requires a 210, which is like a 1400 on the SAT.) It’s a good indication of the number of upper middle class residents by state.

For example, Washington D.C. always trails all 50 states on average National Assessment of Educational Progress scores for public school students, but it ties with Massachusetts (which leads NAEP scores more often than any other state), Maryland, and New Jersey for first on this measure with a 221 (the equivalent of a 1473 on the post-1995 SAT). Montana usually is close behind Massachusetts on the NAEP, but only requires a 204 because it lacks much of a native, childbearing upper middle class. In contrast, California, whose white students do relatively poorly on the NAEP on average, does well on this measure, requiring a 218. The lowest scoring state is Wyoming at 201. I would guess that’s about 2/3rds of a standard deviation behind the top four states.
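The conversion described above is simple enough to sketch in Python; the cutoff values come from the table below, and the function name is mine:

```python
# Convert a three-part PSAT total (60-240 scale) to the old two-part
# Math + Verbal SAT scale (400-1600): divide by 3, multiply by 20.

def psat_to_sat(psat_total):
    return psat_total / 3 * 20

print(round(psat_to_sat(210)))  # 1400 (Arizona's cutoff)
print(round(psat_to_sat(221)))  # 1473 (the top cutoff, shared by DC, MA, MD, NJ)
```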

Alaska 211
Arizona 210
Arkansas 203
California 218
Colorado 213
Connecticut 218
Delaware 219
Washington D.C. 221
Florida 211
Georgia 214
Hawaii 214
Idaho 209
Illinois 214
Indiana 211
Iowa 209
Kansas 211
Kentucky 209
Louisiana 207
Maine 213
Maryland 221
Massachusetts 221
Michigan 209
Minnesota 215
Mississippi 203
Missouri 211
Montana 204
Nebraska 206
Nevada 202
New Hampshire 213
New Jersey 221
New Mexico 208
New York 218
North Carolina 214
North Dakota 202
Ohio 211
Oklahoma 207
Oregon 213
Pennsylvania 214
Rhode Island 217
South Carolina 211
South Dakota 205
Tennessee 213
Texas 216
Utah 206
Vermont 213
Virginia 218
Washington 217
West Virginia 203
Wisconsin 207
Wyoming 201

I haven’t quantified this, but I would assume that Blue States average higher scores than Red States on this measure, although Texas does well at 216.

In general, Texas does fairly well on most tests of educational competence, and it’s encouraging that such a huge state seems to perform relatively well both for the average and for the elite. It would be interesting to know how far back this goes in time, since Texas does not have a historical reputation for educational attainment the way Massachusetts does.

(Republished from iSteve by permission of author or representative)
About Steve Sailer

Steve Sailer is a journalist, movie critic for Taki's Magazine, VDARE.com columnist, and founder of the Human Biodiversity discussion group for top scientists and public intellectuals.
