America’s two college admissions tests, the SAT and the ACT, form a duopoly that has grown more competitive in this century, with unfortunate results. Competition usually produces more of what the customer wants, but what the customers (the parents of the kids taking the tests) want is higher scores. And so do the customers of the customers (the colleges, which are ranked in part on their students’ test scores).
In the old days, the SAT (coasts) and the ACT (heartland) had regional semi-monopolies. Moreover, during the Cold War both saw themselves as entrusted with the nation’s future, so they delivered quite impressive tests given the technology available in the mid-20th century.
Over time, however, both have responded to market pressures for higher scores. For example, in 1995 about 70 or 80 points were added to the previously quite hard SAT Verbal test. The 2016 renovation of the SAT appears to have boosted scores another 50 points. Similarly, something is going on with the ACT, judging from all the perfect scores this year.
Meanwhile, neither test has upgraded to adaptive testing, which adjusts the difficulty of questions according to how well the test-taker has done on previous questions. Both the SAT and the ACT are now generally taken online, but the question mix is still static. The problem with static testing, of course, is that the question selection is usually pitched at the middle of the bell curve, so it doesn’t distinguish people on the right tail as finely as it could. From the NYT in 2018:
There’s talk that the online test might one day become adaptive. What does that mean?
Adaptive tests adjust the level of questioning according to how the test taker performs on prior questions, so that low scorers are asked fewer of the hardest questions and high scorers don’t need to waste as much time on easy ones. That kind of test can provide a more detailed picture of what students have mastered. It also means test takers get different questions in a different order from one another and from previous exams that may have been leaked or stolen.
Does anyone use these adaptive tests?
The GRE and G.M.A.T. graduate exams are adaptive tests, as is the Smarter Balanced test some states use to measure Common Core skills in grades 3 to 11. Language placement exams are often adaptive, as are licensing exams for pharmacists, accountants, paramedics and other professions.
What’s the holdup on going adaptive?
Just to go online has required exhaustive studies and statistical analysis to ensure comparability of a paper exam score with one from a computer, time the test loading and scrolling speeds of various laptops, and assure that computerized testing doesn’t work to the advantage of some groups of students over others.
Beyond that, adaptive testing also demands a much larger store of potential questions.
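The basic adaptive loop the NYT describes is simple to sketch. Here is a toy illustration in Python, not either test-maker’s actual algorithm: the question “difficulties,” the 0-to-100 ability scale, and the shrinking-step update rule are all assumptions for demonstration (real adaptive tests like the GRE use item response theory models instead).

```python
# Toy sketch of computerized adaptive testing (CAT). Assumed for
# illustration: items have difficulties on a 0-100 scale, and a
# test-taker's ability estimate is updated by a halving step size,
# a crude stand-in for the IRT updates real adaptive tests use.

def run_adaptive_test(answer, difficulties, n_items=10):
    """Estimate ability on a 0-100 scale by bisection-style adaptation.

    `answer(d)` returns True if the test-taker answers an item of
    difficulty `d` correctly. Each item is chosen to match the current
    estimate; the estimate moves up after a correct answer and down
    after a miss.
    """
    estimate, step = 50.0, 25.0
    for _ in range(n_items):
        # pick the unused item closest to the current ability estimate
        d = min(difficulties, key=lambda x: abs(x - estimate))
        difficulties.remove(d)
        if answer(d):
            estimate += step   # got it right: probe harder items
        else:
            estimate -= step   # missed: probe easier items
        step /= 2              # narrow the search as evidence accumulates
    return estimate

# A deterministic test-taker with true ability 80: answers correctly
# whenever the item is no harder than 80. Ten items home in near 80.
pool = list(range(0, 101, 5))
print(run_adaptive_test(lambda d: d <= 80, pool))
```

Note why this fixes the right-tail problem: a static test spends most of its items near difficulty 50, while the loop above quickly stops asking a strong test-taker easy questions and spends its remaining items separating 80 from 85.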
So, here’s my suggestion:
First, implement adaptive testing.
Second, keep the scoring the same as today, but just raise the maximum. Currently, the SAT is scored from 400 to 1600 and the ACT from 1 to 36. So, just add another standard deviation of headroom: score the SAT from 400 to 1800 and the ACT from 1 to 40.
Current scores would stay put: somebody who gets a 1200 on today’s test would still get a 1200, and a college whose students average 1200 would still average 1200. The only difference would be at the high end, where much finer and more revealing gradations become possible.
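The arithmetic of raising only the ceiling can be made concrete. The sketch below is a toy model, not the College Board’s actual scoring formula: the assumed mean of 1000 and standard deviation of 200 are rough round numbers chosen to make the point visible.

```python
# Toy model of the proposed rescale. Assumptions for illustration:
# scaled score = 1000 + 200 * z, clamped to the reporting range.
# Only the ceiling changes between the current and proposed scales.

def sat_score(z, ceiling=1600):
    """Map a standardized ability z-score to an SAT-style scaled score,
    clamped to the 400-to-ceiling reporting range."""
    raw = round(1000 + 200 * z)
    return max(400, min(ceiling, raw))

# A test-taker one SD above the mean gets 1200 under either ceiling,
# so nothing changes for the bulk of the distribution:
print(sat_score(1.0), sat_score(1.0, ceiling=1800))

# Test-takers 3 and 4 SDs above the mean both report 1600 under the
# old ceiling, but separate once the ceiling rises to 1800:
print(sat_score(3.0), sat_score(4.0))
print(sat_score(3.0, ceiling=1800), sat_score(4.0, ceiling=1800))
```

The design point is that clamping only bites at the top: every score below the old maximum is computed identically under both scales, so historical comparability survives while the right tail stops being compressed into a single number.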