From the Wall Street Journal:
Can You See the Future? Probably Better Than Professional Forecasters
By JASON ZWEIG
Three-quarters of all U.S. stock mutual funds have failed to beat the market over the past decade. Last year, 98% of economists expected interest rates to rise; they fell instead. Most energy analysts didn’t foresee oil’s collapse from $145 a barrel in 2008 to $38 this summer — or its 15% rebound since.
A new book suggests that amateurs might well be less-hapless forecasters than the experts — so long as they go about it the right way.
I think Philip Tetlock’s “Superforecasting: The Art and Science of Prediction,” co-written with the journalist Dan Gardner, is the most important book on decision making since Daniel Kahneman’s “Thinking, Fast and Slow.” (I helped write and edit the Kahneman book but receive no royalties from it.)
Prof. Kahneman agrees. “It’s a manual to systematic thinking in the real world,” he told me. “This book shows that under the right conditions regular people are capable of improving their judgment enough to beat the professionals at their own game.”
Was I the only reviewer in the world who wasn’t totally wowed by Kahneman’s laborious documentation that people can be tricked?
The book is so powerful because Prof. Tetlock, a psychologist and professor of management at the University of Pennsylvania’s Wharton School, has a remarkable trove of data. He has just concluded the first stage of what he calls the Good Judgment Project, which pitted some 20,000 amateur forecasters against some of the most knowledgeable experts in the world.
The amateurs won — hands down. Their forecasts were more accurate more often, and the confidence they had in their forecasts — as measured by the odds they set on being right — was more accurately tuned.
The top 2%, whom Prof. Tetlock dubs “superforecasters,” have above-average — but rarely genius-level — intelligence. Many are mathematicians, scientists or software engineers; but among the others are a pharmacist, a Pilates instructor, a caseworker for the Pennsylvania state welfare department and a Canadian underwater-hockey coach.
The forecasters competed online against four other teams and against government intelligence experts to answer nearly 500 questions over the course of four years: Will the president of Tunisia go into exile in the next month? Will the gold price exceed $1,850 on Sept. 30, 2011? Will OPEC agree to cut its oil output at or before its November 2014 meeting?
It turned out that, after rigorous statistical controls, the elite amateurs were on average about 30% more accurate than the experts with access to classified information. What’s more, the full pool of amateurs also outperformed the experts.
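The article doesn’t spell out how “accuracy” was scored. Tetlock’s tournament graded forecasters with Brier scores, which punish both wrong calls and badly calibrated confidence — exactly the “odds they set on being right” mentioned above. A minimal sketch of the binary form (the function name and sample numbers here are illustrative, not taken from the project):

```python
def brier_score(forecasts, outcomes):
    """Mean squared difference between stated probabilities and what happened.

    forecasts: probabilities assigned to "yes" (0.0 to 1.0)
    outcomes:  1 if the event occurred, 0 if it did not
    0.0 is perfect; 0.25 is what hedging everything at 50% earns;
    1.0 means total confidence in the wrong answer every time.
    """
    return sum((f - o) ** 2 for f, o in zip(forecasts, outcomes)) / len(forecasts)

# A well-calibrated forecaster: confident, mostly right
careful = brier_score([0.9, 0.8, 0.1], [1, 1, 0])   # 0.02

# A cocksure pundit: 100% certain on all three, wrong on one
cocksure = brier_score([1.0, 1.0, 1.0], [1, 1, 0])  # ~0.33
```

Note how the pundit’s single confident miss costs more than all three of the careful forecaster’s hedged calls combined — which is why sounding right and being right are scored so differently.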
The most careful, curious, open-minded, persistent and self-critical — as measured by a battery of psychological tests — did the best.
“What you think is much less important than how you think,” says Prof. Tetlock; superforecasters regard their views “as hypotheses to be tested, not treasures to be guarded.”
Most experts — like most people — “are too quick to make up their minds and too slow to change them,” he says. And experts are paid not just to be right, but to sound right: cocksure even when the evidence is sparse or ambiguous.
So the project was designed to force the forecasters “to be ruthlessly honest about why they think what they do,” says Prof. Tetlock.
A reader sends along a section from the last chapter of Tetlock’s new book:
THE KTO-KOGO STATUS QUO
… Like many hardball operators before and since, Vladimir Lenin insisted that politics, defined broadly, was nothing more than a struggle for power, or, as he memorably put it, “kto, kogo?” That literally means “who, whom,” and it was Lenin’s shorthand for “Who does what to whom?” Arguments and evidence are lovely adornments, but what matters is the ceaseless contest to be the kto, not the kogo. It follows that the goal of forecasting is not to see what’s coming. It is to advance the interests of the forecaster and the forecaster’s tribe. Accurate forecasts may help do that sometimes, and when they do, accuracy is welcome, but it is pushed aside if that’s what the pursuit of power requires.

Earlier, I discussed Jonathan Schell’s 1982 warning that a holocaust would certainly occur in the near future “unless we rid ourselves of our nuclear arsenals,” which was clearly not an accurate forecast. Schell wanted to rouse readers to join the swelling nuclear disarmament movement. He did. So his forecast was not accurate, but did it fail? Lenin would say it did exactly what it was supposed to do.
Dick Morris—a Republican pollster and former adviser to President Bill Clinton—underscored the point days after the presidential election of 2012. Shortly before the vote, Morris had forecast a Romney landslide. Afterward, he was mocked. So he defended himself. “The Romney campaign was falling apart, people were not optimistic, nobody thought there was a chance of victory and I felt that it was my duty at that point to go out and say what I said,” Morris said. Of course Morris may have lied about having lied, but the fact that Morris felt this defense was plausible says plenty about the kto-kogo world he operates in.
You don’t have to be a Marxist-Leninist to concede that Lenin had a point. Self and tribe matter. If forecasting can be co-opted to advance their interests, it will be. From this perspective, there is no need to reform and improve forecasting, and it will not change, because it is already serving its primary purpose well.
But before giving up, let’s remember that Lenin was a tad dogmatic. People want power, yes. But they value other things too. And that can make all the difference.