
Marketing!

There has been much discussion lately of the “Replication Crisis” in psychology, especially since the publication of a recent study attempting to replicate 100 well-known psychology experiments. From The Guardian:

Study delivers bleak verdict on validity of psychology experiment results

Of 100 studies published in top-ranking journals in 2008, 75% of social psychology experiments and half of cognitive studies failed the replication test

For more analysis, see Scott Alexander at SlateStarCodex: “If you can’t make predictions, you’re still in a crisis.”

(By the way, some fields in psychology, most notably psychometrics, don’t seem to have a replication crisis. Their PR problem is the opposite one: they keep making the same old predictions, which keep coming true, and everybody who is anybody therefore hates them for it, kill-the-messenger style. For example, around the turn of the century, Ian Deary’s team tracked down a large number of elderly individuals who had taken the IQ test given to every 11-year-old in Scotland in 1932 to see how their lives had turned out. They found that their 1932 IQ score was a fairly good predictor. Similarly, much of The Bell Curve was based on the lives of the huge National Longitudinal Study of Youth 1979 sample up through 1990. We now have another quarter century of data with which to prove that The Bell Curve doesn’t replicate. And we even have data on thousands of the children of women in the original Bell Curve sample. This trove of data is fairly freely available to academic researchers, but you don’t hear much about findings in The Bell Curve failing to replicate.)

Now there are a lot of reasons for these embarrassing failures, but I’d like to emphasize a fairly fundamental one that will continue to plague fields like social psychology even if most of the needed methodological reforms are enacted.

Consider the distinction between short-term and long-term predictions by looking at two fields that both use scientific methods but come up with very different types of results.

At one end of the continuum are physics and astronomy. They tend to be useful at making very long-term predictions: we know to the minute when the sun will come up tomorrow and when it will come up in a million years. The predictions of physics tend to work over very large spatial ranges as well. As our astronomical instruments improve, we’ll be able to make similarly long-term sunrise forecasts for other planetary systems.

Why? Because physicists really have discovered some Laws of the Universe.

At the other end of the continuum is the marketing research industry, which uses scientific methods to make short-term, localized predictions. In fact, the marketing research industry doesn’t want its predictions to be assumed to be permanent and universal because then it would go out of business.

For example, “Dear Jello Pudding Brand Manager: As your test marketer, it is our sad duty to report that your proposed new TV commercials nostalgically bringing back Bill Cosby to endorse your product again have tested very poorly in our test market experiment, with the test group who saw the new commercials going on to buy far less Jello Pudding over the subsequent six months than the control group that didn’t see Mr. Cosby endorsing your product. We recommend against rolling your new spots out nationally in the U.S. However, we do have some good news. The Cosby commercials tested remarkably well in our new test markets in China, where there has been far less coverage of Mr. Cosby’s recent public relations travails.”

I ran these kinds of huge laboratory-quality test markets over 30 years ago in places like Eau Claire, Wisconsin, and Pittsfield, Massachusetts. (We didn’t have Chinese test markets, of course.) The scientific accuracy was amazing, even way back then.
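To give a flavor of what that kind of test-vs.-control readout boils down to, here is a minimal Python sketch with entirely made-up numbers and group sizes (no real test-market data): a test group that saw the new spots, a control group that didn’t, and a crude permutation check on whether the gap in six-month purchases looks like more than noise.

```python
# Hypothetical illustration only: simulated six-month purchase counts for a
# control group and a test group that saw the new commercials.
import random
import statistics

random.seed(42)

control = [max(0.0, random.gauss(mu=4.0, sigma=1.5)) for _ in range(5000)]
test = [max(0.0, random.gauss(mu=3.7, sigma=1.5)) for _ in range(5000)]  # assumed weaker sales

observed_lift = statistics.mean(test) - statistics.mean(control)

# Crude permutation test: how often does randomly relabeling households produce
# a gap at least as large (in magnitude) as the one we observed?
pooled = control + test
n = len(control)
trials, extreme = 2000, 0
for _ in range(trials):
    random.shuffle(pooled)
    diff = statistics.mean(pooled[n:]) - statistics.mean(pooled[:n])
    if abs(diff) >= abs(observed_lift):
        extreme += 1

print(f"observed lift: {observed_lift:+.2f} purchases per household")
print(f"permutation p-value: {extreme / trials:.4f}")
```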

But while our marketing research test market laboratories were run on highly scientific principles, that didn’t necessarily make our results Science, at least not in the sense of discovering Permanent Laws of the Entire Universe. I vaguely recall that other people in our company did a highly scientific test involving Bill Cosby’s pudding ads, and I believe Cosby’s ads tested well in the early 1980s.

But that doesn’t mean we discovered a permanent law of the universe: Have Bill Cosby Endorse Your Product.

In fact, most people wouldn’t call marketing research a science, although it employs many people who studied sciences in college and more than a few who have graduate degrees in science, especially in psychology.

Marketing Research doesn’t have a Replication Crisis. Clients don’t expect marketing research experiments from the 1990s to replicate with the same results in the 2010s.

Where does psychology fall along this continuum between physics and marketing research?

Most would agree it falls in the middle somewhere.

My impression is that economic incentives push academic psychologists more toward interfacing closely with marketing research, which is corporate funded. For example, there are a lot of “priming” studies by psychologists of ways to manipulate people. “Priming” would be kind of like the active ingredient of “marketing.”

Malcolm Gladwell discovered a goldmine in recounting findings from the social sciences to corporate audiences. People in the marketing world like the prestige of Science and the assumption that Scientists are coming up with Permanent Laws of the Universe that will make their jobs easier because once they learn these secret laws, they won’t have to work so hard coming up with new stuff as customers get bored with old marketing campaigns.

That kind of marketing money pushes psychologists toward experiments in how to manipulate behavior, making them more like marketing researchers. But everybody still expects psychological scientists to come up with Permanent Laws of the Universe even though marketing researchers seldom do. Psychologists don’t want to disabuse marketers of this delusion because then they would lose the prestige of Science!

 
I harp on one key issue in philosophy of science a lot because I get a lot of backtalk along the lines of: Everybody knows that the social sciences are a fraud. They can’t predict whether the stock market will go up or down tomorrow, so how can you say that social science data suggests that letting in a bunch of unskilled illegal immigrants today will lead, all else being equal, to lower school test scores later when their kids get into school? If smart rich guys can’t predict the stock market tomorrow, how can an evil nobody like you predict school test scores in a decade? Nobody can predict anything!

Nate Silver writes an article, “The Weatherman Is Not a Moron,” about how weather forecasting has improved dramatically, which it has. The forecast on the evening news is much more accurate than when I was a boy.

In 2008, Chris Anderson, the editor of Wired magazine, wrote optimistically of the era of Big Data. So voluminous were our databases and so powerful were our computers, he claimed, that there was no longer much need for theory, or even the scientific method. At the time, it was hard to disagree. 

But if prediction is the truest way to put our information to the test, we have not scored well. In November 2007, economists in the Survey of Professional Forecasters — examining some 45,000 economic-data series — foresaw less than a 1-in-500 chance of an economic meltdown as severe as the one that would begin one month later. …

The one area in which our predictions are making extraordinary progress, however, is perhaps the most unlikely field [weather].

But this dichotomy between market forecasting and weather forecasting shouldn’t be all that surprising if you keep in mind that there is, theoretically, a fundamental difference between forecasting events that respond to forecasts (e.g., the stock market) v. forecasting events that don’t respond to forecasts (e.g., the weather). Theoretically, the former is resistant to improvement while the latter is not. Improving the weather forecasts is hard in an absolute sense, but the project lacks the special kind of futility that attempts to permanently beat other people’s forecasts have.

Hurricanes don’t respond to better forecasts by sitting down together and hashing out more sophisticated ways to fool weathermen.

In contrast, say you come up with a better way to predict whether the stock market will go up or down tomorrow. After a while, your competitors in the stock market forecasting game will notice you are now riding around in a G6 and they will start trying to reverse engineer your method, or hire away one of your employees, or rifle through your trash. Eventually, your method will be widely enough known that the stock market won’t go down tomorrow when your method says it will, because it will go down today, since everybody who is anybody is already anticipating the decline that your system predicts. So, after a while, your system will be so widely used it will be useless.
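Here is a toy model of that arbitraged-away edge, built on nothing but assumed numbers (a 60% starting hit rate and an assumed rate at which rivals copy the method each year). It isn’t anybody’s actual market model, just the logic of the paragraph above in a few lines:

```python
# Toy model: a forecasting method's directional accuracy decays toward a coin
# flip as more competitors adopt it. All parameters are assumptions.
initial_accuracy = 0.60  # hypothetical edge while only you know the method
copy_rate = 0.15         # hypothetical share of remaining rivals who copy it each year

adoption = 0.0
for year in range(1, 11):
    adoption += copy_rate * (1.0 - adoption)  # rivals gradually reverse engineer it
    accuracy = 0.50 + (initial_accuracy - 0.50) * (1.0 - adoption)
    print(f"year {year:2d}: adoption {adoption:5.1%}, directional accuracy {accuracy:.3f}")
```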

Let’s simplify this a little by thinking for a moment not about the stock market as a whole, but just about one company. Consider Apple. In the absolute sense, it’s obvious that Apple stock is worth a lot of money because it is highly likely to make a lot of money in the future. (“Making money” is just an approximation of what stock analysts predict, but it’s close enough for my purposes). But everybody knows that.

Whether or not you want to buy Apple stock depends instead on the relative question of whether it will turn out to be worth more money than the stock price. Will Apple make more money than the market’s consensus of forecasts? That’s obviously a more difficult, second-order question than whether Apple will make a lot of money. (But perhaps you have an insight that lets you predict the future better than the market. For example, maybe you realize that all Apple has to do to make even more money is stop having an all white male set of top executives.)

It may seem rather daunting to try to out-predict the experts on Apple’s future. The thing is, however, that you can do pretty well just by flipping a coin: heads, Apple will go up; tails, Apple will go down.

Financial economists call this the Efficient-Market Hypothesis. This does not mean that markets are more efficient than government at achieving various goals. It means that unless you have inside information, it’s really hard for an investor to beat the stock market in the long run because others will adopt your forecasting tools.
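A small sketch of the coin-flip point, under the toy assumption that once the consensus is priced in, the day-to-day direction of a stock is essentially 50/50, so any information-free rule, coin included, lands near 50% accuracy:

```python
# Toy illustration, not a market model: with no exploitable drift in daily
# direction, a coin flip "predicts" tomorrow's direction about half the time.
import random

random.seed(0)
days = 10_000

returns = [random.gauss(mu=0.0, sigma=0.01) for _ in range(days)]  # hypothetical daily returns

hits = sum(1 for r in returns if (random.random() < 0.5) == (r > 0))
print(f"coin-flip directional accuracy: {hits / days:.1%}")  # hovers near 50%
```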

The name Efficient-Market is most unfortunate because it refers to the speed at which information is incorporated into forecasts, but is woozy on how accurately that information is interpreted. A phrase like Agile-Market Hypothesis might have been better.

For example, if the headline in the Wall Street Journal tomorrow morning is “iPhone Causes Brain Tumors,” you won’t beat the market by sauntering in and selling your Apple stock around noonish. Markets tend to be pretty agile (i.e., efficient) at acting upon new information.

On the other hand, the markets’ interpretation of information is often wrong. For example, in the mid-2000s, the news that illegal immigrants were pouring into the exurbs of California, Arizona, Nevada, and Florida to build expensive new houses for subprime borrowers trying desperately to get their children out of school districts overrun by the children of illegal aliens was greeted almost universally as Positive Economic News. What could possibly go wrong?

Heck, a half decade later, this interpretation of What Went Wrong is largely verboten. If you read Michael Lewis’s The Big Short carefully, yeah, you can kind of pick it up if you have an evil mind. But can you imagine a speaker at either party’s convention saying what I just said?

Is the Efficient-Market Hypothesis true? One obvious problem with it is that the Forbes 400 is full of zillionaires who beat the market long enough to make the Forbes 400. Were they just lucky? Or is the Efficient-Market Hypothesis wrong? Perhaps you can make so much money in the short run from identifying a major inefficiency, such as the recent subprime unpleasantness, that you can wind up very rich if you have the humility to then retire from placing such big bets?

Or, could it be that the Efficient-Market Hypothesis is right, and a lot of the market beaters beat the market the old fashioned way: by insider trading?

About a half decade ago, there was a lot of publicity about what enormous ROIs the endowment managers at Yale and Harvard were generating. When I looked into it, there was a correlation between endowment ROI and how hard it was to get into that college. For example, Cornell had the worst ROI in the Ivy League. I hypothesized that maybe this pattern could be explained by asking: “If you had some inside information that you couldn’t act upon yourself for fear of jail but could conceivably share with somebody you don’t do business with in return for a huge favor, what would you risk to get your kid into Harvard v. what would you risk to get your kid into Cornell?”

This theory was extremely unpopular, so forget I ever mentioned it. The SEC has never, as far as I know, prosecuted anybody for bartering inside information for college admission. In fact, as far as I know, nobody has ever even been investigated for this, so, obviously, it must never ever have happened, and we’ll just have to look for the explanation of why the desirable colleges’ endowments outperform the less desirable colleges’ endowments elsewhere. Clearly, Harvard and Yale beat the market by investing in, uh, timber. Yeah, timber, that’s the ticket!

In any case, the Efficient-Market Hypothesis embodies the crucial conceptual difference between trying to forecast the behavior of systems that respond to forecasts and those that don’t. There’s no Efficient-Weather Hypothesis. That’s because if you get better at forecasting the weather, you stay better at forecasting the weather.

Potentially, forecasting the performance of the children of new immigrants ought to be hard because the U.S. government should be using feedback from past performance to adjust policy to get the optimal mix. If illegal immigrant drywallers from Guatemala aren’t working out so well in the long run, okay, let fewer of them in. But of course, thinking about this subject is crimethink and even the numbers are hatestats.

(Republished from iSteve by permission of author or representative)
 
• Tags: Philosophy of science 