
To answer the question as I posed it: our gut feelings about the economically powerful are obviously not a product of hunter-gatherer life, given that such societies have minimal hierarchy, and so minimal disparities in power, material wealth, privileges of all kinds, etc. Hunter-gatherers don’t even tolerate would-be elite-strivers, so beyond a blanket condemnation of trying to be a big-shot, they don’t have the subtler attitudes that agricultural and industrial people do — these latter groups tolerate and somewhat respect elites but resent and envy them at the same time.

So that leaves two major eras — agricultural and industrial societies. I’m going to refer to these instead by terms that North, Wallis, & Weingast use in their excellent book Violence and Social Orders. Their framework for categorizing societies is based on how violence is controlled. In the primitive social order — hunter-gatherer life — there are no organizations that prevent violence, so homicide rates are the highest of all societies. At the next step up, limited-access social orders — or “natural states” that sprung up with agriculture — substantially reduce the level of violence by giving the violence specialists (strongmen, mafia dons, etc.) an incentive to not go to war all the time. Each strongman and his circle of cronies has a tacit agreement with the other strongmen — who all make up a dominant coalition — that I’ll leave you to exploit the peasants living on your land if you leave me to exploit the peasants on my land.

This way, the strongman doesn’t have to work very much to live a comfortable life — just steal what he wants from the peasants on his land, and protect them should violence break out. Why won’t one strongman just raid another to get his land, peasants, food, and women? Because if this type of civil war breaks out, everyone’s land gets ravaged, everyone’s peasants can’t produce much food, and so every strongman will lose their easy source of free goodies (rents).

The members of the dominant coalition also agree to limit access to their circle, to limit people’s ability to form organizations, etc. If they let anybody join their group, or form a rival coalition, their slice of the pie would shrink. And this is a Malthusian economy, so the pie isn’t going to get much bigger within their lifetimes. So by restricting (though not closing off) access to the dominant coalition, each member maintains a pretty enjoyable size of the rents that they extract from the peasants. Why wouldn’t those outside the dominant coalition try to form their own rival group anyway? Because the strongmen of the area are already part of the dominant coalition — only the relative wimps could try to stage a rebellion, and the strongmen would immediately and violently crush such an uprising.

It’s not that one faction of the coalition will never raid another, just that this will be rare and only when the target faction has lost some of its share in the balance of power — maybe they had 5 strongmen but now only 1. Obviously the other factions aren’t going to let that 1 strongman enjoy the rents that 5 were before, while they enjoy average rents — they’re going to raid him and take enough so that he’s left with what seems his fair share. Aside from these rare instances, there will be a pretty stable peace. There may be opportunistic violence among peasants, like one drunk killing another in a tavern, but nothing like getting caught in a civil war. And they certainly won’t be subject to the constant threat of being killed and their land burned in a pre-dawn raid by the neighboring tribe, as they would face in a stateless hunter-gatherer society. As a result, homicide rates are much lower in these natural states than in stateless societies.

Above natural states are open-access orders, which characterize societies that have market economies and competitive politics. Here access to the elite is open to anyone who can prove themselves worthy — it is not artificially restricted in order to preserve large rents for the incumbents. The pie can be made bigger with more people at the top, since you only get to the top in such societies by making and selling things that people want. Elite members compete against each other based on the quality and price of the goods and services they sell — it’s a mercantile elite — rather than based on who is better at violence than the others. If the elites grow flabby, upstarts are free to form their own organizations — a freedom they lack in natural states — and, if those organizations are better, they will dethrone the incumbents. Since violence is no longer part of elite competition, homicide rates are the lowest of all types of societies.

OK, now let’s take a look at just two innate views that most people have about how the business world works or what economic elites are like, and see how these are adaptations to natural states rather than to the very new open-access orders (which have only existed in Western Europe since about 1850 or so). One is the conviction, common even among many businessmen, that market share matters more than making profits — that being more popular trumps being more profitable. The other is most people’s mistrust of companies that dominate their entire industry, like Microsoft in computers.

First, the view that capturing more of the audience — whether measured by the portion of all sales dollars that head your way or the portion of all consumers who come to you — matters more than increasing revenues and decreasing costs — boosting profits — remains incredibly common. Thus we always hear about how a start-up must offer their stuff for free or nearly free in order to attract the largest crowd, and once they’ve got them locked in, make money off of them somehow — by charging them later on, by selling the audience to advertisers, etc. This thinking was widespread during the dot-com bubble, and there was a neat management-oriented book written about it called The Myth of Market Share.

Of course, that hasn’t gone away since then, as everyone says that “providers of online content” can never charge their consumers. The business model must be to give away something cool for free, attract a huge group of followers, and sell this audience to advertisers. (I don’t think most people believe that charging a subset for “premium content” is going to make them rich.) For example, here is Felix Salmon’s reaction to the NYT’s official statement that they’re going to start charging for website access starting in 2011:

Successful media companies go after audience first, and then watch revenues follow; failing ones alienate their audience in an attempt to maximize short-term revenues.

Wrong. YouTube is the most popular provider of free media, but they haven’t made jackshit four years after their founding. Ditto Wikipedia. The Wall Street Journal and Financial Times websites charge, and they’re incredibly profitable — and popular too (the WSJ has the highest newspaper circulation in the US, ousting USA Today). There is no such thing as “go after audiences” — they must do that in a way that’s profitable, not just in a way that makes them popular. If you could “watch revenues follow” by merely going after an audience, everyone would be billionaires.

The NYT here seems to be voluntarily giving up on all its readers outside the US, who can’t be reasonably expected to have the ability or inclination to pay for web access. It had the opportunity to be a global newspaper, leveraging both the NYT and the IHT brands, and has now thrown that away for the sake of short-term revenues.
[...]
As such, a project which was meant to bring nytimes.com into the same space as Wikipedia will now become largely irrelevant.

This sums up the pre-industrial mindset perfectly: who cares about getting paid more and spending less, when what truly matters is owning a brand that is popular, influential, and celebrated and sucked up to? In a natural state, that is the non-violent path to success because you can only become a member of the dominant coalition by knowing the right in-members. They will require you to have a certain amount of influence, prestige, power, etc., in order to let you move up in rank. It doesn’t matter if you nearly bankrupt yourself in the process of navigating these personalized patron-client networks because once you become popular and influential enough, you stand a good chance of being allowed into the dominant coalition and then coasting on rents for the rest of your life.

Clearly that doesn’t work in an open-access, competitive market economy where interactions are impersonal rather than crony-like. If you are popular and influential while paying no attention to costs and revenues, guess what — there are more profit-focused competitors who can form rival companies and bulldoze over you right away. Again look at how spectacularly the WSJ has kicked the NYT’s ass, not just in crude terms of circulation and dollars but also in terms of the quality of their website. They broadcast twice-daily video news summaries and a host of other briefer videos, offer thriving online forums, and on and on.

Again, in the open-access societies, those who achieve elite status do so by competing on the margins of quality and price of their products. You deliver high-quality stuff at a low price while keeping your costs down, and you scoop up a large share of the market and obtain prestige and influence — not the other way around. In fairness, not many practicing businessmen fall into this pre-industrial mindset because they won’t be practicing for very long, just as businessmen who cried for a complete end to free trade would go under. It’s mostly cultural commentators who preach the myth of market share, going with what their natural-state-adapted brain reflexively believes.

Next, take the case of how much we fear companies that come to dominate their industry. People freak out because they think the giant, having wiped out the competitors, will enjoy carte blanche to exploit them in all sorts of ways — higher prices, lower output, shoddier quality, etc. We demand that the protector of the people step in and do something about it — bust them up, tie them down, resurrect their dead competitors, just something!

That attitude is thoroughly irrational in an open-access society. Typically, the way you get that big is by providing customers with stuff that they want at a low price and high quality. If you tried to sell people junk that they didn’t want, at a high price and terrible quality, guess how much of the market you would end up commanding. That’s correct: zero. And even if such a company grew complacent and inertia set in, there’s nothing to worry about in an open-access society, because anyone is free to form their own rival organization to drive the sluggish incumbent out.

The video game industry provides a clear example. Atari dominated the home system market in the late ’70s and early ’80s but couldn’t adapt to changing tastes — and were completely destroyed by newcomer Nintendo. But even Nintendo couldn’t adapt to the changing tastes of the mid-’90s and early 2000s — and were summarily dethroned by newcomer Sony. Of course, inertia set in at Sony and they have recently been displaced by — Nintendo! It doesn’t even have to be a newcomer, just someone who knows what people want and how to get it to them at a low price. There was no government intervention necessary to bust up Atari in the mid-’80s or Nintendo in the mid-’90s or Sony in the mid-2000s. The open and competitive market process took care of everything.

But think back to life in a natural state. If one faction obtained complete control over the dominant coalition, the ever so important balance of power would be lost. You the peasant would still be just as exploited as before — same amount of food taken — but now you’re getting nothing in return. At least before, you got protection just in case the strongmen from other factions dared to invade your own master’s land. Now that master serves no protective purpose. Before, you could construe the relationship as at least somewhat fair — he benefited you and you benefited him. Now you’re entirely his slave; or equivalently, he is no longer a partial but a 100% parasite.

You can understand why minds that are adapted to natural states would find market domination by a single or even small handful of firms ominous. It is not possible to vote with your dollars and instantly boot out the market-dominator, so some other Really Strong Group must act on your behalf to do so. Why, the government is just such a group! Normal people will demand that vanquished competitors be restored, not out of compassion for those who they feel were unfairly driven out — the public shed no tears for Netscape during the Microsoft antitrust trial — but in order to restore a balance of power. That notion — the healthy effect for us normal people of there being a balance of power — is only appropriate to natural states, where one faction checks another, not to open-access societies where one firm can typically only drive another out of business by serving us better.

By the way, this shows that the public choice view of antitrust law is wrong. The facts are that antitrust law in practice goes after harmless and beneficial giants, hamstringing their ability to serve consumers. There is little to no evidence that such beatdowns have boosted output that had been falling, lowered prices that had been rising, or improved quality that had been eroding. Typically the lawsuits are brought by the loser businesses who lost fair and square, and obviously the antitrust bureaucrats enjoy full employment by regularly going after businesses.

But we live in a society with competitive politics and free elections. If voters truly did not approve of antitrust practices that beat up on corporate giants, we wouldn’t see them — the offenders would be driven out of office. And why is it that only one group of special interests gets the full support of bureaucrats — that is, why do the loser businesses have influence with the government while the winning business gets no respect? How can a marginal special interest group overpower an industry giant? It must be that all this is allowed to go on because voters approve of and even demand that these things happen — we don’t want Microsoft to grow too big or they will enslave us!

This is a special case of what Bryan Caplan writes about in The Myth of the Rational Voter: where special interests succeed in buying off the government, it is only in areas where the public truly supports the special interests. For example, the public is largely in favor of steel tariffs if the American steel industry is suffering — hey, we gotta help our brothers out! They are also in favor of subsidies to agribusiness — if we didn’t subsidize them, they couldn’t provide us with any food! And those subsidies are popular even in states where farming is minimal. So, such policies are not the result of special interests hijacking the government and ramrodding through policies that citizens don’t really want. In reality, it is just the ignorant public getting what it asked for.

It seems useful when we look at the systematic biases that people have regarding economics and politics to bear in mind what political and economic life was like in the natural state stage of our history. Modern economics does not tell us about that environment but instead about the open-access environment. (Actually, there’s a decent trace of it in Adam Smith’s Theory of Moral Sentiments, which mentions cabals and factions almost as much as Machiavelli — and he meant real factions, ones that would war against each other, not the domesticated parties we have today.)

We obviously are not adapted to hunter-gatherer existence in these domains — we would cut down the status-seekers or cast them out right away, rather than tolerate them and even work for them. At the same time, we evidently haven’t had enough generations to adapt to markets and governments that are both open and competitive. That is certain to pull our intuitions in certain directions, particularly toward a distrust of market-dominating firms and toward advising businesses to pursue popularity and influence more than profitability, although I’m sure I could list others if I thought about it longer.

(Republished from GNXP.com by permission of author or representative)
 

As a New Year’s gift, here is a free copy of an entry I put up on my data blog (details on that here). It’s a quantitative look at the history of race and culture in America, together with qualitative examples that illustrate the story that the numbers tell. Enjoy.

Previously I looked at how much attention elite whites have given to blacks since the 1870s by measuring the percent of all Harvard Crimson articles that contained the word “negro.” That word stopped being used in any context after 1970, which doesn’t allow us to see what’s happened since then. Also, it is emotionally neutral, so while it tells us how much blacks were on the radar screen of whites, it doesn’t suggest what emotions colored their conversations about race.

When tensions flare, people will start using more charged words more frequently. The obvious counterpart to “negro” in this context is “nigger.” It could be used by racists hurling slurs, by non-racists quoting or decrying the slur, by tribalist blacks trying to open old wounds to recruit new members, by blacks trying to “re-claim” the term, by those debating whether or not the term should be used in any context, and so on. Basically, when racial tension is relatively low, these arguments don’t come up as often, so the word won’t appear as often.

I’ve searched the NYT back to 1852 and plotted how prevalent “nigger” was in a given year, smoothing the data out with 5-year moving averages.
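
(For concreteness, a minimal sketch of that smoothing step follows; it is not the code actually used for the plot, and fabricated counts stand in for the archive search results.)

```python
# A minimal sketch of the smoothing step (not the code behind the actual plot).
# The raw inputs are assumed: hits[year] = number of NYT articles that year
# containing the term, totals[year] = total articles that year (fabricated here).

def moving_average(series, window=5):
    """Trailing moving average over a list of (year, value) pairs."""
    return [
        (series[i][0], sum(v for _, v in series[i - window + 1 : i + 1]) / window)
        for i in range(window - 1, len(series))
    ]

# Fabricated counts, purely to show the mechanics:
hits = {1860: 120, 1861: 150, 1862: 180, 1863: 170, 1864: 160, 1865: 140, 1866: 90}
totals = {year: 10_000 for year in hits}  # assume a constant article base each year
prevalence = [(year, hits[year] / totals[year]) for year in sorted(hits)]

print(moving_average(prevalence, window=5))  # 5-year smoothed prevalence by year
```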

We see high values leading up to and throughout the Civil War, a comparatively lower level during Reconstruction, followed by two peaks that mark “the nadir of American race relations.” It doesn’t change much going through the 1920s, even though this is the period of the Great Migration of blacks from the South to the West and Northeast. It falls and stays pretty low during the worst part of the Great Depression, WWII, and the first 10 years after the war. This was a period of increasing racial consciousness and integration, and the prevalence of “negro” in the Crimson was increasing during this time as well. That means that there was a greater conversation taking place, but that it wasn’t nasty in tone.

However, starting in the late 1950s it moves sharply upward, reaching a peak in 1971. This is the period of the Civil Rights movement, which on an objective level was merely continuing the previous trend of greater integration and dialogue. Yet just as we’d guess from what we’ve studied, the subjective quality of this phase of integration was much more acrimonious. Things start to calm down throughout the ’70s and mid-’80s, which our study of history wouldn’t lead us to suspect, but which a casual look at popular culture would support. Not only is this a period where pop music by blacks had little of a racial angle — that was also true of most of the R&B music of the ’60s — but one where it was explicitly about putting aside differences and moving on. This is most clearly shown in the disco music scene and its re-birth a few years later during the early ’80s dance and pop music scene, when Rick James, Prince, and above all Michael Jackson tried to steer the culture onto a post-racial course.

But then the late ’80s usher in a resurgence of identity politics based on race, sex, and sexual orientation (“political correctness,” colloquially). The peak year here is technically 1995, but that is only because of the unusual weight given to the O.J. Simpson trial and Mark Fuhrman that year. Ignoring that, the real peak year of the racial tension was 1993 according to this measure. By the late ’90s, the level has started to plummet, and the 2000s have been — or should I say were — relatively free of racial tension, a point I’ve made for awhile but that bears repeating since it’s not commonly discussed.

Many people mention Obama’s election, but that was pretty late in the stage. Think back to Hurricane Katrina and Kanye West trying but failing to foment another round of L.A. riots, or Al Sharpton trying but failing to turn the Jena Six into a civil rights cause celebre, or the mainstream media trying but failing to turn the Duke lacrosse hoax into a fact that would show how evil white people still are. We shouldn’t be distracted by minor exceptions like right-thinking people casting out James Watson because that was an entirely elite and academic affair. It didn’t set the entire country on fire. The same is true for the minor exception of Larry Summers being driven out of Harvard, which happened during a remarkably feminism-free time.

Indeed, it’s hard to recognize the good times when they’re happening — unless they’re fantastically good — because losses loom larger than gains in our minds. Clearly racial tensions continue to go through cycles, no matter how much objective progress is made in improving the status of blacks relative to whites. Thus, we cannot expect further objective improvements to prevent another wave of racial tension.

Aside from the long mid-20th C hiatus, there are apparently 25-year gaps between peaks, which is about one human generation. If the near future is like most of the past, we predict another peak around 2018, a prediction I’ve made before using similar reasoning about the length of time separating the general social hysterias that we’ve had — although in those cases, just going back to perhaps the 1920s or 1900s, not all the way back to the 1850s. Still, right now we’re in a fairly calm phase and we should enjoy it while it lasts. If you feel the urge to keep quiet on any sort of racial issues, you should err on the side of being more vocal right now, since the mob isn’t predicted to come out for another 5 years or so, and the peak not until 10 years from now. As a rough guide to which way the racial wind is blowing, simply ask yourself, “Does it feel like it did after Rodney King and the L.A. riots, or after the O.J. verdict?” If not, things aren’t that bad.

Looking at absolute levels may be somewhat inaccurate — maybe all that counts is where the upswings and downswings are. So I’ve also plotted the year-over-year percent change in how prevalent “nigger” is, though this time using 10-year moving averages to smooth the data out because yearly fluctuations up or down are even more volatile than the underlying signal. In this graph, positive values mean the trend was moving upward, negative values mean it was moving downward, and values close to 0 mean it was staying fairly steady:
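
(Again a hypothetical sketch rather than the plot's actual code: the year-over-year step applied to the prevalence series from the earlier sketch, reusing its moving_average helper.)

```python
# Hypothetical continuation of the sketch above (reuses `prevalence` and
# `moving_average`): year-over-year percent growth, then a 10-year moving average.

def percent_change(series):
    """Year-over-year percent change over (year, value) pairs; skips zero baselines."""
    return [
        (y1, 100.0 * (v1 - v0) / v0)
        for (_, v0), (y1, v1) in zip(series, series[1:])
        if v0 != 0
    ]

growth = moving_average(percent_change(prevalence), window=10)
# Positive smoothed values: the term is becoming more prevalent; negative: receding.
```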

Again we see sustained positive growth during the Civil War and the two bookends of the nadir of race relations, although we now see a small amount of growth during the Harlem Renaissance era. The Civil Rights period jumps out the most. Here, the growth begins in the mid-1940s, but remember that it was at its lowest absolute levels then, so even the modest increases that began then show up as large percent increases. The PC era of the late ’80s through the mid ’90s also clearly shows up. There are several periods of relative stasis, but I see three periods of decisively moving against a nasty and bitter tone in our racial conversations: Reconstruction after the Civil War (admittedly not very long or very deep), the late ’30s through WWII, and the “these are the good times” / Prince / Michael Jackson era of the mid-late ’70s through the mid-’80s, which is the most pronounced of all.

That trend also showed up on television, when black-oriented sitcoms were incredibly popular. During the 1974-’75 season, 3 of the top 10 TV shows were Good Times, Sanford and Son, and The Jeffersons. The last of those that were national hits, at least as far as I recall, were The Cosby Show, A Different World, Family Matters, The Fresh Prince of Bel-Air, and In Living Color, which were most popular in the late ’80s and early ’90s. Diff’rent Strokes spans this period perfectly in theme and in time, featuring an integrated cast (and not in the form of a “token black guy”) and lasting from 1978 to 1986. The PC movement and its aftermath pretty much killed off the widely appealing black sitcom, although after a quick search, I see that Disney had a top-rated show called That’s So Raven in the middle of the tension-free 2000s. But it’s hard to think of black-focused shows from the mid-’90s through the early 2000s that were as popular as Good Times or The Cosby Show.

(In the top picture, the comparison between Jennifer Beals and Halle Berry shows that a black-white biracial babe actress who came of age during the late ’70s and early ’80s took a white husband twice, while her counterpart who became famous in the early ’90s went instead for black men.)

But enough about TV. The point is simply that the academic material we’re taught in school usually doesn’t take into account what’s popular on the radio or TV — the people’s culture only counts if they wrote songs about walking the picket line, showed that women too can be mechanics, or that we shall overcome. Historians, and people generally, are biased to see things as bad and getting worse, so they rarely notice when things were pretty good. But some aspects of popular culture can shed light on what was really going on because its producers are not academics with an axe to grind but entrepreneurs who need to know their audience and stay in touch with the times.

(Republished from GNXP.com by permission of author or representative)
 
• Category: History, Science • Tags: Culture, Data, History, Race 

Earlier this year, John Tierney reviewed several studies on how delaying gratification makes us feel better in the short term by preventing guilt but makes us feel more miserable in the long term by causing regret over missed opportunities. I added my two cents here, just to note that this sounds like part of the Greg Clark story about recent genetic change in the commercial races that adapted them to the emerging mercantile societies they found themselves in. What I had in mind was the delaying of vice — investing a dollar today rather than splurging, moderating the amount of drink or sweets you enjoy, and so on.

But now Tierney has another review of related studies which show that we delay gratification even for what should be guilt-free pleasures like redeeming a gift card, using frequent flier miles, and visiting the landmarks in your local area. And don’t we all have enjoyable books and DVDs we’ve been putting off? After indulging in these cases, there is no potential bankruptcy, no hangover, and no tooth decay — so why do we indiscriminately lump them in with genuine vices and put off indulging in them? Obviously this tendency too is a feature of agrarian or industrial groups — hunter-gatherers would never leave gift cards lying around in their drawers.

It must be because of how recent the change toward delaying gratification has been. Given enough time, we might evolve a specialized module for delaying gratification in vices and another module for doing so in guilt-free pleasures, which would be better than where we are now. But when our genetic response to a change is abrupt, typically we have broad-brush solutions that take care of the intended target but also leave plenty of collateral damage. Over time our solutions get smarter, but it takes awhile. Just look at how crude the responses to malaria are.

We see this domain-general taste for (or aversion to) risk in other areas. People who lead more risky lifestyles buy much less insurance than people who lead cautious lifestyles. Those who ride motorcycles without helmets would be richer and more likely to pass on their genes if they bought a lot of insurance, while those who play it safe would be richer by not buying all that superfluous insurance. Instead, daredevils are daredevils all the way — including a contempt for insurance.

This casts doubt on how easy it is to change our behavior so that we no longer postpone our indulgence in guilt-free pleasures. Because we have a domain-general delay of gratification, indulging right away will still just feel wrong. You can also lay out the logic of buying lots of insurance to the motorcyclist who rides without a helmet, but that won’t change his mind, because his taste for risk is across-the-board.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Behavioral Economics, Psychology 

Google results for +”nobel laureate” +X, where X is one of the following:

Chemistry: 317,000
Physics: 415,000
Medicine: 467,000
Economics: 484,000

Of course, there are more winners to refer to in Physics than in Economics, so we should control for that. Dividing the number of Google results by the number of winners gives these per capita rates:

Chemistry: 2032
Physics: 2231
Medicine: 2395
Economics: 7446
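
(The division is trivial, but here it is spelled out. The hit counts are those listed above; the laureate counts are my own rough circa-2009 approximations, chosen because they reproduce the stated ratios; the post itself doesn't list them.)

```python
# Per-capita rates: Google hits divided by the number of laureates in each field.
# Hit counts are from the post; the laureate counts are rough circa-2009
# approximations back-calculated from the post's ratios, not the author's figures.
hits = {"Chemistry": 317_000, "Physics": 415_000, "Medicine": 467_000, "Economics": 484_000}
laureates = {"Chemistry": 156, "Physics": 186, "Medicine": 195, "Economics": 65}

for field, count in hits.items():
    print(f"{field}: {count / laureates[field]:.0f} hits per laureate")
# Prints roughly 2032, 2231, 2395, and 7446 -- the figures listed above.
```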

If the intellectual merit of a body of ideas is not so well established, you’re more likely to deflect attention by reassuring everyone that, hey, it can’t be that crazy — after all, the guy is a Nobel laureate. Perhaps that’s why physics ranks above chemistry here, what with string theory etc. taking it further into speculation compared to more grounded chemistry.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Academia, Sociology 

In this discussion about pop music at Steve Sailer’s, the topic of generations came up, and it’s one where few of the people who talk about it have a good grasp of how things work. For example, the Wikipedia entry on generation notes that cultural generations only showed up with industrialization and modernization — true — but doesn’t offer a good explanation for why. Also, they don’t distinguish between loudmouth generations and silent generations, which alternate over time. As long as a cohort “shares a culture,” they’re considered a generation, but that misses most of the dynamics of how generations get generated. My view of it is pretty straightforward.

First, we have to notice that some cohorts are full-fledged Generations with ID badges like Baby Boomer or Gen X, and some cohorts are not as cohesive and stay more out of the spotlight. Actually, one of these invisible cohorts did get an ID badge — the Silent Generation — so I’ll refer to them as loudmouth generations (e.g., Baby Boomers, Gen X, and before long the Millennials) and silent generations (e.g., the small cohort cramped between Boomers and X-ers).

Then we ask: why do the loudmouth generations band together so tightly, and why do they show such strong affiliation with their generation that they continue to talk and dress the way they did as teenagers or college students even after they’ve hit 40? Well, why does any group of young people band together? Because social circumstances look dire enough that the world seems to be going to hell, so you have to stick together to help each other out. It’s as if an enemy army invaded and you had to form a makeshift army of your own.

That is the point of ethnic membership badges like hairstyle, slang, clothing, musical preferences, etc. — to show that you’re sticking with the tribe in desperate times. That’s why teenagers’ clothing has logos visible from down the hall, why they spend half their free time digging into a certain music niche, and why they’re hyper-sensitive about what hairstyle they have. Adolescence is a socially desperate time, not unlike a jungle, in contrast to the more independent situation you enjoy during full adulthood. Being caught in more desperate circumstances, teenagers freak out about being part of — fitting in with — a group that can protect them; they spend the other half of their free time communicating with their friends. Independent adults have fewer friends, keep in contact with them much less frequently, and don’t wear clothes with logos or the cover art from their favorite new album.

OK, so that happens with every cohort — why does this process leave a longer-lasting impact on the loudmouth cohorts? It is the same cause, only writ large: there’s some kind of social panic, or over-turning of the status quo, that’s spreading throughout the entire culture. So they not only face the trials that every teenager does, but they’ve also got to protect themselves against this much greater source of disorder. They have to form even stronger bonds, and display their respect for their generation much longer, than cohorts who don’t face a larger breakdown of security.

Now, where this larger chaos comes from, I’m not saying. I’m just treating it as exogenous for now, as though people who lived along the waterfront would go through periods of low need for banding together (when the ocean behaved itself) and high need to band together (when a flood regularly swept over them). The generation forged in this chaos participates in it, but it got started somewhere else. The key is that this sudden disorder forces them to answer “which side are you on?” During social-cultural peacetime, there is no Us vs. Them, so cohorts who came of age in such a period won’t see generations in black-and-white, do-or-die terms. Cohorts who come of age during disorder must make a bold and public commitment to one side or the other. You can tell when such a large-scale chaos breaks out because there is always a push to reverse “stereotypical gender roles,” as well as a surge of identity politics.

The intensity with which they display their group membership badges and groupthink is perfectly rational — when there’s a great disorder and you have to stick together, the slightest falter in signaling your membership could make them think that you’re a traitor. Indeed, notice how the loudmouth generations can meaningfully use the phrase “traitor to my generation,” while silent generations wouldn’t know what you were talking about — you mean you don’t still think The Ramones is the best band ever? Well, OK, maybe you’re right. But substitute with “I’ve always thought The Beatles were over-rated,” and watch your peers with torches and pitchforks crowd around you.

By the way, why did cultural generations only show up in the mid-to-late 19th C. after industrialization? Quite simply, the ability to form organizations of all kinds was restricted before then. Only after transitioning from what North, Wallis, and Weingast (in Violence and Social Orders — working paper here) call a limited access order — or a “natural state” — to an open access order, do we see people free to form whatever political, economic, religious, and cultural organizations that they want. In a natural state, forming organizations at will threatens the stability of the dominant coalition — how do they know that your bowling league isn’t simply a way for an opposition party to meet and plan? Or even if it didn’t start out that way, you could well get to talking about your interests after awhile.

Clearly young people need open access to all sorts of organizations in order to cohere into a loudmouth generation. They need regular hang-outs. Such places couldn’t be formed at will within a natural state. Moreover, a large cohort of young people banding together and demanding that society “hear the voice of a new generation” would have been summarily squashed by the dominant coalition of a natural state. It would have been seen as just another “faction” that threatened the delicate balance of power that held among the various groups within the elite. Once businessmen are free to operate places that cater to young people as hang-outs, and once people are free to form any interest group they want, then you get generations.

Finally, on a practical level, how do you lump people into the proper generational boxes? This is the good thing about theory — it guides you in practice. All we have to do is get the loudmouth generations’ borders right; in between them go the various silent or invisible generations. The catalyzing event is a generalized social disorder, so we just look at the big picture and pick a peak year plus maybe 2 years on either side. You can adjust the length of the panic, but there seems to be a 2-year lead-up stage, a peak year, and then a 2-year winding-down stage. Then ask, whose minds would have been struck by this disorder? Well, “young people,” and I go with 15 to 24, although again this isn’t precise.

Before 15, you’re still getting used to social life, so you may feel the impact a little, but it’s not intense. And after 24, you’re on the path to independence, you’re not texting your friends all day long, and you’ve stopped wearing logo clothing. The personality trait Openness to Experience rises during the teenage years, peaks in the early 20s, and declines after; so there’s that basis. Plus the likelihood to commit crime — another measure of reacting to social desperation — is highest between 15 and 24.

So, just work your way backwards by taking the oldest age (24) and subtracting it from the first year of the chaos, and then taking the youngest age (15) and subtracting it from the last year of the chaos. “Ground zero” for that generation is the chaos’ peak year minus 20 years.
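
(Here is that rule written out as a tiny function; this is my paraphrase of the recipe, not the author's code. The 15-to-24 age window and the peak-minus-20 rule come straight from the preceding paragraphs, and the worked examples below run through the same arithmetic.)

```python
# The dating rule as a tiny function (a paraphrase of the recipe above):
# birth years run from (first year of the chaos - oldest impressionable age)
# to (last year of the chaos - youngest impressionable age).

def loudmouth_birth_years(chaos_start, chaos_end, youngest=15, oldest=24):
    """Birth-year span of the loudmouth generation forged by a social disorder."""
    return (chaos_start - oldest, chaos_end - youngest)

print(loudmouth_birth_years(1967, 1972))  # (1943, 1957) -> Baby Boomers
print(loudmouth_birth_years(1989, 1993))  # (1965, 1978) -> Generation X

# "Ground zero" is the chaos' peak year minus 20: e.g. 1991 - 20 = 1971 for Gen X.
```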

As an example, the disorder of the Sixties lasted from roughly 1967 to 1972. Applying the above algorithm, we predict a loudmouth generation born between 1943 and 1957: Baby Boomers. Then there was the early ’90s panic that began in 1989 and lasted through 1993 — L.A. riots, third wave feminism, etc. We predict a loudmouth generation born between 1965 and 1978: Generation X. There was no large-scale social chaos between those two, so that leaves a silent generation born between 1958 and 1964. Again, they don’t wear name-tags, but I call them the disco-punk generation based on what they were listening to when they were coming of age.

Going farther back, what about those who came of age during the topsy-turvy times of the Roaring Twenties? The mania lasted from roughly 1923 to 1927, forming a loudmouth generation born between 1899 and 1912. This closely corresponds to what academics call the Interbellum Generation. The next big disruption was of course WWII, which in America really struck between 1941 and 1945, creating a loudmouth generation born between 1917 and 1930. This would be the young people who were part of The Greatest Generation. That leaves a silent generation born between 1913 and 1916 — don’t know if anyone can corroborate their existence or not. That also leaves The Silent Generation proper, born between 1931 and 1942.

Looking forward, it appears that these large social disruptions recur with a period of about 25 years on average. The last peak was 1991, so I predict another one will strike in 2016, although with 5 years’ error on both sides. Let’s say it arrives on schedule and has a typical 2-year build-up and 2-year winding-down. That would create a loudmouth generation born between 1990 and 2003 — that is, the Millennials. They’re already out there; they just haven’t hatched yet. And that would also leave a silent generation born between 1979 and 1989.

My sense is that Millennials are already starting to cohere, and that 1987 is more like their first year, making the silent generation born between 1979 and 1986 (full disclosure: I belong to it). So this method surely isn’t perfect, but it’s pretty useful. It highlights the importance of looking at the world with some kind of framework — otherwise we’d simply be cataloguing one damn generation after another.

(Republished from GNXP.com by permission of author or representative)
 
• Category: History, Science • Tags: Culture, History, Psychology, Sociology 

The other day I saw a flier for a colloquium in my department that sounded kind of interesting, but I thought “It probably won’t be worth it,” and I ended up not going. After all, anyone with an internet connection can find a cyber-colloquium to participate in — and drawn from a much wider range of topics (and so, one that’s more likely to really grab your interest), whose participants are drawn from a much wider range of people (and so, where you’re more likely to find experts on the topic — although also more know-nothings who follow crowds for the attention), and whose lines of thought can extend for much longer than an hour or so without fatiguing the participants.

So, this is something like the Pavarotti Effect of greater global connectedness: local opera singers are going to go out of business because consumers would rather listen to a CD of Pavarotti. It’s only after it becomes cheap to find the Pavarottis and distribute their work on a global scale that this type of “creative destruction” will happen. Similarly, if in order to get whatever colloquia gave them, academics migrated to email discussion groups or — god help you — even a blog, a far smaller number of speakers will be in demand. Why spend an hour of your time reading and commenting on the ideas of someone you see as a mediocre thinker when you could read and comment on someone you see as a superstar?

Sure, perceptions differ among the audience, so you could find two sustained online discussions that stood at opposite ends of an ideological spectrum — say, biologists who want to see much more vs. much less fancy math enter the field. That will prevent one speaker from getting all the attention. But even here, there would be a small number of superstars within each camp, and most of the little guys who could’ve given a talk here or there before would not get their voices heard on the global stage. Just like the lousy local coffee shops that get displaced by Starbucks — unlike the good locals that are robust to invasion — they’d have to cater to a niche audience that preferred quirkiness over quality.

So the big losers would be the producers of lower-quality ideas, and the winners would be the producers of higher-quality ideas as well as just about all consumers. Academics wear both of these hats, but many online discussion participants might only sit in and comment rather than give talks themselves. It seems more or less like a no-brainer, but will things actually unfold as above? I still have some doubts.

The main assumption behind Schumpeter’s notion of creative destruction is that the firms are competing and can either profit or get wiped out. If you find some fundamentally new and better way of doing something, you’ll replace the old way, just as the car replaced the horse and buggy. If academic departments faced these pressures, the ones who made better decisions about whether to host colloquia or not would grow, while those who made poorer decisions would go under. But in general departments aren’t going to go out of business — no matter how low they may fall in prestige or intellectual output, relative to other departments, they’ll still get funded by their university and other private and public sources. They have little incentive to ask whether it’s a good use of money, time, and effort to host colloquia in general or even particular talks, and so these mostly pointless things can continue indefinitely.

Do the people involved with colloquia already realize how mostly pointless they are? I think so. If the department leaders perceived an expected net benefit, then attendance would be mandatory — at least partial attendance, like attending a certain percent of all hosted during a semester. You’d be free to allocate your partial attendance however you wanted, just like you’re free to choose your elective courses when you’re getting your degrees — but you’d still have to take something. The way things are now, it’s as though the department head told its students, “We have several of these things called elective classes, and you’re encouraged to take as few or as many as you want, but you don’t actually have to.” Not exactly a ringing endorsement.

You might counter that the department heads simply value making these choices entirely voluntary, rather than browbeat students and professors into attending. But again, mandatory courses and course loads contradict this in the case of students, and all manner of mandatory career enhancement activities contradict this in the case of professors (strangely, “faculty meetings” are rarely voluntary). Since they happily issue requirements elsewhere, it’s hard to avoid the conclusion that even they don’t see much point in sitting in on a colloquium. As they must know from first-hand experience, it’s a better use of your time to join a discussion online or through email.

The fact that colloquia are voluntary gives hope that, even though many may persist in wasting their time, others will be freed up to more effectively communicate on some topic. Think of how dismal the intellectual output was before the printing press made setting down and ingesting ideas cheaper, and before strong modern states made postage routes safer and thus cheaper to transmit ideas. You could only feed at the idea-trough of whoever happened to be physically near you, and you could only get feedback on your own ideas from whoever was nearby. Even if you were at a “good school” for what you did, that couldn’t have substituted for interacting with the cream of the crop from across the globe. Now, you’re easily able to break free from local mediocrity — hey, they probably see you the same way! — and find much better relationships online.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Economics, Science • Tags: Academia, Economics, Education, Technology 

After reading Arthur De Vany’s Hollywood Economics and Winners, Losers, and Microsoft by Stan Liebowitz and Stephen Margolis, I got the impression that antitrust cases on the whole have been misguided and often remarkably stupid. Looking a little more into it, I found that economists now are pretty much agreed on that picture. Here is the entry on antitrust from the Concise Encyclopedia of Economics, which has a nice brief list of references. Most cases are not brought by public representatives, whether elected or self-appointed, but by private companies, often rivals of the defendant who are being driven out of business. Businessmen believe that competition is good if they win but bad if the other guy wins.

Because these facts are not widely known outside of economics circles, and because most of us learned bogus stories about Standard Oil, etc., in high school history class, I figured I’d illustrate them with a recent complaint about alleged anti-competitiveness in the dairy industry. The farmers on the losing side of the commercial contest claim one thing, but I show that the facts prove the opposite.

First, here are free WSJ articles about the small farmers’ complaints and a follow-up on the response of the DoJ’s antitrust division. We can ignore the complaints of all the farmers quoted, as well as the talk from politicians in dairy states, because the very first sentence says that there is a “price-depressing glut of milk.” A monopoly harms consumers by restricting output in order to shoot prices up — think of a diamond company that owns almost all diamonds but only allows a tiny amount to get into circulation. So right away we see that there is the exact opposite of monopolistic practices in dairy — there is a glut rather than a dearth of output, and prices are plummeting rather than soaring.

Is the 2001 merger of two large dairy processors to “blame” for greater output and lower prices, as suggested by the complainers? No. The article doesn’t provide a broader perspective, but I looked up data from the Statistical Abstract of the United States’ agriculture tables. Here is the price of milk received by farmers from 1980 to 2009, both unadjusted and adjusted for inflation using the CPI.
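
(The CPI adjustment itself is a one-line formula; here is a sketch with illustrative milk prices, not the Statistical Abstract figures, and approximate CPI-U annual averages.)

```python
# Sketch of the CPI deflation (illustrative numbers only, not the Abstract's data):
# real price in base-year dollars = nominal price * (CPI_base / CPI_year).

def real_prices(nominal, cpi, base_year):
    """Deflate a {year: nominal price} series into base_year dollars."""
    return {year: price * cpi[base_year] / cpi[year] for year, price in nominal.items()}

nominal = {1980: 13.0, 1995: 12.8, 2009: 12.8}    # $/cwt received by farmers, illustrative
cpi     = {1980: 82.4, 1995: 152.4, 2009: 214.5}  # CPI-U annual averages, approximate

print(real_prices(nominal, cpi, base_year=2009))
# A roughly flat nominal price implies a steadily falling real price as the CPI rises.
```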

There is clearly no change in the trend during or after 2001. The real price of milk has been falling at least since 1980, and in this decade it has actually slowed down — it’s “showing signs of stabilization,” as we would hear in another context. The nominal price shows no trend up or down, just greater volatility starting around 1995. OK, what about output — was the recent merger responsible for flooding the market? Let’s have a look:

The left graph shows that output has been increasing steadily at least since 1970. The only somewhat recent change is that the increase appears to get faster around 1995, compared to its shallower rate from 1985 to 1995. Again we see no effect of the 2001 merger — let alone a harmful downward one. The graph on the right shows the trend for milk cows’ productivity, or output per cow: it too has been steadily increasing since at least 1970, probably due to some combination of better technology and selective breeding. Here there is no change whatsoever in the rate around 2001 — it’s basically linear after 1975.

So we have greater output, lower prices, and greater productivity. What about having “too much” market share? The articles say that Dean Foods buys less than 15% of the nation’s supply of raw fluid milk, which is hardly a concentration of the industry — even if market concentration mattered per se (which it doesn’t). It is a red herring that it has market shares closer to 70% or 80% in some regions — it could not try to restrict output and thus raise prices in these regions anyway. Why not? If Dean Foods tried to gouge consumers in Michigan, anyone in Michigan could simply buy milk from a state where the supposed monopolistic gouging was absent, transport it to Michigan, and sell it below what the monopolist was charging. And — boom — just like that, competition neuters gouging.

(Looking more generally, milk is a commodity like gold, so just imagine if Michigan residents were charged up the ass for gold, while Ohio residents weren’t. You could get rich quick in Michigan by buying gold in Ohio and selling it in Michigan, low enough to undercut the monopolist but high enough to cover your costs. Since these get-rich-quick opportunities would quickly exhaust themselves and drive down the monopolist’s prices, we don’t expect to see such price-gouging even if the company did have an incredibly large market share.)

But are the big bad companies even driving the little guy out of business? In my quick search, I didn’t find data for this year, but a press release on the state of US agriculture in 2007 says that it’s the middle-sized farms that are getting cleared out, suggesting greater specialization (like Wal-Marts co-existing with tiny local boutiques):

The latest census figures show a continuation in the trend towards more small and very large farms and fewer mid-sized operations. Between 2002 and 2007, the number of farms with sales of less than $2,500 increased by 74,000. The number of farms with sales of more than $500,000 grew by 46,000 during the same period.

Census results show that the majority of U.S. farms are smaller operations.

Granted, this is for all farms, not just dairy farms, but I’d be surprised if the pattern were in the other direction for the subset of dairy farms. Again, even if it were, that might make us feel bad about small farmers going out of business, but it would not be evidence of monopoly, anti-competition, or whatever else. Output and productivity are going up, and prices are going down. It doesn’t get any simpler than that.

As the CEE antitrust entry notes, most lawsuits are brought by companies who are suppliers or buyers of the targeted company. That’s what we have here, since Dean Foods buys milk from the embittered dairy farmers. The incentive to make it an antitrust suit is that they can win three times the damages they could win otherwise.

So the next time you hear about some company coming under antitrust scrutiny, just keep this big picture in mind. Pretty much all such cases are bogus. Rather than crusades in the consumers’ interests, they are cowardly attempts by a loser to have the referee handicap the winner just as they’re about to get knocked out. I encourage readers to look through some of the references in the CEE entry; it is quite illuminating to see how backwards the history of antitrust has been, and how baldly we were lied to in high school about Standard Oil and the rest.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Economics, Science • Tags: Economics, Food 

Toward the end of this episode of EconTalk, Nassim Taleb (Fooled by Randomness, The Black Swan) talks about religion and the history of medicine. He notes that one of the benefits of adhering to religious practices was that you probably avoided going to a doctor when you were in trouble — you prayed to a god or whatever other supernatural entity your religion said would help you out. Why was this a benefit? Because before roughly 50 to 100 years ago, going to the doctor was worse than doing nothing. He bled you, gave your wife a disease by not washing his hands while delivering her baby, etc.

Basically, before very recent times, doctors were parasites. They did not specialize in healing you, but in conning you into thinking that they could heal you — for a small fee — all while making you worse, on average. This makes me think: there would have been a selection pressure on human beings to be skeptical of materialist claims about the world — or at least about the nature of ourselves — and thus, by default, to be naturally inclined toward supernatural beliefs. Of course, praying to Zeus might not have done an awful lot of good — but at least it wouldn’t have given you new infections like a hospital would, and at least it wouldn’t have bled you dry. (And there may have been some benefit from all the social interactions that you got by attending religious services regularly vs. being socially isolated.)

Natural selection operates on the tiniest differences in relative fitness, and for most of human existence there must have been more than a little difference in fitness between those who eagerly sought out the help of a medicine man / doctor and those who just went to church (or wherever) and prayed to the spirits instead. This may be an original hypothesis, but I don’t claim so since I haven’t read much on the various theories of why religion is part of human nature. Taleb came pretty close to saying so, but not explicitly. Most economists talk about what’s rational or utility-maximizing, without making that final link to evolutionary fitness. To its credit, the idea has a pretty solid basis for the necessary differences in relative fitness between believers and non-believers.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Evolution, Medicine, Religion 

Recently at my personal blog I’ve been focusing on the idiocy of Web 2.0’s central strategy for growth, namely creating online networks or communities where costly participation is given away for free. (See: the profitable online papers charge, YouTube and Facebook are still not profitable, and a more general round-up of the second dot-com bust.) The hope was that hosting a free party with an open bar would attract a large crowd, and that this in turn would lead to ever-increasing ad revenues. That business model was doomed to failure during the first dot-com boom, and it is just as doomed during the second one (Web 2.0). In the meantime, following this strategy leads to cultural output typical of attention whores rather than the output of inventors and creators with secure patronage.

I was delighted today to discover that all of this is about to change. It’s still pretty hush-hush — no “buzz in the blogosphere” — as I’ve read a fair number of articles on the topic, yet none has mentioned the coming change, even if they’ve mentioned the change earlier in the year. Starting sometime this fall, online newspapers will finally start to charge for access to their sites, although who they charge, how much, and in what manner (yearly, per article, etc.), is entirely up to the individual papers, and we don’t know what shape that will take just yet. The business model of Journalism Online, the group that’s spearheading the change, says they’re aiming to get revenues from the top 10% of readers by visit frequency. In any case, the point is that the era of unlimited free access to online journalism is dead.

Journalism Online seems to be a central hub that readers will go through to get to the various member organizations’ publications, perhaps the way college students go through their university library’s website to get access to various journals. According to co-founder Leo Hindery (as I heard on Bloomberg TV today), there are over 600 papers on board, and you can bet that includes most or all of the big ones, as they provide the best quality and yet receive no money from users (other than the FT and WSJ). All of the customer’s payments will be kept track of through this one site. I don’t have much more detail to give, since the Journalism Online website lays it out succinctly. Go read through the business model section and the press section (the 31-page PDF listed under “Industry Reports” is the most detailed).

This is the first nail in the coffin of Web 2.0, and once the other give-it-away internet companies see how profitable it is to actually — gasp! — charge for your product, they will wake up from their pipe dream of growing by attracting a big crowd and pushing ads. YouTube, Facebook, MySpace, perhaps other components of Google, Wikipedia — they can either charge and profit or get shoved out of the market by those who are growing by charging. The winners will have more to invest in improving their products and maybe even funding their industry’s equivalent of basic R&D, we’ll see a cultural output that won’t pander quite so much to the lowest common denominator to chase ad revenue, and best of all — the quality newspapers, social networking sites, and so on, will continue to exist and grow rather than be claimed as further casualties of the moronic dot-com boom mentality. At last the internet is sobering up from its 15-year Bender of Free.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Economics, Science • Tags: Economics, Media, Technology 
🔊 Listen RSS

At the end of an otherwise good reflection in the WSJ on where Google can go from here, we read the following:

It would be foolish to predict that Google won’t have another business success, of course. Microsoft managed to leverage its strength in PC operating systems into a stranglehold over the word-processing and spreadsheet applications.

Stan Liebowitz and Stephen Margolis debunked this at least 10 years ago in their book Winners, Losers, and Microsoft, and probably earlier, though I can’t recall which journal article it originally appeared in. Scroll down to Figure 8.18 at Liebowitz’s website, which shows the market share of Excel and Word in the Macintosh vs. Windows markets. They conclude:

Examination of Figure 8.18 reveals that Microsoft achieved very high market shares in the Macintosh market even while it was still struggling in the PC market. On average, Microsoft’s market share was about forty to sixty percentage points higher in the Macintosh market than in the PC market in the 1988-1990 period. It wasn’t until 1996 that Microsoft was able to equal in the PC market its success in the Macintosh market. These facts can be used to discredit a claim sometime heard that Microsoft only achieved success in applications because it owned the operating system, since Apple, not Microsoft, owned the Macintosh operating system and Microsoft actually competed with Apple products in these markets.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Economics, Science • Tags: Economics, Technology 
🔊 Listen RSS

Here is a brief description of the idea that price bubbles are caused by people buying something, not necessarily because they think it’s worth anything, but because they think they can find an even greater fool to buy it at a higher price. This continues until no more such fools can be found, and this bust drives prices back down to what they were before the boom began.

I didn’t see any references to mathematical models of the theory at Wikipedia or through Googling around a bit, so I made one up today at Starbucks since I didn’t have anything to read to pass the time. Because I’m not an economist, I don’t know how original it is, or how it compares with alternative models of the greater fool theory (if they exist). So, this is intended just as an exercise in modeling, explaining the model, and hopefully shedding some light on how the world works. I’ve kept most of the exposition straightforward and largely verbal, so that you don’t need to know much math at all to understand what the model says and what its implications are.

In part 1, I lay out the logic of the model and explain enough of it to show that it is capable of producing a single round of boom-and-bust for price hype. Part 2 will provide more mathematical detail about how the dynamics unfold, a phase plane analysis, and graphs of how the variables of interest would change over time, to better wrap your brain around what the model predicts.

This is a dynamic model, or one that tracks how things change over time — after all, we want to see how price, the number of fools, etc., evolves. It is made of several differential equations, and all these equations say is what causes something of interest to go up or go down over time. (You may recall that the sign of a derivative tells you whether a function is increasing or decreasing, and the magnitude says by how much.) I’ll only explain what is absolutely necessary for the reader to see what’s going on, with the less necessary math being confined to footnotes.

First, we set up the basic picture before we write down equations. My version of the greater fool theory goes like this. There is a population of people, and during a price bubble they can fall into three mutually exclusive groups: suckers (S), who are susceptible to joining in on the bubble; investors (I), who currently own the speculative stuff (such as a home bought for speculation); and those who are retired from the bubble (R), who used to be investors but have gotten rid of their investment. And of course there is the price of the thing — I model only the extra price that it enjoys due to hype (P), above its fundamental value, since this is the only component of price that changes radically during the bubble.

I set the population to be fixed in size during the bubble, since growth or decline is negligible over the handful of years that the bubble lasts. I also set the amount of speculative stuff to be fixed, which is less general — supply should shoot up to meet the rising demand during a bubble. So, this model is restricted to cases where you can’t produce lots more of the stuff, relative to how much already exists, on the time-scale of the bubble’s boom stage (say, 5 years or less). Or perhaps no more of it will be produced at all, such as video game consoles from decades ago that the original manufacturers will never bring back into production, but which nostalgic fans have taken to buying and selling speculatively (like NEC’s TurboDuo). Last, the amount of stuff that each investor has is the same across all investors and stays constant — say, if each investor always owned just one speculative home.

At the start of the bubble, there is a certain number of early investors. In order to sell their stuff, they need to meet a sucker to sell it to. When they meet — and I assume the two groups are moving around independently of each other — there is a probability that the sale will be made. If they make a deal, the sucker is now an investor, and the former investor is now retired. In this model, retireds do not again become suckers — they consider themselves lucky to have found a greater fool and stay out of the bubble for good afterward. That’s the extent of how people change between groups.

As for price hype, again I’m not an economist, so the exact formula may differ from what’s standard. I take it to respond positively to demand — namely, the number of suckers — and that there is a multiplier that serves as a reality check. This reality check should be weak at the start when most non-investors are suckers, and should be strong near the end when most non-investors are retired. In other words, the price hype at the beginning is a near total distortion — nearly 0% accurate — whereas the price hype near the end is nearly 100% accurate. This will make more sense once we write down formulas.

Now we get to the differential equations for how these things change. We write down one equation for each variable whose values we’re tracking over time. I use apostrophes to denote the derivative with respect to time (i.e., rate of change):

S’ = -aSI

Since suckers can only lose members (by turning into investors), there is only one term, and it shows how suckers decline (negative sign). Remember, retireds do not go back into the pool of potential buyers. And investors either make a sale and go into the retired group, or they sit on their stuff in hope of selling, so they never contribute to the growth of suckers. Thus, there is no growth term. The parameter a shows the probability that, when a sucker and an investor meet, the investor will transfer his stuff to the sucker. (“Parameter” is another word for “constant,” in contrast to a variable that changes.) The reason we use the product of S and I is that this is essentially the rate at which the two groups encounter each other when they move around independently of each other.[1]

I’ = aSI – aSI = 0

Investors both grow and decline, so one term is positive and the other negative. They grow by having a sucker join their ranks, which as we saw above happens at rate aSI. However, each time that happens, the investor loses his stuff and becomes retired. That happens at the same rate, and the negative sign just shows that this causes I to decline. When we simplify, we get I’ = 0 — that is, the number of investors does not change over time. That makes sense because each bundle of stuff always has an owner, regardless of how it may change hands, somewhat like the game of hot potato. When something doesn’t change, it is constant, so whenever we see I from now on, we’ll know that this is just another parameter, not a variable that changes. In particular, it refers to the initial number of early investors who get the bubble going.

R’ = aSI

Retireds never join the suckers again. And recall the mindset of a retired person — they knew the stuff was junk and are glad to have gotten through the selling process, so they cannot be sold the stuff again to become investors once more. Thus, there is no way for them to lose numbers. They grow by former investors making a sale and becoming retired, which once again happens at rate aSI.

Here’s the neat thing: notice that S’ + R’ = -aSI + aSI = 0. The sum of the two derivatives equals zero, and since differentiation distributes over sums (the derivative of a sum is the sum of the derivatives), this also means that (S + R)’ = 0. That is, the sum of suckers and retireds does not change over time. This makes sense: if the number of investors stays constant, the leftovers — suckers and retireds — must be constant too, regardless of how each separate group grows or shrinks. We can take this further to note that S’ + I’ + R’ = 0, which means (S + I + R)’ = 0. That is, the combined size of all three groups does not change over time — which is just what we claimed by keeping total population size constant. (Otherwise, each group would have birth and death terms, aside from the terms that show how their members switch between groups.)

We’ll call this constant total population size N. So, S + I + R = N. Now, I is just a constant, so we’ll move it to the other side: S + R = N – I. We have two variables, S and R, but we just wrote an equation connecting them, so we can re-write one in terms of the other. I’ll choose R, but it doesn’t matter. So, R = N – I – S, and anywhere we see R, we can replace it with N – I – S. In other words, we’ve removed R from our focus — we can always get it from knowing what the variable S is, as well as the two parameters N and I. That means the equation for R’ only gives us redundant information, and we can ignore it. We can also ignore the I’ equation, since it just tells us that I is constant, and we’re only interested in things that change. So we’re left with just the S’ equation.

Now we move on to the price hype formula and how it changes over time. First, the formula for price as a function of demand and the reality check, since hype is never totally irrational and at least tries to take stock of reality:

P = bS(R / Rmax) = bS(R / (N – I))

Demand is driven by the number of suckers — the ones who eventually want to get in on the bubble — and the parameter b says how strongly price hype responds to that demand. The multiplier (R / Rmax) provides a reality check. If you landed from Mars and only knew the number of suckers, you would also want to know how many retireds there were — if there were few retireds, that would tell you the bubble had only just begun, so that hype is likely to be high and to go even higher short-term. Thus, this filter should not let much of the demand information through. Indeed, when R is very low compared to Rmax, the multiplier is near 0.

However, if you saw that there were many retireds, that would say the bubble was near its bust moment, and that the information from demand is very accurate by this point. Indeed, when R is near Rmax, the multiplier is near 1 and the filter lets just about all of the demand information through. What is Rmax? It is the value when no one is a sucker and everyone is retired, aside from the constant number of investors. Looking above at the equation S + R = N – I, we see that when there are no suckers, R = N – I.

Now we need to find the differential equation for how P changes over time. Using the product rule for derivatives[2], we get:

P’ = (abI / (N – I)) * S(2S + I – N)

Since a, b, I, and N – I are always positive, and since S is positive except for the very end of the bubble when it is 0, in the meantime, whether price hype shoots up or crashes down depends on whether the term 2S + I – N is positive or negative. It is positive and price hype grows when S exceeds (N – I) / 2, which is half the size of non-investors. It is negative and price hype declines when S is below (N – I) / 2. It is 0 and price hype momentarily stalls out when S is exactly (N – I) / 2.

Because the bubble starts with all non-investors being suckers, S is initially N – I, which is greater than (N – I) / 2. So at first the price hype shoots up. However, remember that S only declines — as more and more of the suckers are drawn into the bubble (some of whom may also make sales and become retireds), S will inevitably fall below (N – I) / 2 and price hype will start to contract.

When S inevitably reaches 0 — when all non-investors are out of the bubble for good — then P = 0 (recall that P = bS(R / Rmax)). Moreover, at that time P’ = 0 too. Thus, at the end, price hype has completely evaporated and it will stay that way. This is a single round of boom-and-bust for price hype.
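
To see the single round of boom-and-bust actually come out of these equations, here is a minimal numerical sketch (forward Euler in Python). The values of a, b, N, and I are arbitrary choices for illustration, not estimates of any real bubble:

```python
# Minimal sketch of the greater-fool model above, integrated with forward Euler.
N = 1000.0   # total population (constant)
I = 10.0     # number of investors (constant, per the I' = 0 result above)
a = 0.001    # chance-of-sale scaling for sucker-investor encounters
b = 1.0      # how strongly price hype responds to demand

S = N - I                    # at the start, every non-investor is a sucker
dt, steps = 0.01, 50_000     # integrate out to t = 500

for step in range(steps + 1):
    t = step * dt
    R = N - I - S                    # retireds are whoever is left over
    P = b * S * (R / (N - I))        # price hype = demand times the reality-check multiplier
    if step % 5_000 == 0:            # print every 50 time units
        print(f"t={t:5.0f}  S={S:6.1f}  R={R:6.1f}  P={P:6.1f}")
    S += dt * (-a * S * I)           # S' = -aSI, the only equation left to integrate
```

Hype starts at zero, climbs while S is still above (N – I) / 2 = 495, peaks, and then decays back toward zero as the suckers run out: one round of boom and bust.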

In this post, I’ve shown how some pretty simple “greater fool” dynamics can lead to a boom-and-bust pattern for price hype. You can quibble with all of the assumptions I’ve made, but the model shows that the greater fool theory is a viable explanation for price bubbles. I’ve relaxed some of the assumptions to see if that makes a difference, like making the decline of S a saturating rather than linear function of S, and so far the changes don’t seem to affect things qualitatively. A more realistic model would have P appear in the equation for S’ — that is, price hype would affect the probability of making a sale. Or rather, the trend of prices (P’) should affect sale probability — if suckers see that price hype is increasing, they should want to get in on the bubble, and to stay put if price hype is dropping. Also, allowing retireds to re-enter the pool of suckers would be more general and would almost certainly lead to sustained cycles of boom-and-bust, rather than a single round. But that’s for another slow afternoon.

In part 2, I’ll go into more mathematical detail about how we see what states this system is at rest in, and whether they are stable to disruptions or not. I’ll look more at the formula for the maximum level of price hype, and interpret that in real-world terms in order to see what things will give us larger-amplitude bubbles. I’ll provide a picture of the phase plane, which shows what the equilibrium points are, and how the variables will change in value on their way from their starting values to the final ones. I’ll also have a couple of graphs showing how the number of suckers and retireds, and the amount of price hype, change over time.

[1] Draw one person at random, and the chance that they’re a sucker reflects S. Draw another one at random, and the chance that they’re an investor reflects I, since the draws are independent. The chance of doing both is just the product of the two separate probabilities.

[2] P’ = (bS(R / Rmax))’ = (b / (Rmax)) (S’ * R + S * R’)
A little algebra, which you can confirm by hand or using Maple, gives the equation in the main body for P’.
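
For those without Maple, the same confirmation can be sketched with the free sympy library (my substitution, not something the post depends on); symbol names follow the post:

```python
# Symbolically verify the P' formula in the main text.
import sympy as sp

t = sp.symbols('t')
a, b, N, I = sp.symbols('a b N I', positive=True)
S = sp.Function('S')(t)

R = N - I - S                 # R eliminated via S + I + R = N
P = b * S * (R / (N - I))     # price hype with its reality-check multiplier

Pprime = sp.diff(P, t).subs(sp.Derivative(S, t), -a * S * I)   # plug in S' = -aSI
claimed = (a * b * I / (N - I)) * S * (2 * S + I - N)
print(sp.simplify(Pprime - claimed))   # prints 0, confirming the formula
```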

(Republished from GNXP.com by permission of author or representative)
 
🔊 Listen RSS

Steve points us to a brief review by Steven Pinker on the decline in war and violence. Focusing just on homicide rates, what exactly does that mean — a decline in violence during modern times? It is impossible to have a solid feel for the observation Pinker wants to explain without seeing time series data on homicide rates (one of which he includes in his TED talk on the same subject). The pictures come from Manuel Eisner’s review article in the British Journal of Criminology.

This is required reading (only 20 pages) for anyone who wants to understand crime, and especially changes in crime — changes in the overall rate, differences across regions in the decline, differences in the decline across social classes, etc. If you don’t have access to it, it’s one of those rare articles that is worth the one-time price of $28 — or just request it from one of your friends or colleagues who does have university access.

Below the fold, I’ve included the pictures for all countries that Eisner found data for, along with a brief remark on the trend for each country. The vertical axis is homicides per 100,000 population and is on a logarithmic scale (so that the visible changes are by orders of magnitude). Also note that the recent decline in crime since the early-mid 1990s may not be easily visible in these pictures, given that Eisner’s article came out in 2001 — not very long for the reversal to jump out of the graphs.

First, England:

Increases during the High Middle Ages, decreases sometime starting in the Late Middle Ages or Early Modern period.

Netherlands and Belgium:

Decreases starting in Early Modern period.

Scandinavia:

Decrease starts as late as the 17th C — Scandinavia being one of the last parts of Western Europe to become civilized.

Germany:

Apparent increase during High Middle Ages, decreases starting in Late Middle Ages or Early Modern period.

Italy:

Barely visible change during 18th C, while steady decline only starts in 19th C — Italy having lacked a strong central state until then. Article says that Northern Italy shows a much earlier decline than Southern Italy (no surprise).

Also notice the presence of cycles about the overall trend. The fact that there were recurring crime waves and abatements during the 19th and 20th centuries — see here for the US, or the Scandinavian graph above — should not distract us from the clear downward trend going back only a few centuries farther. Any account of rises or declines must deal with all of these patterns, which makes it impossible to generalize the narrow hypotheses for the 1990s decline in crime — there were no cell phones before then, the trend since 1500 has been toward less corporal punishment and harsh sentencing rather than more, and so on.

What we would do is write down a system of differential equations that claimed how two or more groups of people interacted with each other — say, “criminals,” “law-abiders,” and “police” — and fool around with them until they produced a solution that would show cycles or oscillations around an overall downward trend. The interactions between these groups of people are what real historical causes are made of — not the sudden introduction of some technology or law (or sudden disappearance of some technology or repealing of a law).

I’m up for a math modeling jam session if anyone else is. I remember seeing ODE models from ecology where one species replaces another, although the values oscillate around the upward trend of the winner, as well as around the downward trend of the loser.
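
Here is the sort of toy system such a jam session might start from: crime and policing interacting predator-prey style while the long-run environment slowly shifts against crime. Every functional form and parameter value below is my own guess, chosen only to produce waves around a declining trend, not fitted to Eisner’s data:

```python
# Purely illustrative "crime waves around a secular decline" sketch.
import numpy as np
from scipy.integrate import solve_ivp

def rhs(t, y):
    C, P = y                           # C = crime level, P = policing level
    d = 0.3 * np.exp(-0.002 * t)       # policing slowly becomes easier to sustain
    dC = C * (0.3 - 0.03 * P)          # crime grows, but police encounters suppress it
    dP = P * (0.01 * C - d)            # policing expands when crime is high, decays when low
    return [dC, dP]

sol = solve_ivp(rhs, (0, 300), [36.0, 10.0],
                t_eval=np.linspace(0, 300, 3001), rtol=1e-8, atol=1e-10)
C = sol.y[0]
peaks = [i for i in range(1, len(C) - 1) if C[i] > C[i - 1] and C[i] > C[i + 1]]
for i in peaks:
    print(f"crime wave peak at t={sol.t[i]:6.1f}, level {C[i]:5.1f}")
# The waves keep recurring, but successive peaks drift lower: oscillations
# around an overall downward trend.
```

This only shows the kind of behavior such a system can produce; fitting it to Eisner’s actual series would be the real jam session.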

(Republished from GNXP.com by permission of author or representative)
 
• Category: History, Science • Tags: Crime, History 
🔊 Listen RSS

Those are the first three articles that I’ve posted to a new blog of mine, Patterns in science and culture, where all of my data-rich posts will go from now on. I’ll still review existing work or throw out “what if?” posts here, but if it requires looking up and analyzing data, you’ll only be able to read about it there. These are the longer, original, buzz-starting ones I usually put up here or at my personal blog. The only change is that now the data-driven ones will be for-purchase. (If it takes a lot to put together, I can’t do it for free.) For $10, you’ll get access to 20 “feature-length” articles — ones that require a decent amount of investigation, labor, or ingenuity — plus all of the shorter ones that strike my fancy or that you request. They will be put up roughly once or twice a week. After the first 20 are done, I’ll start another site with 20 more, and so on. Purchase info is at the bottom of the full entry.

The first three are already up:

1) Climate and civilization among Blacks, where I look at how climate affects IQ, imprisonment rates, and college degree-earning rates among Blacks, using state-level data. This is a follow-up to a similar post I wrote about Whites.

2) Was there a decline in formality during the 20th C? Here, I look at data on changes in naming preferences that question the widespread view that we’ve “become less formal.”

3) Are the arts in decline? I’ve dug up annual data on theater attendance and the number of playing weeks for both Broadway and road shows from 1955 to 2006. I discuss the overall trend, the notable departures from the trend, and how in-synch or out-of-synch the Broadway and road show data have been over time.

Upcoming articles will include a look at turnover rates in Billboard #1 songs as far back as the various charts go — when has there been rapid turnover, and when has there been stagnation, and do these accord with what we think is good or bad music? I’ve also put together a series of graphs that show quite striking generational changes in the popularity of getting a driver’s license among teenagers of different ages. I also plan to round out a series where I looked at the elite vs. popular valuation of painters and of composers, using Charles Murray’s Human Accomplishment and sales data. Next I’ll look at literary figures. As usual, I will put the data into a larger picture (or story).

Just as a reminder for older readers, or as further examples of what I’ve done for newer readers, here’s a brief selection of original work I’ve done:

The death of silly academic theories such as Marxism, psychoanalysis, and even postmodernism, using JSTOR archives. This story was picked up by the Toronto Globe and Mail, Arts and Letters Daily, and a few others I’m forgetting. (Here’s a follow-up.)

How different social classes react to adolescent sex, using the GSS, and proposing a life history account of these differences.

Debunking a study on the supposed hindering effect that feminine names have on women’s progress in the sciences. To this day, the study has not been published, and I can only hope that we were part of that (cyber peer review).

How much different generations enjoy various music genres, using the GSS. This provides pretty clear data that you imprint on the popular music from when you were about 15 and stay that way for the rest of your life.

How the American diet has changed over the 20th C., using pretty fine-grained data such as red meat, fish, poultry, etc., rather than just “meat.” There are also data showing that heart disease and obesity have only gotten worse as we’ve switched to a more carboholic diet since the 1970s.

How the blondness of Playboy Playmates has changed over time, as well as some speculation about why it changes the way it does.

The stagnating pace of revolutionary technological innovation, linking it to the decline in monopolistic bodies like AT&T’s Bell Labs or the Defense Department.

Purchase info

Although blogging doesn’t eat up a lot of time, the more data-intensive posts do. This is not something that most bloggers do — most are linkers or gasbags, some very entertaining and others very boring. But I actually do a bit of investigation, find clever ways to attack a question, provide data, and put it into an easy-to-read visual. Not everyone will agree with my interpretation, but at least I’ve done lots of homework that others will benefit from, and that’s something you find at very few places on the internet, especially if it’s a new finding. But these more exciting posts take time away from earning money, so I’m asking fifty cents per long article, with all the briefer data-containing posts thrown in free.

The new blog is by invitation only, so you’re simply paying to be put on the list of allowed readers. There is a PayPal button at the end of this entry. You will need a PayPal account, and a Google account — they’re free, and you just provide them with an email address. When you pay, leave me a message via PayPal with the email address associated with your Google account. I need this to invite you to the blog. If you don’t say so, I’ll assume it’s the one attached to your PayPal account. If you forget to mention it, you can always send me a correction through your PayPal account.

Once I invite you, you’ll get an email that has a “join this blog” link that you click on. And with that, you’re all set. You will need to be signed into your Google account, but you can stay signed in forever.

I expect that most purchasers will not be trolls or flamers — they want to harass people for free — but if you exhibit classical spammer behavior, you’ll be kicked out with no refund. It just takes a couple people like that to ruin a site, so I’ll be strict about that.

If you have any questions, feel free to leave a comment here or email me at icanfeelmyheartbeat at the hotmail-ish site.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 
🔊 Listen RSS

In the Angry Nintendo Nerd’s video about the Virtual Boy — a short-lived video game console that claimed to offer a “virtual reality” experience — he says that back in the mid-1990s, it seemed like the coolest thing, but that now no one cares about virtual reality. This, he claims, is why even with better technology than before, no one is making virtual reality systems for the average consumer anymore. Certainly that seems true for pop culture: the Virtual Boy, the movies The Lawnmower Man and The Matrix, Aerosmith’s video for “Amazing,” and a whole bunch of video games with “virtual” in the title came out then, vs. nothing like that now.

But when I went to check the NYT, I found a little surprise. Sure enough, there was a flaring up and dying down of the phrase that jibes with what we’d expect — but there’s been a modest yet steady increase in the phrase’s usage since 2003. I skimmed the titles of the articles and didn’t notice any clear pattern; maybe they’re simply using it more in military training, and the news items are about that. Whatever it is, there’s something to be explained. Not knowing anything about virtual reality, I’ll leave it up to others to hazard a better guess. The graph of its appearance in the NYT is below the fold.

Here’s the first epidemic craze, followed by a recent increase:

Hmm.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Technology 
🔊 Listen RSS

Mathematical models of contagious diseases usually look at how people flow between three categories: Susceptible, Infected, and Recovered. In some of these models, the immunity of the Recovered class may become lost over time, putting them back into the Susceptible class. This means that if an epidemic flares up and dies down, it may do so again. If we treat irrational exuberance as contagious, then we can have something like a recurring exuberant-then-gloomy cycle within people’s minds. That is, people start out not having strong opinions either way, they get pumped up by hype, then they panic when they figure out that the hype had no solid basis — but over time, they might forget that lesson and become ripe for infection once more.
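
For concreteness, here is a minimal sketch of that setup (a standard SIRS model with waning immunity). The rate values are arbitrary illustrations, and the mapping to bubbles, with “infection” as exuberance and “waning immunity” as forgetting the last bust, is just the analogy described above:

```python
# Minimal SIRS sketch: Susceptible -> Infected -> Recovered, with immunity
# waning at rate w so that Recovered drift back into Susceptible.
import numpy as np
from scipy.integrate import solve_ivp

beta, gamma, w = 0.5, 0.1, 0.01    # contact, recovery, and waning-immunity rates

def sirs(t, y):
    S, I, R = y
    return [-beta * S * I + w * R,      # S': lost to infection, regained as immunity wanes
            beta * S * I - gamma * I,   # I': gained by infection, lost to recovery
            gamma * I - w * R]          # R': gained by recovery, lost as immunity wanes

sol = solve_ivp(sirs, (0, 400), [0.99, 0.01, 0.0],
                t_eval=np.linspace(0, 400, 4001), rtol=1e-9, atol=1e-12)
I = sol.y[1]
peaks = [i for i in range(1, len(I) - 1) if I[i] > I[i - 1] and I[i] > I[i + 1]]
print("times of successive waves of infection:", [round(float(sol.t[i])) for i in peaks])
# With w = 0 the epidemic flares up once and is over; with w > 0 the waves
# recur (in this deterministic version, each wave is smaller than the last).
```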

I’m in the middle of Stan Liebowitz’s excellent post-mortem of the dot-com crash, Re-thinking the Network Economy, and in Chapter 3 he reviews the “first mover wins” craze during the tech bubble. According to this idea, largely transplanted into the business world from economists who’d already spread the myth of QWERTY, the prospect of lock-in was so likely — even if newcomers had a superior product — that it paid to rush your product to the market first in order to get the snowball inevitably rolling, no matter its quality.

The idea was bogus, of course, as everyone learned afterward. (There were plenty of counterexamples available during the bubble, but the exuberance prevented people from seeing them — Betamax was before VHS, WordPerfect was before Microsoft Word, Sega Genesis was before Super Nintendo, etc. And there were first-movers who won, if their products were highly rated. So, when you enter doesn’t matter, although quality of product does.) But when I looked up data on how much the media bought into this idea, I was surprised (though not shocked) to see that it was resurrected during the recent housing bubble, although it has been declining since the start of the bust phase. Below the fold are graphs as well as some good representative quotes over the years.

First, here are two graphs showing the popularity of the idea in the mainstream media. The first is from the NYT and controls for the overall number of articles in a given year. (I excluded a few articles that use “first mover” in reference to the Prime Mover god concept in theology.) I don’t have the total number of articles for the WSJ, so those are raw counts. Still, the pattern is exactly the same for both, and it very suggestively reflects the two recent bubbles:

The first epidemic is easy enough to understand — after languishing in academia during the mid-1980s through the mid-1990s, the ideas of path dependence, lock-in, and first-mover advantage caught on among the business world with the surge of the tech bubble. When it became apparent that the dot-coms weren’t as solid as was believed (to put it lightly), everyone realized how phony the theory supporting the bubble had been. Here’s a typical remark from 2001:

WHEN they were not promoting the now-laughable myth of “first mover advantage,” early e-commerce proponents proffered the idea that self-service Web sites could essentially run themselves, with little or no overhead.

But clear-headedness eventually wears off, and when another bubble comes along, we can’t help but feel exuberant again and take another swig of the stuff that made us feel all tingly inside before. Here’s a nugget of wisdom from 2006:

Media chieftains may be kicking themselves a few years from now because they didn’t step up to pay whatever it took to own the emergent first mover in online video.

And a similar non-derogatory, non-ironic use of the phrase from 2007:

For the current generation of Internet applications, sometimes referred to as “Web 2.0,” the data collected from users is the true source of competitive advantage. And the first movers, the companies that understand and apply this insight, have services that get better fast enough that their competition never catches up.

Thankfully we’ve been hearing less and less of this stupid idea ever since the housing bubble peaked, and at least the most recent peak was lower than the first one, but we can still expect to hear something like this during whatever the next bubble is. Note that the first-mover-wins idea wasn’t even being applied primarily to real estate during the housing bubble — the exuberance in one domain carried over into a completely unrelated domain where it had flourished before. So, if you’re at all involved in the tech industry, be very wary during the next bubble of claims that “first mover wins” — it wasn’t true then (or then, or then), and it won’t be true now.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Economics, History, Science • Tags: Economics, History, Technology 
🔊 Listen RSS

In 1990, Stan Liebowitz and Stephen Margolis wrote an article detailing the history of the now standard QWERTY keyboard layout vs. its main competitor, the Dvorak Simplified Keyboard. (Read it here for free, and read through the rest of Liebowitz’s articles at his homepage.) In brief, the greatest results in favor of the DSK came from a study that was never officially published and that was headed by none other than Dvorak himself. Later, when researchers tried to devise more controlled experiments, the supposed superiority of the DSK mostly evaporated.

Professional typists may have enjoyed about a 5% faster rate, or maybe not — despite the conviction of the claims you hear, this isn’t a well-established body of evidence, such as “smarter people have faster reaction times.” Moreover, most keyboard users aren’t professional typists, and the vast bulk of their lost time is due to thinking about what they want to say. Therefore, the standardization of the QWERTY layout is not an example of our being locked in to an inferior technology. Which isn’t to say that the QWERTY layout is the best imaginable — only that it is not clearly inferior to the DSK.

While Liebowitz and Margolis may have hoped that their examination of the evidence would have thrown some cold water on the “lock-in to inferior standards” craze that had gotten going in the mid 1980s, with QWERTY as the proponents’ favorite example, the idea appears too appealing to academics to die. (Read this 1995 article for a similar debunking of Betamax’s alleged superiority over the VHS format.) Liebowitz appeared on a podcast just this May, having to reiterate yet again that the standard story of QWERTY is bogus.

To investigate, I did an advanced search of JSTOR’s economics journals for “QWERTY” and divided this count by the total number of articles. I did this for five four-year periods because the term isn’t mentioned often in any single year, which makes a year-by-year picture noisy. I excluded the post-2004 period since there’s typically a 5-year lag between publication and archiving in JSTOR. This doesn’t show what an author’s take is — only how in-the-air the topic is. With the two major examples having been shown not to be examples of inferior lock-in at all, you’d think the pattern would be a flaring up and then dying down as economists were made aware of the evidence, and everyone could just leave it at that. But nope:

Note that the articles here aren’t the broad class discussing various types of path dependence or network effects, but specifically the kind that lead to inferior lock-in — as signalled by the mention of QWERTY. I attribute the locking in of this inferior idea to the fact that academia is not incentivized in a way that rewards truth, at least in the social sciences. Look at how long psychoanalysis and Marxism were taken seriously before they started to die off in the 1990s.

Shielded from the dynamics of survival-of-the-fittest, all manner of silly ideas can catch on and become endemic. In this case, the enduring popularity of the idea is accounted for by the Microsoft-hating religion of most academics and of geeks outside the universities. For them, Microsoft is not a company that introduced the best word processors and spreadsheets to date, and that is largely responsible for driving down software prices, but instead a folk devil upon which the cult projects whatever evil forces it can dream up. Psychologically, though, it’s pretty tough to just make shit up like that. It’s easier to give it the veneer of science — and that’s just what the ideas behind the QWERTY and Betamax examples were able to give them.

Overall, Liebowitz’s work seems pretty insightful. There’s very little abstract theorizing, which modeling nerds like me may miss, but someone’s got to take a hard-nosed look at what all the evidence says in support of one model or some other. He and Margolis recognized how empirically unmoored the inferior lock-in literature was early on, and they also saw how dangerous it had become when it was used against Microsoft in the antitrust case.[1] He also foresaw how irrational the tech bubble was, losing much money by shorting the tech stocks far too early in the bubble, and he co-wrote an article in the late 1990s that predicted The Homeownership Society would backfire on the poor and minorities it was supposed to help. (Read his recent article on the mortgage meltdown, Anatomy of a Train Wreck.) Finally, one of his more recent articles looks at how file sharing has hurt CD sales. Basically, he details everything that a Linux penguin shirt-wearer doesn’t want to hear.

[1] Their book Winners, Losers, and Microsoft and their collection of essays The Economics of QWERTY attack the idea from another direction — showing how the supposed conditions for lock-in or market tipping were met, and yet time and again there was turnover rather than lock-in, with each successive winner having received the highest praise.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Economics, History, Science • Tags: Culture, Economics, History, Technology 
🔊 Listen RSS

There’s a post on porn and rape that’s making the rounds (among the blogs I read, at Half Sigma and Roissy so far). The author claims to show that a greater availability of pornography is associated with lower rape rates. But it is not — nor does greater availability go along with higher rape rates. The two simply appear unrelated altogether.

First, the original post’s author is not an idiot; he just made an honest mistake in getting his crime data. (And he is right in his side-point about how moronic feminists are when they suggest that rape has little to do with meeting the guy’s sexual urges.) But let’s focus on what the crime data say.

The Bureau of Justice Statistics website has a page of summary statistics that includes the graph in his post that shows what looks like a decline in the rape rate from the early-mid 1970s until today. Those data appear to be from the National Crime Victimization Survey, and one drawback here is that minors are often questioned in the presence of their parents or guardians. They’re much less likely to report something as embarrassing and painful as rape when the adults are there, especially if it was a family member or acquaintance of the family, as is typical. And young females are the most at risk. The definition of rape there seems too broad also, including attempted rape and psychological intimidation — what people really have in mind when they hear “rape” is someone using physical force to gain sexual access to another person.

Luckily, though, the BJS also has data on the forcible rape rate (“real” rape), and this series goes back even further than the NCVS data — back to 1960. What do these data say? If you’re a regular reader, you already know because I’ve reviewed the change in violent crime and forcible rape rates before. Go to that post to see the graph and get the details. In brief, there was a sharp rise from about 1964 through 1992 and a decline thereafter.

What was the change in porn availability from 1960 to 2006? I’ve reviewed that topic too. Again, go there for the graphs and details. Looking just at Playboy to stand in for pornography generally, its circulation in 1960 was about 1 million and shot up to 7.2 million at its peak in 1972, dropping to 3 million by 1987, where it has stayed since. Population size isn’t the main factor here since the US population did not multiply by 7 between 1960 and 1972. There was an explosion in Playboy circulation, and even through the 1980s it was still 3 times as high as in 1960. Therefore, from 1960 to 1972, there was a surge in porn availability and a surge in the forcible rape rate. This much of the data contradicts the “more porn, less rape” idea.

But Playboy circulation dropped sharply from 1973 to 1987, and that didn’t cause the rape rate to drop. Its circulation has remained pretty steady since 1987, while the rape rate has steadily fallen since 1992. There are other data in the above post from the General Social Survey on what percent of men have watched an X-rated movie in the past year. Again there are no clear patterns that suggest an association with the forcible rape rate. If anything, the availability of porn has increased since the mid-late 1990s with the adoption of the internet. That might seem to support the “more porn, less rape” idea, since rape was falling — but it had peaked in 1992, about a half-decade before most guys had easy access to internet porn.

Putting all of the data together, it doesn’t look like there’s a relationship at all between availability of porn and the forcible rape rate. It’s trivial to choose a time period in which your preferred hypothesis pans out, but looking at the big picture is always more revealing. In this case, we discover a big let-down — neither side is right, and rape has little to do with porn. Debates like “porn and rape” or “poverty and crime” serve mostly as a full employment plan for gasbags. What if the two things aren’t related in the first place? Well, that’s a pretty boring debate — way to rain on our parade.

(Republished from GNXP.com by permission of author or representative)
 
• Category: History, Science • Tags: Crime, History, Porn 
🔊 Listen RSS

In the new issue of The New Yorker, Malcolm Gladwell reviews some book about using the appeal of FREE to grow your business. This is supposed to apply most strongly to information: the more of a firm’s product or service consists of information, the more it can use the appeal of FREE to earn money.

What both Gladwell and the reviewed book’s author, Chris Anderson, don’t seem to realize is that the appeal of FREE creates pathological behavior.

Gladwell even cites a revealing behavioral economics experiment by Dan Ariely:

Ariely offered a group of subjects a choice between two kinds of chocolate — Hershey’s Kisses, for one cent, and Lindt truffles, for fifteen cents. Three-quarters of the subjects chose the truffles. Then he redid the experiment, reducing the price of both chocolates by one cent. The Kisses were now free. What happened? The order of preference was reversed. Sixty-nine per cent of the subjects chose the Kisses. The price difference between the two chocolates was exactly the same, but that magic word “free” has the power to create a consumer stampede.

In other words, FREE caused people to choose an inferior product more than they would have if the prices were both positive. Thus, in a world where there is more FREE stuff, the quality of stuff will decline. It’s hard to believe that this needs to be pointed out. And again, this is not the same as prices declining because technology has become more efficient — prices are still above 0 in that case. FREE lives in a world of its own.

If you’re only trying to get people to buy your target product by packaging it with a FREE trinket, then that’s fine. You’re still selling something, but just drawing the customer in with FREE stuff. This jibes with another behavioral economics finding — that when two items A and B are similar to each other but very different from item C, all lying on the same utility curve, people ignore C because it’s hard to compare it to the alternatives. They end up hyper-comparing A and B since their features are so similar, and whichever one is marginally better wins.

So if you have three more or less equally useful products, A B and C, where B is essentially what A is, just with something FREE thrown in, people find it a no-brainer to choose B.

An exception to the rule of “FREE leads to lower quality” might be the products that result from dick-swinging competitions, where the producer will churn out lots of FREE stuff just to show how great they are at what they do. They’re concerned more with reputation than getting by. Academic work could be an example — lots of nerds post and critique scientific work at arXiv, PLoS, as well as the more quantitatively oriented blogs.

But in general, you can imagine the quality level you’d enjoy from a free car or an all-volunteer police force. Even sticking with just information, per Chris Anderson, look at what movies you can download without cost on a peer-to-peer site or whatever — they mostly all suck, being limited to the library of DVDs that geeks own. Sign up for NetFlix or a similar service, and you have access to a superior library of movies, and it hardly costs you anything — it’s just not FREE. Ditto for music files you can download cost-free from a P2P site vs. iTunes, or even buying the actual CD used from Amazon or eBay.

Admittedly I don’t know much about computer security, but just by extending the analogy of a voluntary police force, I’d wager that security software that costs anything is better than FREE or open source security software.

To summarize, though, Gladwell’s discussion about FREE misses the most important part — it tends to lower quality. I don’t want to live in a world of lower quality even for items that aren’t of major consequence, and (hopefully) the people in charge of high-consequence items like the police and my workplace’s computer security will never be persuaded to go for FREE crap in the first place. This aspect alone answers the question he poses in the sub-headline, “Is free the future?” However, wrapping your brain around the idea that FREE tends to lower quality is discordant with a Progressive worldview, which explains why Gladwell just doesn’t get it.

(Republished from GNXP.com by permission of author or representative)
 
🔊 Listen RSS

Some readers here may already follow the food-related stuff I write about at my personal blog. Well, to allow myself to write more about diet, nutrition, and food in general, I’ve started a new blog called Low Carb Art and Science. Lord knows there are already lots of blogs that deal with the topic, but this one will have lots more data and a stronger emphasis on evolution. But there will be plenty of less serious stuff and easy recipes too. Plus I’ll take an occasional interdisciplinary approach, as with an earlier post I wrote about the late Medieval shift away from carbs and toward meat.

The first post up is about the changing American diet and poorer health — except that the graphs show that the changing American diet has been one that’s rigidly adhered to what the health experts tell us to eat. The data weren’t hard to find, analyze, and present, but I’ve never seen them before, let alone in a clear-to-see visual format. If you doubted whether the anti-meat, pro-grain message was being followed or not, and if so, whether it was making us healthier — this will be a real eye-opener. Take-home lesson: eat more saturated fat and cholesterol, and less carbohydrates.

Comments closed here; comment over at Low Carb Art and Science.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Food, Health 
🔊 Listen RSS

Updated

This may be old hat for some readers, but it’s worth reviewing and providing some good new data for. The motivation is the idea, held by monopoly-haters, that when some company comes to dominate the market, it will have no incentive to change things — after all, it has already captured most of the audience. The response is that industries where invention is part of the companies’ raison d’etre attract dynamic people, including the executives.

And such people do not rest on their laurels once they’re free from competition — on the contrary, they exclaim, “FINALLY, we can breathe free and get around to all those weird projects we’d thought of, and not have to pander to the lowest common denominator just to stay afloat!” Of course, only some of those high-risk projects will become the next big thing, but a large number of trials is required to find highly improbable things. When companies are fighting each other tooth-and-nail, a single bad decision could sink them for good, which makes companies in highly competitive situations much more risk-averse. Conversely, when you control the market, you can make all sorts of investments that go nowhere and still survive — and it is this large number of attempts that boosts the expected number of successes.
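
The arithmetic behind that last point is worth making explicit. A toy calculation, where the 1% per-project success rate is purely an assumed number:

```python
# If each moonshot project has a 1% chance of becoming the next big thing,
# the payoff to placing many bets is dramatic.
for n in (10, 50, 300):
    p_any = 1 - 0.99 ** n
    print(f"{n:3d} projects: expected hits = {0.01 * n:4.1f}, "
          f"chance of at least one hit = {p_any:.0%}")
```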

With that said, let’s review just a little bit of history impressionistically, and then turn to a new dataset that confirms the qualitative picture.

Taking only a whirlwind tour through the pre-Information Age time period, we’ll just note that most major inventions could not have been born if the inventor had not been shielded from competitive market forces — usually by the protection of a monopolistic and rich political entity. Royal patronage is one example. And before the education bubble, there weren’t very many large research universities in your country where you could carry out research — for example, Oxford, Cambridge, and… well, that’s about it, stretching back 900 years. They don’t call it “the Ivory Tower” for nothing.

Looking a bit more at recent history, which is most relevant to any present debate we may have about the pros and cons of monopolies, just check out the Wikipedia article on Bell Labs, the research giant of AT&T that many considered the true Ivory Tower during its hey-day from roughly the 1940s through the early 1980s. From theoretical milestones such as the invention of information theory and cryptography, to concrete things like transistors, lasers, and cell phones, they invented the bulk of all the really cool shit since WWII. They were sued for antitrust violations in 1974, lost in 1982, and were broken up by 1984 or ’85. Notice that since then, not much has come out — not just from Bell Labs, but at all.

The same holds true for the Department of Defense, which invented the modern airliner and the internet, although it made large theoretical contributions too. For instance, the groundwork for information criteria — one of the biggest ideas to arise in modern statistics, which tries to measure the discrepancy between our scientific models and reality — was laid by two mathematicians working for the National Security Agency (Kullback and Leibler). And despite all the crowing you hear about the Military-Industrial Complex, only a pathetic share of federal spending actually goes to defense (which includes R&D) — most goes to human resources, AKA bureaucracy. Moreover, this trend goes back at least to the late 1960s. Here is a graph of how much of federal outlays goes to defense vs. human resources (from here, Table 3.1; 2008 and beyond are estimates):

There are artificial peaks during WWII and the Korean War, although it doesn’t decay very much during the 1950s and ’60s, the height of the Cold War and Vietnam War. Since roughly 1968, though, the chunk going to actual defense has plummeted pretty steadily. This downsizing of the state began long before Thatcher and Reagan were elected — apparently, they were jumping on a bandwagon that had already gained plenty of momentum. The key point is that the state began to give up its quasi-monopolistic role in doling out R&D dollars.

Update: I forgot! There is a finer-grained category called “General science, space, and technology,” which is probably the R&D that we care most about for the present purposes. Here is a graph of the percent of all Defense outlays that went to this category:

This picture is even clearer than that of overall defense spending. There’s a surge from the late 1950s up to 1966, a sharp drop until 1975, and a fairly steady level from then until now. This doesn’t alter the picture much, but removes some of the non-science-related noise from the signal. [End of update]

Putting together these two major sources of innovation — Bell Labs and the U.S. Defense Department — if our hypothesis is right, we should expect lots of major inventions during the 1950s and ’60s, even a decent amount during the 1940s and the 1970s, but virtually squat from the mid-1980s to the present. This reflects the time periods when they were more monopolistic vs. heavily downsized. What data can we use to test this?

Popular Mechanics just released a neat little book called Big Ideas: 100 Modern Inventions That Have Changed Our World. They include roughly 10 items in each of 10 categories: computers, leisure, communication, biology, convenience, medicine, transportation, building / manufacturing, household, and scientific research. They were arrived at by a group of around 20 people working at museums and universities. You can always quibble with these lists, but the really obvious entries are unlikely to get left out. There is no larger commentary in the book — just a narrow description of how each invention came to be — so it was not conceived with any particular hypothesis about invention in mind. They begin with the transistor in 1947 and go up to the present.

Pooling inventions across all categories, here is a graph of when these 100 big ideas were invented (using 5-year intervals):

What do you know? It’s exactly what we’d expected. The only outliers are the late-1990s data-points. But most of these seemed to me to reflect the authors’ grasping at straws to find anything in the past quarter-century worth mentioning. For example, they already included Sony’s Walkman (1979), but they also included the MP3 player (late 1990s) — leaving out Sony’s Discman (1984), an earlier portable player of digitally stored music. And remember, each category only gets about 10 entries to cover 60 years. Also, portable e-mail gets an entry, even though they already include “regular” e-mail. And I don’t know what Prozac (1995) is doing in the list of breakthroughs in medicine. Plus they included the hybrid electric car (1997) — it’s not even fully electric!

Still, some of the recent ones are deserved, such as cloning a sheep and sequencing the human genome. Overall, though, the pattern is pretty clear — we haven’t invented jackshit for the past 30 years. With the two main monopolistic Ivory Towers torn down — one private and one public — it’s no surprise to see innovation at a historic low. Indeed, the last entries in the building / manufacturing and household categories date back to 1969 and 1974, respectively.

On the plus side, Microsoft and Google are pretty monopolistic, and they’ve been delivering cool new stuff at low cost (often for free — and good free, not “home brew” free). But they’re nowhere near as large as Bell Labs or the DoD was back in the good ol’ days. I’m sure that once our elected leaders reflect on the reality of invention, they’ll do the right thing and pump more funds into ballooning the state, as well as encouraging Microsoft, Google, and Verizon to merge into the next incarnation of monopoly-era AT&T.

Maybe then we’ll get those fly-to-the-moon cars that we’ve been expecting for so long. I mean goddamn, it’s almost 2015 and we still don’t have a hoverboard.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Economics, History, Science • Tags: Economics, History, Politics, Technology 