Thorfinn @ GNXP Blogview


Via the Demography Matters blog, the Russian birthrate seems to have recovered:

By 2009, the official TFR had risen to 1.537, 1.417 in urban areas and 1.900 in rural areas. Both urban and rural TFRs rose by about the same amount from 2000 to 2009, about 0.330. Vital statistics for 2010 were just released by the national statistics office, GOSKOMSTAT, also known as ROSTAT. The birth rate continues to rise but not as sharply in the past two years as it did in 2007 and 2008. One must wonder if the slower increase in the past two years suggests the birth rate revival may be running out of steam or that it may be due to the global recession. But natural decrease is now but one-fourth of what it was in 2000 and that is a truly dramatic turnaround. The TFR can be estimated at about 1.56 for 2010 although we must wait for the official TFR when it is released later this year. Births for January 2011 have also been released and those are down slightly from January 2010, 131,454 from 132,371. One month hardly defines a trend but I thought I’d pass that along.

This is still below replacement, but it is substantially higher than the estimates from 2000, when the birth rate per woman bottomed out at roughly 1.2. At the time, everyone was extrapolating a near-certain downward spiral in births.

This brings to mind an article in Nature from a couple of years ago arguing that fertility follows a “J” curve with respect to human development. The graph plots fertility against human development (HDI) by country in two time periods:

That is, rather than fertility declining irreversibly with higher levels of development (which is what one might have thought in 1975, or in Russia through the 1990s), fertility appears to recover a bit at the highest levels of development. This doesn’t apply to all countries — Japan and Italy may have been left behind — but it partially explains the relatively high fertility rate of, say, native-born Americans. Explaining the drop in fertility with rising development is easy; explaining the subsequent rise is a little tougher. I see two basic options:

1) It’s important that the measure here is HDI, as opposed to GDP per capita. What’s crucial is the level of female empowerment. Where women have the option to work and raise children, they frequently do so. Where they cannot do so as easily (Germany, for instance, where a substantial cohort of women remain childless and attached to the workforce), women are simply forced to choose. It’s no coincidence that countries like Japan or Italy see plummeting fertility even at high levels of income.

2) This represents the optimal parenting strategy across income ranges. At Malthusian levels of income, additional income is spent on more children. As incomes rise, families start to face a “quantity/quality” tradeoff that leads them to invest more in fewer children. At yet higher levels of income, families are able to invest fully in multiple children.

It’ll be worth seeing whether some of the low-fertility countries out there today — particularly in Southern/Eastern Europe and Eastern Asia — recover. At some point, many countries will also start maxing out their HDI, and we’ll need another indicator. Perhaps people are reading Selfish Reasons to Have Children.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Demographics, Fertility 

Via Razib, I checked out Clive Finlayson’s The Humans Who Went Extinct. On the human migration to Australia, Finlayson writes:

The long-tailed macaque, primate beachcomber par excellence, can teach us another lesson. These monkeys have managed to establish viable populations on a number of remote islands over a wide area of south-east Asia. They even reached the Nicobar (south of the Andamans) and Philippine Islands, which were never connected to the mainland…

Nobody, to my knowledge, has suggested that these macaques had found ways of making canoes or other watercraft and they do not seem to have developed maritime navigation skills either. The simple combination of their habits, which often brought them close to drifting rafts, and chance allowed them to populate many distant islands. Yet when it comes to the dispersal of humans across these same islands and onto Australia the prerequisites in all accounts of the epic journeys are watercraft and navigation skills.

That is, rather than humans settling Australia by rafting across the long channel from East Timor, Finlayson suggests that humans were instead washed out to sea and onto New Guinea, in the manner of other species that have managed the same trick. Presumably his argument would also apply to other early human island-hopping events, such as on Crete. The tenor of the book rests on arguments such as this, which reject human triumphalism in favor of naturalistic primate comparisons.

To me, however, this adds to the puzzle of how there came to be hippos in Madagascar. Contrary to popular impression, adult hippos can neither swim nor float; they navigate in the water by pushing up from the bottom. Their daily dietary needs are quite large, involving the consumption of up to 150 lbs of grass a day.

So how on earth did they manage their way across the ~275 mi channel between Africa and Madagascar? This strait — at times miles deep — was sufficient to keep out the vast majority of African wildlife, preserving a largely endemic flora and fauna. Humans never made it across before inventing watercraft. There are a few islands in between, which at the present day largely lack the fresh water needed to sustain viable hippo populations. Only a handful of mammals have made the transit in the last 50 million years, most of which were fairly small in size.

And it’s not like this was a freak event. Hippos made it to several Mediterranean islands, establishing dwarf island populations. If anyone has insight on this pressing question, do let me know in the comments.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Dale and Krueger have responded to Robin Hanson at his blog, where he commented on their most recent paper. I’ve also commented on this paper, here.

Most of Dale and Krueger’s comments relate to the stability of estimates that suggest that women earn less after attending high-SAT Colleges. I don’t see particularly compelling evidence here either way, though Hanson is right to note that many of the estimates are consistent across specifications. I was surprised by their comment, “The paper is not about gender differences from college selectivity, and we have little reason to suspect that there are such differences.” Well, all three drafts of this paper that are online emphasize the results of attending College for various subgroups — for instance, by race, parental education, and parental income. Surely gender is an equally interesting subgroup.

They do also address the selectivity question — that is, why the Barron’s selectivity measure was large and statistically significant in the working paper, but not used in the published paper. They argue that the precise manner in which the Barron’s selectivity measures were coded made a huge difference, and that the result was important only for one specification. I’m happy to accept this answer. But as far as the “grand conspiracy” is concerned, I’ll note that even the published paper made a compelling case that both the identity of the school and the tuition paid were hugely important in determining future income. This result, for various reasons, may still have been incomplete. Yet it was the basic message of the published paper, and it’s simply the case that the popular press did not emphasize that result. For the record, I don’t think there was any conspiracy here. But it is awfully easy to trumpet the counter-intuitive but pleasing result — the College you went to doesn’t matter!

Also on the Barron’s measure, Dale and Krueger argue:

“While we did report a 23% return associated with attending the most selective colleges (according to the 1982 Barron’s ranking) in our earliest working paper, these results were from our basic model–which does NOT adjust for student unobserved characteristics.”

Here is the relevant section from Table 7 of their working paper:

If you haven’t seen a regression table, this will be confusing. The dependent variable — the outcome whose determinants they are testing — is the logarithm of the wage. They’re testing which of the variables listed on the left matter for that outcome, and each column represents a different specification.
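As a rough illustration of how such a specification is put together (not Dale and Krueger’s actual data or code — the dataset and column names below are invented placeholders), here is a minimal sketch of a log-wage regression with selectivity dummies and a couple of controls:

# Minimal sketch of the kind of specification shown in such a table: log wage
# regressed on Barron's selectivity categories plus controls. The data and
# column names are hypothetical placeholders, not Dale and Krueger's dataset.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 1000
df = pd.DataFrame({
    "wage": rng.lognormal(mean=10.5, sigma=0.6, size=n),
    "barrons": rng.choice(
        ["Most Competitive", "Highly Competitive", "Very Competitive"], size=n),
    "sat": rng.normal(1200, 150, size=n),
    "female": rng.integers(0, 2, size=n),
})

# Each "column" of a regression table is one specification; this is a basic model.
basic = smf.ols("np.log(wage) ~ C(barrons) + sat + female", data=df).fit()
print(basic.summary())

# A coefficient of, say, 0.23 on a selectivity dummy would be read as roughly a
# 23% wage premium relative to the omitted selectivity category, holding the
# other variables in the equation fixed.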

The first three columns select on men. The first one tests to see how these variables impact future wages, without taking into consideration other Colleges you applied to, or where you got in. This is the “basic model,” and the .0234 here next to “Most Competitive” corresponds to the 23% return they mention above (relative to the lowest category of selectivity). But skip over to column 3. This “self-revelation” model is designed to get at student unobserved characteristics. As the authors write:

“The effect of the Barron’s rating is more robust to our attempts to adjust for unobserved school selectivity than the average-school SAT score. Based on the straightforward regression results in column 1, men who attend the most competitive schools earn 23% more than men who attend very competitive colleges, other variables in the equation being equal. In the self-revelation model, the gap is 13 percent… [An] F-test of the null hypothesis that the Barron’s ratings jointly have no effect on earnings is rejected at the .05 level in the matched applicant model for men.”

Now, this was in response to Hanson’s point. Hanson picked up on the 23% number, and Dale and Krueger are right to note that it’s a little high (and Hanson is right to concede). But note that the very next sentence reports results from a specification that does adjust for student unobserved characteristics, and it is also quite high.

Finally, I’ll note that while the authors emphasize the significance (or lack of significance) for individual estimates in individual years, my simple calculations suggest that the aggregate, pooled effect of their variables might be quite large in economic importance.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

One of the topics I’ve covered here is the all-important issue of whether your choice of College matters for your future earnings. To recap: the best research in the field until a few days ago suggested that the returns to going to a more selective College were quite large, a result which was somehow interpreted by many to suggest the exact opposite.

The original result that kicked this off was a working paper by Dale and Krueger. They realized that simply comparing students who went to top schools with students who didn’t generates an obvious source of bias: students who go to highly ranked schools tend to earn more than others, but this may be due either to the impact of the school or to the personal characteristics that got them in to begin with. To correct for this, the authors compared students who got into top schools and chose to go with students who got into those same schools but decided to matriculate elsewhere. This is also not a perfect comparison, but it corrects substantially for this form of bias.

Their results suggested that something about the school was important. In the jargon, they ran a regression of future self-reported income against the identity of all 30 schools in their sample, and found that going to one school instead of another impacted your future income. Then they looked at the particular factors which might explain that, and found that what mattered was the tuition the school charged as well as its level of admissions selectivity as reported by Barron’s, but not the average SAT of the school.

Their published paper performed virtually the same calculations. They found that the choice of school mattered; the tuition charged mattered; and that the average SAT of the school did not. Bizarrely, they claim that the results of the test for Barron’s selectivity were no longer important, but they did not report any estimates from that specification (I’m not quite sure how that result could have changed, since the authors did not make any sample or specification changes between the two papers). In any case, even if the Barron’s selectivity measure doesn’t matter, it was clear that something about the choice of school matters, and that tuition charged is a good proxy for figuring out what that something is. In fact, their results suggest that every extra dollar of tuition provides something like a 13-15% internal real rate of return (down from a nominal 20-30% in the working paper). As is covered elsewhere, the results for SAT were highlighted, while the results for tuition were less discussed — even by the authors of the original paper.
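For readers unfamiliar with the internal-rate-of-return framing, here is a minimal sketch of what such a calculation involves — finding the discount rate at which an upfront cost and a later stream of gains break even. The cash flows are made up purely for illustration; they are not the numbers from either paper:

# Illustration of an internal rate of return (IRR): the discount rate at which
# an upfront cost plus a stream of later gains has zero net present value.
# The cash flows below are invented for illustration only.

def npv(rate, cashflows):
    """Net present value of cashflows[t] received at the end of year t."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def irr(cashflows, lo=-0.99, hi=10.0, tol=1e-8):
    """Find the rate where NPV crosses zero, by bisection (NPV falls as the rate rises)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if npv(mid, cashflows) > 0:
            lo = mid   # NPV still positive: the break-even rate is higher
        else:
            hi = mid
    return (lo + hi) / 2

# Hypothetical: pay $100 more in tuition today, earn $20 more per year for 25 years.
flows = [-100] + [20] * 25
print(f"IRR ~ {irr(flows):.1%}")   # roughly 20% in this made-up example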

Dale and Krueger are back with a new paper, which looks at another age group and also gets income data from the government as opposed to self-reported income. Given that the correlation between self-reported income and actual income is .90, you might expect the results to be quite similar. Certainly, this is what David Leonhardt suggests in his writeup. In fact, the results are rather different. The authors now claim that neither the average SAT of the school, the tuition it charges, nor its selectivity influences future income. I have a few quibbles with this paper:

1) Unlike prior versions of their study, in this paper Krueger and Dale don’t run a specification testing whether Colleges matter at all, as opposed to the particular variables of SAT, selectivity, or tuition. So even if the authors are correct in suggesting that, with the availability of new data and different age groups, none of their chief College selectivity variables predict future income — we don’t know whether some other aspect of College does. It’s possible that your choice of College matters even more than before, but in a different manner — i.e., tuition paid could be a worse measure today given widespread tuition inflation, the US News & World Report could have changed College rankings, and so forth.

2) In looking at why their results changed for this paper, Krueger and Dale find that their effects already diminish when using the sample of Colleges used for this paper as opposed to the sample from the old paper, and diminish even more when using government income rather than self-reported income. This tells us two things. One, the schools dropped for this paper (Denison, Hamilton, Kenyon, Rice, UNC) may matter a lot for future income, or else the inclusion of two historically black Colleges might affect the results. Second, it’s puzzling to think of why the results would change dramatically depending on the source of income. We know from other studies that individuals systematically under-report income both to surveys and in official government data. It’s not clear that the government data are “better” in the sense of giving a more accurate picture. The authors also exclude income received from capital gains, which doesn’t strike me as a good exclusion. Either students who went to elite schools lie more about their income, or they are better at hiding it from the government (or else receive more of it in the form of capital gains). All that we can seriously say is that the conclusion you draw depends enormously on the data source you use for income and the set of Colleges.

3) The results for both tuition and selectivity still show sizable effects for the 1976 cohort. Their Table 5 breaks out the effects of tuition and College selectivity by year. While none of these regressions are statistically significant on their own, the net effect is quite large. I applied the estimates on wages to the actual median wages in each time period (interpolating when the authors did not provide actual wage statistics); a rough sketch of this back-of-the-envelope aggregation appears after this list. I estimate that a one percent increase in 1976 tuition (perhaps $100 total over four years) results in roughly a two percent increase in overall compensation through 2007 (assuming that you work for all 24 years), or $43k in non-inflation-adjusted dollars. Alternatively, a category shift in the Barron’s selectivity criteria (i.e., from Highly Competitive to Most Competitive) is associated with $45k more in lifetime income. The effects of both selectivity and tuition grow over time, and are at their highest for wages observed in 2003-2007 (at this point, a one percent increase in tuition paid in 1976 gets you roughly $4k more per year). Presumably, this will rise even more by the time this cohort retires.

While the results from any one regression may not be statistically significant, that may simply be due to their sample size. The cumulative effect appears rather large in magnitude for both of the measures that were quite important in earlier drafts of the paper. This does change substantially when looking at the 1989 cohort, and it is very possible that College selectivity is less important today (or else that group has not been in the workforce long enough to measure an effect).

4) Robin Hanson has some good commentary as well, focusing on the fact that the estimate for average school SAT on female earnings is negative and statistically significant. He suggests women going to more prestigious schools marry high earners, and so feel less need to make money themselves. I’ll only note that the results are restricted to full-time earners, so it’s unlikely that this result is being generated by women withdrawing from the workplace altogether. The tuition/selectivity results above apply to a pooled sample of men and women, and so may imply even higher estimates for male workers.
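As promised in point 3, here is a rough sketch of that back-of-the-envelope aggregation: apply a per-year percentage effect on wages to the median wage in each year observed, then sum. The per-year effects and median wages below are placeholders standing in for the paper’s Table 5 estimates and the actual wage series:

# Rough sketch of the aggregation from point 3: apply a per-year percentage
# effect (from a 1% tuition increase) to the median wage in each year, then sum
# over the years of observed wages. Effects and wages are placeholders, not the
# actual Table 5 estimates or median wage statistics.

years = range(1984, 2008)                                            # 24 years of observed wages
median_wage = {y: 30_000 + 900 * (y - 1984) for y in years}          # placeholder medians
effect_per_year = {y: 0.005 + 0.0015 * (y - 1984) for y in years}    # placeholder % effects, growing over time

extra_earnings = sum(median_wage[y] * effect_per_year[y] for y in years)
total_earnings = sum(median_wage[y] for y in years)

print(f"Extra lifetime earnings: ${extra_earnings:,.0f}")
print(f"As a share of total earnings: {extra_earnings / total_earnings:.1%}")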

Anyway, go check out all the papers referenced here. My prior belief on this, created by the first two Krueger and Dale papers, was that the College you go to affects your earnings. This new paper shakes this belief somewhat, and I am now not sure either way. Unfortunately, this data isn’t released publicly, so I can’t check to see if the authors’ calculations hold up depending on how you cut the data. In any case, you probably shouldn’t be basing your choice of College on any study, and certainly not on this blog.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

The New York Times highlights the issue of hospitals opting not to hire smokers. It’s not clear how many places of employment are really banning smoking (or even how strictly such regulations will be enforced), but certainly there have at least been some high-profile cases (e.g., the Cleveland Clinic). One question that comes immediately to mind is: who still smokes? The NYT article includes this bit:

But the American Legacy Foundation, an antismoking nonprofit group, has warned that refusing to hire smokers who are otherwise qualified essentially punishes an addiction that is far more likely to afflict a janitor than a surgeon. (Indeed, of the first 14 applicants rejected since the policy went into effect in October at the University Medical Center in El Paso, Tex., one was applying to be a nurse and the rest for support positions.)

I had the impression that the remaining prevalence of smoking is strongly stratified by social class, geography, and education, and this study from the CDC confirms as much:

One of the biggest predictors of smoking is education. Interestingly, the least educated (<8 years) have a lower smoking rate than average, particularly if female. The rate rises with more education, peaking with GED holders (42%) and falling to a low of 7.2% for holders of graduate degrees. This confirms the pattern, seen elsewhere, that credentials matter as well as years of education: even among the group with 12 years of education, there is a large variance between those without a diploma (31%), those with a GED (42%), and those with a diploma only (25%). Similarly, there is a large difference between some college (23%, similar to high school graduates) and completing a degree (12%).

There is also a strong gender disparity among Asians — the smoking rate for Asian men is not much lower than the average (19%), yet among women, being Asian has a comparable effect to having a graduate degree (6.5% v. 6.4%). Asian countries also show these stark gender differences when it comes to smoking rates.

Income is another big factor — 24% smoking rate above the poverty line, 33% below. I checked this out a little more in the GSS. Here, you sometimes see the "inverse U" pattern as with education — the smoking rate for 1991 stays under 33% for the first few thousand dollars, goes up to the 40s-50s for the next few thousand, and then falls to 17% for the $75k+ crowd.
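For anyone who wants to replicate that kind of GSS cross-tab, here is a minimal sketch. It assumes a downloaded GSS extract with hypothetical column names (“smoke” coded 1 for smokers, “realinc” for household income); the actual GSS variable names and codings may differ:

# Minimal sketch of a GSS-style cross-tab: smoking rate by income bracket.
# Column names are hypothetical; the GSS extract must be downloaded separately.
import pandas as pd

def smoking_by_income(df: pd.DataFrame) -> pd.Series:
    brackets = [0, 10_000, 25_000, 50_000, 75_000, float("inf")]
    labels = ["<10k", "10-25k", "25-50k", "50-75k", "75k+"]
    df = df.assign(bracket=pd.cut(df["realinc"], bins=brackets, labels=labels))
    # Mean of a 0/1 smoking indicator within each bracket is the smoking rate.
    return df.groupby("bracket", observed=True)["smoke"].mean()

# Usage (with a previously downloaded extract):
# gss = pd.read_csv("gss_extract.csv")
# print(smoking_by_income(gss))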

Here's political affiliation:

I’ve seen this pattern a few other places as well — Independents differ on some criteria from both Republicans and Democrats. Their lack of a coherent political ideology is indicative of other traits.

Anyway, it does seem that the class concerns about such hiring bans are somewhat warranted. This is a policy unlikely to affect the doctors, surgeons, or administrators at hospitals — while it will impose a much heavier burden on less educated support staff (who are of course facing substantially higher unemployment rates now anyway).

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Governments are large or small depending on the level of trust and civic attitudes people have for one another. These attitudes shape people’s taste for redistribution and public ownership, and also affect the quality of governance. This position has been advanced by a large literature, most recently in this interesting paper put out by IZA.

Here’s a graph which gets at the central idea:

One key advance in this paper is isolating the non-linear nature of this relationship. Broadly, there are three clusters of countries here — Scandinavian countries (lots of government, high-trust, high-quality); Continental European countries (lots of government, low-trust, low-quality); and Anglo-Saxon countries (low levels of government, medium-trust, medium-quality).

One explanation of this result (provided in the paper) is the following: high levels of government spending can be sustained under two social equilibria. In the low-trust world, you have chronic mistrust and little civic-mindedness. Nevertheless, the fact that uncivic-minded people benefit from public services, but evade paying taxes, encourages more spending. High levels of corruption and low levels of public trust make the government work poorly. Yet individuals remain attached to the state, since in societies with a marked in-group bias it may remain a treasured source of largesse and security. Where everyone cheats, as in Greece, it makes sense to demand more for yourself and leave the bill for someone else.

On the other hand, you can also sustain a large and efficient welfare state when everyone is civic-minded and people typically do not shirk. High levels of trust allow individuals to coordinate the public provisioning of social insurance. Individuals are less likely to free-ride. I also wonder how to think about this in light of Amar Bhide’s book, which argues against robotic finance in favor of a more discretionary, case-by-case Hayekian approach. Bureaucrats can be trusted with discretionary power in high-trust societies, while in low-trust societies they either become corrupt or you have to resort to dumb regulatory rules.

Many Anglo-Saxon countries (and Japan) appear in the middle. They are not so full of shirkers demanding large public provisions; nor are they so trusting that they sustain a Nordic utopia. In the absence of higher levels of trust or pro-social attitudes, it seems plausible that a larger government in these countries would come up somewhere between Sweden and Italy in effectiveness.

It’s also interesting to examine social trust in developing countries, as Ajay Shah and Vijay Kelkar do here:

China comes out as a very high-trust society. One wonders whether its governance successes, if any, ought to be credited to the citizenry of China rather than the wonders of Chinese central planning.

Other countries come out looking much worse — the rest of the “BRICs,” for instance, plus Turkey and South Africa. As Arnold Kling and Nick Schulz point out, these countries have built governments much larger as a percentage of their economy than countries like Britain or America had at a comparable level of development. And as their levels of trust suggest, these governments are not particularly effective. Many social democrats expect these countries to build large welfare states as they grow richer, and it will be interesting to see how countries so large and distrusting will handle the challenge. Of course, there is substantial variation here, trust can change over time, the correlations are loose, etc. America, for example, has a small government even after taking its trust into consideration.

So how do countries generate a more cooperative citizenry? One suggestion comes from Garett Jones, economist extraordinaire, who argues that the best way to drive cooperation is to induce patience and perceptivity, which are in turn driven by higher IQs. Jones suggests that cooperation is in fact one of the channels through which IQ generates a large “social multiplier.” This multiplier refers to the observation that a two standard deviation increase in IQ increases a person’s wage by 30%, but increases a nation’s wage by 700%.
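One crude way to restate that “social multiplier,” using only the two figures quoted above (a 30% private return and a 700% national return to a two-standard-deviation IQ increase), is as the ratio of the national to the private return in log terms. This is a back-of-the-envelope restatement, not Jones’s own formulation:

% Back-of-the-envelope restatement of the social multiplier, using the figures
% quoted above (30% private return, 700% national return per 2 SD of IQ).
\[
  \underbrace{\frac{w_{\text{after}}}{w_{\text{before}}} = 1.3}_{\text{private return}}
  \qquad
  \underbrace{\frac{W_{\text{after}}}{W_{\text{before}}} = 8}_{\text{national return}}
  \qquad
  \text{multiplier} \;\approx\; \frac{\ln 8}{\ln 1.3} \;\approx\; 7.9
\]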

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Stanley Engerman and Kenneth Sokoloff famously argued that patterns of growth across the Americas can be traced back to historic levels of inequality. Natural factor endowments in certain areas (for instance, Caribbean islands) encouraged rent-seeking extraction over investments in human capital, and led to the political empowerment of rich landowners. These elites, in turn, created historical institutions that fostered economic coercion rather than entrepreneurship.

A recent paper by Melissa Dell looks into this thesis in more granular detail by examining the role of Peru’s mita system in sparking long-run development. Under the mita, certain local communities were forced to send one-seventh of their male population to work in Peru’s silver mines; other communities were exempt. Districts under the mita system now have 25% lower household consumption, pointing to a durable, long-run effect of this historical institution.

Dell’s explanation for this difference relies on the role of the large-scale hacienda estates that grew up in the areas outside the mita zone. These hacienda owners were able to lobby politically for public goods like roads. They were also able to shield inhabitants from the extractive levies of the state, and with established property rights they were better able to make long-term investments. On one level, these results confirm the point of view that “history matters.” But the manner in which history matters here is at odds with the traditional narrative — espoused by Engerman/Sokoloff and Oded Galor, among others — that land inequality is bad. In Peru, wealthy landowners appear to have left inhabitants better off (at least relative to other areas subject to extractive labor levies).

Interestingly, the opposite pattern can be found in India. Abhijit Banerjee and Lakshmi Iyer highlight the impact of different land tenure systems dating back to the British Raj. Some regions of British India fell under the zamindari system, in which government officials collected revenue directly from landlords. In other parts of India, village communities or individual farmers provided tax revenue.

Fortunately for their analysis, the particular form of land tenure adopted in British India depended more on the prevailing political ideology in Britain when the region was annexed than on local characteristics. For instance, Holt Mackenzie implemented an individual-based raiyatwari system in the Bombay Presidency under the influence of James Mill (father of John Stuart Mill).

Indian areas under landlord-based systems had persistently worse outcomes, especially after the Green Revolution dramatically raised agricultural yields. Non-landlord areas had 16% higher agricultural yields and applied 45% more fertilizer. This reinforces the Engerman/Sokoloff view that inequality of a sort entrenches a rentier class and harms long-run productivity.

Also, compare the above graph (which shows the various British Indian land tenure systems), with the one below, which shows districts in India facing a “naxalite,” or Maoist, insurgency:

This may just be me, but I see some sort of overlap here. This guerrilla insurgency is certainly fueled by resource extraction in hilly tribal areas, but as the graph suggests it is also strong in areas of high land inequality like Bihar.

The only common theme here is that the relation between inequality and growth is complicated. While an unequal landowner-based system was a boon in Peru, it may have hurt in India. Possibly, this is because the British Indian state was better able to protect small landowners from attacks by brigands, while a small-landowner lifestyle was simply unsustainable in Peru, where large landholdings provided a second-best solution to the problem of property rights and physical security.

It’s tempting to infer from these cases some general rule that can apply to today’s sky-high inequality. Yet the lesson from at least these two studies on Peru and India may provide some reasons for reassurance. In Peru, elite landowner dominance actually led to better outcomes. It did not in India, but the problem there was inequality caused by differences in endowments, not differences in earned income. When inequality fosters a rentier class that grows its status through economic coercion, inequality might be bad. But it’s not obvious that this is the case now. Many of today’s rich are “working rich,” and have made their income through entrepreneurial activities. Many plan on donating large portions of their income. To the extent we worry about such issues, ideal policies might target, say, copyright or other monopolies, as opposed to income inequality itself.

It’s also not clear how much about inequality we can learn from the Gini coefficient. It’s true that high-inequality Latin America had a high Gini coefficient, but so did comparable European societies without coercive economic institutions. The problem, according to one team of scholars, is that the total feasible inequality varies from society to society, given the fact that people have certain subsistence needs. They argue that the relevant comparison is not income inequality by itself, but rather the overall share of the surplus extracted by elites. Taking this into account, they find that Latin America historically has had a higher level of elite extraction than Europe.

America, too, comes off looking better on their measure. While its Gini coefficient as of 2000 is reasonably high by international standards, its inequality extraction ratio lies substantially below that of a number of developing countries like Brazil or South Africa.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Jim Manzi has a good reply up at TAS on our degree of medical knowledge, discussing an Atlantic article I also go into here.

While he makes a number of good points, I don’t think he quite addresses some of the issues raised by Robin Hanson and the original Atlantic piece. Manzi defends medicine in general; and this may serve as a useful corrective to those who believe that medical knowledge is completely useless. But with a few exceptions (maybe Robin Hanson), I don’t think many medical skeptics fall in that camp. Perhaps the quoted estimates of medical error are on the high end. But that doesn’t take away from the fact that there are serious issues in how medical knowledge is formed.

Take, for instance, several past Hanson posts. Doctors believe in breaking fevers, though there is no evidence that this helps. Flu shots also don’t seem to work. I’ve also mentioned how ulcers came to be declared a disease due to “stress,” when in fact they were clearly due to bacterial infection. Meanwhile, several large-scale tests of medicine use — from the RAND insurance study to the 2003 Medicare drug expansion — find minimal evidence that more medicine leads to better health.


I think our body of medical knowledge does illustrate how hard it can be to generate reliable knowledge, even in cases when we can easily run numerous experiments on a randomized basis. While Manzi emphasizes difficulties with long-term, behaviorally oriented interventions, the corpus of verifiable medical mistakes is quite large and runs across several fields.

Manzi also points to the difficulty of judging the effects of particular medical treatments when considering complex causal pathways and different lifestyle choices. This is a reasonable point — and one alluded to in the Atlantic piece as well (“Just remove all the meds”). But it is a point that goes against the tenor of his original essay — that randomized experiments can serve as a useful corrective in exactly the situations where we have causal density, where mere observational or correlational studies (even ones that “control” for various background characteristics) are insufficient to generate true knowledge.

If Manzi (of today) is correct, then the sheer complexity and chaos of understanding the functions of a human body defy the bounds of even randomized experiments, which by their nature are designed to test specific hypotheses and rarely examine the impact of numerous treatments in different settings. If this is the case, it’s difficult to imagine any scientific procedure that would reliably generate medical knowledge. It’s also difficult to see randomization as a silver bullet in the social sciences.

I haven’t even gone into the several ways (as Heckman emphasizes) in which randomized trials may be flawed. While held up as the “gold standard” for estimating treatment effects, randomization as practiced faces many limitations. Randomized trials cannot answer general-equilibrium questions, and they often provide estimates that only apply in limited domains, without broader generality. As the use of randomized experiments increases, scholars increasingly address questions that are easy to answer (e.g., do sumo wrestlers cheat?) rather than tougher questions that do not have easy solutions.

I’m not against randomized experiments. But I don’t think they will fix the Social Sciences, given that they haven’t fixed medicine. Rather, it seems entrenched forms of human bias plague both fields.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Courtesy of Robin Hanson, I see that The Atlantic has an excellent article on medical knowledge:

One of the researchers, a biostatistician named Georgia Salanti, fired up a laptop and projector and started to take the group through a study she and a few colleagues were completing that asked this question: were drug companies manipulating published research to make their drugs look good? Salanti ticked off data that seemed to indicate they were, but the other team members almost immediately started interrupting. One noted that Salanti’s study didn’t address the fact that drug-company research wasn’t measuring critically important “hard” outcomes for patients, such as survival versus death, and instead tended to measure “softer” outcomes, such as self-reported symptoms (“my chest doesn’t hurt as much today”). Another pointed out that Salanti’s study ignored the fact that when drug-company data seemed to show patients’ health improving, the data often failed to show that the drug was responsible, or that the improvement was more than marginal.

Salanti remained poised, as if the grilling were par for the course, and gamely acknowledged that the suggestions were all good—but a single study can’t prove everything, she said. Just as I was getting the sense that the data in drug studies were endlessly malleable, Ioannidis, who had mostly been listening, delivered what felt like a coup de grâce: wasn’t it possible, he asked, that drug companies were carefully selecting the topics of their studies—for example, comparing their new drugs against those already known to be inferior to others on the market—so that they were ahead of the game even before the data juggling began? “Maybe sometimes it’s the questions that are biased, not the answers,” he said, flashing a friendly smile. Everyone nodded. Though the results of drug studies often make newspaper headlines, you have to wonder whether they prove anything at all. Indeed, given the breadth of the potential problems raised at the meeting, can any medical-research studies be trusted?

This discussion reminded me of Jim Manzi’s earlier essay. There, he argued that the Social Sciences were so far behind the hard sciences because of the problem of causal density. Without the benefits of randomization and experimentation available in the physical sciences, it’s hard to figure out causality — or so Manzi argues.

Yet as the Atlantic article points out, having recourse to randomization isn’t sufficient to generate knowledge. The real problem there is experimenter bias. When there are large incentives to produce results in a particular way, those results tend to be published. Trial-and-error got us thousands of medical papers, but it appears that the vast majority of them are just wrong (one researcher above suggests 90%). In some cases, our knowledge even regresses over time. Ulcers are caused by bacteria, for instance, not stress — a fact that we used to know, but then somehow forgot.

An old joke goes that the Moon mission was a horrible idea, because it allowed every wiseguy to go “If the government can get us to the moon, why can’t it do X, Y, Z?”. Similarly, it looks like the immense success of physics has obscured how difficult it actually is to generate knowledge, even in cases where randomized experiments are possible.

As further demonstration of that, look at behavioral economics. This field was supposed to deal with problems of neo-classical economics by employing more realistic assumptions about human psychology. We recently got a good test of this theory with the stimulus — which Manzi already flags as an instance where we know less than we think. The 2009 ARRA stimulus incorporated reduced payroll withholding on the behavioral assumption that people would be more likely to spend money if the tax cut was not salient — if their paychecks simply got larger. I saw Bill Maher repeating this idea as if it were established fact.

One analysis suggests that was not the case — that the decision to administer the stimulus in the form of higher paychecks (as opposed to getting the money in a lump sum) resulted in far less spending. That’s billions of dollars in “wasted” taxpayer money as the result of behavioral economics research that turned out not to work — and we don’t know why.

I’d suggest that the reason why we think the certainty of our knowledge goes from Math > Physics > Chemistry > Biology > Economics > Psychology has as much to do with human fudge factors and politics as with the underlying difficulty of the material. This study by Daniele Fanelli, for instance, found that the softer sciences were more likely to report positive results, indicative of bias — either from the freedom to fudge results, or publication bias.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

As Benjamin Friedman laid out in The Moral Consequences of Economic Growth (http://www.amazon.com/Moral-Consequences-Economic-Growth/dp/0679448918), a tolerant, accepting society is predicated on running the growth treadmill. Simply being prosperous is not enough — people need to feel that conditions will steadily improve over time, or else populism, xenophobia, and other measures of intolerance go up.

So as we enter a “New Normal” phase where the steady economic growth and low unemployment of the Great Moderation can no longer be sustained, there will be substantial political upheaval as well.

One manifestation of this is the strongly anti-elitist attitude espoused by anti-establishment political candidates, among others. Barack Obama (Columbia, Harvard Law) and Sonia Sotomayor (Princeton, Yale Law), for instance, have been attacked for holding Ivy League credentials.

Whatever one thinks of anti-elitist attacks (here is someone against them, for instance), it seems worthwhile to point out that the anti-elitists are onto something. Top College attendance remains a path to influence and wealth.

A recent post of mine pointed out that this is very much the case for income. A new paper by Lauren Cohen and Christopher Malloy (both teaching, of course, at Harvard) shows that this is the case for political power in the Senate as well.

Harvard, in particular, comes off as a superstar — close to ten percent of Senators in the period covered (101st-110th Congress) had a degree from a certain school in Boston. This is again consistent with the income data, and suggests that Harvard graduates collect certain “rents” that are not properly accounted for in most economic models. In certain areas, I think education is better modeled as a fixed quantity like “land” rather than a dynamic agent of production such as “capital,” and ought to be taxed accordingly.

Of course, there is the fact that politically minded individuals may be more likely to go to Harvard, or else hail from families that benefit both from legacy admissions and from name recognition in political races (I’m looking at you, Rodney Frelinghuysen, whose family has been in politics since 1817). So it may be tougher to figure out causality here.

Still, what we can do is figure out the political effects of elite University dominance. The paper goes on to analyze the role of the social networks generated by College attendance in legislative behavior. Somewhat predictably, they find that politicians who attended the same school are more likely to vote with each other.

Aside from elites entering the Senate directly, wealthy Ivy League graduates influence the system indirectly. There is some evidence that the American system works on a principle of one-dollar-one-vote, which will of course disproportionately benefit elite University graduates. That, too, has political consequences. As the author of that link (Harvard PhD, teaching post at UChicago) points out:

However, the growth of the rich’s income relative to the mean in the US exceeded the growth of rich’s income relative to the mean in Europe. According to the one dollar, one vote theory of the welfare state, the faster growth of the rich’s income in the US allowed the rich to increase its political influence and tilt policy closer to its most preferred redistribution which involves a smaller welfare state. [emphasis added]

It seems that colleges are at the hub of this. Wages for the richest are rising in part because the best Colleges restrict entry. In turn, that allows their graduates to earn more, influence the political system to clamp down on demands for redistribution, and give back to their schools to get their own children a chance at the good life. I don’t want to push this story too much, but it strikes me as reasonable. If this is how the system works now, we might as well move to a system where one dollar buys you one vote, and at least get some public revenue from this.

All in all, the US Senate comes off resembling the Roman Senate to an extent — both are/were headed by elites who are broadly unrepresentative of the “people”, who hold large financial interests spanning the known world, have interlocking connections with other elite groups, are successful at intergenerational wealth transfer, and live in a highly unequal society. There is even some evidence that Congressmen use their inside information to beat the stock market. Arguably, one of the biggest differences is that Roman Senators were far more involved in military service.

Americans aren’t stupid. They recognize this. They’re just far less willing to put up with it when times are bad for everyone, and don’t look as though they’re getting better soon. I’d guess that the Harvard share of Senators is about to go down. I’m with Tyler Cowen — American political economy is built like a shark and cannot function without continued growth.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Razib’s link to the discovery of a new mammal species in Madagascar makes the following point:

This species is probably the carnivore with one of the smallest ranges in the world, and likely to be one of the most threatened. The Lac Alaotra wetlands are under considerable pressure, and only urgent conservation work to make this species a flagship for conservation will prevent its extinction.

In the case of this particular species, that makes sense. This cat/rat looking thing appears to inhabit a threatened microclimate, and so faces a high risk of extinction.

You hear statements like this all of the time, and not always with respect to animals inhabiting odd niches. For instance, we are told that ongoing destruction of the Amazon jungle is a bad thing, as it will result in the extinction of more species.

I can’t vouch for anyone else, but my general impression in reading these press releases is that the relationship between habitat destruction and animal extinction is roughly linear. The more habitat we destroy, the more animals go extinct.

But if you think about this, it doesn’t quite make sense. A little more habitat destruction in, say, the Amazon will only erode a bit of jungle at the margins. Many animals should be able to migrate elsewhere; immobile species should still presumably have counterparts elsewhere. The only way a bit of additional logging will result in species extinction is if the area in question happens to be a unique microclimate, home to species found nowhere else.

And that seems to be an unreasonable model for the Amazon as a whole. To be sure, there may be certain areas of extreme biodiversity. But the Amazon jungle in general appears to be a reasonably homogeneous environment, home to species that are reasonably spread out throughout the breadth of the area. To the extent species are not, this may reflect the presence of subspecies with little differentiation.

So I might instead expect habitat destruction to follow a non-linear pattern with respect to extinction in a unique environment. The first few trees you chop down may do little damage; the last several may eliminate the last survivors of a number of species.
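A standard way to formalize this nonlinearity is the species-area relationship, S = cA^z, under which species richness scales much less than proportionally with habitat area. The sketch below uses z = 0.25, a commonly cited ballpark figure; treating a whole biome as following a single species-area curve is of course an illustrative simplification, not a claim about the Amazon specifically:

# Species-area relationship S = c * A**z: species richness rises (and falls)
# with habitat area, but much less than proportionally. z ~ 0.25 is a commonly
# cited ballpark; a single curve for a whole biome is a simplification.

def species_remaining(fraction_of_habitat_left: float, z: float = 0.25) -> float:
    """Fraction of original species expected to persist under S = c * A**z."""
    return fraction_of_habitat_left ** z

for kept in (0.9, 0.5, 0.1, 0.02):
    print(f"habitat kept: {kept:>4.0%}  ->  species kept: ~{species_remaining(kept):.0%}")

# Under this curve, losing the first 10% of habitat costs only ~3% of species,
# while the marginal cost per hectare rises steeply as the remaining patch
# shrinks toward zero — the nonlinearity described above.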

Bjorn Lomborg has made similar arguments in the past — that large amounts of habitat destruction (even in the range of 98-99 percent in the cases of Puerto Rico and the Eastern US) do not result in substantial animal extinctions, as long as some of the environment remains intact. Though I understand this remains a hotly disputed claim. The “unknown unknown” problem is bad here — we don’t know what species we are unknowingly killing off.

Meanwhile, the continued existence of various species even in the event of large natural climate disasters has been taken as evidence for the existence of various refugia that sustain whole ecosystems. For instance, glaciation appears to have disrupted various tropical forests, yet many animals survived — though with odd geographic dispersion patterns. The Toba volcano eruption (http://en.wikipedia.org/wiki/Toba_catastrophe_theory) ~70k years ago spread massive layers of ash throughout South and Southeast Asia, yet the nearby Mentawai islands appear to have suffered no species loss. On the other hand, the introduction of humans (even when it does not result in massive environmental change) seems to be clearly linked to mass megafauna extinction. That would seem to suggest that “poaching” is far more destructive than cutting down a few trees.

If this is true, then the ongoing destruction of the Amazon, say, would not by itself be particularly troubling on the grounds of animal extinction. As long as we keep sufficiently large patches of the original jungle (or several different patches if we think there are a few microclimates), we should be able to keep the vast majority of the existing species. Even if some large fraction of the jungle is required to maintain a water cycle, presumably that is less forest than currently exists. We already recognize that some areas are more important conservation-wise than others (e.g., coral reefs v. Russian forests). It may also be more important to maintain the existence of unique refugia environments (even in sharply reduced form) than the quantity of overall protected space.

But I realize this isn’t my area of specialty, and I would like to hear back from more knowledgeable commenters. Hopefully, you’re smart enough to see that my point isn’t to support rainforest destruction. I’d also like to hear back from anyone who can tell me how much a species is “worth.” While I’m opposed to animal extinction, I don’t have a good sense of why, other than a vague sense that it’s wrong and a desire to see more National Geographic specials. Yes, I understand that there are some pharmaceutical applications, but that doesn’t seem to be a good reason by itself to maintain large non-human-occupied environments. Tourism is more plausible, but presumably only redirects tourist dollars from some alternate attraction.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

The Incidental Economist has an excellent series by Aaron Carroll and Austin Frakt on what makes the US Health system so expensive. Here’s the killer graph:

What’s doing a lot of work here is “adjusting for relative wealth.” This phrase pops up a lot in their series, which finds that America spends more on healthcare in all sorts of categories, even when “adjusted” for wealth.

Yet it’s interesting that wealthier countries seem to spend more on healthcare in general, even correcting for purchasing power parity. This adjustment is designed to correct for differences in relative prices between countries, and is based on the principle that trade should equalize the price of transportable goods across countries.

This clearly does not apply in the case of healthcare, which gets progressively more expensive with relative wealth. That’s because healthcare, unlike a typical tradable good, is a service highly intensive in local skilled labor. You need highly trained doctors and technicians to deliver healthcare, and the general level of prices of those inputs grows with relative income. For instance, barbers earn more in richer countries than in poor ones, not because they are more productive, but because they can implicitly draw upon the greater productive resources in the richer society.

As countries get richer, they face Baumol’s cost disease. Tradable goods like manufactured goods experience large gains in productivity that steadily lower consumer costs and required labor inputs. These sectors of the economy steadily shrink as countries get richer, much as agriculture went from employing 70-80% of the US population in 1870 to 2-3% today. On the other hand, service- and labor-intensive industries tend to see rising costs and grow to dominate the economy over time.

Yet while this would predict rising health costs with income, it doesn’t say whether that trend should be linear or quadratic or what have you. The regression line drawn in the graph assumes something like a linear trend of health spending with respect to per capita GDP. While this may be reasonable for the cluster of European countries in the middle of the graph, it’s not at all clear that it extrapolates to America, an outlier in terms of income.
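The comparison being suggested here can be made concrete: fit both a linear and a constant-elasticity (log-log) model of per-capita health spending against per-capita GDP on the non-US OECD countries, then see how differently the two extrapolate at US income levels. This is a minimal sketch with hypothetical column names ("gdp_pc", "health_pc", "country"); the OECD extract has to be loaded separately:

# Minimal sketch: fit a linear and a log-log (constant-elasticity) model of
# health spending vs. GDP per capita on non-US OECD countries, then compare
# their extrapolations at US income levels. Column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

def compare_extrapolations(oecd: pd.DataFrame, us_gdp_pc: float):
    rest = oecd[oecd["country"] != "USA"]

    linear = smf.ols("health_pc ~ gdp_pc", data=rest).fit()
    loglog = smf.ols("np.log(health_pc) ~ np.log(gdp_pc)", data=rest).fit()

    new = pd.DataFrame({"gdp_pc": [us_gdp_pc]})
    pred_linear = float(np.asarray(linear.predict(new))[0])
    pred_loglog = float(np.exp(np.asarray(loglog.predict(new))[0]))  # back out of logs

    # An elasticity above 1 means the health spending share rises with income.
    elasticity = loglog.params["np.log(gdp_pc)"]
    return pred_linear, pred_loglog, elasticity

# Usage (with a previously loaded OECD extract):
# oecd = pd.read_csv("oecd_health.csv")
# print(compare_extrapolations(oecd, us_gdp_pc=47_000))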

In the absence of a model for health spending, which doesn’t seem to be provided, there’s no reason to assume a roughly linear relationship holds beyond the other OECD countries. Here’s another sector with some similar characteristics to healthcare: higher education among OECD countries:

Like healthcare, tertiary education is a sector intensive in the use of highly skilled labor, expensive buildings, and land in prime locations. The relationship between expenditure per student and PPP-adjusted GDP appears to be non-linear, suggesting that societies spend proportionately more on these education expenses as they grow richer.

To be sure, there are also differences. American higher education spending is driven upwards by large resources spent on research, as opposed to pure student expenses, though this may be the case to an extent for healthcare as well. But America spends far more on education than would be “accounted for” by this simple regression model. Yet we don’t see a great deal of worry about where this excess spending is coming from, or where all the waste is in College education.

With the exception of the people at Cato, few analysts assume that the variance in the costs of providing education can be attributed largely, or even partially, to wastage. Economists typically assume that differences in relative prices ultimately reflect differences in productivity or product type valued at prices determined by supply and demand. Yet in the case of healthcare, economists tend to treat cost variation as a problem to be addressed by various solutions, whether left- or right-wing in nature.

There’s a similar situation within the US. The Dartmouth study examines variation in Medicare reimbursements around the country. The implicit assumption in their raw figures is that all the variation in costs around the country can be attributed to “wastage.” As Richard Cooper explains:

Regional variation is a product of regional differences in wealth, overlaid with differences in poverty. It’s not generally appreciated that health care expenditures for people in the lowest 15% of income are 50% to 100% greater than for people of average income. There’s also a difference at the high end. The wealthiest 15% also consume more, but only about 20% more. So there’s greater utilization at both ends of the income spectrum, but for different reasons and with different outcomes.

More spending at the high end improves outcomes, not simply for a specific condition but across the board, because the care consists of a broader spectrum of beneficial services. More yields more. But among the low-income patients, outcomes are poor despite the added spending. In fact, the added spending is because of poor outcomes – more readmissions, more care for disease that’s out of control.

And these differences are exaggerated in dense urban environments, like Detroit, Chicago and Philadelphia. Now, when you blend all of this into “regional” studies, which average rich and poor, urban density and ex-urban comfort, racial and ethnic groups, you get just what you’d expect. High costs with average outcomes in urban areas (the average of excellent and poor outcomes at different ends of the income spectrum)…

And you would expect that things would be very different in these vastly different “regions.” But everything was the same. Quality, access, satisfaction, even mortality. When things were average in each of these heterogeneous groups, it was all the same. Differences were not discerned because differences were not discernable. But, then, if they had, what could be made of it. Why would anyone want to know how Newark compares with Nebraska?

Most people I went to school with would have said, “Oh gee, there are no differences, I must have done something wrong.” But not the Dartmouth crowd. They said that because differences were not found despite all of that extra spending, the extra money must have been wasted. And if health care could be the same in both regions, the US could spend 30% less.

As the recent closure of St. Vincent’s shows, hospitals in urban environments face unique cost challenges. These reflect different prices for urban land, higher costs of living in urban areas, and the particular demographic characteristics of local populations. Such factors explain, for instance, why education spending varies so dramatically around the country. Yet only in healthcare do we interpret these cost differences through the lens of “why are we spending so much?” rather than “what regional factors would cause prices to differ?”

Aside from inherent differences in input costs, price levels, and population characteristics, there are several other reasons to expect that healthcare will consume an increasing share of national output:

1) Healthcare is consumption. Suppose that one found that America spent more on automobiles than France. This difference might be due to underlying inefficiencies in American car production. Alternatively, it might be that Americans simply prefer to buy more expensive cars (more BMWs than Kias). Perhaps even the same amount of “driving” is going on. Yet purchasing luxury cars still counts as a form of consumption.

To draw the analogy to healthcare: it seems that at least part of America’s higher healthcare spending goes to things that do not improve health outcomes but do improve patient experience. Single-patient rooms and flat-screen TVs are more common now, for instance (even if hospital food remains poor). In higher education, too, much of the higher spending appears to go to capital improvements that yield more gyms, better dorms, and better facilities. These forms of spending do not improve health or education outcomes, narrowly defined. Yet they should still count as improved consumption. Medicine is not about healthcare; education is not about learning, etc.

After all, what should increasingly prosperous Americans spend their money on? We’re headed for a society that produces all of its goods and food using under a tenth of the workforce. Everyone else is a service worker. What’s wrong with spending more and more resources on health services? Why is that any worse than spending money on flat screen TVs or concert musicians?

2) Health Spending Provides Value. I’m hoping to read Aaron and Austin’s future posts on outcomes. But certainly there is evidence to suggest that higher health spending results in better health outcomes.

The Dartmouth Atlas found otherwise: Medicare spending had no correlation with health outcomes. Setting aside the endogeneity involved (you might expect areas with greater health needs to require higher spending), this appears to be a result isolated to Medicare spending. Richard Cooper suggests that some cost-shifting is going on, as states tend to bill Medicare in order to recover losses from other health programs. He finds instead that overall health spending is positively correlated with health outcomes.

Of course, America overall has relatively poor life expectancy statistics compared to other countries that spend less on healthcare. Yet as Samuel Preston and Jessica Ho find, differences in life expectancy reflect differences in health systems as well as differences in public health and individual choices. For instance, they cite a study finding that:

[I]f deaths attributable to smoking were eliminated, the ranking of US men in life expectancy at age 50 among 20 OECD countries would improve from 14th to 9th, while US women would move from 18th to 7th.

Of course, as Aaron and Austin point out, America has relatively low smoking rates today. Yet America’s per capita cigarette consumption was once among the world’s highest, leaving a durable mortality burden.

To be sure, the authors find that America has a higher prevalence of various diseases. But this could reflect differences in medical treatment, differences in detection, or longer survival with those diseases.

To address these issues, Preston and Ho focus on diseases, like cancer, that are likely to be similarly diagnosed across countries. They find that America has developed relatively effective methods of treating and curing such chronic diseases compared to other lower-spending countries. Patients with cardiovascular diseases, too, appear to fare better in America than in Europe.

Another rigorous study tackling these issues comes from Joseph Doyle at MIT. In order to address the endogeneity issue (that health costs may be high exactly because health needs are high), he examines the medical treatment of tourists in Florida. Tourists don’t tend to choose to have heart attacks based on the spending rates of local hospitals, so they can be assumed to be randomly assigned to high and low cost hospitals.

The key result is that high-cost hospitals delivered better outcomes for out-of-town tourists, but not for locals. The interpretation here is that higher medical spending is associated with better health outcomes. However, this is not a result you would find simply by examining how hospitals treat local patients. High-cost hospitals spend a lot exactly in order to deal with sicker patients, so simple correlations between health spending and health outcomes make that spending look “wasted.”
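The identification idea can be sketched in a few lines; the mortality rates below are invented for illustration, not Doyle’s results. The point is to compare high- and low-spending hospitals separately for visitors, who are plausibly assigned as good as randomly, and for locals, whose sorting by severity masks the benefit:

```python
# Sketch of the visitor-vs-local comparison; all rates are invented.
mortality = {
    ("visitor", "high_cost"): 0.030,
    ("visitor", "low_cost"):  0.042,
    ("local",   "high_cost"): 0.050,  # locals at high-cost hospitals are sicker to begin with
    ("local",   "low_cost"):  0.048,
}

# Among visitors (as good as randomly assigned), high-cost hospitals look better.
visitor_gap = mortality[("visitor", "low_cost")] - mortality[("visitor", "high_cost")]
# Among locals, patient selection hides most of that benefit.
local_gap = mortality[("local", "low_cost")] - mortality[("local", "high_cost")]

print(f"visitor mortality gap (low-cost minus high-cost): {visitor_gap:+.3f}")
print(f"local mortality gap   (low-cost minus high-cost): {local_gap:+.3f}")
```

The visitor comparison isolates the effect of the hospital; the local comparison mixes it with patient selection.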

Of course, this research does not show that better health outcomes are worth their higher price. Yet research by Kevin Murphy and Robert Topel at the University of Chicago suggests that the value of health improvements is large. They find that the gains in longevity due to health improvements were worth roughly $2.8 trillion a year between 1970 and 1990. Gary Becker, for one, suggests that even modest gains in longevity as a result of higher medical spending may prove worthwhile given conventional estimates of the value of life.

Still, I would probably support many of Aaron and Austin’s proposed reforms. I agree with them that the RAND Health Insurance Experiment is a landmark study. I would be happy to voucher-ize both health and education spending, and I would expect those measures to cut costs. There may even be enough room to cut medical spending in half.

Where I disagree is in our ability to actually deliver these reforms and make them work through government action. The fact that Aaron and Austin don’t isolate America’s higher health spending into any one category suggests to me that it ultimately comes down to higher consumption of healthcare in general, valued at prevailing prices. The fact that Medicare spends about as much on the elderly alone as many other countries do on their entire health systems suggests that pure government control is not the solution. Can we sustainably cut these costs over time? I’m skeptical. I don’t think our higher education costs are coming down anytime soon either. I think we’re headed for durable increases in health spending, and that’s not the worst thing in the world.

However, the fact that medical costs are disproportionately borne by the public sector means that dealing with permanently high medical spending will require either a) shifting these costs to consumers, b) coping with a permanently larger government, or c) fiscal collapse. Like Aaron and Austin, I would prefer to see a) happen. But I think b) and c) are more likely.

(Republished from GNXP.com by permission of author or representative)
 

Mark Palko at the excellent blog Observational Epidemiology has a post arguing that the appeal of Ivy League degrees rests primarily on peer effects and selection. It’s popular these days to sneer at the Ivies. For instance, this WSJ article argues that State schools have an edge in business recruitment.

Still, it’s worth looking at the data. I’ll follow Robin Hanson’s idea that the best place to find an estimate of “X” is to find a paper that looks for “Y”, but controls for X.

A study that got a lot of press recently looked at the impact of kindergarten teachers on future economic outcomes. The focus was on how different classroom treatments left long-run impacts, but the slides accompanying the paper also had this interesting result (not found in the paper itself):

Clearly, going to a top-ranked school seems to deliver far higher earnings at age 28 than a lower-ranked school does. In fact, the relationship is highly non-linear. Contrary to what you may have heard (“All top-ranked schools are the same”), it looks like the difference between top-ranked Harvard and 9th-ranked Dartmouth is on the order of ~$4,000 a year (perhaps $100,000-$200,000 over the course of a lifetime?). That difference grows to something like $18,000 a year over 25th-ranked UCLA. However, by the time one gets down to the 75th-ranked school, rank doesn’t much matter anymore: you’re pulling in ~$43k regardless. In fact, these are likely under-estimates of the value of going to a top school. Many elite graduates are still in graduate studies at age 28, and earnings tend to increase as people hit prime working years.
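As a rough check on the lifetime figure in that parenthetical (my own arithmetic, not the study’s), an annual gap sustained over a career can be valued with and without discounting; the 35-year horizon and 3% rate are assumptions:

```python
# Back-of-the-envelope value of a persistent annual earnings gap.
# The 35-year career and 3% discount rate are assumptions of mine.
def lifetime_value(annual_gap, years=35, discount=0.03):
    undiscounted = annual_gap * years
    discounted = sum(annual_gap / (1 + discount) ** t for t in range(years))
    return undiscounted, discounted

for gap in (4_000, 18_000):
    plain, pv = lifetime_value(gap)
    print(f"${gap:,}/yr gap: ${plain:,.0f} undiscounted, ${pv:,.0f} at a 3% discount rate")
```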

The pure monetary benefits of going to a top-ranked school (including peer and selection effects) are very substantial, and they rise more than proportionately with the rank of the school: the marginal benefit of getting into the next-highest-ranked school is actually larger the higher your current school’s rank. In other words, Yale grads should really, really want to go to Harvard. A very rough calculation suggests that everyone who turns down a top-ranked school for a safety school in order to avoid student loans is making a big mistake (though this is the same type of rough calculation people make when they conclude that there is a high College premium in general).

(Also check out the earnings of those from schools ranked 75 and lower. High school graduates can make perhaps ~$30k a year. So those guys just gave up ~$120k in earnings, plus paid College tuition, for the privilege of making ~$44k by age 28. How big is that College premium again?)
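That parenthetical can be checked with a quick break-even calculation; every input below is an illustrative assumption of mine, the tuition figure especially:

```python
# Rough break-even check on the College premium question above.
# All inputs are illustrative assumptions, not data from the study.
hs_wage = 30_000          # plausible high-school graduate earnings
grad_wage = 44_000        # earnings at a school ranked ~75th or below
years_in_college = 4
net_tuition = 10_000      # hypothetical annual tuition after aid

upfront_cost = years_in_college * (hs_wage + net_tuition)  # foregone wages plus tuition
annual_premium = grad_wage - hs_wage

print(f"up-front cost: ${upfront_cost:,.0f}")
print(f"annual premium: ${annual_premium:,.0f}")
print(f"years to break even (ignoring discounting and wage growth): "
      f"{upfront_cost / annual_premium:.1f}")
```

Even ignoring discounting, the premium takes over a decade to pay back the up-front cost.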

Of course, the large effects of going to a top-ranked school could be coming purely from those peer and selection effects. Another paper, by Dale and Krueger, tackles these issues. They have data on people accepted to both selective and non-selective schools. By tracing the incomes of people who were accepted at both Harvard and Mass U but chose Harvard, against those accepted at both who chose Mass U, they are able to better separate the effect that going to Harvard has on your earnings from the effect of simply being the type of person who can get into Harvard.

If you check out their working paper (these results do not appear in subsequent versions of the paper or in media reports), they find that the “Selectivity” of the College you ultimately attend, as judged by Barron’s, matters a great deal in explaining the variance of future income, though the average SAT score of the school does not.

One simple interpretation is that peer and selection effects don’t matter as much as you might think. Just being the type of person who can get into Harvard isn’t enough; you need to actually go to Harvard to collect the bonus. And having lots of high-SAT-score friends at College doesn’t seem to help (though, interestingly, the kindergarten study above did find evidence of peer effects. Your kindergarten pals matter for your future earnings, but your College roommate doesn’t?).

Rather — it’s selectivity that delivers the extra premium, consistent with the findings above. Something about actually going to an Ivy League school — the connections, the signal — imparts the sizable bonus. Potentially, they could even be learning more, but I think we all know that’s not true.

I don’t want to push on this too much. Obviously, the type of person accepted to both Harvard and Mass U, but goes to Harvard, is going to be different from the person who chose Mass U in the same situation. And SAT scores don’t constitute the full universe of peer effects. Arguably, going to a highly selective school where everyone was able to pass through a very tough selection process produces a peer effect stronger than simply going to a school of lazy smart people.

But certainly I think this research points to strong and durable rents being earned by graduates of elite Universities. That is what makes Admissions processes that focus so strongly on legacy status so pernicious. If Dan Ariely is really concerned about inequality, perhaps he should complain more about his own University’s Admissions processes. There are few forces in the country so strongly promotive of inter-generational wealth accumulation as College Admissions practices, particularly at the top end of the wealth distribution, yet they remain badly under-discussed whenever these topics come up.

(Republished from GNXP.com by permission of author or representative)
 

Razib has an excellent post with information familiar to India-watchers: India is very diverse. In particular, it has a South that does well on human development indicators (and increasingly on income as well), while the states of the “BIMARU” North perform abysmally on both economic and human development measures. These kinds of disparities are frequently overlooked by commentators, both in India and elsewhere, whose primary analytical unit is the nation-state.

Will Wilkinson has dubbed a related phenomenon the “UN Fallacy”: the error of assuming that two areas can be usefully compared simply because they are nation-states. So, for instance, you hear nonsense about how “China has overtaken Japan.” Of course, on a per capita basis China remains poorer than El Salvador. Yet because the Chinese have aggregated themselves into a relatively large political unit, we think of the Chinese as “getting rich” and the Salvadorans as “poor.” We think of India as surging ahead, though it has more poor people than Africa.

Of course, the role of geography intersects with numerous other determinants of inequality in India. Yet rough geography remains an enormous predictor of income and life status. For instance, tribals — a broad category referring to various unassimilated groups — are among India’s poorest populations. Yet tribal income in the (relatively prosperous) hill states is the same as the income for an upper caste in a poor state. Dalits — former “untouchables” — make almost as much in rich states as upper castes do in poor states, and earn substantially more in urban environments than upper castes do in rural areas (these statistics are from Sunil Jain’s excellent book).

Also interesting are the fertility differences. Several South Indian states are already below replacement fertility, while women average around 4 children in poor northern states like Uttar Pradesh and Bihar, which would lead to a doubling of the population every generation.
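The arithmetic behind “doubling every generation” is straightforward: with roughly two children needed to replace each couple, a total fertility rate near 4 makes each generation about twice the size of the last. A crude sketch, ignoring mortality shifts and age structure:

```python
# Crude generational arithmetic: TFR relative to the ~2.1 replacement level.
# Ignores mortality shifts, sex-ratio skew, and age-structure effects.
def generation_ratio(tfr, replacement=2.1):
    """Approximate size of the next generation relative to the current one."""
    return tfr / replacement

for tfr in (1.8, 2.1, 4.0):
    print(f"TFR {tfr}: next generation is ~{generation_ratio(tfr):.1f}x the current one")
```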

This means that India’s demographic dividend could easily become a bust. The demographic dividend is the idea that countries transitioning from a high-fertility to a low-fertility equilibrium enjoy a temporary sweet spot in which a high proportion of working-age adults can drive an economic transformation. India, as a whole, is going through this process. But it is not doing so region by region. The excess labor is, overwhelmingly, coming from poor, undereducated, and underfed states. The jobs, overwhelmingly, are located in richer and coastal regions.

Bihar, for instance, is a state of over 80 million people with a per capita income of around $150 (no, that’s not a typo: Bihar’s per capita income is lower than India’s by a factor of perhaps 6), yet industry accounts for less than 10% of its GDP. 58% of Biharis are below the age of 25, but the only way they are going to contribute meaningfully to India’s economy is if they move elsewhere, or if industry magically pops up within the state.

TeamLease has carefully documented the impact of this geographical mismatch. Between 2010 and 2020, the states of Uttar Pradesh, Bihar, and Madhya Pradesh will account for 40% of the increase in 15-59 year olds but only 10% of the increase in GDP. Four richer western and southern states will account for 45% of the increase in income but only 20% of the increase in the workforce.

These economic differences have already sparked large increases in internal migration, and will presumably continue to do so in the future. However, this generates political pressures (for instance, the rise of ethno-linguistic chauvinist parties in rich Mumbai protesting against Bihari migrants, among others), and migration on the scale required is not a feasible option.

One comment I want to add in the Indian context is that even looking at states can be misleading. In part due to robust sub-national loyalties, India’s administrative divisions are hugely diverse and contain within them areas of extreme poverty. A state like Uttar Pradesh has close to 200 million people, which would make it the world’s sixth-largest country on its own. It, too, is home to a bewildering variety.

For instance, the district of Hardoi in Uttar Pradesh had (in 1991) a population of 2.7 million and an infant mortality rate of 129 per 1,000 births. Compare this to Guinea-Bissau, a country with 1 million people and an infant mortality rate of 148. Bahraich, in the same state, had a population of 2.8 million and a female literacy rate of 11% (up to 23% by 2001). Compare that to Benin, a country with 4.8 million people in 2001 and a female literacy rate of 17%. Uttar Pradesh has 71 districts, many of which are as large as African countries, and several of which rank equally badly on development indicators.

Meanwhile, the western part of the state is similar to neighboring Haryana and Punjab — which are hailed as economic successes. All three regions are home to a large Jat population which has eagerly adopted technologies related to the Green Revolution.

This part of the country (located in the far NW) is largely responsible for India’s abysmal statistics on gender ratios:

The brown regions are those where the sex ratio is particularly low, reflecting the killing, neglect, or abortion of baby girls. Mass gendercide on this scale has attracted particular attention, and is behind the “missing women” phenomenon in which India has far fewer women than expected. In western countries, there tend to be more women than men. In the graph, brown indicates districts with fewer than 800 women per 1,000 men.

While this situation is typically blamed on India’s “culture,” it’s clear from the graph that the problem of missing women is closely associated with particular folkways in certain parts of the country, and is not especially related to income. Punjab, Haryana, and western Uttar Pradesh have high incomes but the worst gender ratios in the country. Much of the North does badly, but surprisingly even the high-income western states of Gujarat and Maharashtra have fewer women than men. By contrast, the low-income eastern tribal-dominated states do well in terms of gender ratios, as does the relatively egalitarian South.

It’s interesting to contrast this graph with the following one, which indicates those districts (in brown) with a female literacy rate above 50% as of 2001 (a useful proxy for “development” broadly):

Surprisingly, many districts with few women have (relatively) high female literacy rates.

This graph confirms the stylized fact from the state data that northern areas are worse off. Those districts in blue have a female literacy rate below 50%. In states like Bihar and Uttar Pradesh, this is true for nearly all districts; it is true for no district in high-literacy Kerala in the South.

But also check out the cluster of low-literacy districts in the “rich” South. These roughly correspond to the old princely state of Hyderabad. This is part of the intra-state disparity that is growing as India’s exposure to globalization and growth proves uneven, and that is playing an increasingly important role in the political process.

In this region, the state of Andhra Pradesh (a relatively rich state) has been hit by violence as protesters from its Telangana region (an all-blue area on the map) hope to break away from the wealthier coast. The floodwaters of the Krishna and Godavari rivers disproportionately benefit the irrigated coastal areas (a colonial legacy, thanks to Sir Arthur Cotton), leaving the uplands to rely on rain-fed agriculture at the whim of increasingly variable monsoons. Similar conflicts roil many other Indian states.

These issues bear on how you think the China vs. India contest will play out. Will China’s catastrophic malinvestment in capital prove more destructive than India’s chronic underinvestment in its people? Your answer to that question is your forecast for the fate of Asia’s two rival giants.

(Republished from GNXP.com by permission of author or representative)
 

File this one under “infrequently-asked questions.” This is an issue I’ve discussed extensively over at TGGP’s blog. Basically, here is the puzzle: Jews are among the wealthiest groups in America, with a median income close to $100,000 a year.

Naively, you might expect Israel to be about as wealthy. Israel is, after all, a country filled with Jews. Yet Israel is far poorer than a hypothetical “Jewish America,” and is also poorer than America in general. Israel’s wealth is a little difficult to parse out due to the presence of a large Arab minority. Yet even if you assign non-Jews an income of zero and credit all of Israel’s output to its Jewish population, you arrive at a per capita GDP of about $36,000. By comparison, both Ireland and America are in excess of $45,000 per capita. Corrections for Purchasing Power Parity close some, but not all, of this gap.
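That bounding exercise is easy to reproduce with rough figures for Israel around the time of writing; the GDP, population, and Jewish-share numbers below are approximations of mine, not official statistics:

```python
# Upper-bound calculation: credit all of Israel's output to its Jewish
# population alone. The inputs are rough, approximate circa-2010 figures.
gdp = 220e9            # total GDP in dollars (approximate)
population = 7.6e6     # total population (approximate)
jewish_share = 0.75    # approximate Jewish share of the population

per_capita_upper_bound = gdp / (population * jewish_share)
print(f"per capita GDP if non-Jews are assigned zero income: ${per_capita_upper_bound:,.0f}")
```

Even under the extreme assumption that non-Jews produce nothing, the figure lands in the high $30,000s, well short of Irish or American levels.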

This is part of a general phenomenon. As Tino has calculated, virtually all hyphenated-Americans do better in America than their home country.

With respect to European countries, this makes sense. As has been argued elsewhere, America/Europe income differences are due in no small part to different taxation policies. Lower marginal tax rates do have long-term effects on standards of living, even if they don’t “pay for themselves.”

Figuring out what’s going on in Israel is a bit trickier. By European standards, the country looks relatively free-market. There are two crucial issues here:

1) Trade is very low. For its level of wealth, Israel has a very small export sector. For instance, its exports are roughly an eighth of Belgium’s, even though Belgium’s economy is less than twice as large. This should come as no surprise: the largest determinant of trade volume is geography (see the sketch below), and Israel faces a far rougher neighborhood than Belgium. This forces Israel to produce more products domestically, rather than specialize.
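The standard way to formalize “geography determines trade volume” is a gravity model, in which bilateral trade scales with the two economies’ sizes and shrinks with distance and other frictions. A minimal sketch, with purely invented inputs:

```python
# Minimal gravity-model sketch: trade rises with both countries' GDPs and
# falls with distance. The constant and all inputs are purely illustrative.
def gravity_trade(gdp_i, gdp_j, distance_km, k=1e-6):
    return k * gdp_i * gdp_j / distance_km

# A Belgium-like case: large, open markets a few hundred kilometers away.
near_neighbors = gravity_trade(5e11, 3e12, 300)
# An Israel-like case: the nearest large open markets are thousands of km away.
distant_markets = gravity_trade(2e11, 3e12, 3_000)

print(f"predicted trade ratio (near vs. distant): ~{near_neighbors / distant_markets:.0f}x")
```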

2) Israel is a low-trust society. I caught a lot of flak for this in the comments, but I stand by the statement. There is an enormous literature tracing the effects of trust and cooperation on firm size and GDP. In general, where social trust and cooperation are low, corruption is higher and firms tend to be small-scale organizations built on kinship links. Managers must actively monitor their employees rather than being able to scale up.

Somewhat surprisingly, Israel stands out as a country with high IQ but low levels of trust. Some 56% of Israelis report that you cannot trust others, which is a figure comparable to other low-trust societies like South Korea or Italy. Think of Israel as a Mezzogiorno with nuclear weapons.

One manifestation of this is that there are very few large Israeli firms. Teva, a generic drugs manufacturer, is a notable exception. By contrast, high-trust Switzerland is home to several national superstars like UBS, Novartis, and TAG Heuer. Israel’s economy is dominated by small-scale firms, many of which were founded by people who formed close bonds in the IDF. Even in America, there are a host of large Jewish-founded firms, like Google.

Of course, a lack of large, productive multinationals may play a role in explaining Israel’s relatively poor economic performance. Firms with scale and branding are able to tap into the tail ends of a “smiley curve” economy by focusing on the value-added activities of branding, design, and distribution. Smaller firms operating in heavily competitive industries, by contrast, earn few economic rents.

It’s in this vein that I want to revisit Helen Thomas’ comments that the Jews should return home. Set aside the Ashkenazi bias implicit in this suggestion (where would, say, Iraqi Jews go?). A simple calculation would suggest that a mass relocation of Jews to other Western democracies (particularly free-market ones like America) would allow them to plug into high-trust societies with more robust institutions.

More broadly, though we frequently assume that individuals are the same everywhere and that growth should be possible for all, this just doesn’t look to be the case. Post-Malthusian explosive growth is deeply rooted in a particular institutional form, pioneered in the Netherlands and exported to England in the Revolution of 1688. From there, it spread to a group of Anglophone settler nations and colonial dependencies like Singapore and Hong Kong (and, through further American tutelage, to countries like Taiwan, South Korea, and Japan). Alternative reforms were pioneered in France after the French Revolution and spread by force throughout Europe.

We really have few examples of countries succeeding economically without borrowing heavily from these models. For instance, Chile is the really successful part of South America, and it has also done the most to mirror First World institutions, through the reforms imposed by the Chicago Boys. China is a potential counter-example, yet the lesson there remains ambiguous as long as the country remains as poor as El Salvador. Again, look at the enormous success of Chinese transplants in more Anglo environments like Singapore, Taiwan, or Hong Kong. That is presumably indicative of Chinese potential GDP, and by that standard China is doing abysmally. Its institutions are to blame. (Interestingly, though China reports itself a high-trust society, it is also home to few large brands. Those that come to mind, like Huawei, are frequently state-supported. By contrast, poorer India already has several well-known brands like Tata, Infosys, and Wipro.)

There’s not a lot we can do to fix this. Moving the Jewish or Palestinian population out of Israel en masse isn’t a feasible option. Yet transforming these into high-trust societies overnight isn’t an option either. Paul Romer has advocated establishing charter cities around the world that would set up top-notch institutional enclaves in poor areas, yet he has found few backers.

This also has implications for America’s nation-building efforts. Nation-building succeeded in countries like Japan to the extent that parliamentary norms were successfully transplanted, while Japan was able to free-ride on the American military buildup. American demand for Japanese goods provided additional impetus for industrial growth. By contrast, interventions in Iraq, Afghanistan, and Pakistan have entrenched local interests by throwing in large amounts of aid. Exports of minerals and oil also generate a resource curse, making a transition into democratic, industrialized states even less likely.

I’m less certain about this issue than I have been in the past. I’m certainly open to other suggestions in the comments.

(Republished from GNXP.com by permission of author or representative)
 