…A 2012 survey of social psychologists throughout the country found a fourteen-to-one ratio of Democrats to Republicans. But where were the hard numbers that pointed to bias, be it in the selection of professionals or the publication process, skeptics asked? Anecdotal evidence, the Harvard psychologist Daniel Gilbert pointed out, proved nothing. Maybe it was the case that liberals simply wanted to become professors more often than conservatives. “Liberals may be more interested in new ideas, more willing to work for peanuts, or just more intelligent,” he wrote. The N.Y.U. political psychologist John Jost made the point even more strongly, calling Haidt’s remarks “armchair demography.” Jost wrote, “Haidt fails to grapple meaningfully with the question of why nearly all of the best minds in science find liberal ideas to be closer to the mark with respect to evolution, human nature, mental health, close relationships, intergroup relations, ethics, social justice, conflict resolution, environmental sustainability, and so on.”
…“There’s often a lot of irony in this area,” he said. “The same people who are exquisitely sensitive to discrimination in other areas are often violently antagonistic when it comes to political ideology, bringing up clichéd arguments that they wouldn’t accept in other domains: ‘They aren’t smart enough.’ ‘They don’t want to be in the field.’ ”
…As the degree of conservatism rose, so, too, did the hostility that people experienced. Conservatives really were significantly more afraid to speak out. Meanwhile, the liberals thought that tolerance was high for everyone. The more liberal they were, the less they thought discrimination of any sort would take place.
Call it “liberal privilege” in academia. The assumption is often that the normative framework you are working with is on the cultural Left (these discussions are much more relevant for social issues, since the high socioeconomic status backgrounds of many academics mean they are less interested in Left populism in economic domains, even if they pay lip service to it). Two personal examples will suffice. I was at a dinner hosting a speaker once when someone started talking about how “we” could reach out to conservatives. The assumption was that of the two dozen people around the long table none would be conservative. Second, I had an exchange with a patronizing reader who advised that I tone down any mention of politics on my blog, since that would alienate readers. I responded that it seemed to work for PZ Myers, to which he replied, “Oh, I hadn’t thought of that.” The issue here is that the reader didn’t think of liberal politics as politics.
Second, the idea that liberalism just aligns with reality is a nice and neat conceit, but it is, as a nice liberal would say, “problematic.” Chris Mooney has a short piece on the priors of sociologists and human nature. “Blank slate” dead-enders aren’t just found on the political Right, though some of that exists there too (e.g., on homosexuality, and the importance of shared family environment). Additionally, the natural sciences tend to be less politically liberal overall than fields like sociology and social psychology. I’m skeptical that this suggests the best minds are in sociology and social psychology.
Finally, I do have to note that Haidt still offers a fundamentally liberal solution to the conservative deficit: “Haidt believes that the way forward is through a system of affirmative action: engaging in extra recruitment and consciousness-raising efforts for political conservatives in much the same way as for women or ethnic minorities.” All the same issues that afflict racial and ethnic affirmative action might be relevant in the case of conservatives. It may actually be the case that conservatives don’t have the inclination or aptitude for particular fields. This doesn’t negate the reality of discrimination, but it questions the axiom that discrimination in a crude sense is shaping the different demographics we see in different fields, and that a simple fix by fiat can solve the problem. It is true that social psychology probably suffers from its lack of ideological diversity, but that won’t prevent the field from publishing lots of fluffy results which are picked up by a media oblivious to p-value fishing. Yes, people won’t take social psychology seriously, but most scientists probably never have. Bias is real. And discrimination is real too. But sometimes the solutions are more pernicious than the problems. That’s a conservative insight.
Archaeologists have found that early farming culture didn’t change drastically for the next 3,700 years. But about 4,000 years ago, the Bronze Age arrived. People started using bronze tools, trading over longer networks and moving into fortified towns.
Dr. Pinhasi and his colleagues found that the era also brought a sudden shift in human DNA. A new population arrived on the Great Hungarian Plain, and Dr. Reich believes he knows who they were: the northern Eurasians.
It seems really unlikely that Europeans are special in this tendency, with broad world-wide trends as outlined in Towards a new history and geography of human genes informed by ancient DNA. For decades there have been books written about the coming of the Indo-Europeans. David Anthony’s The Horse, the Wheel, and Language is probably the best recent summation of the “they came out of the steppes” viewpoint. If there is a major update on this, it is that the demographic impact of the Indo-Europeans now looks much greater than we had previously imagined. But are Indo-Europeans special? Probably not…earlier work suggested major discontinuities in Europe with the arrival of agriculture. Later, in Africa, the Bantu expansion replaced most of the local people. And as John Hawks points out, the ancient Siberian who lived ~45,000 years ago probably comes from a group with no modern descendants. The disappearance of the Mal’ta boy’s people ~20,000 years later from eastern Siberia suggests that the heart of Eurasia has been roiled multiple times since the arrival of anatomically modern humans.
Addendum: I would take minor issue with the title of The New York Times piece. The picture isn’t really clearer, but cloudier. It’s just that the old clear picture was wrong, and the new cloudy picture is less wrong. Ultimately the clouds may clear, but we need more samples for that.
In case you are one of the few people interested in population genomics who is not religiously following Twitter, the 11th Bay Area Population Genomics Meeting will be held at UC Davis on December 6th. Save the date. It is free, but you have to register. Details are at the Coop lab website (they are hosting). I’ll be tweeting from it.
Over at Greg Cochran’s blog he’s been posting on Indo-Europeans. He’s had many of these ideas for a long time, but after I recounted to him some more information from ASHG 2014 it crystallized a lot in terms of specific detail. For example, the Kalash of Pakistan share a lot of drift with “Ancestral North Eurasians” (ANE). By “a lot”, I mean in the same range as North Caucasus and Eastern European groups. Other HGDP samples from Pakistan are somewhat lower in their signals, but it is still noticeable.* In Iosif Lazaridis’ presentation at ASHG 2014 he outlined the likelihood that the widespread distribution of ANE ancestry in Europe probably had something to do with the migrations of the Yamna culture, from which derived the Battle Axe Culture. The genetic variation you see in eastern and central Europe today is representative of the Yamna people. They know because they have ancient samples from those regions. The Yamna themselves are a mix of an Armenian-like Middle Eastern population and “Eastern Hunter-Gatherers” (EHG), who resemble those to the west but have a higher fraction of ANE (so they are WHG + ANE, while the Armenian-like population is similar to, but not exactly the same as, the “European First Farmers” (EFF)).
But that’s not the point of this post. There were two Y chromosome posters which were of interest. One showed a Bayesian skyline plot which illustrated that many of the Y chromosomal lineages you know and love went through very rapid population expansion on the order of 5 to 10 thousand years ago. A second poster had a phylogeny of Y chromosomes derived from high coverage whole genome sequencing. They had four individuals from the R1 lineages, two of them from R1a1a. One individual was Indian and the other was Russian. The coalescence was ~5,000 years ago. The individual who did this analysis was not aware of the Bayesian skyline plot poster, so she immediately ran off to look at it when I told her. The coalescence with R1b for the R1a individuals was ~10,000 years ago.
I know that there are lots of debates about clocks and calibration when it comes to Y chromosomes. But the archaeology, ancient DNA, autosomal work, and uniparental lineages are all coming together with a coherent picture. The Y chromosomal data strongly suggests that we’re talking about “star phylogenies” in the recent human male past.
* And for what it’s worth the Kalash are not descended from the soldiers of Alexander. Rather, they seem an early example of the admixture which led to modern South Asians. Their drift from other populations is due to them being isolated and endogamous.
Yesterday CBS had a segment hyperbolically titled Breeding Out Disease. First, we will never “breed out” disease. Part of the reason is that a large fraction of disease is due to non-genetic factors. Perhaps in the future with nanotech we might get at all the biological misfires due to developmental problems which emerge out of “environmental” effects (“environmental” being a word for stuff we basically can’t understand in any causal sense). But genes aren’t everything.
Second, the CBS piece had two segments, which differ a lot in terms of their implications. The first involves preimplantation genetic diagnosis (PGD). This is already happening, so what you will see in the future is a matter of scale or magnitude, not a paradigm shift. I do think it is possible that in the next generation we will see the diminishing of recessive diseases due to highly penetrant deleterious alleles. Every birth of a child who is diagnosed with such a disease will allow us to predict future births, because presumably their parents will have rare variants which can then be put in the database. I don’t think this is controversial or scary in any way. It’s classic “science makes the world better.” Your child having a recessive disease or a karyotype abnormality is not part of some grand plan.
But the next element of the segment dealt with the firm GenePeeks. I saw the founder speak at the Consumer Genetics Conference in 2013, and it seemed to be a reasonable idea. Basically right now the play is to simulate the outcome of genotypes for combinations of sperm (donors) and eggs (the founder herself has a child with a recessive disease due to herself and her sperm donor being carriers for a rare disease). Enter Lee Silver, famous as a geneticist before genetics was even quite so big. He makes many claims, some entirely reasonable, and some which I view to be a stretch. It seems that in concert with PGD, simulating genotypes and looking to avoid highly penetrant alleles is very smart. In fact this is just carrier screening on steroids. But then Silver begins to imply that genetic methods are going to allow us to predict complex traits. On the face of it this seems likely to be true. The work on height is just a trial run for all sorts of complex traits, in particular diseases. In the next 10 years it is entirely likely that genomic techniques will allow us to capture most of the heritable variation which we now classify as “missing heritability”. Making a prediction which is actionable is a different thing altogether.
If you have a trait whose genetics is distributed across thousands of loci then simulating the genotypes is going to be a brute-force affair. I trust computation to catch up to this problem, but then there is the matter of making predictions at the individual level. It is one thing to capture the heritable variation on the population scale, but predicting in an individual case is going to be harder. Then, once you have the prediction, you have to screen an enormous number of genetic combinations. If you want more than one complex trait, and they are independent, then the problem becomes exponentially more difficult.
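To get a sense of what “brute force” means here, consider a minimal sketch in Python. Everything is invented for illustration: the number of loci, the effect sizes, and the parental genotypes are toy values, and real linkage between loci is ignored.

```python
import numpy as np

rng = np.random.default_rng(42)

n_loci = 1000                          # hypothetical number of trait-associated loci
effects = rng.normal(0, 0.05, n_loci)  # assumed additive effect sizes (made up)

# Parental genotypes: allele counts 0/1/2 at each locus (toy data)
mother = rng.integers(0, 3, n_loci)
father = rng.integers(0, 3, n_loci)

def simulate_embryo(mother, father, rng):
    """Draw one allele from each parent per locus (ignores linkage entirely)."""
    m_allele = rng.binomial(1, mother / 2)
    f_allele = rng.binomial(1, father / 2)
    return m_allele + f_allele

def polygenic_score(genotype, effects):
    """Additive score: sum of allele counts weighted by effect sizes."""
    return float(genotype @ effects)

# Score a batch of simulated embryos and look at the spread
scores = [polygenic_score(simulate_embryo(mother, father, rng), effects)
          for _ in range(100)]
print(f"range of predicted scores: {min(scores):.2f} to {max(scores):.2f}")
```

Even this toy version hints at the scaling problem: the spread of scores among simulated embryos is modest, so picking an outlier on one trait, let alone several independent traits at once, means simulating and screening a very large number of combinations.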
There are two things which I think can get around this. One, which I’ve already mentioned, is to select for embryos enriched for ancestry from a grandparent whose quantitative trait you value (intelligence, height, or agreeability). Second, as I have said, the 2010s are the decade of reading the genome. The 2020s are going to be the decade of writing the genome. That seems a more viable and probable solution than screening for variants which are “in house.”
Finally, there is the standard question about selecting for non-disease traits like eye color. Silver doesn’t blink, and admits that this might happen. Norah O’Donnell is unsurprisingly concerned. I would offer the reassurance that we already select for non-disease traits in our children by selecting our spouses. It’s not that big of a deal. I’m rather sure that O’Donnell’s husband didn’t marry her just because she’s a great journalist.
In the end Gattaca is a great movie with contemporary relevance. And thanks to the statistical shenanigans which went on in fMRI research it seems that genetics is unchallenged today as the queen of the biological sciences in our age.* But a movie is not reality, and geneticists have not bitten into the apple of knowledge and are not as the gods. Relax, though expect a better future.
* Neuroscience made a play, but I think that’s done.
Cite: Sandin, Sven, et al. “The familial risk of autism.” JAMA 311.17 (2014): 1770-1777.
My wife and I are hoping to have more children. I am pretty convinced of the paternal age effect,* so aside from issues like Down syndrome, which is due to maternal age, the risk of various behavioral issues such as autism is on the radar for us. Also, there are people in the extended pedigree who are probably on the Asperger spectrum (I probably would characterize myself as on the spectrum, but many people who have met me contend that this just isn’t a good description and so might confuse or mislead readers who only know me from the blog. It might be more accurate to say that I’m low on “agreeability”). But should I be worried?
This weekend I heard about a documentary, Mimi and Donna, which is about a 90-year-old woman who has to confront letting her 60-year-old daughter be institutionalized. Because of her age the daughter has never been properly diagnosed, but she definitely has some sort of “intellectual disability.” Today she might have been diagnosed with autism. The film-maker is the granddaughter of Mimi, and the niece of Donna. What she said was very interesting to me. Her brother has a son with autism, and she herself has a son with autism. As she admits, it runs in the family. Though as I stated above some of my family members exhibit behavior which might seem on the Aspergers end of the spectrum (and I do have a lot of physical scientists and engineers in my pedigree), none of them have been diagnosed with autism, nor are they mentally retarded in any way.
As of now my daughter is nearly 3 years old and displays no sign of autism, and my son is young yet but makes eye contact and is socially typical (yes, I am aware of later onset). So I began to realize perhaps I should at least update my odds a bit. I may contribute de novo mutations for risk, but it seems that I don’t carry them from previous generations. I found a large Swedish study, The Familial Risk of Autism, which outlines clearly the odds conditional on affected or unaffected siblings. The figure above is from that paper, and here are the results:
Results In the sample, 14 516 children were diagnosed with ASD [Autism spectrum disorder], of whom 5689 had autistic disorder. The RRR [relative recurrence risk] and rate per 100 000 person-years for ASD among monozygotic twins was estimated to be 153.0 (95% CI, 56.7-412.8; rate, 6274 for exposed vs 27 for unexposed); for dizygotic twins, 8.2 (95% CI, 3.7-18.1; rate, 805 for exposed vs 55 for unexposed); for full siblings, 10.3 (95% CI, 9.4-11.3; rate, 829 for exposed vs 49 for unexposed); for maternal half siblings, 3.3 (95% CI, 2.6-4.2; rate, 492 for exposed vs 94 for unexposed); for paternal half siblings, 2.9 (95% CI, 2.2-3.7; rate, 371 for exposed vs 85 for unexposed); and for cousins, 2.0 (95% CI, 1.8-2.2; rate, 155 for exposed vs 49 for unexposed). The RRR pattern was similar for autistic disorder but of slightly higher magnitude. We found support for a disease etiology including only additive genetic and nonshared environmental effects. The ASD heritability was estimated to be 0.50 (95% CI, 0.45-0.56) and the autistic disorder heritability was estimated to be 0.54 (95% CI, 0.44-0.64).
I am aware that the basal risk is low. But the cumulative sum of a lot of independent low risks can be quite stressful.
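The arithmetic behind that stress is simple: if risks are independent, the chance that at least one hits is one minus the product of the complements. A toy calculation, with probabilities I invented for illustration (not taken from the study):

```python
# Hypothetical per-child risks for several independent conditions.
# These numbers are illustrative, not estimates from Sandin et al.
risks = {"condition_a": 0.015, "condition_b": 0.005, "condition_c": 0.002}

p_none = 1.0
for p in risks.values():
    p_none *= (1 - p)  # probability of dodging each independent risk

p_any = 1 - p_none
print(f"P(at least one): {p_any:.4f}")  # ≈ 0.0219, larger than any single risk
```

Each individual risk looks negligible, but the union creeps upward with every condition you add to the list, which is exactly why a pile of "low" risks doesn't feel low.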
* A presentation at ASHG 2014 which had huge sample sizes (~1000 trios) with whole genome sequencing convinced me that paternal age effect shouldn’t be doubted. It’s real.
The Austronesians were crazy and extraordinary. Starting ~5,000 years ago they set off from the environs of Taiwan, and began to push outward. For ~30,000 years the people of Melanesia had defined the eastern edge of human habitation, but the Polynesian branch of the Austronesians blasted past that, going all the way to Hawaii and Easter Island. At the other extreme the ancestors of the Malagasy settled Madagascar, an island which the peoples of Africa had not yet reached despite ~200,000 years of human habitation on the continent. We don’t know what was happening here, and it is hard to pinpoint particular cultural, environmental, or genetic forces which might result in these sorts of radical changes in mores. Humans are conservative and cautious by nature. But our particular lineage of modern humans is far less so than our forebears or cousins. After all, we did make it to Oceania and the Americas, while the others did not.
Cite: Genome-wide Ancestry Patterns in Rapanui Suggest Pre-European Admixture with Native Americans
The second paper has a somewhat more subtle result. The inhabitants of modern day Easter Island are descended in the main from the Polynesians who arrived from the west. This has long been known from classical genetics and non-genetic fields. There has also been suggestion of European and Amerindian admixture. Entirely reasonable in light of Easter Island being a possession of Chile, and 19th century migratory events. What these authors did is that by looking at the distribution of ancestry outcomes in the genomes of Easter Islanders, they inferred that the admixture with Amerindians far predated that with Europeans. The rationale here is simple: recent ancestry from divergent groups tends to exhibit patterns of long alternating blocks, due to a relatively small number of recombination events. In contrast older ancestry tends to be broken up by many recombination events over the generations, until deconvolution can’t separate the two elements and they fuse as one. As an example of the latter case modern day Europeans and South Asians are compound populations whose admixture dates of ~4,000 years or more makes it difficult to trivially deconvolute their ancestral components on a genome-wide scale (though ancient DNA from Mal’ta likely can help in the case of Europeans).
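The dating logic can be sketched with the standard approximation that the expected surviving tract length g generations after a pulse admixture is roughly 100 / ((1 − m) · g) centimorgans, where m is the admixture fraction. The inputs below are invented for illustration; they are not the paper's estimates, which come from the full tract-length distribution.

```python
def generations_since_admixture(mean_tract_cm, admix_frac):
    """Invert E[tract length] ≈ 100 / ((1 - m) * g) cM after g generations."""
    return 100.0 / ((1 - admix_frac) * mean_tract_cm)

# Illustrative inputs (not from the paper): short ~5 cM Amerindian tracts
# at ~8% ancestry in Easter Islander genomes
g = generations_since_admixture(mean_tract_cm=5.0, admix_frac=0.08)
print(f"~{g:.0f} generations ago, ~{g * 29:.0f} years at 29 years/generation")
```

With these toy inputs the point estimate is ~22 generations, on the order of ~630 years before the sampled individuals, which is the right ballpark for a 1300–1400 A.D. event. The intuition is the one in the text: recombination whittles tracts down every generation, so short tracts mean old admixture.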
Figure 4 above shows the match of two demographic models with the empirical results. M2 is one where Mestizos from Chile bring European and Amerindian ancestry into the genomes of Easter Islanders. M1 is one where there is an ancient Amerindian admixture, followed by a later European one. The solid lines show the predictions, while the points show the empirical results from the samples. It is visually clear that M1 fits the data. There are many short Amerindian blocks, evidence of an old admixture, as opposed to more varied and longer European blocks. The rough dates for the Amerindian admixture are in the range of 1300 to 1400 A.D., which matches reasonably well with when Easter Island was settled.
These results are strong, though not definitive and probably not the last word; more Easter Islander samples could at least settle the debate over admixture. But they make us wonder at how incredible human migrations have been over the past ~50,000 years! Ancient people were far more daring than we had imagined, and I think we need to reconsider in many ways what “crazy” exactly is.
So I recently listened to an interview with the author of The Sonic Boom: How Sound Transforms the Way We Think, Feel, and Buy. One datum of interest is that apparently we as human beings are conditioned to ambient sound, and environments which are truly silent (chambers designed for scientific experiments) are very uncomfortable for us, in large part because our own sounds begin to overwhelm us (e.g., the beating of the heart). Anyway, the issues at the heart of the book turn out to be very relevant to me. Normally I have an iPod shuffle on my person. I have had one since early 2008. But at ASHG 2014 I forwent it in the interests of being able to hear someone if they wanted to get my attention. This caused an unanticipated problem. It turns out I really hate the sounds that you encounter in a public restroom, and I’ve been habituated toward just turning the shuffle on whenever I feel like it. This gets to the point that we all create our own aural environment, and in the age of portable digital devices this has been taken to a new level.
By now you have read the paper in Nature, Genome sequence of a 45,000-year-old modern human from western Siberia. In The New York Times Carl Zimmer has an excellent write up, Man’s Genome From 45,000 Years Ago Is Reconstructed. The two major findings that are getting a lot of attention are that the Neandertal ancestry tracts in his genome are considerably longer than in modern humans and that he is basal to modern non-African populations. In regard to the first, the distribution of Neandertal ancestry in the genome allowed them to infer backward to the point at which a pulse admixture might have occurred. Seeing as this individual has been dated to ~45,000 years before the present, Neandertal admixture occurred 50–60,000 years ago. This happens to be right around the time of the “Out of Africa” expansion.
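The tract-length logic can be sketched in a few lines: recombination shortens surviving Neandertal tracts each generation, so longer tracts in the ancient genome mean fewer generations between him and the admixture event. The mean tract lengths below are invented placeholders; the paper's actual inference uses the full distribution of tract lengths, not a single mean.

```python
def admixture_age_years(sample_age_years, mean_tract_cm, gen_time=29):
    """Add g ≈ 100 / (mean tract length in cM) generations to the sample's age."""
    generations = 100.0 / mean_tract_cm
    return sample_age_years + generations * gen_time

# Invented mean tract lengths: very short ~0.05 cM Neandertal tracts in
# living humans vs much longer ~0.33 cM tracts in a 45,000-year-old genome
modern_estimate = admixture_age_years(0, 0.05)
ancient_estimate = admixture_age_years(45_000, 0.33)
print(f"from moderns: ~{modern_estimate:,.0f} years ago")
print(f"from the ancient genome: ~{ancient_estimate:,.0f} years ago")
```

With these toy numbers both routes land in the same 50–60 kya window: the ancient genome needs only a few hundred generations of recombination added to its own age, while living humans need a couple of thousand.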
But for me the Neandertal aspect is not the most interesting, as that simply refined our prior understanding. Rather, it is the relationship to modern human beings. The same first author gave us DNA analysis of an early modern human from Tianyuan Cave, China a few years ago, and in it she reported that a 39,000 year old individual found in China already exhibited clear affinities to eastern Eurasians and Oceanians. Now, one of the inferences which fits the results in this paper is that this 45,000 year old Siberian derives from a period when west and east Eurasians were not fully diverged (or, that divergence had been a recent event). Is ~6,000 years sufficient to account for the drift of the Tianyuan sample? My intuition is that it isn’t (they had ~85,000 positions on chromosome 21 to make the inference about the ancient Chinese sample, more than sufficient). I suspect that the dating is off somewhere here, though I don’t know which sample (if only one).
Second, the issue of Basal Eurasians comes up again in the paper, and more extensively in the supplements. I have a very hard time not believing that there is a paper on Basal Eurasians in the works, because they are very sketchy on specifics (also, Lazaridis did not really talk about the Basal Eurasians in the ASHG presentation). From page 60 of the supplements:
We caution that the TreeMix model is sensitive to which present-day populations are used for the analysis and the tree changes when we use different present-day human populations (Table S10.1). In particular, the analysis is sensitive to which African individuals are included. When we include only one African individual and all available Eurasian individuals, the Ust’-Ishim individual separates before the Eurasian split (position a) with bootstrap support between 68% and 100% depending on which population the individual is from (Table S10.1). These results suggest that recent gene flow between African and European populations may influence the placement of Ust’-Ishim in the maximum likelihood tree. Nevertheless, the TreeMix models typically find that the Ust’-Ishim individual separates either before the Eurasian split (position a in Figure S10.7) or just after the split and already on the eastern non-African lineage (position c in Figure S10.7).
The Sardinians have a bit of Sub-Saharan ancestry from the Roman era, but is this what they’re talking about? The Basal Eurasians are part of the greater Eurasian clade. That is, they are a branch of “Out of Africa.” But no one knows where they were localized. Thinking about it, it probably is the case that they’re talking about recent African gene flow into Europe, even if it’s only a few percent in Sardinia. But who knows? It’s all quite mysterious.
We’re at a very confused and exciting time right now. About a decade ago there was a stylized model of a rapid “Out of Africa replacement” all across the world. Mitochondrial Eve had even convinced people that Africans were subject to this. That is not the case. It seems clear that the Khoisan people diverged from the rest of the human lineages on the order of 200,000 years before the present. This predates any “Out of Africa” event. Much of Africa’s genetics has been reshaped by massive demographic expansion by the Bantus. There have long been hints out of the Reich lab about “Out of Africa” gene flow back into Africa that isn’t obvious (i.e., not Ethiopia or North Africa) leading to the genesis of contemporary Sub-Saharan populations, but excluding the hunter-gatherers. Such hints don’t emerge from a vacuum. The results are perplexing. It strikes me that now we know a fair amount about the demographic events which reshaped the Holocene, but the period before the Last Glacial Maximum is now far more clouded. Attempting to reconstruct the deep past with the algebraic variables of the recent past might be part of our problem here….
Update: From the comments:
It’s very likely Tianyuan was not really closer to East Asians than Ust-Ishim. U-I branches with East Eurasians with 100% bootstrap support, but branches off before Oceanians and Asians (supplements, page 57-58). This is similar to how Tianyuan branched, which was the main basis for that paper claiming it’s proto-East Eurasian. I think they did not realize during the Tianyuan paper’s writing that this could be simply because Tianyuan lacked the Middle Eastern/Basal/African admix of Europeans used in the tree (French and Sardinians).
Formal testing (f-statistics from Rasmussen et al 2014) which were not done in Tianyuan’s own paper show no difference between Tianyuan’s relation to East Asians and Europeans.
About a month ago I was listening to the host of America’s Test Kitchen, Christopher Kimball, talk about olive oil on the radio. I’m aware that most of the “extra virgin olive oil” we buy in this country is basically fraudulent. A study came out of UC Davis on this several years ago, but it’s long been an open secret. It doesn’t really matter much if you use olive oil to cook, as a lot of the flavor disappears in that process anyhow. Rather, it matters if you use olive oil as dressing on salads and such. I do. Still, as I have a rather unrefined palate, I’ve usually plunked down for the biggest container I could find. Yes, even Bertolli. But Kimball was asked by the host about his preferred brand, and he offered California Olive Ranch, which also came out high on the UC Davis study, though to be fair the firm apparently had a hand in the funding. This extra virgin olive oil is often described as having a strong aromatic taste, almost pungent. I don’t mind strong tastes, so I ordered some on Amazon.
Probably because my taste sensitivity has been modulated by my high consumption of spices, I felt that the tangy aspect some mention was rather subtle. The oil is very “clean,” and leaves less of a cloying taste in my mouth than what I’m used to. Though I’m conscious of the fact that this might be a subjective perception of the fact that I know that someone with refined tastes prefers this olive oil, I’ve decided to make the switch and made a large order. I’d be curious what olive oil readers prefer.
I don’t have time to comment in depth on the new Siberian genome paper. But I would like to mention that the text and the supplements both note that this individual lacks the “Basal Eurasian” component which seems ubiquitous in modern West Eurasians, and was likely brought by Middle Eastern farmers. The Siberian genome seems to solidify the intuition that the non-Basal Eurasian “Out of Africa” populations diversified on the order of 50-60 thousand years ago. But another issue that comes to mind is that it looks like the Khoisan of southern Africa might have diverged from other human groups ~200 thousand years ago. When considering the “mysterious” Basal Eurasians perhaps we should consider the possibility of a lot of population structure among anatomically modern humans within Africa. The ancestors of modern Eurasians may have been one of many African lineages, perhaps resident in a “Green Sahara” environment.
The last few weeks have been pretty busy with traveling and such, so I haven’t had much time to blog. I’ll be putting up an ASHG post, where I note what I saw and insights from the sessions. But not right now. This post will be a quickie.
Second, I was shooting the shit with some friends, and I’ve come to the conclusion that one way to characterize the state of genetics in this young century is that the 2000s were about learning to read. The 2010s were about learning to read well. The 2020s will be about learning to write. The last is a reference to CRISPR.
Third, it’s probably skewed by the nature of the conference goers, but it seems that I’m now more well known for my Twitter presence than my blog. It was a pleasure catching up with everyone though.
When race-county combinations are considered, life expectancy disparities are dramatically larger. For example, Native American males in the cluster of Bennet, Jackson, Mellette, Shannon, Todd, and Washabaugh Counties in South Dakota had a life expectancy of 58 y in 1997–2001, compared to Asian females in Bergen County, New Jersey, with a life expectancy of 91 y, a gap of 33 y.
Life expectancy is important because it can’t be contextualized and reinterpreted with sophistry. Asian Americans tend to live longer than white Americans. How’s that for a model? (yes, I know, the immigration system selects for longer-lived Asians!)
…The stereotypical “American Dream” for South Asians includes children equipped with an above average education. As the model minority, 64 percent of Indian-Americans had a Bachelor’s degree or higher according to the US Census of 2004. In addition, 60 percent of Indian-Americans had management or professional jobs, compared with a national average of 33 percent.
First, what the hell is with the quotes? Shouldn’t the American dream be about equipping children with above average education? The author of the piece herself has a biography which runs like so:
Born and raised in California, Lakshmi is a journalist and educator currently based in Berkeley. Over the past few years, she has worked with newspapers, radio and magazines from Gaborone, Botswana, to Los Angeles. She is a graduate of Pitzer College where she studied global communications and studio arts. She is presently pursuing her master’s at UC Berkeley School of Journalism.
“She” sure seems “educated” to “me” (I have no idea why I put quotations here). Second, she is honest enough to straight up admit that Indian-Americans have social statistics which are perfectly in keeping with the idea that on average they are a model minority.
What’s going on here? The problem here is simple: a particular class of educated Asian Americans schooled in post-colonial critical race theory posits a model of the world where everything is dichotomized into white people with privilege and poor oppressed “people of color.” Another symptom of this tendency to think in a binary is to talk about the “Global North” and “Global South.” No matter the word games which might be offered to obscure the overall thesis, this model removes most agency from “people of color”, and makes white people the movers and shakers of the world’s phenomena (e.g., stuff like facial symmetry is asserted to be Western beauty standards). But, critical race theory inverts the moral valence which one finds among white supremacists and their ilk, with whom they share key presuppositions (e.g., white people are sui generis). Where the model for white supremacists is that white people have a particular virtuous genius, for critical race theorists white people are the “Ice People” who introduce the contagion of bourgeois oppressive patriarchal values. It is in many ways a resurrection of the theory of the Noble Savage, as the idyll of nonwhites was shattered by the all consuming nature of the colonial experience which the white devils imposed upon them.
You can see, then, how the Asian American model minority is “problematic.” Asian Americans do better on a host of social statistics than white Americans. But since white privilege is the all-determinative variable which explains all social phenomena, this outcome is perplexing. The solution from what I can tell is a long campaign of obfuscation, lying, and outright propaganda. Asian American activists schooled in critical race theory simply assert that the model minority concept is a myth, and trust that their sympathetic audiences will assent to their knowledge of this domain. Mind you, they do bring up examples such as the Hmong to highlight how Asian-Americans are diverse, and not all are Taiwanese or Indian professionals. But the fact is that the Southeast Asian refugee experience is a secondary narrative numerically. The inversion of weights in this case is purely in the service of propaganda, which is persuasive to their innumerate audience. It would be like debunking white privilege by pointing out the reality of the whites of Appalachia, and much of rural America. All of a sudden these race hustling sophists would point out the importance of averages.
The belief that ethnic majorities dominate ethnic minorities informs research on intergroup processes. This belief can lead to the social heuristic that the ethnic majority sets an upper limit that minority groups cannot surpass, but this possibility has not received much attention. In three studies of perceived income, we examined how this heuristic, which we term the White ceiling heuristic, leads people to inaccurately estimate the income of a minority group that surpasses the majority. We found that Asian Americans, whose median income has surpassed White median income for nearly three decades, are still perceived as making less than Whites, with the least accurate estimations being made by people who strongly believe that Whites are privileged. In contrast, income estimates for other minorities were fairly accurate. Thus, perceptions of minorities are shaped both by stereotype content and a heuristic.
Basically those whites who are very conscious of white privilege as an idea underestimate Asian American income. This tells us that the propaganda is working, though that’s not a surprise as most people are stupid and uninformed, and use theory to explain the world.
In the comments below there was a question as to why outcomes for offspring can vary a great deal even without regression toward the mean. First, about regression. It’s a confusing and misunderstood concept. There is a general statistical phenomenon here, but let’s focus on genetics. Often in the comments of this weblog I’ll get the rhetorical question which has the general form of “but what about regression toward the mean?” Usually this is a good clue that the person has no idea what they are talking about. What about regression toward the mean? It’s not a magical force which shifts populations back toward a set point in an orthogenetic fashion. Basically when you select an individual based on their traits, and infer the likely character of their offspring, you can predict the expected impact of genes on the outcome. The phenotype is an intelligible signal of the nature of genes in a heritable trait, and genes are predictably transmitted to offspring. In contrast there is an “environmental”* component which you don’t understand, can’t control, and can’t account for. This component is often not transmitted across the generations, so fluke contingencies which lead to individuals who deviate sharply from the average of a population are not replicated in subsequent generations, and individuals are expected to be more typical. A perfectly heritable trait would not regress at all on the population level.
But you can predict only so much from heritability. The above plot is from John Hawks’ anthropology class. You see that the slope of the regression line is 0.72, so that is the heritability inferred from these data. That means that 72% of the variance in the phenotype, height, can be accounted for by variance in genes. That’s a population-wide statistic. That doesn’t mean that height is “72% genetic” on the individual level. That’s not even wrong. Since heritability is a population-wide measure, you need to be judicious when inferring toward individuals.
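The logic of heritability as the slope of a midparent-offspring regression can be sketched in a short simulation. Everything here is illustrative, not Hawks’ actual data: the 0.72 figure is plugged in as the assumed true heritability, and the model is purely additive.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulate a polygenic trait: each parent's phenotype = additive genetic
# value + environmental noise. Heritability h2 is the fraction of
# phenotypic variance due to the genetic values. All numbers are
# illustrative assumptions.
h2_true = 0.72
n = 100_000
var_g, var_e = h2_true, 1 - h2_true

g_mother = rng.normal(0, np.sqrt(var_g), n)
g_father = rng.normal(0, np.sqrt(var_g), n)
mother = g_mother + rng.normal(0, np.sqrt(var_e), n)
father = g_father + rng.normal(0, np.sqrt(var_e), n)

# Offspring inherit the average of parental genetic values, plus
# segregation variance, and draw fresh environmental noise.
g_child = (g_mother + g_father) / 2 + rng.normal(0, np.sqrt(var_g / 2), n)
child = g_child + rng.normal(0, np.sqrt(var_e), n)

# Regressing offspring phenotype on the midparent value recovers h2 as
# the slope of the fitted line.
midparent = (mother + father) / 2
slope = np.polyfit(midparent, child, 1)[0]
print(round(slope, 2))  # slope ~ h2 = 0.72
```

Note that the environmental draws are fresh each generation, which is exactly why extreme parents have more typical children: the lucky or unlucky noise that pushed the parents to the tail is not transmitted.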
Yet still tall parents tend to have tall children. If two tall parents had hundreds of children, then you could make some inferences about the average height of the children using the breeder’s equation. But observe that there’s still noise in the prediction. There’s going to be a distribution of outcomes. Height in the developed world is 80 to 90 percent heritable, but the correlation in heights between siblings is on the order of 0.5. Similarly, IQ is on the order of 50 percent heritable, but the correlation between siblings is on the order of 0.5. Presumably segregation and recombination are working in a fashion to mix and match the genomes of individuals so that even heritable polygenic traits aren’t quite as predictable as you’d think.
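A minimal sketch of the breeder’s equation and the sibling-correlation point, under a purely additive model. The 0.85 heritability and the 10 cm midparent deviation are illustrative assumptions, not estimates from any dataset.

```python
import numpy as np

rng = np.random.default_rng(1)

# Breeder's equation: expected offspring deviation R = h2 * S, where S
# is the midparent deviation from the population mean.
h2 = 0.85      # assumed narrow-sense heritability of height
S = 10.0       # midparent 10 cm above the population mean
print(h2 * S)  # expected offspring deviation: 8.5 cm

# Sibling correlation under this simple additive model: siblings share
# the midparent genetic value, so Cov(sib1, sib2) = Var_A / 2 and the
# correlation is h2 / 2, well below h2 itself, because segregation and
# fresh environmental noise differ between siblings.
n = 200_000
var_a, var_e = h2, 1 - h2
g_mid = rng.normal(0, np.sqrt(var_a / 2), n)   # shared midparent value

def sib():
    seg = rng.normal(0, np.sqrt(var_a / 2), n)  # segregation variance
    env = rng.normal(0, np.sqrt(var_e), n)      # fresh environment
    return g_mid + seg + env

r = np.corrcoef(sib(), sib())[0, 1]
print(round(r, 2))  # close to h2 / 2 = 0.425
```

So even in this toy model a trait that is 85 percent heritable produces a sibling correlation under 0.5, which is roughly the pattern noted above for height.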
* Before someone points it out, I am aware this component often collapses non-additive genetic variance, such as epistasis.
1 This is the book of the generations of Adam. In the day that God created man, in the likeness of God made he him;
2 male and female created he them; and blessed them, and called their name Adam, in the day when they were created.
3 And Adam lived a hundred and thirty years, and begat a son in his own likeness, after his image; and called his name Seth:
4 and the days of Adam after he had begotten Seth were eight hundred years: and he begat sons and daughters:
Over at National Geographic Virginia Hughes has a very interesting follow up to her feature in Matter, Uprooted. It told the story of a woman who found out that the man she thought was her biological father was not, and her attempt, using genetic genealogy, to find blood kin. The ultimate ending was bittersweet, as the protagonist found a friend, but not a sister. But spoiler alert: it turns out that she did in fact find out who her father was! Nevertheless, not everyone was appreciative of the ending. Here is the first comment:
I don’t really understand why people do these things. This story worked out well enough, but it could have worked out very badly indeed, given the superstitious excitement some people have about ‘blood’. If someone was not part of one’s life in the world, even by report, then it seems to me they’re totally irrelevant.
This is a common sentiment. But the reality is it doesn’t really reflect much of our experience in revealed preferences. It’s common for many people, especially when they are young, to assert that there are so many children that need families that they’ll adopt. If I check on Facebook all the people who asserted this, it turns out most of them ended up having biological children. There are practical arguments one can make for this in terms of one’s own life. Many traits are highly heritable, such as intelligence and personality, and children who are somewhat more like oneself are easier to relate to. But this is really rationalization. Having biological children is a deeply human thing, selected for by evolutionary processes as a basic tautology. Those who lack this impulse do not flourish over the generations.
The whole reflex to dismiss biological ties as ‘superstition’ reminds me of something I saw on Facebook several years ago. A medical doctor of my acquaintance posted about “National Infertility Awareness Week”, and one of his “friends” decided to comment that he didn’t feel infertility was something to be sad about, seeing as anyone could adopt. This is again not a line of discussion that’s going to lead to reasoned argument. Obviously as a family we haven’t had to face infertility, but when you have children at an older age it’s something you do think about, and you are much more aware of the trauma and strain it causes in those who have experienced it. To just tell these people to adopt may seem “rational,” but actually it’s callous.
Ultimately it comes down to the facile assumption by some that they can reduce what the “Good Life” is to a few spare axioms and then infer for the rest of the human race what they should want. My post Against Vulgar Mohism for Our Age argues that attempts to reduce these sorts of highly textured and complex life decisions to rational elements of manipulable utilitarian algebras is futile and inhumane. Sometimes it is just best to smile and be happy for someone when they reach the end of a long hard road toward fulfillment, even if it isn’t your particular cup of tea.
It seems that rather regularly there is a debate within evolutionary biology, or at least in public about evolutionary biology, where something new and bright and shiny is going to revolutionize the field. In general this does not pan out. I would argue there hasn’t been a true revolution in evolutionary biology since Mendelian genetics and classical Darwinism were fused in the 1920s and 1930s during the period when population genetics as a field was developed, and the famous “synthesis” developed out of the interaction of the geneticists with other domains of evolutionary relevance. This does not mean that there have not been pretenders to the throne. Richard Goldschmidt put forward his “hopeful monsters,” neutralism reared its head in the 1970s, and evo-devo was all the rage in the 2000s. Developments that bore scientific fruit, such as neutralism, were integrated seamlessly into evolutionary biology, while those that did not, such as Goldschmidt’s saltationism, fell by the wayside. This is how normal science works.
But every now and then you have a self-declared tribune of the plebs declaring that the revolution is nigh. For decades the late Stephen Jay Gould played this role to the hilt, decrying “ultra-Darwinism,” and frankly misrepresenting the state of evolutionary theory to the masses from his perch as a great popularizer. More recently you have had more muted and conventional revisionists, such as Sean Carroll, who promote a variant of evo-devo that acclimates rather well to the climes of conventional evolutionary biology.
Nature now has a piece out which seems to herald the launching of another salvo in this forever war, Does evolutionary theory need a rethink? It’s written in the form of opposing dialogues. I’m very much in the camp of those who believe that there’s no reason to overturn old terms and expectations. Evolutionary biology is advancing slowly but surely into new territory. There’s no problem to solve. The one major issue where I might have to make a stand is that focusing on genetics is critical to understanding evolution, and dethroning inheritance from the center of the story would eviscerate the major thread driving the plot. The fact that evolutionary biologists have the conceptual and concrete gene as a discrete unit of information and inheritance which they can inspect is the critical fact which distinguishes them from fields which employ similar formalisms but have never made comparable advances (such as economics).
One elegant model of the origin of modern humans as we understand them is that we exploded upon the hominin scene, and swept all before us with our suite of cultural creativity. This is the “Great Leap Forward” thesis, supported by the sudden appearance of symbolic expression in Europe ~40 thousand years ago. In this telling our “archaic” cousins were pre-humans at best, evolutionary dead ends. The archaeology in this case dovetailed with an extreme interpretation of the “Out of Africa” thesis, whereby H. sapiens sapiens issues fully formed in all its glory, and unleashes a demographic supernova on its cousins. Richard Klein’s The Dawn of Human Culture encapsulates this view in totality.
This model had many upsides. One of them was simplicity. Another is that our mental image of ourselves as sui generis, made in the image of the gods themselves, is suitably flattered. Unfortunately it now seems entirely the case that this model is wrong. The New York Times reports on the discovery of haunting symbolic expression on the island of Sulawesi, Cave Paintings in Indonesia May Be Among the Oldest Known:
A team of researchers reported in the journal Nature on Wednesday that paintings of hands and animals in seven limestone caves on the Indonesian island of Sulawesi may be as old as the earliest European cave art.
The paper in Nature is Pleistocene cave art from Sulawesi, Indonesia. Note that these findings are in Wallacea. Modern humans were certainly there around this time, though it is likely that there were also other lineages, such as H. floresiensis around. What all this is telling us is that we don’t know as much about the past as we think we did, and, that it was complex and multi-faceted.
The Ben Affleck vs. Bill Maher and Sam Harris debate about Islam is all over the interwebs, and seems like something of a Rorschach test. On my Twitter some people seem awfully impressed by Ben, while others (including me) think that it’s a pretty good illustration of the shallowness of contemporary Left liberalism when it comes to religion. One response is that “you can’t generalize about 1.5 billion people.” No, I don’t mean Catholics, I mean Muslims. When it comes to Christianity, or white males, Left liberals seem comfortable generalizing about a pattern of patriarchy or oppression, no matter that some white Christian males were at the forefront of movements such as abolitionism. Words like “problematic” or “complex” and “nuanced” don’t come up when people begin to hold forth upon the “white male Christian patriarchy.” It’s a vast monolith. Imagine if someone stated there was a problem with child sex abuse in the Catholic Church, and the response was that “you can’t generalize, most Catholic priests are not child abusers!” True. But enough are that it’s a problem. Affleck’s immediate response is that Maher and Harris’ assertions were “Gross and Racist.” This emotive explosion is really at the heart of it, criticism of Islam triggered a disgust and aversion response, not a rational reaction. Not that we should expect Ben Affleck to engage in deep analysis, just as Maher and Harris are not deep thinkers on religion either. One strange thing I note about Ben Affleck’s angry reaction is that he challenged Maher and Harris on their lack of deep scholarly credentials in Islam. Now, if a Muslim had demanded this it would kind of make sense, but I don’t understand why a secular liberal would talk as if only the ulema could speak authoritatively about Islam. This is somewhat similar to the Yale Humanist association objecting to Ayaan Hirsi Ali speaking about Islam, and demanding that someone with academic credentials be invited as well. 
Shall we impose the same criterion when it comes to Christianity? Only pastors and priests need apply?
Over at The Washington Post‘s Wonkblog there is a post up, Ben Affleck and Bill Maher are both wrong about Islamic fundamentalism. First, this idea that there is a “moderate Islam” and a “fundamentalist Islam” is only useful to some extent. A genuinely textured argument needs to introduce more multitudes, from the philosophically esoteric Ismaili sect, which in its most numerous Nizari form tends toward what one might call a liberal form of modern Islam, to various traditionalist Sunnis who reject the Salafi/Deobandi views but still express very conservative perspectives. The assassin of Salman Taseer was from the Barelvi movement, which is the “moderate” traditionalist alternative to the various Salafi and Deobandi “conservative” currents which have been roiling Pakistan over the past few generations. I put the quotes because the Salafi and Deobandi movements are reformist, and to a great extent the products of the past few hundred years and strongly shaped by a modernist viewpoint, even if their modus operandi strikes us as reactionary. The fact is that traditional Islam has accepted as a majority consensus that apostasy from Islam should result in the death penalty. But there was also a lot of latitude in this area, and in pre-modern times political entities were not totalitarian. These sorts of edicts may not have been enforced much at all (by analogy, Theodosius’ banning of public paganism in the late 4th century probably was not enforced across much of the Empire, though it did allow for interventions in some cases, such as the destruction of the Serapeum). Additionally, the reality is that for particular classes and individuals there was a wide tolerance toward free thought. The great physician al-Razi clearly would be considered a free thinker, while the poet al-Ma’arri was a caustic atheist (no surprise that ISIS beheaded one of his statues).
The modern age is arguably one of more conformity due to the ease of communication & travel, and the homogenizing power of the state and mass media. In any case, Wonkblog asserts:
Overall, the picture that emerges of fundamentalism among the world’s Muslims is considerably more complicated than either Affleck or Maher seem to realize. There’s no doubt that, particularly among some Middle Eastern Muslims, support for intolerant practices runs high. It’s quite easy to criticize these practices when a repressive regime is inflicting them upon an unwilling population. But things get much more difficult when such practices reflect the will of the people, as they seem to do in Afghanistan, Pakistan and Egypt.
On the other hand, majorities of Muslims in many countries — particularly Western countries — find these practices abhorrent. Maher tries to speak in broad brushstrokes of a “global Islam,” but Pew’s data show that such a thing doesn’t really exist.
How to be polite about it? This is stupid. First, repressive regimes fall back on Islamic populism when they are weak. The Baathist autocracies were Arab nationalist and secular. What they are doing when putting Islam front and center is pandering to public sentiment, which is becoming more and more conservative over the generations. And things don’t get more difficult when barbarism reflects the will of the people. When the people are tyrannical their will is irrelevant. That’s presumably why you have the Universal Declaration of Human Rights. It is not surprising that the Cairo Declaration on Human Rights in Islam endorsed by the Organization of the Islamic Conference did not vouchsafe that one could change religions. Second, numbers are of the essence. Western Muslims are important to Western people, because they live among us, but they are numerically trivial. Wonkblog provides, for selected Muslim nations (or Muslims in selected nations), the proportion who agree that apostates from Islam should be executed (which is truly the historical traditionalist view, even if there are details of implementation which make it difficult, and there are some dissenting views which are becoming louder). Pew also helpfully provides the number of Muslims in each nation estimated for 2010.
[Table: percent favoring the death penalty for apostates, by nation]
[Table: Muslim population and number favoring the death penalty for apostates, by nation]
The nations surveyed represent about half of the world’s Muslims (>800 million of ~1.5 billion). These data indicate that 36 percent of these Muslims favor the death penalty for apostates. Much of the balance in terms of population is going to be in Africa and other Middle Eastern nations (e.g., Iran) and India. I don’t know how things will shake out, though Nigerian Muslims are not particularly liberal, and I am curious if Indian Muslims would be any more liberal than Bangladeshi Muslims. In any case, we are faced with a glass half empty and half full situation. The majority of Muslims certainly do reject the death penalty for apostates today. But the minority who accept it as normative represent hundreds of millions of individuals. I tend to see the half empty aspect because I really don’t care what peaceful Muslims who focus on their mystical inner life do. They’re free to practice their superstition in the privacy of their homes, or in public spaces which they own, it neither picks my pocket nor breaks my leg. The problem is that the hundreds of millions who have what I might say are “problematic” viewpoints, if I was a pretentious liberal who enjoyed equivocating, would quite likely break my leg. This is not an academic concern, I agree with Shadi Hamid that democracy and liberalism have not made their peace in much of the Arab world. To some extent the masses will always be suspicious of liberalism, because they are a dull and uncreative sort. The American populace supports banning flag burning, and often curtailment of various kinds of speech. Elites, whether on the Left or Right, step in to block these sentiments through the courts. Elites in Muslim nations need to grow some balls in this area, though the pattern of assassination of those who speak against the barbarians in their midst from Tunisia to Pakistan illustrates how deadly serious these issues are.
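The population-weighted arithmetic behind a figure like that 36 percent is simple to sketch. The country names and numbers below are hypothetical stand-ins, not Pew’s actual estimates; the point is only that you weight each country’s fraction by its Muslim population rather than averaging the percentages directly.

```python
# Hypothetical survey data: country -> (Muslim population in millions,
# fraction favoring the death penalty for apostates). Illustrative only.
surveyed = {
    "Country A": (180, 0.64),
    "Country B": (200, 0.13),
    "Country C": (80, 0.64),
    "Country D": (170, 0.17),
    "Country E": (170, 0.40),
}

# Weight each country's fraction by its population, then divide by the
# total surveyed population to get the overall share.
total = sum(pop for pop, _ in surveyed.values())
weighted = sum(pop * frac for pop, frac in surveyed.values()) / total
print(f"{total} million surveyed, {weighted:.0%} favor")  # 36% here
```

An unweighted average of the five percentages would give a different answer; weighting by population is what lets a minority share translate into the “hundreds of millions of individuals” point above.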
According to witnesses cited in the report, Islamic State fighters dumped more than 60 Turkmen and Yazidi children in an orphanage in Mosul after they had witnessed the killing of their parents by the fighters. “It appears some of the older children may have been physically and sexually assaulted,” the report notes. “Later, ISIL fighters returned to the orphanage and made the children pose with ISIL flags so they could take photos of them.”
In a barbaric pre-modern age the children would have been killed. So perhaps ISIS is not quite as 7th century as they like to proclaim. But the intersection of modernity, taking the photos, and barbarity on display here is reminiscent of Rwanda more than anything else. But this is more worrisome to me:
The report said the Yazidi girl who was abducted by Islamic State fighters when they attacked her village on Aug. 3 was raped several times by different men before she was sold in a market.
“Women and girls are brought with price tags for the buyers to choose and negotiate the sale,” the report said. “The buyers were said to be mostly youth from the local communities. Apparently ISIL was ‘selling’ these Yazidi women to the youth as a means of inducing them to join their ranks.”
Sunni Arabs in Iraq and Syria do have rational self-interested reasons to align with ISIS, at least temporarily. The barbaric behavior meted out to Shia and non-Muslims is generally not something they have to worry about themselves, and some have even collaborated for material gains. Though there are impositions on their personal freedom, from the perspective of a Sunni Arab the erstwhile Maliki regime and that of Assad’s may not have been better bets. But no one forces you to go to a slave market and buy slaves. Civilization seems to rest lightly upon the shoulders of some. That is gross. You may not want to generalize about the religion of 1.5 billion, but if I was a Christian or Yezidi in the Fertile Crescent and I saw Sunni Arabs I know what I would do. Run. Don’t ask if they are moderate or fundamentalist. Just run.
Addendum: It is here that my friend Omar Ali may ask if I am perhaps giving succor to the average Fox-News-watching imbecile. In other words, being frank and honest about the warts and all of international Islam might cause problems for Western Muslims. I don’t have suggestions for my Middle Eastern friends, but for South Asians there’s an easy recourse: bow down before the idols of your ancestors. Arabs, Turks, and Persians think you’re black Hindus anyway, so why not go whole-hog? (so to speak) You’re just swapping the one black stone you otherwise worship for a thousand little idols. A simple name change will suffice. Of course the idiots will think you’re Muslim anyway, but eat a ham sandwich and prove them wrong.