About one week ago I wrote about bilingual education, and I admitted my mild skepticism about the research on the benefits of bilingualism. A friend emailed me and wondered why I was only “mildly skeptical.” Partly I didn’t want the comments to get sidetracked, but recently friends on Facebook have started to get exercised that Ron Unz is running for the Senate, and how bad he is for not giving children the opportunity to be bilingual. And of course all the research that confirms how great bilingualism is gets referenced.

So here’s an article from last month that my friend sent me. I’ll quote the appropriate section; you’ve seen this movie before. The Bitter Fight Over the Benefits of Bilingualism: For decades, some psychologists have claimed that bilinguals have better mental control. Their work is now being called into question:

But a growing number of psychologists say that this mountain of evidence is actually a house of cards, built upon flimsy foundations. According to Kenneth Paap, a psychologist at San Francisco State University and the most prominent of the critics, bilingual advantages in executive function “either do not exist or are restricted to very specific and undetermined circumstances.”

Paap started looking into bilingualism in 2009, having spent 30 years studying the psychology of language. He began by trying to replicate some seminal experiments, including a classic 2004 paper by Bialystok involving the Simon task. In that task, volunteers press two keys in response to colored objects on a screen—for example, right key for red objects, left for green. People react faster if the position of the keys and objects match (red object on right half of the screen) than if they don’t (red object on left). But Bialystok found that twenty Tamil-English bilinguals from India were faster and more accurate at these mismatched trials than twenty English-speaking monolinguals from Canada. They were better at suppressing the location of the objects and focusing on their color—a sign of superior executive function.

“It was a really exciting finding and one that I thought would be easy to study with my students,” says Paap. “But we just couldn’t replicate any of the effects.” After years of struggling, he published his results in 2013: three studies, 280 local college students, four tests of mental control including the Simon task, and no sign of a bilingual advantage. “That broke the dam,” he says. “Others started submitting negative results and getting their articles published.”

Jon Andoni Duñabeitia, a cognitive neuroscientist at the Basque Center on Cognition, Brain, and Language, was one of them. In two large studies, involving 360 and 504 children respectively, he found no evidence that Basque kids, raised on Basque and Spanish at home and at school, had better mental control than monolingual Spanish children. “I am a multilingual researcher working in a multilingual society,” says Duñabeitia. “I’d be very happy to see an advantage for bilinguals! But science is what it is. We find no difference and we have replicated it several times, in older adults, kids, and young adults at university.”

For example, one group of researchers analyzed 104 abstracts on bilingualism that were presented at scientific conferences. They found that 68 percent of abstracts that found an executive-function advantage were eventually published in journals, compared to just 29 percent that found no advantage. This publication bias, a common problem in psychology and science as a whole, means that the evidence for the phenomenon seems stronger than it actually is.

But Paap doesn’t think much of the published evidence either. He found that a bilingual advantage only shows up in one in six tests of executive function, and mostly in small studies involving 30 or fewer volunteers. The largest studies, involving a hundred or more, all found negative results.

The proponents of bilingualism as a cognitive benefit have reacted angrily. Read the whole thing. But it’s probably not a real strong effect if there is any at all. Just another battle in the replication wars….

 
• Category: Science • Tags: Psychology 


A new paper reporting results on life satisfaction, intelligence, and how often one socializes has generated some mainstream buzz. For example, at The Washington Post, Why smart people are better off with fewer friends. I looked at the original paper: Country roads, take me home… to my friends: How intelligence, population density, and friendship affect modern happiness. The figure above shows the interaction effect between intelligence, life satisfaction, and the number of times you meet up with friends over the week. What you see is that among the less intelligent, more interactions mean more life satisfaction, while among the more intelligent you see the reverse.

But take a look at the y-axis. It cuts off at 4.10. The scale is: 1 = very dissatisfied, 2 = dissatisfied, 3 = neither satisfied nor dissatisfied, 4 = satisfied, and 5 = very satisfied. The effect here is very small. The less intelligent group had a mean IQ of 81. This is over 1 standard deviation below the norm, at about the 10th percentile. The intelligent group had a mean IQ of 115, 1 standard deviation above the norm, so at the 84th percentile. When looking at the two groups divided between the prosocial (nearly 1 interaction per day) and antisocial (about 2 per week), the Cohen’s d for the low IQ group was 0.05 and for the high IQ group was 0.03. A d of 1 would mean one standard deviation difference between the two distributions in life satisfaction. In other words, the difference here is very minor.
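To put d ≈ 0.05 in perspective, here is a minimal sketch of how Cohen’s d is computed. The group means and standard deviation below are invented for illustration on the 1–5 satisfaction scale; they are not taken from the paper.

```python
import numpy as np

def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_var = ((nx - 1) * np.var(x, ddof=1) + (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2)
    return (np.mean(x) - np.mean(y)) / np.sqrt(pooled_var)

rng = np.random.default_rng(0)
# Hypothetical life-satisfaction scores on the 1-5 scale, SD ~ 0.8.
frequent   = rng.normal(4.14, 0.8, 5000)  # ~1 interaction per day
infrequent = rng.normal(4.10, 0.8, 5000)  # ~2 interactions per week

print(round(cohens_d(frequent, infrequent), 3))  # ~0.05: a 0.04-point shift on a 5-point scale
```

A difference of that size would be invisible to anyone looking at the raw answers people give on the survey.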

The authors corrected for a bunch of variables, like sex, marital status, education, and ethnicity. But the data were from the NLSY, so the mean age was about 22. I wonder if the results would be different if you had an older age cohort. The authors themselves are quite guarded about their interpretation: “Given that our data are correlational and frequency of socialization with friends and life satisfaction were measured at the same time, we cannot rule out an opposite causal order to what we hypothesize, where happier people choose to socialize with their friends more frequently.”

The study may be reporting a true result, even if the effect is modest. But I’m quite confident that my inverted title may also be correct, though again, I suspect the effect will be modest. These are not actionable results for anyone. That is all.

 
• Category: Science • Tags: Psychology 

I’ve been rather bearish on candidate gene studies of human behavior (e.g., “hug gene” or “violence gene”) since 2007, mostly because of the influence of friends who warned me that a lot of false positive results were being published simply because they could be published. Basically you might have one group publish on a plausible candidate gene, and other groups would follow up and publish only when p < 0.05, neglecting all the null results.
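A toy simulation of that file-drawer dynamic (the sample sizes and study counts are made up): when the true effect of a “candidate gene” is zero, roughly 5% of studies still clear p < 0.05, and if only those get published the literature looks like consistent support for an inflated effect.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_studies, n_per_group = 1000, 50
published = []

for _ in range(n_studies):
    # The true effect of the "candidate gene" on the behavior is zero.
    carriers = rng.normal(0.0, 1.0, n_per_group)
    noncarriers = rng.normal(0.0, 1.0, n_per_group)
    t, p = stats.ttest_ind(carriers, noncarriers)
    if p < 0.05:  # only "significant" results reach the journals
        published.append(abs(carriers.mean() - noncarriers.mean()))

print(f"{len(published)} of {n_studies} null studies published")  # ~5%
print(f"mean published 'effect': {np.mean(published):.2f} SD")     # ~0.4-0.5 SD, all spurious
```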

I'm a believer that much of the variation in behavior has a biological basis in variation in genes. But I only put my faith in science that is robust and replicable. With that, Is there a publication bias in behavioral intranasal oxytocin research on humans? Opening the file drawer of one lab:

The neurohormone oxytocin (OT) has been one the most studied peptides in behavioral sciences over the past two decades. Primarily known for its crucial role in labor and lactation, a rapidly growing literature suggests that intranasal OT (IN-OT) may also play a role in humans’ emotional and social lives. However, the lack of a convincing theoretical framework explaining IN-OT’s effects that would also allow to predict which moderators exert their effects and when, has raised healthy skepticism regarding the robustness of human behavioral IN-OT research. The poor knowledge of OT’s exact pharmacokinetic properties, crucial statistical and methodological issues and the absence of direct replication efforts may have lead to a publication bias in IN-OT literature with many unpublished studies with null results lying in laboratories’ drawers. Is there a file drawer problem in IN-OT research? If this is the case, it may also be the case in our laboratory. This paper aims to answer that question, document the extent of the problem and discuss its implications for OT research. Through eight studies (including 13 dependent variables overall, assessed through 25 different paradigms) performed in our lab between 2009 and 2014 on 453 subjects, results were too often not those expected. Only five publications emerged from our studies and only one of these reported a null-finding. After realizing that our publication portfolio has become less and less representative of our actual findings and because the non-publication of our data might contribute to generating a publication bias in IN-OT research, we decided to get these studies out of our drawer and encourage other laboratories to do the same.

 
• Category: Science • Tags: Psychology 

The below is ~2 minutes from Julia Galef. Not really about Richard Dawkins per se. I’m thinking on this. Strikes me as important.

 
• Category: Ideology • Tags: Psychology 

A few years ago when I reviewed The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us, I joked that it was the anti-Malcolm Gladwell manifesto. The joke was only half serious. Chris Chabris and Daniel Simons presented in their book serious arguments which weren’t sexy and offered no easy shortcuts. As such it is no surprise that Gladwell is still rolling in the money, while Chabris and Simons are respected academics, though not public intellectuals of the same magnitude (the irony is that arguably they are intellectuals in a more substantive way than their famous bête noire). An even more egregious science popularizer than Gladwell was Jonah Lehrer (not surprisingly, Jonah was something of a protege of Gladwell). Aside from the admitted fabrications, Chabris has long pointed out that Lehrer seems to purposely misrepresent or misunderstand the process of science, taking isolated studies and stitching them together to support novel and counter-intuitive theses which might sell copies of books (it was ironic that he wrote a long piece for The New Yorker on problems with replication).

The fact that you shouldn’t hinge your perception about the validity of a hypothesis on one study isn’t an issue for most scientists. They know how science works. It’s a noisy process, with lots of fits and starts, and consensus emerges slowly, and is periodically overturned or extended. There’s a reason that John Ioannidis’ Why Most Published Research Findings Are False is highly cited. There are thousands and thousands of studies published every year. If you want, you can search through the stack and find “peer reviewed research” to support nearly any proposition. The issue isn’t whether there are scholars willing to support your position, but what the scholarly consensus is, if there is one.

All this came to mind when I saw this blog post, A Trick For Higher SAT scores? Unfortunately no. The short of it is that a few years ago the author read Thinking, Fast and Slow, by Daniel Kahneman, a Nobel Prize winner. He reported with excitement results from a study in which priming individuals to focus harder via less legible fonts substantially increased their cognitive performance. The reason why this study’s results are important is obvious to anyone: increasing median cognitive performance is a social good (this is why we put iodine in salt to combat cretinism).

Though Kahneman is a great scholar, most people are not going to know about this study from him. Rather, Malcolm Gladwell used the study in David and Goliath: Underdogs, Misfits, and the Art of Battling Giants to illustrate one of his points. Unfortunately Gladwell is a big deal for many people. Though I quite liked The Tipping Point when it came out, over the years I’ve come to see that Gladwell is less a communicator of scholarship than a storyteller who sells intellectually-themed yarns. Gladwell hasn’t seen a sample size that dissuades him from reporting enthusiastically on a result with a marginally significant p-value, so long as it supports one of his story arcs.

Three years on, the author of the blog post and one of the original authors of the paper have a follow-up publication in which they report no effect at all from the priming with less clear fonts. The sample size of the original study was 40. The follow-up: 7,000 total (they pooled multiple studies). The author of the blog post ends on a down note:

I expect that the false story as presented by Professor Kahneman and Malcolm Gladwell will persist for decades. Millions of people have read these false accounts. The message is simple, powerful, and important. Thus, even though the message is wrong, I expect it will have considerable momentum (or meme-mentum to paraphrase Richard Dawkins).

Probably descriptively correct. But you can do something about it. Be the asshole at the party who points out that the “latest research” your friend has read in the current issue of The New Yorker is most likely crap, especially if it is counter-intuitive and supports your group’s normative priors. (Yes, I am usually that asshole in real life too.)

Note: the reason I say irrelevant, rather than false or wrong, is that a lot of research is a trivial improvement on an already established consensus, even when the results are robust.

 
• Category: Science • Tags: Psychology, Science 

From what people tell me IQ is a social construct which is totally controlled by environmental variables, and so is not of much interest. But curiously the other day when I looked at the hits on this website over the past 3+ years a huge number of highly accessed posts had to do with intelligence and IQ. In any case, seeing as how many readers of this weblog are having, or going to have, children at a relatively advanced age (in an evolutionary sense) I thought this post would be a good public service announcement. Below is a figure from a preprint posted on arXiv, The effect of paternal age on offspring intelligence and personality when controlling for paternal trait level (via Haldane’s Sieve):


I’m assuming that there’s initially an upward slope because more intelligent men tend to reproduce later (you can confirm this by looking at the AGEKDBRN and WORDSUM variables in the GSS). Once you control for education and IQ the effect disappears. But there isn’t a downward slope, which you might predict if the hypothesis of increased mutational load were valid. IQ is a highly polygenic trait with variation likely controlled by thousands of genes, but one would presume that large-effect de novo variants could change that architecture.
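A toy simulation of the confound described above; all parameter values are invented. If paternal IQ raises both age at reproduction and offspring IQ, a raw regression of offspring IQ on paternal age shows a spurious positive slope that disappears once paternal IQ is controlled.

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(2)
n = 20_000
father_iq = rng.normal(100, 15, n)
# Smarter fathers tend to reproduce later (invented strength of association).
father_age = 28 + 0.1 * (father_iq - 100) + rng.normal(0, 5, n)
# Offspring IQ depends on father's IQ but, in this toy model, not on his age.
child_iq = 100 + 0.4 * (father_iq - 100) + rng.normal(0, 12, n)

raw = sm.OLS(child_iq, sm.add_constant(father_age)).fit()
adj = sm.OLS(child_iq, sm.add_constant(np.column_stack([father_age, father_iq]))).fit()

print(raw.params[1])  # positive slope on paternal age (confounded)
print(adj.params[1])  # ~0 once paternal IQ is controlled
```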

As always, more data is welcome.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Behavior Genetics, IQ, Psychology 

Sir Francis Galton

Modern evolutionary genetics owes its origins to a series of intellectual debates around the turn of the 20th century. Much of this is outlined in Will Provine’s The Origins of Theoretical Population Genetics, though a biography of Francis Galton will do just as well. In short, what happened is that during this period there were conflicts between the heirs of Charles Darwin as to the nature of inheritance (an issue Darwin left muddled from what I can tell). On the one side you had a young coterie around William Bateson, the champion of Gregor Mendel’s ideas about discrete and particulate inheritance via the abstraction of genes. Arrayed against them were the acolytes of Charles Darwin’s cousin Francis Galton, led by the mathematician Karl Pearson and the biologist Walter Weldon. This school of “biometricians” focused on continuous characteristics and Darwinian gradualism, and they are arguably the forerunners of quantitative genetics. There is some irony in their espousal of a “Galtonian” view, because Galton was himself not without sympathy for a discrete model of inheritance!

William Bateson

In the end science and truth won out. Young scholars trained in the biometric tradition repeatedly defected to the Mendelian camp (e.g., Charles Davenport). Eventually, R. A. Fisher, one of the founders of modern statistics and evolutionary biology, merged both traditions in his seminal paper The Correlation between Relatives on the Supposition of Mendelian Inheritance. The intuition for why Mendelism does not undermine classical Darwinian theory is simple (granted, some of the original Mendelians did seem to believe that it was a violation!). Many discrete genes of moderate to small effect upon a trait can produce a continuous distribution via the central limit theorem. In fact classical genetic methods often had difficulty perceiving traits with more than a half dozen significant loci as anything but quantitative and continuous (consider pigmentation, which we know through genomic methods to vary across populations mostly due to a half dozen or so segregating genes).
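A quick sketch of that intuition, with arbitrary allele frequencies and effect sizes: sum enough discrete Mendelian loci of small effect and the phenotype comes out looking like a smooth bell curve.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
n_individuals, n_loci = 100_000, 100
p = rng.uniform(0.1, 0.9, n_loci)    # allele frequency at each locus
effects = rng.normal(0, 1, n_loci)   # small additive effect per allele copy

# Genotypes: 0, 1, or 2 copies of the "+" allele at each discrete Mendelian locus.
genotypes = rng.binomial(2, p, size=(n_individuals, n_loci))
phenotype = genotypes @ effects + rng.normal(0, 1, n_individuals)  # plus a little noise

# The weighted sum of many discrete loci is approximately normal (central limit theorem).
print(round(stats.skew(phenotype), 3), round(stats.kurtosis(phenotype), 3))  # both near 0
```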

Notice here I have not said a word about DNA. That is because 40 years before the understanding that DNA was the substrate of genetic inheritance scientists had a good grasp of the nature of inheritance through Mendelian processes. The gene is fundamentally an abstract unit, an analytic element subject to manipulation which allows us to intelligibly trace and predict patterns of variation across the generations. It so happens that the gene is instantiated in a material sense through sequences of the biomolecule DNA. This is very important. Because we know the material basis of modern genetics it is a much more fundamental science than economics (economics remains mired in its “biometric age!”).

The “post-genomic era” is predicated on industrial scale analysis of the material basis of genetics in the form of DNA sequence and structure. But we shouldn’t confuse DNA, concrete bases, with classical Mendelism. A focus on the material and concrete is not limited to genetics. In the mid-2000s there was a fad for cognitive neuroscience fMRI studies, which were perceived to be more scientific and convincing than classical cognitive scientific understandings of “how the mind works.” In the wake of the recession of fMRI “science” due to serious methodological problems we’re left to fall back on less sexy psychological abstractions, which may not be as simply reduced to material comprehension, but which have the redeeming quality of being informative nonetheless.

This brings me to the recent paper on SNPs associated with education in a massive cohort, GWAS of 126,559 Individuals Identifies Genetic Variants Associated with Educational Attainment. You should also read the accompanying FAQ. The bottom line is that the authors have convincingly identified three SNPs which together explain 0.02% of the variation in educational attainment across their massive data set. Pooling all of the SNPs with some association they get ~2% of the variation explained. This is not particularly surprising. A few years back one of the authors on this paper wrote Most Reported Genetic Associations with General Intelligence Are Probably False Positives. Those with longer memories in human genetics warned me of this issue in the early 2000s. More statistically savvy friends began to warn me in 2007. At that point I began to caution people who assumed that genomics would reveal the variants which are responsible for normal variation in intelligence, because it seemed likely that we might have to wait a lot longer than I had anticipated. As suggested in the paper above, previous work strongly implied that the genetic architecture of intelligence is one where the variation on the trait in the normal range is controlled by innumerable alleles of small effect segregating in the population. Otherwise classical genetic techniques may have been able to detect the number of loci with more surety. If you read The Genetics of Human Populations you will note that using classical crossing techniques and pedigrees geneticists did in fact converge upon approximately the right number of loci segregating to explain the pigmentation difference between Europeans and Africans 60 years ago!
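For intuition about why a handful of genome-wide significant SNPs explains so little, here is the standard additive-model arithmetic. The allele frequency and per-allele effect below are hypothetical, chosen only to land near the reported order of magnitude.

```python
# Under an additive model, a biallelic SNP with allele frequency p and an effect
# of beta (in phenotypic standard deviations per allele copy) explains
#   2 * p * (1 - p) * beta**2
# of the trait variance.

p, beta = 0.3, 0.015           # hypothetical values
var_explained = 2 * p * (1 - p) * beta**2
print(f"{var_explained:.6f}")  # ~0.0001, i.e. roughly 0.01% of the variance per SNP
```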

Some of my friends have been arguing that the small effect sizes here validate the position that intelligence variation is mostly a function of environment. This is a complicated issue, and first I want to constrain the discussion to developed Western nations. It is an ironic aspect that arguably intelligence is most heritable among the most privileged. By heritable I mean the component of variation of the trait controlled by genes. When you remove environmental variation (i.e., deprivation) you are left with genetic variation. Within families there is a great deal of I.Q. difference across siblings. The correlation is about 0.5. Not bad, but not that high. Of course some of you may think that I’m going to talk about twin studies now. Not at all! Though, contrary to what science journalists who seem to enjoy engaging in malpractice (like Brian Palmer of Slate) appear to think, classical techniques have to a great extent been validated by genomics, it is by looking at unrelated individuals that some of the most persuasive evidence for the heritability of intelligence has been established. It is no coincidence that one of the major authors of the above study is also an author on the previous link. There is no contradiction in acknowledging difficulties in assessing the concrete material loci of a trait’s variation even if one can confidently infer that the variation is heritable. There was genetics before DNA. And there is heritability even without specific SNPs.
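The “unrelated individuals” approach alluded to above can be sketched with a toy Haseman-Elston-style regression; this is a simplified stand-in for methods like GCTA, and the heritability, sample size, and marker count are invented. Even though no single SNP is individually detectable, the slope of pairwise phenotypic similarity on realized genomic similarity recovers the simulated heritability.

```python
import numpy as np

rng = np.random.default_rng(4)
n, m, h2 = 2000, 5000, 0.5                            # individuals, SNPs, assumed heritability

p = rng.uniform(0.05, 0.95, m)
geno = rng.binomial(2, p, size=(n, m)).astype(float)
X = (geno - 2 * p) / np.sqrt(2 * p * (1 - p))         # standardized genotypes

beta = rng.normal(0, np.sqrt(h2 / m), m)              # a tiny effect at every SNP
y = X @ beta + rng.normal(0, np.sqrt(1 - h2), n)
y = (y - y.mean()) / y.std()

A = X @ X.T / m                                       # genomic relationship matrix
iu = np.triu_indices(n, k=1)
rel, sim = A[iu], np.outer(y, y)[iu]                  # pairwise relatedness vs phenotypic products

slope = np.polyfit(rel, sim, 1)[0]
print(round(slope, 2))  # close to the simulated 0.5, without knowing any causal SNP
```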

Additionally, I want to add one caveat about the “environmental” component of variation. For technical reasons this environmental component may actually include relatively fixed biological variables. Gene-gene interactions and developmental stochasticity come to mind. Though these are difficult or impossible to predict from parent-offspring correlations, they are not as simple as removing lead from the environment of deprived children. My own suspicion is that the large variation in intelligence across full siblings tells us a lot about the difficult-to-control-and-channel nature of “environmental” variation.

Finally, I want to point out that even small effect loci are not trivial. The authors mention this in their FAQ, but I want to be more clear, Small genetic effects do not preclude drug development:

Consider a trait like, say, cholesterol levels. Massive genome-wide association studies have been performed on this trait, identifying a large number of loci of small effect. One of these loci is HMGCR, coding for HMG-CoA reductase, an important molecule in cholesterol synthesis. The allele identified increases cholesterol levels by 0.1 standard deviations, meaning a genetic test would have essentially no ability to predict cholesterol levels. By the logic of the Newsweek piece, any drug targeted at HMGCR would have no chance of becoming a blockbuster.

Any doctor knows where I’m going with this: one of the best-selling groups of drugs in the world currently are statins, which inhibit the activity of (the gene product of) HMGCR. Of course, statins have already been invented, so this is something of a cherry-picked example, but my guess is that there are tens of additional examples like this waiting to be discovered in the wealth of genome-wide association study data. Figuring out which GWAS hits are promising drug targets will take time, effort, and a good deal of luck; in my opinion, this is the major lesson from Decode (which is not all that surprising a lesson)–drug development is really hard

Addendum: Most of my friends, who have undergraduate backgrounds in biology and have taken at least some quantitative genetics, seem to guess the heritability of I.Q. to be 0.0 to 0.20. This is just way too low. But is it even important to know this? I happen to think an accurate picture of genetic inheritance is probably useful when assessing prospects of mates….

Citation: Rietveld, Cornelius A., et al. “GWAS of 126,559 Individuals Identifies Genetic Variants Associated with Educational Attainment.” Science (New York, NY) (2013).

(Republished from Discover/GNXP by permission of author or representative)
 

Prompted by my post, Ta-Nehisi Coates reached out to Neil Risch for clarification on the nature (or lack thereof) of human races. All for the good. The interview is wide-ranging, and I recommend you check it out. Read the comments too! Very enlightening (take that however you want).

When it comes to this debate I have focused on the issue of population substructure, or race. The reason is simple. Due to Lewontin’s Fallacy it is widely understood among the “well informed general public” that “biology has disproved race.” Actually, this is a disputable assertion. For a non-crank evolutionary biologist who is willing to defend the race concept for humans, see Jerry Coyne. When you move away from the term “race,” then you obtain even more support from biologists for the proposition that population structure matters. For example, a paper in PLoS GENETICS which came out last week: Analysis of the Genetic Basis of Disease in the Context of Worldwide Human Relationships and Migration. In other words, it is useful to understand the genetic relationships of populations, and individual population identity, because traits correlate with population history. Barring total omniscience population history will always probably matter to some extent, because population history influences suites of traits. If nothing in evolutionary biology makes sense except in light of phylogeny, much of human biology is illuminated by phylogeny.

But that doesn’t speak to the real third rail, intelligence. Very few people are offended by the idea of the correlation between lactase persistence and particular populations. Neil Risch says in the interview with Coates:

One last question. Your paper on assessing genetic contributions to phenotype, seemed skeptical that we would ever tease out a group-wide genetic component when looking at things like cognitive skills or personality disposition. Am I reading that right? Are “intelligence” and “disposition” just too complicated?

Joanna Mountain and I tried to explain this in our Nature Genetics paper on group differences. It is very challenging to assign causes to group differences. As far as genetics goes, if you have identified a particular gene which clearly influences a trait, and the frequency of that gene differs between populations, that would be pretty good evidence. But traits like “intelligence” or other behaviors (at least in the normal range), to the extent they are genetic, are “polygenic.” That means no single genes have large effects — there are many genes involved, each with a very small effect. Such gene effects are difficult if not impossible to find. The problem in assessing group differences is the confounding between genetic and social/cultural factors. If you had individuals who are genetically one thing but socially another, you might be able to tease it apart, but that is generally not the case.

In our paper, we tried to show that a trait can appear to have high “genetic heritability” in any particular population, but the explanation for a group difference for that trait could be either entirely genetic or entirely environmental or some combination in between.

So, in my view, at this point, any comment about the etiology of group differences, for “intelligence” or anything else, in the absence of specific identified genes (or environmental factors, for that matter), is speculation.

In response to this, commenter Biologist states (note, I know who this is, and they are a biologist!):

Risch writes: “…the explanation for a group difference for that trait could be either entirely genetic or entirely environmental or some combination in between. … So, in my view, at this point, any comment about the etiology of group differences, for “intelligence” or anything else, in the absence of specific identified genes (or environmental factors, for that matter), is speculation.”

This is essentially correct. The quality of available evidence on which to estimate the contribution of genetic versus environmental factors to group differences in cognitive ability scores is quite poor by biomedical research standards — maybe more in line with standards for social science (I’m only half joking).

In light of that, one is forced to fall back on one’s priors. Without trying to speak for Risch, it is generally considered appropriate to adopt a uniform prior in the absence of other evidence. Under a uniform prior, “…the explanation for a group difference for that trait could be either entirely genetic or entirely environmental or some combination in between.” Maybe Risch would propose a different prior.

In fact, the uniform prior says that there’s a 25% chance that the explanation is 0% to 25% genetic, a 50% chance that the explanation is 25% to 75% genetic, and a 25% chance that the explanation is 75% to 100% genetic. Obviously many people who write about this topic do not adopt a uniform prior. [my emphasis -Razib]

As Risch observes above intelligence is highly polygenic. There’s a fair amount of genomic evidence for this now. In other words the likelihood is not high that we will be able to account for the differential distribution of IQ between any two populations by differences in allele frequencies. Even if we do find the allelic differences, they’ll account for far too little of the variation in the trait. But there is another way we can get at the issues. Others have pointed out exactly how we can get more clarity on the race and IQ question before, so I’m not being original. And since I suspect that within the next decade this sort of analysis will likely be performed at some point somewhere because the methods are so simple, I might as well be explicit about it.

Let’s focus on the black-white case in the American context. On intelligence tests the average black American scores a bit less than 1 standard deviation below the average white American. As I’ve observed before the average black American is ~20% European, but there is variation around this value. Because the admixture is relatively recent (median ~150 years before the present) there is a wide range of ancestry across the population. In fact, the admixture is recent enough that siblings may even differ in the amount of European ancestry at the genomic level. An additional issue which is of relevance is that the correlation between ancestry and physical appearance in admixed populations is modest. By this, I mean that there are many individuals in the African American population who are more European in ancestry yet have darker skins and more African features than those who have less European ancestry. Obviously on average more European ancestry predicts a more European appearance, but this is true only on average. There are many exceptions to this trend.

At this point many of you should have anticipated where I’m going. If the gap between blacks and whites on psychometric tests is totally driven by genetic differences between Africans and Europeans, then the gap should be obvious between pools of individuals of varying levels of European ancestry within the African American population. It seems unlikely that it would be that simple (i.e., all driven by genes without any sensitivity to environmental inputs or context). Therefore I suspect some design where you compare siblings would be more informative.

In a model where all of the between-group differences are due to environmental inputs, genomic ancestry within family should add little in terms of prediction of the phenotype. More plainly, when accounting for other variables which might correlate with ancestry (e.g., skin color), how African or European a sibling is should not influence outcomes on psychometric tests when looking at large cohorts of sibling pairs, if those differences track nothing more than the social construction/perception of race. If on the other hand there are many alleles of small effect distributed throughout the genome, correlated with geographic ancestry, which affect the final phenotype, then adding ancestry as an independent variable to the model should be informative. This sort of indirect inference has already been performed with a character similar in genetic architecture to intelligence: height. Researchers have found that African Pygmies with more non-Pygmy ancestry are taller.
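A sketch of the design being proposed. The column names (test_score, ancestry_prop, skin_reflectance, family_id) and the input file are placeholders, not any real dataset: within an admixed cohort of sibling pairs, ask whether genomic ancestry adds predictive power once appearance and shared family background are accounted for.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per individual from an admixed sibling-pair cohort, with
# hypothetical columns test_score, ancestry_prop (genomic European ancestry fraction),
# skin_reflectance (a proxy for perceived race), and family_id.
df = pd.read_csv("admixed_siblings.csv")  # placeholder file name

# Environmental-only model: appearance plus family fixed effects (within-family comparison).
env_only = smf.ols("test_score ~ skin_reflectance + C(family_id)", data=df).fit()

# Add genomic ancestry. Under a purely environmental account of the group gap its
# coefficient should be ~0; under a partly genetic account it should not be.
with_ancestry = smf.ols(
    "test_score ~ skin_reflectance + ancestry_prop + C(family_id)", data=df
).fit()

print(env_only.rsquared, with_ancestry.rsquared)
print(with_ancestry.params["ancestry_prop"], with_ancestry.pvalues["ancestry_prop"])
```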

Ultimately I say that this issue might semi-resolve, because I think a hereditarian position in terms of group differences is not going to be tenable if the correlations with ancestry do not run in the direction expected within admixed populations. This sort of model is relatively straightforward in its predictions, and appeals to parsimony. Trying to salvage it with non-additive genetic variance is going to complicate matters. In contrast, those who champion the opposite position often dispute the very characterization of intelligence as a trait in the first place, so I presume that they would still exhibit skepticism if there was a correlation between genomic ancestry and the trait.

Addendum: I want to be clear: with the widespread availability of data sets and crappy security of said data sets this analysis is probably a few SQL joins away in 10 years.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Genetics, I.Q., Intelligence, Psychology, Race 

As an aside in a fascinating City Journal piece on educational policy, A Wealth of Words:

Vocabulary doesn’t just help children do well on verbal exams. Studies have solidly established the correlation between vocabulary and real-world ability. Many of these studies examine the Armed Forces Qualification Test (AFQT), which the military devised in 1950 as an entrance requirement and a job-allocating device. The exam consists of two verbal sections (on vocabulary size and paragraph comprehension) and two math sections. The military has determined that the test predicts real-world job performance most accurately when you double the verbal score and add it to the math score. Once you perform that adjustment, according to a 1999 study by Christopher Winship and Sanders Korenman, a gain of one standard deviation on the AFQT raises one’s annual income by nearly $10,000 (in 2012 dollars). Other studies show that much of the disparity in the black-white wage gap disappears when you take AFQT scores—again, weighted toward the verbal side—into account.


Are we surprised that high verbals can talk themselves into more generous remuneration? But in any case the power of vocabulary is why I believe that the correlation between the GSS vocabulary score and IQ is probably robust. And speaking of vocabulary, the author alludes to the now well known phenomenon that children from low socioeconomic status backgrounds tend to have a much smaller vocabulary than those from higher socioeconomic status backgrounds, and how that leads to a positive feedback loop that determines life trajectory. The main confound that comes to mind is that those from low-vocab households are probably also from less intelligent households, and intelligence is heritable. But proactive social engineering probably can break apart the gene-environment correlation at least, and dampen the variance in phenotypic outcomes.

And this is where the policy prescriptions may not be to anyone’s liking. On the one hand this social engineering is social engineering, and probably will cost money. Conservatives will not like that. But, I also suspect that much of the positive value of a non-home environment is going to be abolished as the child matures and begins to self-select peer groups from their own socioeconomic milieu. In other words you need to attack the milieu, the culture of poverty and anti-intellectualism. And I suspect many liberals will not be comfortable with the aggressive paternalism that that implies. So nothing will get done.

Addendum: Again, the best thing you can do to have smart well behaved children is to select a spouse with those characteristics.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Psychology, Psychometrics 

A paper on the psychology of religious belief, Paranormal and Religious Believers Are More Prone to Illusory Face Perception than Skeptics and Non-believers, came onto my radar recently. I used to talk a lot about the theory of religious cognitive psychology years ago, but the interest kind of faded when it seemed that empirical results were relatively thin in relation to the system building (Ara Norenzayan’s work being an exception to this generality). The theory is rather straightforward: religious belief is a naturally evoked consequence of the general architecture of our minds. For example, gods are simply extensions of persons, and make natural sense in light of our tendency to anthropomorphize the world around us (this may have had evolutionary benefit, in that false positives for detection of other agents were far less costly than false negatives; think an ambush by a rival clan).*

 

But enough theory. Are religious people cognitively different from those who are atheists? I suspect so. I speak as someone who never ever really believed in God, despite being inculcated in religious ideas from childhood. By the time I was seven years of age I realized that I was an atheist, and that my prior “beliefs” about God were basically analogous to Spinozan Deism. I had simply never believed in a personal God, but for many of my earliest years it was less a matter of disbelief than that I did not even comprehend, or cogently elaborate in my mind, the idea of this entity, which others took for granted as self-evidently obvious. From talking to many other atheists I have come to the conclusion that atheism is a mental deviance. This does not mean that mental peculiarities are necessary or sufficient for atheism, but they increase the odds.

And yet after reading the above paper my confidence in that theory is reduced. The authors used ~50 individuals, and attempted to correct for demographic confounds. Additionally, the results were statistically significant. But to me the above theory should make powerful predictions in terms of effect size. The differences between non-believers, the religious, and those who accepted the paranormal were just not striking enough for me.
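One way to formalize that worry about effect sizes in a study of roughly 50 people: with groups that small, only fairly large effects are detectable with decent power, so significant-but-modest differences deserve caution. The per-group sizes below are assumptions for illustration, not the paper’s actual cell counts.

```python
from statsmodels.stats.power import TTestIndPower

power = TTestIndPower()
for n_per_group in (17, 25, 50):
    # Smallest Cohen's d detectable with 80% power at alpha = 0.05 (two-sided t-test).
    d = power.solve_power(effect_size=None, nobs1=n_per_group, alpha=0.05,
                          power=0.8, ratio=1.0, alternative="two-sided")
    print(n_per_group, round(d, 2))
# ~17/group needs d ~ 1.0; ~25/group needs d ~ 0.8; even 50/group needs d ~ 0.57
```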

Because of theoretical commitments my prejudiced impulse was to accept these findings. But looking deeply within, they just aren’t persuasive in light of my prior expectations. This is a fundamental problem in much of social science. Statistical significance is powerful when you have a preference for the hypothesis forwarded. In contrast, the knives of skepticism come out when research is published which goes against your preconceptions.

So a question for psychologists: which results are robust and real, to the point where you would be willing to make a serious monetary bet on it being the orthodoxy in 10 years? My primary interest is cognitive psychology, but I am curious about other fields too.

* In Gods We Trust and Religion Explained are good introductions to this area of research.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Psychology, Psychology 

There are many things that a given individual believes which are ‘heterodox’ in their social circle. For example, I have long thought that intelligence tests are predictive of life outcomes, and somewhat heritable in a genetic sense (these are both true; the objection of skeptics usually rests on the fact that they are skeptical of the construct itself). As I have explained here before I did not always hold to these views. Rather, when I was in seventh grade a teacher who mentored me somewhat took me aside after class, and suggested that perhaps some of my slower classmates were not quite as lazy as I obviously presumed (I tended to get impatient during mandatory group projects). When I was 5 years old and starting kindergarten my command of English was rather weak, and my mother explained to me that Americans were a very smart people. By the end of the year I was excelling. Throughout my elementary school years I frankly had a smugness about me, because I accepted what my parents told me, that academic outcome is a function of the virtue of effort. And I had quite a bit of virtue if the results were any gauge.

But as I said, it is the fashion today to reject I.Q. Usually people put intelligence in air quotes. The converse of intelligence, stupidity, is also not well acknowledged. Just as I took my realized intelligence to be a mark of my virtue (false, my virtue and moral compass are distinct, and perhaps even at some cross-purposes, with my analytic powers), I perceived stupidity as evidence of sloth and low moral character. This is just not so.

I.Q. is probably a hot-potato topic because of its associations with realized group differences, mostly race, but to some extent class. I think that the phenomenon is real and important, but that may not matter. I’ve been sobered recently by the realization that Soviet Communism persisted for 70 years. I don’t bring this example up to analogize skepticism of I.Q. with Communism, but to illustrate that even patently grotesque and false views can persist for decades beyond their “sell-by” date. And yet sometimes it turns out that I’m not the only person out there who thinks that some people are smart, and some people are stupid. Here’s Felix Salmon, Who is speaking for the poor?:

My professional life is largely spent in a world of highly-numerate and highly-intelligent people, many of whom blow up spectacularly in the financial markets. And looking at hedge funds in particular, it’s very easy to find genius-level investors who have lost astonishing amounts of money: there’s clearly more to getting and holding on to vast sums than simply being off-the-charts smart. But the fact is that if you zoom out from the tiny group at the top, there’s a very strong correlation between numeracy, or intelligence, or financial literacy, on the one hand, and having a solid financial footing, on the other.

The distribution is clear: the smarter you are (as measured by IQ), the more likely you are to be invested in the stock market. And this distribution is independent of wealth: it applies to the rich as much as it does to the poor. Or, as the paper puts it, “IQ’s role in the participation decisions of the affluent is about the same as it is for the less affluent. The definition of affluence—net worth or income—does not affect this finding.”

There are various conclusions to be drawn here, one of which is that if we do a better job of financial education, then Americans as a whole will be better off. That’s true. But at the same time, financial illiteracy, and general innumeracy, and low IQs, are all perfectly common things which are never going to go away. It’s idiotic to try to blame people for having a low IQ: that’s not something people can control. And so it stands to reason that any fair society should look after people who are at such a natural disadvantage in life.

Let’s admit first that there’s more than just I.Q. Time preference matters, and that’s not perfectly correlated with intelligence. Though I suspect it too has a strongly heritable element. Second, is being stupid really a disadvantage? Frankly some of the most self-satisfied people I know are the stupid affluent. They are stupid enough that they can unreflectively enjoy their affluence. The correlation between income and intelligence is weak enough that there will be many stupid affluent and intelligent poor. The former are probably the happiest, and the latter the most miserable.

Also, see Matt Yglesias:

Unfortunately, what’s harder to see is how these trends are going to benefit the marginal college student in the United States. The kind of person, in other words, who these days tends to start a college career—typically at an unselective school—but all-too-often ends up dropping out. These are people who typically haven’t been incredibly well-prepared by their K-12 experience, who probably aren’t in the IQ elite, whose social and family networks aren’t full of college graduates, and who are only average in terms of motivation and discipline. That’s why they’re dropping out under present conditions. And they’re ending up not just with student debt, but with student debt that hasn’t purchased them much of anything in terms of valuable skills or credentials. Developments that help people like that are a real game-changer, but it’s not clear to me that anything that’s happening in the education technology space right now will really get us there.

The reality is that attitudes toward intelligence and I.Q. are rather flexible and situation dependent. People who deny the reality of I.Q. don’t believe that someone who has a low I.Q. should be executed (and conversely, those who accept I.Q. may still demand the execution of those with low enough I.Q.’s to be classified as mentally retarded!). I.Q. is just a social construct for some when it comes to the black-white difference, but they become more open to it when it is shown that conservatives have lower I.Q.’s.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Psychology 

In light of the previous post I was curious about the literature on inbreeding depression of IQ. A literature search led me to conclude two things:

- This is not a sexy field. A lot of the results are old.

- The range in depression for first cousin marriages seems to be on the order of 2.5 to 10 IQ points. In other words ~0.15 to ~0.65 standard deviation units of decline in intelligence.

The most extreme case was this paper from 1993, Inbreeding depression and intelligence quotient among north Indian children. The authors compared the children of first cousin marriages with non-inbred individuals, from a sample of Muslims in Uttar Pradesh of comparable socioeconomic status (though the authors note that inbreeding has a positive correlation with socioeconomic status in this community). A table with results speaks for itself:


(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: I.Q., Psychology 

One point which I’ve made on this weblog several times is that on a whole range of issues and behaviors people simply follow the consensus of their self-identified group. This group conformity probably has deep evolutionary origins. It is often much cognitively “cheaper” to simply utilize the heuristic “do what my peers do” than to reason from first principles. The “wisdom of the crowds” and “irrational herds” both arise from this dynamic, as its positive and negative manifestations. The interesting point is that from a proximate (game-theoretic rational actor) and ultimate (evolutionary fitness) perspective, ditching reason is often quite reasonable (in fact, it may be the only feasible option if you want to “understand,” for example, celestial mechanics).


If you’re faced with a complex environment or set of issues, “re-inventing the wheel” is often both laborious and impossible. Laborious because our individual general intelligence is simply not that sharp. Impossible because most of us are too stupid to do something like invent calculus. Many people can learn the rules for obtaining derivatives and integrals, but far fewer can come up with the fundamental theorem of calculus. Similarly, in the 18th century engineers who utilized Newtonian mechanics for practical purposes were not capable of coming up with Newtonian mechanics themselves. I’m using these two examples because calculus and mechanics are generally considered “high level” cognitive tasks, but even they at root illustrate the principle of collective wisdom and group conformity. Calculus and mechanics are included in the curriculum not because all of the individuals who decide the curriculum understand these two topics in detail, but because individuals whom they trust and believe are worthy of emulation and deference, as well as past empirical history, tell them that this is the “reasonable” way to go. (Science and engineering have the neat property that you don’t just trust people, you trust concrete results!)

This sort of behavior is even more evident in political and social viewpoints. Recently there have been signs of shifts in African American attitudes toward same-sex marriage, and a more general trend in that direction across the population. Is this because individuals are sitting in their armchairs and reflecting on justice? Of course people will enter into evidence the experience of knowing gay people, and the empathy which that generates, but are you willing to bet that these public policy shifts are primarily and independently driven simply by these sorts of dynamics? (i.e., run a regression and try to predict the change in attitude from the number of people coming out of the closet over time) Similarly, people like Chris Mooney have documented the shift among the Republican grassroots on issues like climate change, which seems to have moved very rapidly, likely due to elite cues rather than a deep analysis of the evidence.

But let’s look at something less controversial, at least on this weblog. Most people who accept evolution really don’t understand how it works, nor are they very conversant in the reasons for why evolutionary process is compelling. The vast majority of the 50 percent of Americans who accept evolution have not read Charles Darwin, nor could they tell you what the neo-Darwinian Synthesis is. They have not read Talk Origins, or Why Evolution is True. So why do they accept evolution? Because evolution, like Newtonian mechanics, is part of established science, and educated people tend to accept established science. But that’s conditional. If you look in the General Social Survey you notice a weird trend: the correlation between education and acceptance of evolution holds for those who are not Biblical literalists, but not for those who are Biblical literalists! Why? Because well educated Biblical literalists accept a different set of authorities on this issue. In their own knowledge ecology the “well-informed” perspective might actually be that evolution is a disputed area in science.

At this point everything is straightforward, more or less. But I want to push this further: most biologists do not understand evolution as a phenomenon, though they may be able to recall the basic evidence for evolution. If you are working in molecular biology, medical research, neuroscience, etc., there isn’t a deep need to understand evolutionary biology on a day to day basis on the bench (I would argue the rise of -omics is changing this some, but many labs have one or two -omics people to handle that aspect). The high rates of acceptance of evolution among researchers in these fields has less to do with reason, and more to do with the ecology of ideas which they inhabit. Evolutionary biologists in their own turn accept the basic structural outlines of how axons and dendrites are essential in the proper function of the brain without understanding all the details about action potentials and such. They assume that neuroscientists understand their domain.

So far I’ve been talking about opinions and beliefs that are held by contemporaries. The basic model is that you offload the task of reasoning about issues which you are not familiar with, or do not understand in detail, to the collective with which you identify, and give weight to specialists if they exist within that collective. I would submit that to some extent the same occurs across time as well. Why do we do X and not Y? Because in the past our collective unit did X, not Y. How persuasive this sort of argument is all things equal probably smokes out to some extent where you are on the conservative-liberal spectrum. Traditional conservatives argue that the past has wisdom through its organic evolution, and the trial and error of customs and traditions. This is a general tendency, applicable both to Confucius and Edmund Burke. Liberal utopians, whether Mozi or the partisans of the French Revolution, don’t put so much stock in the past, which they may perceive to be the font of injustice rather than wisdom. Instead, they rely on their reason in the here and now, more or less, to “solve” the problems which they believe are amenable to decomposition via their rational faculties.

Both methods of coming to a decision result in errors, at least in hindsight. I argue at Secular Right that American conservatives should just accept that they were on the wrong side of history on Civil Rights, just as 19th century conservatives were often on the wrong side of history on slavery. In fact, it is the latter case which is more interesting, because slavery was accepted as a viable institution in all civilized societies up until that era (even if it was perceived as an evil). Yet today we can agree that the collective wisdom of the ages was on some level wrong-headed.

Does that then mean that we should rush to every new enthusiasm and establish justice in our time? Obviously as someone who identifies as conservative I do not. Just as conservatives have been wrong in the past in relying upon the wisdom of the past, liberals have been wrong about their grasp of the details of the architecture of human reality in their own age. Though Edmund Burke defended institutions which we might consider retrograde, in broad strokes his criticisms of the excesses of the French Revolution were spot on. The regime which abolished slavery and emancipated Jews also ushered in an age of political violence which served as the template for radicals for generations. French Jews may have been more fully liberated before the law at an earlier period than British Jews, but were French Jews more accepted within French society one hundred years later than British Jews? More recently progressives and liberals accepted the necessity of coercive eugenics as part of the broader social consensus in the West (which only a few institutions, such as the Roman Catholic Church, resisted with any vigor). Obviously this specific reliance on reason and rational social engineering was perceived to be a failure. Less controversially, some of the excesses of the Great Society and the 1960s revolution in the United States in the area of social welfare and criminal justice seem to have exacerbated the anomie of the 1970s, which abated concomitantly with the rollback of the open-ended nature of the welfare state and tougher law & order policies in the 1990s. Even the most well-conceived experiments sometimes end up failing.

Whatever your political or social perspective, the largest takeaway is that attitudes toward complex issues which are relevant to our age are almost always framed by the delusion that reason, and not passion, has us by the leash. The New Right which championed the “pro-life” movement in the late 1970s, and the progressive Left which espouses “marriage equality” now, can all give individual reasons when prompted why there was a shift in opinion. But the reasons proffered will be interestingly invariant, as if people are reading off a collective script, which they are. Social milieus can sometimes crystallize consensus so quickly that individuals caught in the maelstrom of the new orthodoxy construct a whole internal rational edifice which justifies their conformity. This does not mean that the conformity and the viewpoints are frauds, just that as humans we tend to self-delude as to the causal chain by which we come to our conclusions.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Anthropology, Cognitive Science, Psychology 

Ed Yong has a piece in Nature on the problems of confirmation bias and replication in psychology. Yong notes that “It has become common practice, for example, to tweak experimental designs in ways that practically guarantee positive results.” The way this has been explained to me is that you perform an experiment and get a p-value greater than 0.05, i.e., no statistically significant result. You know that your hunch is warranted, so you just modulate the experiment and hope that the p-value comes in below 0.05, and you have publishable results! Obviously this is not just a problem in psychology; John Ioannidis has famously focused on medicine. But there’s a chart which shows that positive results are particularly prevalent in psychology.
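
To make the dynamic concrete, here is a minimal simulation sketch (my own illustration in Python, not anything from Yong’s piece or the studies it covers): both groups are drawn from the same distribution, and the “experimenter” keeps adding subjects and re-testing until the p-value dips below 0.05 or the sample runs out.

```python
# A minimal sketch of "tweak until significant": both groups are drawn from the same
# distribution, so every "significant" result below is a false positive. The experimenter
# peeks at the p-value and, if it misses, adds more subjects and tests again.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def one_study(n_start=20, n_max=60, step=10):
    """Return the final p-value after optional stopping (peek, top up, peek again)."""
    a = rng.normal(size=n_start)
    b = rng.normal(size=n_start)
    while True:
        p = stats.ttest_ind(a, b).pvalue
        if p < 0.05 or len(a) >= n_max:
            return p
        a = np.concatenate([a, rng.normal(size=step)])
        b = np.concatenate([b, rng.normal(size=step)])

n_studies = 2000
false_positives = sum(one_study() < 0.05 for _ in range(n_studies))
print(f"False-positive rate with optional stopping: {false_positives / n_studies:.3f}")
# A single fixed-n test would hover near the nominal 0.05; repeated peeking pushes it substantially higher.
```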

There are many angles to this story, but one which Ed did not touch upon is the political homogeneity of psychology as a discipline. The vast majority of psychologists are political liberals. This issue of false positive results being ubiquitous is pretty well known within psychology, so I’m sure that that’s one reason Jonathan Haidt has emphasized the ideological blinders of scholars so much. Let’s assume that the pool of false positives available to support a wide array of hypotheses is rather large. In other words, if you have the will, you can support many alternative hypotheses. How then do you support your hypothesis? In all likelihood, consciously or unconsciously, you are guided by normative considerations. From the pot of “statistically significant” results you just pick out the ones which align with your preferences.

All of this is one reason why I’m rather skeptical whenever I hear that a psychologist has dispassionately waded into a domain of study and come back with objective and incontrovertible evidence supporting their own position. I can go in and do that too. Or, more concretely, how hard has it been for you to find “sources” which support whichever crazy opinion you want to hold on Google?

Knowledge is hard.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Psychology 

There’s a new paper in PLoS ONE, The Distance Between Mars and Venus: Measuring Global Sex Differences in Personality*, which suggests that by measuring variation in single observed personality traits researchers are missing larger underlying patterns of difference. From the paper’s conclusion:

In conclusion, we believe we made it clear that the true extent of sex differences in human personality has been consistently underestimated. While our current estimate represents a substantial improvement on the existing literature, we urge researchers to replicate this type of analysis with other datasets and different personality measures. An especially critical task will be to compare self-reported personality with observer ratings and other, more objective evaluation methods. Of course, the methodological guidelines presented in this paper can and should be applied to domains of individual differences other than personality, including vocational interests, cognitive abilities, creativity, and so forth. Moreover, the pattern of global sex differences in these domains may help elucidate the meaning and generality of the broad dimension of individual differences known as “masculinity-femininity”…In this way, it will be possible to build a solid foundation for the scientific study of psychological sex differences and their biological and cultural origins.


I’m curious about the reaction of people in psychology to this result. The reason is that I am generally confused or skeptical about measurements of personality difference. I’m not confused or skeptical of differences in personality between individuals or groups. I agree that these exist. I just don’t have a good sense of the informativeness of the measures of difference. People may criticize psychometric intelligence testing all they want, but at least its methods are relatively clear.

From what I can gather the authors discovered that the differences between the sexes on personality were much clearer once you looked at the correlations across numerous individually measured traits. This strikes me as similar to what you see in population genetics when you move from variation in one gene across populations to many. While a single gene is not very informative in terms of population differences (e.g., the standard assertion that ~15 percent of variation is between races), synthesizing the variation of many genes allows one to easily distinguish populations, because the small between-population differences are correlated across genes. An analogy with traits makes understanding this easy. If you were told that population X tended toward black hair, that would not be very informative. Nor if you were told that population X tended toward straight hair. And what if you were told that population X tended toward light skin? All these traits are common across many different populations. But if you were told that population X tended toward straight black hair and light skin, the set of populations which intersects with all three traits together in this direction is far smaller than what any one trait implies on its own.
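
Here is a toy sketch of that intuition (my own made-up numbers and code, not the paper’s data or exact method): fifteen traits, each differing between two groups by only a modest 0.3 standard deviations, combine into a multivariate separation several times larger than any single trait shows.

```python
# A toy sketch: fifteen traits, each only weakly differentiated between two groups,
# combine into a much larger multivariate separation. Made-up numbers, not the paper's data.
import numpy as np

rng = np.random.default_rng(0)
n_traits, n_per_group = 15, 5000
shift = 0.3  # each trait differs by ~0.3 standard deviations between the groups

group_x = rng.normal(0.0, 1.0, size=(n_per_group, n_traits))
group_y = rng.normal(shift, 1.0, size=(n_per_group, n_traits))

# Per-trait standardized differences (Cohen's d): all small, heavy overlap on any one trait.
diff = group_y.mean(axis=0) - group_x.mean(axis=0)
pooled_sd = np.sqrt((group_x.var(axis=0, ddof=1) + group_y.var(axis=0, ddof=1)) / 2)
print("per-trait d:", np.round(diff / pooled_sd, 2))

# Multivariate (Mahalanobis-style) distance over all traits at once: much larger.
pooled_cov = np.cov(np.vstack([group_x - group_x.mean(axis=0),
                               group_y - group_y.mean(axis=0)]).T)
D = np.sqrt(diff @ np.linalg.solve(pooled_cov, diff))
print("multivariate D:", round(float(D), 2))  # roughly 1.2 here, vs ~0.3 for any single trait
```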

But in regard to the evolution of sex differences there is something that I feel I can say here. Humans seem to lie between other ape lineages in terms of physical dimorphism. For example, in size the difference between males and females is not as extreme as among gorillas, but not as equal as among gibbons. These differences are traditionally correlated with social structure. Gorillas are highly polygynous, and there is a great deal of male-male competition, therefore driving sexual selection. In contrast, gibbons tend toward monogamy (at least in the ideal; as with “monogamous birds” the reality seems to differ from the ideal).

But there is also an evolutionary genetic aspect to sexual dimorphism we must consider: in Genetics and Analysis of Quantitative Traits the authors note that the evolution of sex-specific traits is not going to occur fast. The reason is simple: aside from the peculiarities of the sex chromosomes, males and females are genetically the same. This implies that sex differences on the genetic level may emerge via modulation of gene expression across networks of genes tuned by some “master controllers” associated with differential sex development. All of this added complexity takes time to evolve, with the rough result that sex differences in trait value take about an order of magnitude longer than other traits to come to the fore. The intuition here is simple: if there is selection for large males, there will be selection for large daughters indirectly. Modifiers which dampen this effect need to emerge, so that sex-specific selection doesn’t have the side effect of dragging the other sex along in terms of trait value (this is a concern with traits, such as high testosterone, which might increase fitness in males but reduce it in their daughters). Therefore, if there are sex differences in behavioral tendencies which are biologically rooted (and I believe there are), they will tend to be universal across human societies and have a very deep evolutionary history.
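
A back-of-the-envelope sketch of the “dragging along” intuition, using a cartoon version of the standard two-sex quantitative-genetic response (my own invented parameter values, not anything taken from the book): selection acts on males only, but when the between-sex genetic correlation is near one, females respond almost as strongly, and the dimorphism itself barely moves each generation.

```python
# Toy two-sex response to selection: males are selected for larger size, females are not.
# With a between-sex genetic correlation r_mf near one, females get dragged along, and the
# dimorphism (male mean minus female mean) grows far more slowly than the means themselves.
# Illustrative parameter values only.
G_m, G_f = 1.0, 1.0        # additive genetic variance in each sex
beta_m, beta_f = 0.2, 0.0  # selection gradients: directional selection on males only

def evolve(r_mf, generations=50):
    B = r_mf * (G_m * G_f) ** 0.5  # between-sex additive genetic covariance
    z_m = z_f = 0.0
    for _ in range(generations):
        # Simplified two-sex response: each sex's change mixes its direct response
        # with the correlated response transmitted through the other sex.
        z_m += 0.5 * (G_m * beta_m + B * beta_f)
        z_f += 0.5 * (B * beta_m + G_f * beta_f)
    return z_m, z_f

for r in (0.0, 0.9, 0.99):
    z_m, z_f = evolve(r)
    print(f"r_mf={r:.2f}: male mean={z_m:.2f}, female mean={z_f:.2f}, dimorphism={z_m - z_f:.2f}")
```

With the correlation at 0.99, the male and female means both rise by nearly five units over fifty generations, while the dimorphism grows by only a few hundredths, which is the sense in which sex-specific trait values take far longer to diverge.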

So that would be the strategy to understand differences in personality across the sexes. Go beyond W.E.I.R.D. populations, as they did in this study. And look for traits where males and females seem to exhibit consistent differences across this range of social environments. I suspect environment does affect the magnitude of the differences, but I would be willing to bet money that some differences are going to persist (e.g., inter-personal violence is an area where males and females will differ due to size and personality).

* I’m really sick of the use of the Mars vs. Venus dichotomy in the scholarship.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Psychology, Sex Differences 

Several readers have pointed me to this amusing story, Court OKs Barring High IQs for Cops:

A man whose bid to become a police officer was rejected after he scored too high on an intelligence test has lost an appeal in his federal lawsuit against the city.

“This kind of puts an official face on discrimination in America against people of a certain class,” Jordan said today from his Waterford home. “I maintain you have no more control over your basic intelligence than your eye color or your gender or anything else.”

Jordan, a 49-year-old college graduate, took the exam in 1996 and scored 33 points, the equivalent of an IQ of 125. But New London police interviewed only candidates who scored 20 to 27, on the theory that those who scored too high could get bored with police work and leave soon after undergoing costly training.

The average score nationally for police officers is 21 to 22, the equivalent of an IQ of 104, or just a little above average.

But the U.S. District Court found that New London had “shown a rational basis for the policy.” In a ruling dated Aug. 23, the 2nd Circuit agreed. The court said the policy might be unwise but was a rational way to reduce job turnover.


First, is the theory empirically justified? If so, I can see where civil authorities are coming from. That being said, it’s obvious that there are some areas where “rational discrimination” is socially acceptable, and others where it is not. The same arguments used to be applied to women, in terms of the actuarial probabilities that they would get pregnant and so have to leave the workforce. And disparate impact always looms large in the utilization of these sorts of tests.

Second, can’t you just fake a lower score on an intelligence test? Do police departments hire statisticians to smoke out evidence of conscious selection of incorrect scores? I doubt it. Jordan may be smart, but perhaps he lacks common sense if the upper bound for IQ was well known.

My initial thought was that an IQ of 104 seemed too low for a median police officer, but poking around it does seem plausible as a descriptive statistic. Honestly I don’t have much acquaintance with the police, so I’ll trust the scholars on this. That being said, is it in our social interest for police officers to be so average? I don’t know. Though is it in the social interest that someone with an IQ as high as Robert Jordan’s ends up a prison guard?
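
For a back-of-the-envelope sense of those numbers (my own arithmetic, which assumes the raw test score maps roughly linearly onto IQ over this range, something the article does not state): interpolating between the two score-IQ pairs quoted above, the 20-to-27 interview window works out to roughly IQ 101 to 114.

```python
# Rough linear interpolation from the two (raw score, IQ) pairs quoted in the article:
# 33 points ~ IQ 125, and the 21-22 national average (take 21.5) ~ IQ 104.
# Linearity over this range is my assumption, not anything stated in the ruling.
high_score, high_iq = 33.0, 125.0
avg_score, avg_iq = 21.5, 104.0

slope = (high_iq - avg_iq) / (high_score - avg_score)
intercept = high_iq - slope * high_score

def implied_iq(score):
    return slope * score + intercept

print(f"Interview window of 20-27 points ~ IQ {implied_iq(20):.0f} to {implied_iq(27):.0f}")
print(f"Jordan's 33 points ~ IQ {implied_iq(33):.0f}")  # sanity check against the article's 125
```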

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: IQ, Psychology 

Interesting interview of Steve Hsu. I’ll reproduce the part about Feynman:

3. Is it true Feynman’s IQ score was only 125?

Feynman was universally regarded as one of the fastest thinking and most creative theorists in his generation. Yet it has been reported – including by Feynman himself – that he only obtained a score of 125 on a school IQ test. I suspect that this test emphasized verbal, as opposed to mathematical, ability. Feynman received the highest score in the country by a large margin on the notoriously difficult Putnam mathematics competition exam, although he joined the MIT team on short notice and did not prepare for the test. He also reportedly had the highest scores on record on the math/physics graduate admission exams at Princeton. It seems quite possible to me that Feynman’s cognitive abilities might have been a bit lopsided – his vocabulary and verbal ability were well above average, but perhaps not as great as his mathematical abilities. I recall looking at excerpts from a notebook Feynman kept while an undergraduate. While the notes covered very advanced topics for an undergraduate – including general relativity and the Dirac equation – it also contained a number of misspellings and grammatical errors. I doubt Feynman cared very much about such things.


One thing I have always wondered about is the fact that Richard Feynman had substantive accomplishments which marked him as definitively brilliant by the time he was talking about his 125 I.Q. score (which is smart, but not exceedingly smart). Intelligence scores are supposed to be predictors of accomplishments, but Feynman already had those accomplishments. Bright people take many psychometric tests, so there will be a range of scores about a mean. My personal experience is that there’s a bias toward reporting the highest scores. But it may be that Feynman gloried in reporting his lowest scores because that made his accomplishments even more impressive. Unlike most, he had nothing to prove to anyone.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Psychology 

Amy Harmon has a very long piece in The New York Times, Navigating Love and Autism. It’s about a couple who have both been diagnosed with Asperger syndrome. As with cancer, I suspect that this term brackets a lot of different issues into one catchall label, not to mention the acknowledgment that it’s a spectrum. When I spent time with the Bay Area Less Wrong community I would observe the range in tendencies and neurological diversity of people who clearly would be classified as “high functioning autistic” (to be clear, these were individuals strongly selected for high general intelligence, with a minimum threshold of around two standard deviations above the norm). The lack of comprehension of religiosity and a bias toward libertarianism were two salient characteristics of this sect (though people who have met me don’t classify me as having Asperger syndrome, I have these two cognitive biases myself).

 

In any case, the bigger issue which Amy Harmon’s piece brought out to me is that people with high-functioning autism develop their own micro-norms, meaning that they are often not very compatible with each other despite their deviation from “neurotypicals.” There’s no guarantee that you’ll deviate away from the norm in the same dimension when the norm is highly multidimensional!

People with Asperger are often non-conformists. This is not necessarily a bad thing, at least for society as a whole. But as explained in Not by Genes Alone: How Culture Transformed Human Evolution, a very strong tendency toward within-group conformity is a major hallmark of human behavior. It’s probably biologically encoded. So, for example, speaking with your parents’ accent, as opposed to that of your hypothetical peer group, is a trait of many people with high functioning autism (or a tendency toward hyper-formalism of speech). This is a tell for lack of group conformity. The problems autistic people have with conventional “manners,” as opposed to basic universal human niceties, are an outgrowth of this tendency, I suspect. Manners can differ greatly across societies, and require cultural conditioning. But the human tendency to want some set of regular norms does apply to those with Asperger. The diversity among this set is what results in the difficulties of negotiating conflicts (and may explain a bit why libertarians and the hyper-atheistic tend to fracture along what seem like trivial deviations from the outside!). You can imagine that in some ways people with Asperger syndrome explore the full parameter space of cultural possibilities, unencumbered by the positive feedback loops of group conformity which is the human norm.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Psychology 

The title says it all, and I yanked it from a paper that is now online (and free). It’s of interest because of its relevance to the future genetic understanding of complex cognitive and behavioral traits. Here’s the abstract:

General intelligence (g) and virtually all other behavioral traits are heritable. Associations between g and specific single-nucleotide polymorphisms (SNPs) in several candidate genes involved in brain function have been reported. We sought to replicate published associations between 12 specific genetic variants and g using three independent, well-characterized, longitudinal datasets of 5571, 1759, and 2441 individuals. Of 32 independent tests across all three datasets, only one was nominally significant at the p ~ .05 level. By contrast, power analyses showed that we should have expected 10–15 significant associations, given reasonable assumptions for genotype effect sizes. As positive controls, we confirmed accepted genetic associations for Alzheimer disease and body mass index, and we used SNP-based relatedness calculations to replicate estimates that about half of the variance in g is accounted for by common genetic variation among individuals. We conclude that different approaches than candidate genes are needed in the molecular genetics of psychology and social science.


My hunch is that these results will be unsatisfying to many people. The authors confirm and reassert the heritability of general intelligence, both by reiterating classical results and by utilizing novel genomic techniques. But they also suggest that the candidate gene literature is nearly worthless because of the lack of power of most of the earlier studies. The latter is probably due to the genetic architecture of the trait. Intelligence may be determined by numerous genes of very small effect (e.g., 0.01% of the variance accounted for by one particular SNP), or by “rare, perhaps structural, genetic variants with modest to large effect sizes.” The former case is pretty obvious, but what about the latter? I’m mildly skeptical of this because I’m curious why modest-to-large effect variants didn’t show up in family-based studies (presumably within a family the same variants would localize to sections of the genetic map). But I’m not fluent enough in the literature to know if there was a lot of work in this area with families previously.
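
To get a feel for why the earlier candidate gene studies were so underpowered, here is a quick power sketch (a standard chi-square approximation of my own, not the paper’s actual power analysis): for a SNP explaining a fraction r² of the variance in a trait, the association test statistic is roughly a one-degree-of-freedom chi-square with non-centrality parameter N × r².

```python
# Approximate power to detect a SNP explaining a fraction r2 of trait variance in a sample
# of size N: the association test statistic is roughly a 1-df chi-square with
# non-centrality parameter N * r2. My own illustration, not the paper's power analysis.
from scipy import stats

alpha = 0.05
crit = stats.chi2.ppf(1 - alpha, df=1)

def power(n, r2):
    return stats.ncx2.sf(crit, df=1, nc=n * r2)

for r2 in (0.0001, 0.001, 0.01):        # SNP explains 0.01%, 0.1%, or 1% of variance
    for n in (500, 2000, 10000):
        print(f"r2={r2:<7} N={n:<6} power={power(n, r2):.2f}")
# Effects on the order of 0.01% of variance have very little power even at N=10,000,
# and typical candidate gene samples were far smaller than that.
```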

Related: Here’s the first author’s article in Commentary from the late 1990s, IQ Since “The Bell Curve”.

(Republished from Discover/GNXP by permission of author or representative)
 

The New York Times has a short piece on Steven Pinker up. Nothing too new to long time followers of the man and his work. I would like to point readers to the fact that Steven Pinker has a F.A.Q. up for The Better Angels of Our Nature: Why Violence Has Declined. He links to my post, Relative angels and absolute demons, as supporting his dismissal of Elizabeth Kolbert’s review in The New Yorker. I have to admit that I find much, though not all, of the coverage of science in The New Yorker to exhibit some of the more annoying stereotypical caricatures of humanists when confronting the specter of natural philosophy.

I should also mention I started reading The Better Angels of Our Nature over Thanksgiving. I’m only ~20% through it, and probably won’t finish until Christmas season gets into high gear, but so far it’s a huge mess. In both a good way, and a bad way. The good way is that it’s incredibly rich in its bibliography, with fascinating facts strewn about the path of the narrative. The bad way is that so far it lacks the tightness of The Blank Slate or The Language Instinct in terms of argument. This may change. Finally, I think I should mention that Pinker has already addressed some of the criticisms of his methodologies brought up in the comments sections of my posts. Those who have specific critiques probably should read the book, because he seems to try sincerely to address those. Or at least they should address those critiques to people who have bothered to read the book.

(Republished from Discover/GNXP by permission of author or representative)
 