
Years ago I walked down the main street of a town I had lived in for years, and noticed some new apartments above the retail shops. When I got back home I told my girlfriend about my discovery, and she rolled her eyes and explained patiently that they’d always been there. The fact is that I’m notorious for not noticing things. I have no attention to visual detail. I’m a tunnel-vision sort of person.

In Stanislas Dehaene’s Reading in the Brain he outlines a neuronal recycling hypothesis, which might explain my strange behavior. The gist is that there are regions of the brain which have been co-opted for reading, in particular the “letterbox” region. It turns out that processing text is localized to a small zone of the left hemisphere, and damage to this area can have consequences for reading comprehension. In many ways the symptoms are similar to aphasia, but a tight analogy would be somewhat perplexing.* Language is a core human competency. Though it needs cultural and social input, it seems clear we have a built-in facility for it. Or at least that is the consensus among cognitive scientists. It has almost certainly been a target of natural selection.

The same cannot be said for literacy, which is only 5,000 years old, and whose widespread penetration among humans is only a feature of the last few hundred years. ~25 percent of the world’s population is still illiterate. In contrast, those who cannot speak are considered to have some sort of pathology. How then can there be a region of the brain dedicated to text comprehension? Reading seems to leverage pattern-recognition capabilities which have deep evolutionary roots, and those recognition capabilities are localized to particular regions of the brain. This localization occurs early in life, but it is not hardwired, so very early damage can result in rerouting to other regions (also, slight cross-cultural differences in positioning can be explained as a function of the nature of letters and the length of words).

Dehaene reports that this fact imposes constraints on the nature of symbols utilized in text. For example, letters exhibit a distribution from common to rare patterns similar to the sort of stimuli we would experience in nature. That is, particular letter forms are over-represented across the world’s writing systems, and those forms correspond to the sorts of patterns one sees in the natural environment. The similarity of distributions is especially curious in light of the fact that writing systems evolve over time from elaborate, often pictographic, forms toward more abstract and less concrete ones. Convergence in a phenomenon rooted in our cognitive architecture implies that there are genuine constraints and biases in the shape of our cultural expression.

Many of the reviewers of Reading in the Brain highlight that it is interminably technical across its many chapters. The author admits as much as he nears his conclusion and attempts an apologia for testing the reader’s patience. Certainly the repetitive survey of neuroanatomy, or the deep dive into the cognitive neuroscience of symmetry, seemed a bit much for me as someone without a neuroscience background. But when he comes up for air it turns out that Dehaene’s ultimate goal is very ambitious: to explain the variation of culture as a function of cognitive neuroscience. To do this he refers to the large body of work produced by Dan Sperber, a cognitive anthropologist who argues that culture is an “epidemiology of representations.” It is basically an elaborated model of viral memes, where the cognitive landscape of our minds canalizes cultural expression toward particular aesthetically appealing or functionally effective forms. For Dehaene reading, literacy, is a form of viral culture. It gives us pleasure (novels), and it is functionally useful (accounting). Developed independently at least twice, and likely more than twice, over the last several thousand years, it seems to sit at an intersection of our facilities for language, memory, and symbolic representation, which is naturally evoked by the environment of complex societies. If we could create mentats we would. Their non-existence is a testament to the difficulties of such an enterprise. As it is, we’ll have to make do with reading, and the symbolic manipulation which reading allows. Culture, then, is a finite set of phenomena which express novel syntheses of our cognitive capacities. Dehaene in particular emphasizes the role of the domain-general neocortex in threading together disparate domain-specific functions into cultural novelties (e.g., the visual recognition and language competencies necessary for reading).

Nestled between the abstruse cognitive neuroscience and the speculative theorizing about the nature of culture, Dehaene has some concrete suggestions on policy. Whole language learning is a travesty, and phonics is the way to go if you want to encourage early literacy. He seems frustrated that some teachers consider phonics “right-wing” and so favor whole language learning. The science says that whole language learning is by and large bunk, even if some of its techniques are salvageable. Whole language learning shaves years off reading abilities according to Dehaene. Apparently it can take until adulthood for whole language learners to catch up with those who were taught with phonics. In an area where policy is probably impotent, English speakers are at a major disadvantage to Italian or Finnish speakers because of irregular spelling (French is between the two groups). Absorption of complex literature is retarded among British and American students because more time in elementary school must be spent mastering the profusion of spellings (obviously Chinese readers have a similar issue with delayed reading of higher literature due to the cognitive overhead of written Chinese). Finally, though therapy and treatment can mitigate dyslexia, Reading in the Brain left me very happy that my daughter (who is now reading) does not exhibit this problem. It seems a major handicap.

Which brings me back to my anecdote about my ability to not “notice” things. I’ve been a big reader my whole life (you can see which books I remember reading at Goodreads). So I was naturally curious about the cognitive neuroscience of this phenomenon. One of the implications of Dehaene’s neuronal recycling hypothesis is that there is an opportunity cost to devoting one’s resources to reading comprehension. The area of the brain which becomes the letterbox region may very well be the area devoted to noticing subtle differences in one’s environment, the sort that make trackers among indigenous people seem preternatural in their aptitudes. The way some stems of leaves have become askew, for example, may be the root of many of the letters which we use to represent sounds. Perhaps then my lack of ability to “notice” small things in the world around me is inextricably linked to the fact that my life’s focus has been on text, more or less. It’s a trade-off that I’m happy with.

* Also, damage to the letterbox region can impair recognition of letters, while allowing for number recognition!

 
• Category: Science • Tags: Cognitive Science 

One of the first things that the author of 2002’s Religion Explained had to address is the fact that everyone thinks they have the “explanation” for religion. Unlike quantum physics, or even population genetics, people think they “get” religion, and have a pretty good intuition and understanding of the phenomenon without any scholarly inquiry. Most people grew up religious, and know plenty of religious people. Naturally everyone has a theory to sell you informed by their experiences. This is clear in the comments of this weblog, where people start with an assumed definition of religion, and then proceed to enter into a chain of reasoning with their axiomatic definition in mind, totally oblivious to the possibility that there might be a diversity of opinions as to the important aspects of religious phenomena. This causes a problem when people begin at different starting points. Religion is obviously important. That is why Samuel Huntington’s Clash of Civilizations used it to delimit civilizations. Religion is also expansive.

In modern times the expansiveness can be a problem in terms of getting definitions right in any sort of conversation about the topic. On the one hand, adherents of “higher religions” often dismiss supernatural beliefs outside the purview of their organized systems as “superstition,” constraining the space of possibilities to an absurdly narrow set (this can be taken to extremes when narrow sects define all religions outside of their umbrella as “cults”). At the other extreme there are others who wish to include “political religion” more wholeheartedly in the discussion. In my opinion doing so makes it difficult to discuss religious phenomena in a historical context, as political religion is relatively novel and recent. As a phenomenon with many features it is not surprising that there are many other traits which resemble religion, but this logic rapidly leads to a loss of any intelligible specificity. Therefore as a necessary precondition I tend to assume that religion must have supernatural agents at its heart. Basically, gods. But all the accoutrements of organized “higher” religions, which crystallized in the period between 600 BC and 600 AD (from Buddhism to Islam), are not necessary to understand religion. In fact, as outlined in books such as Theological Incorrectness, taking the claims of organized world religions at face value can mislead as to the beliefs and behaviors of the mass of the rank and file, whose spiritual world is still strongly shaped by the same cognitive parameters one finds in primal “animistic” faiths. The Summa Theologica is not only impenetrable to the vast majority of believers, it is totally irrelevant to them. And yet the concerns of intellectuals loom large in any attempt to understand the nature of higher religions, because they tend to occupy positions of power, prestige, and prominence. And importantly, they are the ones writing down the history of their faith.

It is useful then to differentiate between religion in the general sense, which likely has deep evolutionary roots in our species and is characterized by modal intuitions about the supernatural nature of the world (a universe of spirits, gods, and unseen forces), and the complex processed cultural units of production and consumption which are the “world religions” of the past few thousand years, which have achieved a sort of stable oligopoly power over the loyalties of the vast majority of the world’s population. The latter are not inchoate and organic bottom-up reifications of the foam of cognitive process, perhaps co-opted toward functional or aesthetic purposes. Rather, world religions are clearly products of complex post-Neolithic agricultural societies which exhibit niche specialization and social stratification. They are the end, not the beginning. A complex melange of distinct cultural threads brought together into one unit of consumption for the masses and the elites, which binds society together into an organic whole. Think of the world religions as the Soylent of their era.

The historical context of this is well known, all the way back to Karl Jaspers. Over two thousand years ago the ideas which we would later term philosophy arose in the eastern Mediterranean, in northern India, and in northern China. They were absorbed by various organized religions in each locale. The complex social-political order of those years also persists in the institutional and bureaucratic outlines of these religious organizations. Ergo, the Roman Catholic Church is the shadow of the Roman Empire. The Sangha probably reflects the corporate nature of South Asian society even at that early time. Though some elite practitioners of these religions tended toward philosophical rationalism, others were attracted toward mystical movements which elevate the existential and esoteric elements of religious experience. Both mystical and rational variants of religion exhibit a commonality in that they are patronized by elites with leisure to spare on introspection or reflection. Formal liturgical traditions co-opt the human propensity for heightened emotional arousal in collective group contexts to ritualize subordination and submission to central authorities, which serve as the axis mundi binding the divine to the world and as proxies for the gods.

This only scratches the surface of the phenomenon in question. And these are not academic matters; religion is a powerful force in the world around us. This is why studies such as this one in the Proceedings of the Royal Society, Broad supernatural punishment but not moralizing high gods precede the evolution of political complexity in Austronesia, are heartening to see. The important thing is not the topic of study, or even the conclusion, but the methods. Using phylogenetic techniques the authors get a crisper understanding of the dynamics. Even if one quibbles with their conclusions one can at least grapple with them formally. Nature has a good piece surveying the response, though please ignore the hyperbole in the title!

The gist of this conclusion to me seems to be similar to the one in relation to lactase persistence: cultural and social change set the preconditions for evolutionary change; evolutionary change did not trigger cultural and social change. Complex multi-ethnic expansive societies arose somewhat over 2,000 years ago. The world religions developed in this environment as natural adaptations, which allowed these societies to persist over time and space in a recognizable form. The diffuse world of gods and spirits was distilled down to a portable essence which would serve to bind and tie diverse peoples together (in practice, for much of history this only applied to the elites, as the populace still retained what could basically be termed folk paganism). Verbal models supplemented by formalism are probably what is needed to truly gain a deep insight into the nature of a phenomenon as slippery yet ubiquitous as religion.

 
• Category: History • Tags: Cognitive Science, Religion 

On Twitter and elsewhere (e.g., on this weblog, in real life) I often get into confusing arguments with people when it comes to religion because I approach the topic from a somewhat strange angle. Specifically, it is one which integrates cognitive science, evolutionary anthropology, intellectual history and sociology. My interest in this topic was more in evidence in the middle years of the last decade (yes, I’ve been blogging a long time!). One of the last long posts on the topic I published, in 2007, was titled Levels of analysis of religion, Atran, Boyer & Wilson. The shorter version is that I believe it is important to understand religion from the ground up. Ergo,

- Religion as a cognitive phenomenon which emerges out of banal basic human intuitions

- Religion as a social phenomenon which emerges out of the interaction and cooperation of individuals within groups

- Religion as a social phenomenon which emerges out of the interaction and conflict across groups

- Religion as a political phenomenon which emerges out of the interaction of different groups, constructing a ‘meta-ethnic’ identity (using Peter Turchin’s terminology)

- Religion as an intellectual phenomenon, which can be bracketed into two classes, the mystical and the philosophical-rational

The last is to a great extent what we moderns think of as religion qua religion. Some who are more aware of history and anthropology might acknowledge a phase of ‘primal religion,’ which is pre-philosophical. Animism and such. What my study of religion suggested to me is that the fixation upon religion as an intellectual system totally misses the primary reasons that religion exists, why it has existed for all of human history, and why it has had adherents across most of humanity. To see how this is relevant, analyses of individual religious believers of various world faiths have emphasized how incredibly similar their conceptualizations of the supernatural world are when stripped of the exoteric terminology. By this, I mean that terms such as ‘monotheistic,’ ‘henotheistic,’ and ‘polytheistic’ do not really sink deep into the mental architecture of humans. They’re surface concepts with a logical coherency, like non-Euclidean geometry. But intuitively they’re as substantive as the colors upon a flag.

Obviously I’ve moved on to other things, but perhaps the field has also updated. I checked out Justin Barrett’s Cognitive Science, Religion, and Theology: From Human Minds to Divine Minds to see if the scholarship has moved forward in this decade. I’ll read it when I have time. My personal experience is that most educated people are weak on understanding the lower levels of the organization of the religious phenomenon: the psychology and the social structure. Pascal Boyer’s Religion Explained is a rather easy introduction. David Sloan Wilson’s Darwin’s Cathedral is probably the best treatment of a neo-functionalist understanding of religious organization. Scott Atran’s In Gods We Trust is a harder read, but worth it.

 
• Category: Science • Tags: Cognitive Science, Religion 

One point which I’ve made on this weblog several times is that on a whole range of issues and behaviors people simply follow the consensus of their self-identified group. This group conformity probably has deep evolutionary origins. It is often much cognitively “cheaper” to simply utilize a heuristic “do what my peers do” than reason from first principles. The “wisdom of the crowds” and “irrational herds” both arise from this dynamic, positive and negative manifestations. The interesting point is that from a proximate (game-theoretic rational actor) and ultimate (evolutionary fitness) perspective ditching reason is often quite reasonable (in fact, it may be the only feasible option if you want to “understand,” for example, celestial mechanics).


If you’re faced with a complex environment or set of issues, “re-inventing the wheel” is often both laborious and impossible. Laborious because our individual general intelligence is simply not that sharp. Impossible because most of us are too stupid to do something like invent calculus. Many people can learn the rules for obtaining derivatives and integrals, but far fewer could come up with the fundamental theorem of calculus. Similarly, in the 18th century the engineers who utilized Newtonian mechanics for practical purposes were not capable of coming up with Newtonian mechanics themselves. I’m using these two examples because calculus and mechanics are generally considered “high level” cognitive tasks, but even they at root illustrate the principle of collective wisdom and group conformity. Calculus and mechanics are included in the curriculum not because all of the individuals who decide the curriculum understand these two topics in detail, but because individuals whom they trust and believe are worthy of emulation and deference, as well as past empirical history, tell them that this is the “reasonable” way to go. (Science and engineering have the neat property that you don’t just trust people, you trust concrete results!)

This sort of behavior is even more evident in political and social viewpoints. Recently there have been signs of shifts in African American attitudes toward same-sex marriage, and a more general trend in that direction across the population. Is this because individuals are sitting in their armchairs and reflecting on justice? Of course people will enter into evidence the experience of knowing gay people, and the empathy which that generates, but are you willing to bet that these public policy shifts are primarily and independently driven by these sorts of dynamics? (i.e., run a regression and try to predict the change in attitude by the number of people coming out of the closet over time) Similarly, people like Chris Mooney have documented the shift among the Republican grassroots on issues like climate change, which seems to have moved very rapidly, likely due to elite cues rather than a deep analysis of the evidence.

But let’s look at something less controversial, at least on this weblog. Most people who accept evolution really don’t understand how it works, nor are they very conversant in the reasons why evolutionary process is compelling. The vast majority of the 50 percent of Americans who accept evolution have not read Charles Darwin, nor could they tell you what the neo-Darwinian Synthesis is. They have not read Talk Origins, or Why Evolution is True. So why do they accept evolution? Because evolution, like Newtonian mechanics, is part of established science, and educated people tend to accept established science. But that’s conditional. If you look in the General Social Survey you notice a weird trend: the correlation between education and acceptance of evolution holds for those who are not Biblical literalists, but not for those who are Biblical literalists! Why? Because well-educated Biblical literalists accept a different set of authorities on this issue. In their own knowledge ecology the “well-informed” perspective might actually be that evolution is a disputed area in science.
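For readers who want to poke at this themselves, here is a minimal sketch of the sort of check I have in mind, assuming a GSS extract with the EVOLVED (humans developed from earlier species), BIBLE (view of scripture), and EDUC (years of schooling) variables. The file name and the codings are placeholders from memory, so consult the GSS codebook before trusting any output:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical GSS extract; variable names follow the GSS mnemonics,
# but verify the actual codings against the codebook.
df = pd.read_csv("gss_extract.csv")

df["accepts_evolution"] = (df["EVOLVED"] == 1).astype(int)  # assumed: 1 = "true"
df["literalist"] = (df["BIBLE"] == 1).astype(int)           # assumed: 1 = "word of God"

# Logistic regression with an education-by-literalism interaction: the
# pattern described above predicts a positive EDUC slope for
# non-literalists and a flat or reversed slope for literalists.
model = smf.logit("accepts_evolution ~ EDUC * literalist", data=df).fit()
print(model.summary())
```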

At this point everything is straightforward, more or less. But I want to push this further: most biologists do not understand evolution as a phenomenon, though they may be able to recall the basic evidence for evolution. If you are working in molecular biology, medical research, neuroscience, etc., there isn’t a deep need to understand evolutionary biology on a day-to-day basis at the bench (I would argue the rise of -omics is changing this some, but many labs have one or two -omics people to handle that aspect). The high rates of acceptance of evolution among researchers in these fields have less to do with reason, and more to do with the ecology of ideas which they inhabit. Evolutionary biologists in their own turn accept the basic structural outlines of how axons and dendrites are essential to the proper function of the brain without understanding all the details about action potentials and such. They assume that neuroscientists understand their domain.

So far I’ve been talking about opinions and beliefs that are held by contemporaries. The basic model is that you offload the task of reasoning about issues which you are not familiar with, or do not understand in detail, to the collective with which you identify, and give weight to specialists if they exist within that collective. I would submit that to some extent the same occurs across time as well. Why do we do X and not Y? Because in the past our collective unit did X, not Y. How persuasive this sort of argument is, all things equal, probably tracks to some extent where you are on the conservative-liberal spectrum. Traditional conservatives argue that the past has wisdom through its organic evolution, and the trial and error of customs and traditions. This is a general tendency, applicable both to Confucius and to Edmund Burke. Liberal utopians, whether Mozi or the partisans of the French Revolution, don’t put so much stock in the past, which they may perceive to be the font of injustice rather than wisdom. Instead, they rely on their reason in the here and now, more or less, to “solve” the problems which they believe are amenable to decomposition via their rational faculties.

Both methods of coming to a decision result in errors, at least in hindsight. I argue at Secular Right that American conservatives should just accept that they were on the wrong side of history on Civil Rights, just as 19th century conservatives were often on the wrong side of history on slavery. In fact, it is the latter case which is more interesting, because slavery was accepted as a viable institution in all civilized societies up until that era (even if it was perceived as an evil). Yet today we can agree that the collective wisdom of the ages was on some level wrong-headed.

Does that then mean that we should rush to every new enthusiasm and establish justice in our time? Obviously as someone who identifies as conservative I do not. Just as conservatives have been wrong in the past in relying upon the wisdom of the past, liberals have been wrong about their grasp of the details of the architecture of human reality in their own age. Though Edmund Burke defended institutions which we might consider retrograde, in broad strokes his criticisms of the excesses of the French Revolution were spot on. The regime which abolished slavery and emancipated Jews also ushered in an age of political violence which served as the template for radicals for generations. French Jews may have been more fully liberated before the law at an earlier period than British Jews, but were French Jews more accepted within French society one hundred years later than British Jews? More recently, progressives and liberals accepted the necessity of coercive eugenics as part of the broader social consensus in the West (which only a few institutions, such as the Roman Catholic Church, resisted with any vigor). Obviously this specific reliance on reason and rational social engineering was perceived to be a failure. Less controversially, some of the excesses of the Great Society and the 1960s revolution in the United States in the areas of social welfare and criminal justice seem to have exacerbated the anomie of the 1970s, which abated concomitantly with the rollback of the open-ended nature of the welfare state and tougher law & order policies in the 1990s. Even the most well-conceived experiments sometimes end up failing.

Whatever your political or social perspective, the largest takeaway is that attitudes toward complex issues which are relevant to our age are almost always framed by the delusion that reason, and not passion, has us by the leash. The New Right which championed the “pro-life” movement in the late 1970s, and the progressive Left which espouses “marriage equality” now, can all give individual reasons when prompted why there was a shift in opinion. But the reasons proffered will be interestingly invariant, as if people are reading off a collective script, which they are. Social milieus can sometimes crystallize consensus so quickly that individuals caught in the maelstrom of the new orthodoxy construct a whole internal rational edifice which justifies their conformity. This does not mean that the conformity and the viewpoints are frauds, just that as humans we tend to self-delude as to the causal chain by which we come to our conclusions.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Anthropology, Cognitive Science, Psychology 

Update: An ungated version of the paper.

I used to spend a lot more time talking about cognitive science of religion on this weblog. It was an interest of mine, but I’ve come to a general resolution of what I think on this topic, and so I don’t spend much time discussing it. But in the comments below there was a lot of fast & furious accusation, often out of ignorance. I personally find that a little strange. I’ve been involved in freethought organizations in the past, and so have some acquaintance with “professional atheists.” Additionally, I’ve also been a participant and observer of the internet freethought websites since the mid-1990s (yes, I remember when alt.atheism was relevant!). In other words, I know of whom I speak (and I am not completely unsympathetic to their role in the broader ecology of ideas).

But the bigger issue is a cognitive model of how religiosity emerges. Luckily for me a paper came out which speaks to many of the points which I alluded to, Divine intuition: Cognitive style influences belief in God:

Some have argued that belief in God is intuitive, a natural (by-)product of the human mind given its cognitive structure and social context. If this is true, the extent to which one believes in God may be influenced by one’s more general tendency to rely on intuition versus reflection. Three studies support this hypothesis, linking intuitive cognitive style to belief in God. Study 1 showed that individual differences in cognitive style predict belief in God. Participants completed the Cognitive Reflection Test (CRT; Frederick, 2005), which employs math problems that, although easily solvable, have intuitively compelling incorrect answers. Participants who gave more intuitive answers on the CRT reported stronger belief in God. This effect was not mediated by education level, income, political orientation, or other demographic variables. Study 2 showed that the correlation between CRT scores and belief in God also holds when cognitive ability (IQ) and aspects of personality were controlled. Moreover, both studies demonstrated that intuitive CRT responses predicted the degree to which individuals reported having strengthened their belief in God since childhood, but not their familial religiosity during childhood, suggesting a causal relationship between cognitive style and change in belief over time. Study 3 revealed such a causal relationship over the short term: Experimentally inducing a mindset that favors intuition over reflection increases self-reported belief in God.

Recall that in many social domains where neurotypicals rely on innate, intuitive, and “fast” cognition, high functioning autistic individuals must reflect and reason. I don’t have access to the original paper, but there’s a nice piece in the Harvard Gazette on the research. Here’s the last sentence: “How people think about tricky math problems is reflected in their thinking — and ultimately their convictions — about the metaphysical order of the universe,” Shenhav said.
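To make the design concrete: CRT items are of the bat-and-ball variety (“A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?”), where the intuitive answer (10 cents) differs from the correct one (5 cents). Here is a toy sketch of the study’s core correlation, with fabricated data standing in for real participants:

```python
import numpy as np
from scipy.stats import pearsonr

# Score each participant by how many of three CRT items drew the
# *intuitive* (wrong) answer, e.g. "10 cents" on the bat-and-ball item.
# All numbers below are fabricated purely for illustration.
rng = np.random.default_rng(42)
n = 200
intuitive_answers = rng.integers(0, 4, size=n)                  # 0-3 per person
belief_in_god = 2.0 * intuitive_answers + rng.normal(0, 2, n)   # toy rating scale

r, p = pearsonr(intuitive_answers, belief_in_god)
print(f"r = {r:.2f}, p = {p:.2g}")  # the paper reports a positive correlation
```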

(Republished from Discover/GNXP by permission of author or representative)
 

The prince of neurobloggers Jonah Lehrer has a good if curious column up at the Wall Street Journal, Social Networks Can’t Replace Socializing. He concludes:

This doesn’t mean that we should stop socializing on the web. But it does suggest that we reconsider the purpose of our online networks. For too long, we’ve imagined technology as a potential substitute for our analog life, as if the phone or Google+ might let us avoid the hassle of getting together in person.

But that won’t happen anytime soon: There is simply too much value in face-to-face contact, in all the body language and implicit information that doesn’t translate to the Internet. (As Mr. Glaeser notes, “Millions of years of evolution have made us into machines for learning from the people next to us.”) Perhaps that’s why Google+ traffic is already declining and the number of American Facebook users has contracted in recent months.

These limitations suggest that the winner of the social network wars won’t be the network that feels the most realistic. Instead of being a substitute for old-fashioned socializing, this network will focus on becoming a better supplement, amplifying the advantages of talking in person.

For years now, we’ve been searching for a technological cure for the inefficiencies of offline interaction. It would be so convenient, after all, if we didn’t have to travel to conferences or commute to the office or meet up with friends. But those inefficiencies are necessary. We can’t fix them because they aren’t broken.


First, let me offer up that if I had to pick between my twitter, Facebook, or Google+, I’d pick the last. At this point twitter is better in my opinion at allowing me to “sample” from the stream of news/links than Google+, where people tend to be more verbose. In contrast if I want to see how cute the babies of my college friends are getting to be, Facebook all the way. But the conversations on Google+ are much better when it comes to people I may interact with today, rather than the past. Facebook keeps me up to date on my past, and twitter tells me what the wider world is concerned with, but Google+ is the best complement to my present social life (which, to be fair, is not typical because of my quasi-public existence).

All that being said, some of you might wonder why Jonah would even be allowed to write such a banal column. Doesn’t everyone know that social networking technologies are not going to change our need for physical contact? No, everyone doesn’t know. There are pundits who are asserting the revolutionary transformative nature of the social web. As Jonah observes, this has come and gone many a time. Remember Second Life?

I do have friends in real life who contend that Facebook and company may actually allow us to shatter Dunbar’s number. I am skeptical. The reason is that our cognitive natures are not universally plastic.
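My skepticism comes down to simple arithmetic, shown below: the number of dyadic relationships to track grows quadratically with group size, which is the rough logic behind the neocortex-based estimate. This back-of-envelope sketch is mine, not Dunbar’s:

```python
from math import comb

# Pairwise relationships in a group of n people: n choose 2.
# Dunbar's ~150 already implies tracking on the order of 11,000 dyads;
# a Facebook-scale friend list blows far past that.
for n in (15, 50, 150, 500, 1500):
    print(f"group of {n:>5}: {comb(n, 2):>9,} dyads")
```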

We’ve got great domain-general intelligence in comparison to other species. Even though it is slow and laborious in comparison to our innate cognitive toolkit, it is incredibly flexible and extensible. Technologies which amplify the power of domain-general intelligence are game changers. Writing and digital computers are examples of such extenders. The decline of the art of memory and of the slide rule illustrates the power of this sort of technology to be progressive. What was indispensable in the past is quickly forgotten, and shown to be the mere utility it always was (I suspect many of you don’t know what a slide rule is, despite its ubiquity two generations ago!). In this there is a resemblance to science.

In contrast, consider something like sexual technology. The verisimilitude of visual pornography is incredible (in fact, sometimes too good insofar as make-up artists on porn sets are having a harder and harder time covering up blemishes and imperfections). There is an enormous industry of sex toys, and sex dolls are getting better and better. Obviously there’s a huge demand for such products and services. But will these ever possibly replace real sex? Imagine a near future sex doll with artificially generated body temperatures and synthesized human skin. Even without this some people are claiming that porn is substituting for real sexual relationships.

Such a substitution will not happen in the near future. That’s because our pleasure, our utility, from an experience derives not just from its pure sensory input, but also from our model of its essential nature. We have not only beliefs, but also aliefs. The knowledge that you’re having sex with a human being counts for something in and of itself. The knowledge that you own the original painting counts for something. The knowledge that a book was once owned by someone famous counts for something. We have deeply ingrained preferences which are not simply dependent on the substance or style of something. They are part of a broader constellation of our understanding of the world.

Jonah correctly points out that communication via social technologies doesn’t transmit a lot of the implicit and subtle cues which you can obtain through sensory input face to face. That’s a matter of substance. But ultimately people will put a premium on face-to-face interaction even when teleconferencing technology becomes much better at transmitting sensory information. That’s because humans are social beings, and to a great extent socialization in the proximate sense (as opposed to the evolutionary) is not a means, but an end. We enjoy spending time with flesh and blood human beings. The only way this will change is through a deep re-write of low-level cognitive code.

Addendum: The above generalizations are relevant for most, but not all, human beings.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science, Facebook, Google, Technology, Twitter 

One of the issues when talking about the effect of environment and genes on behavioral and social outcomes is that the entanglements are so complicated. People of various political and ideological commitments tend to see the complications as problems for the other side, and yet are often supremely confident of the likely efficacy of their predictions, based on models which they shouldn’t be too sure of. That is why cross-cultural studies are essential. Just as psychology has overly relied on WEIRD data sets, so those interested in social science need to see if their models are robust across cultures (I’m looking at you, economists!).

That is why this ScienceDaily headline, Family, Culture Affect Whether Intelligence Leads to Education, grabbed my attention. The original paper is Family Background Buys an Education in Minnesota but Not in Sweden:

Educational attainment, the highest degree or level of schooling obtained, is associated with important life outcomes, at both the individual level and the group level. Because of this, and because education is expensive, the allocation of education across society is an important social issue. A dynamic quantitative environmental-genetic model can help document the effects of social allocation patterns. We used this model to compare the moderating effect of general intelligence on the environmental and genetic factors that influence educational attainment in Sweden and the U.S. state of Minnesota. Patterns of genetic influence on educational outcomes were similar in these two regions, but patterns of shared environmental influence differed markedly. In Sweden, shared environmental influence on educational attainment was particularly important for people of high intelligence, whereas in Minnesota, shared environmental influences on educational attainment were particularly important for people of low intelligence. This difference may be the result of differing access to education: state-supported access (on the basis of ability) to a uniform higher-education system in Sweden versus family-supported access to a more diverse higher-education system in the United States.


Minnesota is to some extent the Scandinavia of America, so the cross-cultural difference is particularly notable. You wouldn’t be surprised for example by big differences between Mississippi and Sweden. But looking at a comparison between the Upper Midwest and Scandinavia is closer to seeing the impact of national culture and policy differences on populations which were originally very similar.

Their methodology was simple, though as with much of this sort of behavior genetic work the statistical analysis can be somewhat labored. In both Sweden and Minnesota you had samples of dizygotic and monozygotic twins which give you a way to compare the effect of genes on variation in life outcomes. Sweden has large data sets from male conscription for behavior genetics analysis. They compared this with the Minnesota Twin Family Study data set.

Since the topline results are pretty straightforward, I thought I’d give you some statistics. Table 1 has raw correlations. Note that they converted educational attainment into a seven-point scale, from less than 9 years of education to completion of doctoral studies.

[Table 1 from the paper: raw twin correlations for intelligence and educational attainment in Sweden and Minnesota.]

You see the expected drop-off in correlation between identical and fraternal twins. Identical twins share more genetic identity than fraternal twins, so they’re going to be more similar on a host of metrics aside from appearance. Those are just raw correlation values of traits across categories of twins, though. The core intent of the paper was to explore the relationship between genes, family environment, and other environmental factors, and educational attainment. To do this they constructed a model. Below you see estimates of the variance in the trait explained by variance in genes, shared environment (family), and non-shared environment (basically “noise” or error, though it could be something like peer group), in Sweden and Minnesota, at three intelligence levels. Two standard deviations below the norm is borderline retarded, ~2.5% of the population or so, and two standard deviations above the norm is at Mensa level.
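As a crude illustration of how such a decomposition works, Falconer’s classic formulas recover the variance components directly from MZ and DZ twin correlations: a² = 2(r_MZ - r_DZ), c² = 2r_DZ - r_MZ, e² = 1 - r_MZ. The paper fits a far more sophisticated moderation model, but a sketch with hypothetical stratum-level correlations (not the paper’s numbers) shows the logic of letting the components vary by intelligence level:

```python
def falconer(r_mz: float, r_dz: float) -> dict:
    """Crude ACE decomposition from twin correlations (Falconer's formulas)."""
    a2 = 2 * (r_mz - r_dz)   # additive genetic share
    c2 = 2 * r_dz - r_mz     # shared (family) environment share
    e2 = 1 - r_mz            # non-shared environment plus error
    return {"A": round(a2, 2), "C": round(c2, 2), "E": round(e2, 2)}

# Hypothetical twin correlations for educational attainment by IQ stratum,
# purely illustrative; see the paper for the estimates it actually reports.
for label, r_mz, r_dz in [("IQ -2 SD", 0.75, 0.45),
                          ("IQ mean", 0.70, 0.40),
                          ("IQ +2 SD", 0.70, 0.35)]:
    print(label, falconer(r_mz, r_dz))
```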

[Figure from the paper: proportions of variance in educational attainment attributable to genes, shared environment, and non-shared environment, in Sweden and Minnesota, at three levels of intelligence.]

It’s interesting that as you move up the IQ scale, genetic variation explains more and more of the variance in educational attainment. Someone with an IQ of 130 is likely to be university educated. But there are many who are not. Why? The way I interpret these results is that if you are that intelligent and do not manage to complete university, you may have heritable dispositions of personality which result in you not completing university. If, for example, you come from a family which is very intelligent but low on conscientiousness, then there may be a high likelihood that you just won’t complete university because you can’t be bothered to focus. Or you may have personality characteristics such that you don’t want to complete university. A second major finding here is that Sweden and the USA go in totally different directions when it comes to the sub-average and dull in predicting years of education. Why? The explanation in the paper seems plausible: Sweden strongly constrains the supply of higher education, but makes it available to those with proven academic attainments at a nominal price. Family encouragement and connections don’t matter as much; if you can’t pass the university entrance examination, you can’t pass it. In contrast, in the USA if you’re dull but come from a more educated or wealthier family, you can find some university or institution of higher education in which you can matriculate. Supply is more flexible in meeting demand. I actually know of a woman who is strongly suspected by her friends to be retarded. I have been told she actually tested in the retarded range in elementary school but was taken out of that track because her family demanded it (she’s the product of a later conception, and her family made their money in real estate, not through professional advancement). Over the years she has enrolled in various community colleges, but never to my knowledge has she completed a degree. If she had not had family connections there is a high probability she wouldn’t have completed high school. As it is, she can check off “some college” on demographic surveys despite likely being functionally retarded.

The next table is a bit more confusing. It shows you the correlations between the effects of each variable on education and on intelligence. In other words, does a change in X have the same directional effect on Y and Z, and what is the extent of the correspondence between the effect on Y and the effect on Z?

[Table from the paper: correlations between the genetic, shared environmental, and non-shared environmental effects on intelligence and on educational attainment.]

Shared environment had almost the same effect on intelligence and education, while genetics had a more modest simultaneous effect. It is not too surprising that non-shared environment didn’t have a strong correlation in effect across the traits; the authors note that much of this is going to be noise in the model, and so not systematically biased in any direction. Though the confidence intervals here are a little strange. I’m not going to get into the details of the model, because frankly I’m not going to replicate the analysis with their data myself. That’s why I wanted to present raw correlations first. Those are pretty straightforward. Estimates of variances out of models with a set of parameters are less so. Here’s an interesting point from the correlations in the last table:

The patterns of genetic correlations in the two samples differed. In Sweden, genetic correlation was steadily in excess of .50 across the range of intelligence, indicating a genetically influenced direct effect of intelligence on educational attainment that was weaker than the shared environmental effect on educational attainment. In the MTFS [Minnesota] population, however, genetic correlation was in excess of .50 when level of intelligence was low, but was halved at higher levels of intelligence. This indicated that genetic influences on intelligence tended to limit educational attainment when the level of intelligence was low, but not when the level of intelligence was average or high.

Now let me jump to the conclusion:

This finding indicates that genetic influences common to intelligence and educational attainment may have been more effective in limiting educational attainment in individuals with low levels of intelligence than in encouraging educational attainment in those with high levels of intelligence. As in Sweden, shared environmental influences on intelligence and educational attainment were completely linked, indicating a direct contribution from shared environmental influences on intelligence to educational attainment. The decrease in shared environmental variance with higher intelligence, however, indicated that shared environmental influences were more effective in encouraging educational attainment in higher-intelligence individuals than in limiting educational attainment in lower-intelligence individuals. In other words, in populations in which shared environmental influences such as family history and values encouraged high levels of educational attainment, individuals were able to surmount limitations in intelligence.

Our analysis does not permit the conclusion that these differences in educational systems cause the differences in environmental and genetic influences on educational attainment observed in this study, but it is reasonable to hypothesize that this is likely. In particular, the greater expense of higher education and greater subjectivity of admission standards in the United States compared with Sweden may partially explain the very different patterns of shared environmental influences in the two population samples. Regardless of the causes underlying the differences we observed, the results of our study make clear that the degrees of environmental and genetic influences can vary substantially between groups with different circumstances, and even within such groups. Our results also suggest that the ways in which social systems are organized may have implications for how and to what extent environmental and genetic influences on behavior will be expressed.

This discussion about the role of environment, genes, and culture in various outcomes should not hinge on one paper. But these sorts of results are often not widely disseminated among the intellectual classes. One aspect of the American educational system, in contrast to some other nations’, is that the not-too-bright have university degrees. Education has long been a project for social engineering in the USA, going back to Horace Mann. Legacies, underrepresented minorities, the poor, those with particular talents, etc., are all part of the decentralized system of university admissions in the United States. In contrast, in nations such as Sweden or Japan there is a more centralized and universal set of criteria. This results in more perfect sorting by the metrics of interest, without considerations of social engineering. I know that Sweden has traditionally had a small aristocratic class, while the Japanese aristocracy was basically abolished after World War II. Additionally, both are relatively homogeneous societies, so considerations of racial representativeness are not operative. Or weren’t until recently in the case of Sweden. But consider one reality: if such a system is perfectly meritocratic, and the traits being evaluated are heritable, then over time you will have genetic stratification and a reduction of social mobility, assuming assortative mating at university.

Currently there is some handwringing by the elites about the fact that so few poor kids get admitted to Ivy League universities. I think there’s a simple way to change this: get rid of the implicit Asian quotas. After all, there was a lot of socioeconomic diversity after the Ivy League universities got rid of their Jewish quotas, but the children of the Jews who didn’t have to go to CUNY and went to Harvard are well off themselves. But more socioeconomic mobility through removing the implicit Asian quota would cause other difficulties, as elite private universities need their slots for both legacies as well as underrepresented minorities for purposes of social engineering/fostering diversity/encouraging donations. Additionally, just as with the Jews the welter of mobility in one generation of the children of Asian immigrants would settle into quiescence in the next if the traits which enable university admission are genetically or culturally heritable.

Citation: Johnson W, Deary IJ, Silventoinen K, Tynelius P, & Rasmussen F (2010). Family background buys an education in Minnesota but not in Sweden. Psychological Science, 21(9), 1266-1273. PMID: 20679521

(Republished from Discover/GNXP by permission of author or representative)
 

A few years ago I was hearing a lot about mirror neurons. There was a hyped-up article on The Edge website about them, MIRROR NEURONS and imitation learning as the driving force behind “the great leap forward” in human evolution. But I haven’t heard much since then, though I’m not a neuro nerd, so perhaps I’m out of the loop. So I pass on this link with interest, Single-Neuron Responses in Humans during Execution and Observation of Actions:

Direct recordings in monkeys have demonstrated that neurons in frontal and parietal areas discharge during execution and perception of actions…Because these discharges “reflect” the perceptual aspects of actions of others onto the motor repertoire of the perceiver, these cells have been called mirror neurons. Their overlapping sensory-motor representations have been implicated in observational learning and imitation, two important forms of learning[9]. In humans, indirect measures of neural activity support the existence of sensory-motor mirroring mechanisms in homolog frontal and parietal areas…other motor regions…and also the existence of multisensory mirroring mechanisms in nonmotor region…We recorded extracellular activity from 1177 cells in human medial frontal and temporal cortices while patients executed or observed hand grasping actions and facial emotional expressions. A significant proportion of neurons in supplementary motor area, and hippocampus and environs, responded to both observation and execution of these actions. A subset of these neurons demonstrated excitation during action-execution and inhibition during action-observation. These findings suggest that multiple systems in humans may be endowed with neural mechanisms of mirroring for both the integration and differentiation of perceptual and motor aspects of actions performed by self and others.

ScienceDaily has a hyped-up headline, First Direct Recording Made of Mirror Neurons in Human Brain.

Update: Neurocritic has much more.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science 

The Evolution Of Symbolic Language by Terrence Deacon and Ursula Goodenough. Deacon’s The Symbolic Species: The Co-Evolution of Language and the Brain is a book I liked a great deal, though in hindsight I don’t think I had the background to appreciate it in any depth (nor do I now).

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science 

Social Cognition in Dogs, or How did Fido get so smart? This you know:

Domesticated dogs seem to have an uncanny ability to understand human communicative gestures. If you point to something the dog zeroes in on the object or location you’re pointing to (whether it’s a toy, or food, or to get his in-need-of-a-bath butt off your damn bed and back onto his damn bed). Put another way, if your attention is on something, or if your attention is directed to somewhere, dogs seem to be able to turn their attention onto that thing or location as well.
Amazingly, dogs seem to be better at this than primates (including our nearest cousins, the chimpanzees) and better than their nearest cousins, wild wolves.

But there are two explanations for how/why dogs are better than primates at this task:

And so it was that biological anthropologist Brian Hare, director of the Duke University Canine Cognition Center, wondered: did dogs get so smart because of direct selection for this ability during the domestication of dogs, or did this apparent intelligence evolve, in a sense, by accident, because of selection against fear and aggression?

I didn’t even consider that it would be anything except for direct selection. In any case, read the whole post for a run-down of the paper, but here’s the blogger’s conclusion:

So, these results appear to support the correlated by-product hypothesis, and not the selection for communication hypothesis. It suggests that the evolution of social cognitive abilities in domesticated dogs mirrors that process observed in the experimentally domesticated silver foxes, and that it was a by-product of selection against fear and aggression. To really really get at this question, a study of wolves should be conducted as well.
More broadly, the social intelligence hypothesis (which is another way of framing the selection for communication hypothesis) asserts that primate (and human) intelligence was driven by the need to predict and manipulate the behavior of others, by reading subtle cues in their behavior. These findings suggest that human intelligence may have evolved, instead, as a by-product of selection against fear of and aggression towards others.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science 

Human face recognition ability is specific and highly heritable:

Compared with notable successes in the genetics of basic sensory transduction, progress on the genetics of higher level perception and cognition has been limited. We propose that investigating specific cognitive abilities with well-defined neural substrates, such as face recognition, may yield additional insights. In a twin study of face recognition, we found that the correlation of scores between monozygotic twins (0.70) was more than double the dizygotic twin correlation (0.29), evidence for a high genetic contribution to face recognition ability. Low correlations between face recognition scores and visual and verbal recognition scores indicate that both face recognition ability itself and its genetic basis are largely attributable to face-specific mechanisms. The present results therefore identify an unusual phenomenon: a highly specific cognitive ability that is highly heritable. Our results establish a clear genetic basis for face recognition, opening this intensively studied and socially advantageous cognitive trait to genetic investigation.

In other words, the strength of face recognition does not seem to track other intelligence test results much at all (including tests which measure verbal and visual memory). Rather, it seems to be a domain-specific competency, not something emerging out of general intelligence. And the variation in face recognition ability is highly heritable.

What’s going on here? A reasonable guess for me is that the ability to recognize many, many different faces isn’t something that came up for most of human history. Even in a pre-modern village you’d see the same people over and over. By contrast, if you work in sales you probably need to juggle a lot of faces & names to be successful.

Remember that if a quantitative trait is highly heritable, then by definition directional selection wasn’t operating to drive genes to fixation so that the population was monomorphic in trait value. In English, that means that if there had been a huge benefit to being able to recognize hundreds of faces very well in the past, then we would all be able to recognize hundreds of faces very well, to the same extent. As it is, the story of selection for face recognition has to be more complex, with the selection applicable likely being some sort of balancing selection.
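The logic of that last paragraph can be made concrete with a toy simulation of my own devising (unit allelic effects, truncation selection, no mutation): under sustained directional selection on a purely additive trait, allele frequencies march toward fixation and the additive variance collapses. A heritability as high as the abstract’s twin correlations imply (Falconer’s back-of-envelope: 2 * (0.70 - 0.29) ≈ 0.82) is therefore hard to square with strong directional selection in the past:

```python
import numpy as np

rng = np.random.default_rng(0)
n_ind, n_loci = 10_000, 100
p = np.full(n_loci, 0.5)                    # starting allele frequencies

for gen in range(12):
    genotypes = rng.binomial(2, p, size=(n_ind, n_loci))
    phenotype = genotypes.sum(axis=1) + rng.normal(0, 5, n_ind)
    v_a = np.sum(2 * p * (1 - p))           # additive variance, unit effects
    print(f"gen {gen:2d}: V_A = {v_a:6.1f}, mean p = {p.mean():.3f}")
    selected = phenotype >= np.quantile(phenotype, 0.8)   # keep the top 20%
    p = genotypes[selected].mean(axis=0) / 2              # next generation
```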
Citation: Jeremy B. Wilmer, Laura Germine, Christopher F. Chabris, Garga Chatterjee, Mark Williams, Eric Loken, Ken Nakayama, and Bradley Duchaine, Human face recognition ability is specific and highly heritable, doi:10.1073/pnas.0913053107

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science 

Covers all the major angles. Nice that there’s a newspaper which can support this sort of reporting (on the other hand). Not surprising that Amy Bishop seems to have some history of delusions of grandeur: she claims that both she and her husband have an I.Q. of 180. That’s 5.3 standard deviations above the mean. Assuming a normal distribution, that’s about a 1 in 20 million probability. Of course the tails of the I.Q. distribution are fatter beyond 2 standard deviations than the normal expectation, but at these really high levels (above 160) I’m skeptical that most tests are measuring anything real.
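For those who want the arithmetic behind that figure, a minimal sketch assuming I.Q. is normally distributed with mean 100 and standard deviation 15:

    # Upper-tail probability of a claimed I.Q. of 180 under a normal model.
    from math import erfc, sqrt

    iq, mean, sd = 180, 100, 15
    z = (iq - mean) / sd           # ~5.33 standard deviations above the mean
    p = 0.5 * erfc(z / sqrt(2))    # P(X > 180), the upper tail
    print(z, p, round(1 / p))      # p ~ 4.9e-08, roughly 1 in 20 million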

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science 

Cardiovascular fitness is associated with cognition in young adulthood:

During early adulthood, a phase in which the central nervous system displays considerable plasticity and in which important cognitive traits are shaped, the effects of exercise on cognition remain poorly understood. We performed a cohort study of all Swedish men born in 1950 through 1976 who were enlisted for military service at age 18 (N = 1,221,727). Of these, 268,496 were full-sibling pairs, 3,147 twin pairs, and 1,432 monozygotic twin pairs. Physical fitness and intelligence performance data were collected during conscription examinations and linked with other national databases for information on school achievement, socioeconomic status, and sibship. Relationships between cardiovascular fitness and intelligence at age 18 were evaluated by linear models in the total cohort and in subgroups of full-sibling pairs and twin pairs. Cardiovascular fitness, as measured by ergometer cycling, positively associated with intelligence after adjusting for relevant confounders (regression coefficient b = 0.172; 95% CI, 0.168-0.176). Similar results were obtained within monozygotic twin pairs. In contrast, muscle strength was not associated with cognitive performance. Cross-twin cross-trait analyses showed that the associations were primarily explained by individual specific, non-shared environmental influences (≥80%), whereas heritability explained <15% of covariation. Cardiovascular fitness changes between age 15 and 18 y predicted cognitive performance at 18 y. Cox proportional-hazards models showed that cardiovascular fitness at age 18 y predicted educational achievements later in life. These data substantiate that physical exercise could be an important instrument for public health initiatives to optimize educational achievements, cognitive performance, as well as disease prevention at the society level.

The figure to the left is pretty striking, though the general correlation between intelligence and overall health has long been known. I’m not sure I accept that this correlation is as causal as they say it is, but it probably can’t hurt to encourage moderate exercise within the population. So even if this is another spurious correlation which leads to educational programs that don’t have the intended effect (increasing IQ), it wouldn’t do much harm, and might even result in some good.
Citation: Maria A. I. Åberg, Nancy L. Pedersen, Kjell Torén, Magnus Svartengren, Björn Bäckstrand, Tommy Johnsson, Christiana M. Cooper-Kuhn, N. David Åberg, Michael Nilsson, and H. Georg Kuhn, Cardiovascular fitness is associated with cognition in young adulthood, PNAS 2009, doi:10.1073/pnas.0905307106

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science, Culture 

More Singularity stuff. I’m Not Saying People Are Stupid, says Eliezer Yudkowsky in response to my summary of his talk. The last line of his post: “I’m here because I’m crazy,” says the patient, “not because I’m stupid.” So in Eliezer’s reading the issue is craziness, not stupidity. The problem, I would say, is that stupid people have a “Not Even Crazy” problem: they often can’t get beyond their basic cognitive biases because they don’t have a good grasp of a rational toolkit, nor are they comfortable and fluent in analysis and abstraction. I can grant that many smart people are wrong or crazy, but at least there’s a hope of having them internalize Bayes’ rule.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science 

For Gun-Shy Consumers, Debit Is Replacing Credit:

Visa announced this spring that spending on Visa debit cards in the United States surpassed credit for the first time in the company’s history. In 2008, debit payment volume was $206 billion, compared with credit volume of $203 billion. MasterCard reported that for the first six months of this year, the volume of purchases on its debit cards increased 4.1 percent, to $160 billion, in the United States. Spending on credit and charge cards sank 14.8 percent, to $233 billion.
“Consumers are rational thinking individuals, and they’re going to shift their behavior in a way that fits with their current economic situation,” said Scott Strumello, an associate with the Auriemma Consulting Group, a Long Island-based payment card advisory firm. “They’re thinking more seriously about it, and many may decide, ‘I’m going to use debit where I can and reserve credit for larger purchases.’ ”

I think what’s really going on here is that people are embracing the pain of paying; decoupling the time of payment from the act of purchasing tends to result in more purchases than would otherwise be the case. A perfectly rational individual wouldn’t need to distinguish between debit and credit: what does it matter if you pay for a latte tomorrow (that is, it comes out of your account tomorrow) vs. in the next billing cycle? No, people are rational about the fact that they are irrational. Pay later = buy more, pay tomorrow = buy less. If you want to buy less, heighten the immediacy of the cost.
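One way to formalize being “rational about being irrational” is quasi-hyperbolic (beta-delta) discounting. The sketch below is my own illustration with assumed parameter values, not anything from the article:

    # Felt cost of a $4 latte when the pain of paying lands at the
    # register (debit, effectively immediate) vs. a billing cycle
    # later (credit), under a beta-delta model with present bias.
    beta, delta = 0.7, 0.999   # assumed present bias and daily discount

    def felt_cost(cost, delay_days):
        # Pain now is felt in full; delayed pain is discounted.
        return cost if delay_days == 0 else beta * delta**delay_days * cost

    print(felt_cost(4.0, 0))    # debit:  4.00 felt at the register
    print(felt_cost(4.0, 30))   # credit: ~2.72 felt, a third less painful
    # With beta = 1 (no present bias) the gap nearly vanishes
    # (4.00 vs. ~3.88): a perfectly rational agent shouldn't care
    # much which card she swipes.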

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Economics, Science • Tags: Cognitive Science, Culture, Psychology 

Via Anthropology.net, The prehistory of handedness: Archaeological data and comparative ethology:

Homo sapiens sapiens displays a species wide lateralised hand preference, with 85% of individuals in all populations being right-handed for most manual actions. In contrast, no other great ape species shows such strong and consistent population level biases, indicating that extremes of both direction and strength of manual laterality (i.e., species-wide right-handedness) may have emerged after divergence from the last common ancestor. To reconstruct the hand use patterns of early hominins, laterality is assessed in prehistoric artefacts. Group right side biases are well established from the Neanderthals onward, while patchy evidence from older fossils and artefacts indicates a preponderance of right-handed individuals. Individual hand preferences and group level biases can occur in chimpanzees and other apes for skilled tool use and food processing. Comparing these findings with human ethological data on spontaneous hand use reveals that the great ape clade (including humans) probably has a common effect at the individual level, such that a person can vary from ambidextrous to completely lateralised depending on the action. However, there is currently no theoretical model to explain this result. The degree of task complexity and bimanual complementarity have been proposed as factors affecting lateralisation strength. When primatology meets palaeoanthropology, the evidence suggests species-level right-handedness may have emerged through the social transmission of increasingly complex, bimanually differentiated, tool using activities.

The evolutionary background of handedness is of interest because left-handedness correlates with various individual differences. Handedness can also be somewhat confusing to classify. For example, I am right-handed when it comes to writing (of less relevance today, since I generally type). But I am strongly left-handed in basketball, switch-hit in baseball (slower bat speed from the left), and can throw a football with either arm comfortably (greater strength left, but better touch right, and I tend to side-arm with the left).

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science, Evolution 

Recently I listened to the author of Addiction: A Disorder of Choice, Gene M. Heyman, interviewed on the Tom Ashbrook show. A lot of the discussion revolved around the term “disease”, which I can’t really comment on, but a great deal of Heyman’s thesis is grounded in rather conventional behavior genetic insights. In short, a behavioral trait can have a host of inputs, and is often a combination of environment & genes developing over a lifetime. Alcoholism is not much of an issue among observant Mormons because of their environment, not their genes. Heyman points out that whereas some behavioral phenotypes, such as schizophrenia or autism, are extremely difficult or impossible to change through one’s own choices (for schizophrenia you may need medication, while many autistics are what they are no matter the drug or environment), addiction therapy can work and so change the expression of the trait. Additionally, he makes some important criticisms of the methodologies of clinical studies of addiction, primarily that a strong selection bias in these samples overstates the inability of individuals prone to addiction to control their impulses (similar problems probably produced an overestimate of the concordance for homosexuality among twins).
But the bigger issue is the same one that crops up with obesity: what roles do personal responsibility and public policy play? Many of Heyman’s critics seem to be suggesting that he is reverting to blaming someone for an illness. The fat acceptance movement makes similar arguments. These issues, and the fact that policy and culture revolve around them, mean that we have to begin rethinking our conceptions of free will, choice and decision making. It isn’t about people being good, bad, irresponsible or moral; it is about people being who they are, and confronting the cards they’re dealt.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science, Culture, Health 

We know that dogs can read human faces; it turns out that babies can infer the meaning of different dog barks:

New research shows babies have a handle on the meaning of different dog barks – despite little or no previous exposure to dogs.
Infants just 6 months old can match the sounds of an angry snarl and a friendly yap to photos of dogs displaying threatening and welcoming body language.

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science 

Update: See Ed Yong.
Randall Parker points me to a new paper from Joshua Greene which describes the neurological responses of individuals when they do, or don’t, lie, in situations where lying might be in their self-interest. From EurekaAlert:

The research was designed to test two theories about the nature of honesty – the “Will” theory, in which honesty results from the active resistance of temptation, and the “Grace” theory in which honesty is a product of lack of temptation. The results of this study suggest that the “Grace” theory is true, because the honest participants did not show any additional neural activity when telling the truth.

Using fMRI, Greene found that the honest individuals displayed little to no additional brain activity when reporting their prediction of the coin toss. However, the dishonest participants’ brains were most active in control-related brain regions when they chose not to lie. These control-related brain regions include the dorsolateral prefrontal cortex and the anterior cingulate cortex, and previous research has shown that these regions are active when an individual is asked to lie.

“When the honest people leave money on the table, you don’t see anything special or extra going on in their brains at all,” says Greene. “Whereas, when the dishonest people leave money on the table, that’s when you saw the most robust control network activation.”
If neuroscience is able to identify lies by peering into the brain of the liar, it will be important to distinguish between activity in the brain when lying and activity caused by the temptation to lie. Greene says that eventually it may be possible to detect lies by looking at someone’s brain activity, although a lot more work must be done before this is possible.


Will fMRI really be better than the various other physiological indicators used in contemporary lie detector tests? What’s the error rate? False positives are a killer here. Nice quote for a press release, but we’ll see; color me skeptical. Nevertheless, I am intrigued by the idea that people of diverse ethical orientations may have strong cognitive biases in particular directions which naturally result in neurological patterns we can discern. A few years back Jonathan Haidt made a splash by mooting the idea of average moral differences between populations, but we don’t need to go that far; behavior genetics has long shown us that there’s a large heritable component to the decisions we make which might seem to have a moral or ethical valence.
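To see why false positives are the killer, consider a minimal base-rate sketch. Every number below (prevalence of lying, sensitivity, false-positive rate) is an assumption of mine for illustration, not a figure from Greene’s study:

    # Bayes' rule applied to a hypothetical lie detector.
    base_rate = 0.05      # assumed share of screened statements that are lies
    sensitivity = 0.90    # assumed P(flagged | lie)
    false_pos = 0.10      # assumed P(flagged | truthful)

    p_flag = sensitivity * base_rate + false_pos * (1 - base_rate)
    p_lie_given_flag = sensitivity * base_rate / p_flag
    print(p_lie_given_flag)   # ~0.32: roughly two-thirds of flagged
                              # statements are honest, even with a
                              # 90%-sensitive test

When the thing you’re screening for is rare, even a modest false-positive rate swamps the true positives.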
In any case, for thousands of years philosophers have speculated whether humans are innately good or bad, from Rousseau and Hobbes to Xun Zi and Mencius. The time for speculation is over, as experimental philosophers are looking into the empirical distribution of human moral intuitions, as opposed to surveying the reflections of their philosophically oriented colleagues. Instead of one moral sense, it seems much more likely that humans exhibit plasticity in their behavior as well as differences in modal response to a given circumstance. In other words, morality is situational, but the distribution of responses might vary quite a bit from person to person given the same situations. Attempting to drill down on the neuroscientific map of this phenomenon is one avenue of exploration, but genetics will probably get in on the action at some point. Intelligent people will also perhaps fine-tune their models of how “free will” works, though much of this research will be irrelevant to the majority.
Citation: Greene, J.D., and Paxton, J.M., Patterns of neural activity associated with honest and dishonest moral decisions, Proceedings of the National Academy of Sciences (not online yet)

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science 

Arnold Kling highlights this section from a Scientific American article, The Science of Economic Bubbles and Busts:

But behavioral economics experiments routinely show that despite similar outcomes, people (and other primates) hate a loss more than they desire a gain, an evolutionary hand-me-down that encourages organisms to preserve food supplies or to weigh a situation carefully before risking encounters with predators.
One group that does not value perceived losses differently than gains are individuals with autism, a disorder characterized by problems with social interaction. When tested, autistics often demonstrate strict logic when balancing gains and losses, but this seeming rationality may itself denote abnormal behavior. “Adhering to logical, rational principles of ideal economic choice may be biologically unnatural,” says Colin F. Camerer, a professor of behavioral economics at Caltech. Better insight into human psychology gleaned by neuroscientists holds the promise of changing forever our fundamental assumptions about the way entire economies function–and our understanding of the motivations of the individual participants therein, who buy homes or stocks and who have trouble judging whether a dollar is worth as much today as it was yesterday.

The gain vs. loss asymmetry indicates a strong risk aversion in humanity. Why might this be? I suspect it has to do with the fact that for most of our history we’ve been an animal like any other, pinned to the Malthusian boundary, always facing individual or group extinction. The possibility of becoming as rich as Warren Buffett, or as prolific as Genghis Khan, by taking risks or treading the path less taken simply did not exist. The downside was extinction; the upside might be temporary success, only to see your lineage swept away by history due to a propensity to gamble.


Consider the case of sex. Clonal reproduction is more efficient in the short term: every individual can generate many copies of herself. In dioecious sexual organisms the existence of males cuts down the short-term reproductive output of both sexes in terms of gene copies passed on each generation. While clonally reproducing females generate exact copies, sexual females dilute their genetic contribution by 1/2. But over the long term clonal lines presumably go extinct often enough that the non-clonal varieties are dominant at any given moment. Cloning has a short-term upside, but its long-term downside is extinction (over the long term all species go extinct, so the point is that sex shifts upward the expected interval over which a lineage persists).
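A toy simulation of that trade-off, with per-generation extinction risks that are pure assumptions on my part (imagine a pathogen that can sweep a genetically uniform clonal line far more easily than a variable sexual one):

    # Toy model: short-term efficiency vs. long-term extinction.
    import random

    random.seed(1)

    def survives(generations, risk):
        # True if the lineage dodges extinction every generation.
        return all(random.random() > risk for _ in range(generations))

    trials, gens = 10_000, 200
    clonal_risk, sexual_risk = 0.02, 0.002   # assumed per-generation risks

    clonal = sum(survives(gens, clonal_risk) for _ in range(trials)) / trials
    sexual = sum(survives(gens, sexual_risk) for _ in range(trials)) / trials
    print(clonal, sexual)   # ~0.02 vs. ~0.67: cloning's two-fold
                            # short-term advantage buys long-run oblivion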
Risk aversion is really a way to dampen volatility of behavior. You do what’s worked in the past and stick to custom & tradition. In The Pursuit of Glory: The Five Revolutions that Made Modern Europe: 1648-1815 Tim Blanning notes that it was very difficult to get European peasants to adopt new crops and modern agricultural science, productivity be damned. One exception was the rapid adoption of the potato in Ireland, which witnessed an enormous population boom in the 18th and early 19th centuries. And of course Ireland was then hit by a potato blight; its excessive dependence on this one crop, which in the short term was the optimal way to convert land to calories in Ireland’s climate, rendered it vulnerable to famine (note that this example is not proof of the principle, just an illustration, as I’m well aware of the various institutional factors which exacerbated the Irish famine).
The past is not the present. In the Malthusian world there simply weren’t the rates of economic growth we see today in the wake of the Industrial Revolution and its descendants. Thomas Malthus may seem foolish, writing as he was on the precipice of a new age, but his ideas were informed by all of human history up to that point. The hyperrational autistic individual, whose analytic cognitive functions are sharp, faces bewildering human animals whose irrational “hard & fast” intuitive reflexes are embedded in a world which baffles intuition. But like the evolutionary psychologists I suspect that these cognitive intuitions have deep roots in pre-modern ecological fitness, the world of the hunter-gatherer & peasant, and many are to an extent “baked into the cake” of our cognitive architecture.
Naturally the pig-headed stubbornness of Russian peasants when faced with 19th-century scientific agronomists seems short-sighted today, but Russia’s 20th-century experiments to some extent vindicate the suspicions of simple folk. These attachments to older ways, which emerge from risk aversion, aren’t simply nuisances; they might be essential to understanding human utility functions, which may rest on a thick network of prior values. In The Myth of the Rational Voter Bryan Caplan argues that the typical human has a weak conception of the non-zero-sum dynamics at the heart of modern economics, especially in the case of free trade. This is true, but I think what it misses is that the utility function of most humans is more strongly weighted toward risk aversion, accepting a trade-off between rate of growth and volatility of growth which minimizes the latter (see the sketch below). At the end of the day it doesn’t really matter if it isn’t “rational”* for people to weight losses twice as strongly as gains in their utility functions; it’s just how many people are, in their bones.
* Most talk of rationality presupposes a relatively clean, elegant and spare mental architecture. This model is manifestly false.
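The sketch promised above: in a multiplicative world, long-run growth is governed by the geometric mean of growth rates, not the arithmetic mean, so trading expected growth for lower volatility can be entirely defensible. The numbers are illustrative assumptions of my own:

    # Volatility drag: two strategies with the same arithmetic mean
    # growth factor (1.05 per period) but very different long-run fates.
    from math import prod

    steady = [1.05] * 10          # 5% growth every period
    volatile = [1.50, 0.60] * 5   # +50% / -40% alternating, same mean

    print(prod(steady))    # ~1.63: compounds to +63%
    print(prod(volatile))  # ~0.59: shrinks ~41% despite the same mean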

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Cognitive Science, Culture 
About Razib Khan

"I have degrees in biology and biochemistry, a passion for genetics, history, and philosophy, and shrimp is my favorite food. If you want to know more, see the links at http://www.razib.com"