The Unz Review - Mobile
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
Kjmtchl@GNXP Blogview


So many things can go wrong in the development of the human brain that it is amazing it ever goes right. The fact that it usually does – that the majority of people do not suffer from a neurodevelopmental disorder – is due to the property engineers call robustness. This property has important implications for understanding the genetic architecture of neurodevelopmental disorders: what kinds of insults will the system be able to tolerate, and what kinds will it be vulnerable to?

The development of the brain involves many thousands of different gene products acting in hundreds of distinct molecular and cellular processes, all tightly coordinated in space and time – from patterning and proliferation to cell migration, axon guidance, synapse formation and many others. Large numbers of proteins are involved in the biochemical pathways and networks underlying each cell biological process. Each of these systems has evolved not just to do a particular job, but to do it robustly – to make sure this process happens even in the face of diverse challenges.

Robustness is an emergent and highly adaptive property of complex systems that can be selected for in response to particular pressures. These include extrinsic factors, such as variability in temperature, supply of nutrients, etc., but also intrinsic factors. A major source of intrinsic variation is noise in gene expression – random fluctuations in the levels of all proteins in all cells. These fluctuations arise due to the probabilistic nature of gene transcription – whether a messenger RNA is actively being made from a gene at any particular moment. The system must be able to deal with these fluctuations and it can be argued that the noise in the system actually acts as a buffer. If the system only worked within a narrow operating range for each component then it would be very vulnerable to failure of any single part.
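The flavour of this noise is easy to see in a toy "bursty transcription" model – a minimal sketch (all parameters are illustrative, not fitted to any real gene) in which transcription fires stochastically and each firing adds a burst of protein that then decays:

```python
import random
import statistics

random.seed(1)  # reproducible toy run

def simulate_cell(steps=2000, burst_prob=0.05, burst_size=20, decay=0.02):
    """Toy bursty-expression model: transcription fires stochastically;
    each firing adds a burst of protein, which decays exponentially."""
    protein = 0.0
    for _ in range(steps):
        if random.random() < burst_prob:   # gene transiently "on"
            protein += burst_size
        protein *= (1.0 - decay)           # first-order decay
    return protein

levels = [simulate_cell() for _ in range(500)]
mean = statistics.mean(levels)
cv = statistics.stdev(levels) / mean       # cell-to-cell variability
print(f"mean protein ~{mean:.0f} units, CV ~{cv:.2f}")
```

With these toy parameters the population mean settles near production/decay (about 50 units) while individual cells scatter widely around it – exactly the kind of ongoing fluctuation the system must be able to buffer.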

Natural selection will therefore favour system architectures that are more robust to environmental and intrinsic variation. In the process, such systems also indirectly become robust to the other major source of variation – mutations.

Many individual components can be deleted entirely with no discernible effect on the system (which is why looking exhaustively for a phenotype in mouse mutants can be so frustrating – many gene knockouts are irritatingly normal). You could argue that if the knockout of a gene does not affect a particular process, then the gene product is not actually involved in that process, but that is not always the case. One can often show that a protein is involved biochemically, and even that the system is sensitive to changes in the level of that protein – increased expression can often cause a phenotype even when loss-of-function manipulations do not.

Direct evidence for robustness of neurodevelopmental systems comes from examples of genetic background effects on phenotypes caused by specific mutations. While many components of the system can be deleted without effect, others do cause a clear phenotype when mutated. However, such phenotypes are often modified by the genetic background. This is commonly seen in mouse experiments, for example, where the effect of a mutation may vary widely when it is crossed into various inbred strains. The implication is that there are some genetic differences between strains that by themselves have no effect on the phenotype, but that are clearly involved in the system or process, as they strongly modify the effect of another mutation.

How is this relevant to understanding so-called complex disorders? There are two schools of thought on the genetic architecture of these conditions. One considers the symptoms of, say, autism or schizophrenia or epilepsy as the consequence of mutation in any one of a very large number of distinct genes. This is the scenario for intellectual disability, for example, and also for many other conditions like inherited blindness or deafness. There are hundreds of distinct mutations that can result in these symptoms. The mutations in these cases are almost always ones that have a dramatic effect on the level or function of the encoded protein.

The other model is that complex disorders arise, in many cases, due to the combined effects of a very large number of common polymorphisms – these are bases in the genome where the sequence is variable in the population (e.g., there might be an “A” in some people but a “G” in others). The human genome contains millions of such sites and many consider the specific combination of variants that each person inherits at these sites to be the most important determinant of their phenotype. (I disagree, especially when it comes to disease). The idea for disorders such as schizophrenia is that at many of these sites (perhaps thousands of them), one of the variants may predispose slightly to the illness. Each one has an almost negligible effect alone, but if you are unlucky enough to inherit a lot of them, then the system might be pushed over the level of burden that it can tolerate, into a pathogenic state.
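The statistical logic of this model can be sketched in a few lines of simulation (site numbers, allele frequencies and effect sizes are all arbitrary here): summing a thousand tiny, equal effects produces a near-normal "liability" distribution, and "disease" is simply the unlucky tail beyond the tolerance threshold.

```python
import random
import statistics

random.seed(0)

N_SITES, N_PEOPLE = 1000, 2000   # arbitrary numbers for illustration
# each site's risk allele (frequency 0.5 here) adds one unit of "liability"
liability = [sum(random.random() < 0.5 for _ in range(N_SITES))
             for _ in range(N_PEOPLE)]

mu = statistics.mean(liability)
sd = statistics.stdev(liability)
threshold = sorted(liability)[int(0.99 * N_PEOPLE)]   # system's tolerance level
n_affected = sum(score > threshold for score in liability)
print(f"liability: mean {mu:.0f}, sd {sd:.1f}; {n_affected} of {N_PEOPLE} 'affected'")
```

The sum of many independent small effects is tightly clustered around its mean – which is precisely why, as argued below, a robust system might be expected to absorb this kind of variation rather than be pushed over a threshold by it.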

These are the two most extreme positions – there are also many models that incorporate effects of both rare mutations and common polymorphisms. Models incorporating common variants as modifiers of the effects of rare mutations make a lot of biological sense. What I want to consider here is the model that the disease is caused in some individuals purely by the combined effects of hundreds or thousands of common variants (without what I call a “proper mutation”).

Ironically, robustness has been invoked by both proponents and opponents of this idea. I have argued that neurodevelopmental systems should be robust to the combined effects of many variants that have only very tiny effects on protein expression or function (which is the case for most common variants). This is precisely because the system has evolved to buffer fluctuations in many components all the time. In addition to being an intrinsic, passive property of the architecture of developmental networks, robustness is also actively promoted through homeostatic feedback loops, which can maintain optimal performance in the face of variations, by regulating the levels of other components to compensate. The effects of such variants should therefore NOT be cumulative – they should be absorbed by the system. (In fact, you could argue that a certain level of noise in the system is a “design feature” because it enables this buffering).
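How a homeostatic feedback loop absorbs this kind of variation can be shown with a minimal sketch (a generic Hill-type negative autoregulation term; all parameters are illustrative):

```python
def steady_state(k, feedback=False, K=40.0, n=8, d=0.1, steps=5000, dt=0.1):
    """Integrate dx/dt = production - d*x to steady state. With feedback on,
    production is repressed by x itself (Hill-type negative autoregulation)."""
    x = 0.0
    for _ in range(steps):
        production = k / (1 + (x / K) ** n) if feedback else k
        x += dt * (production - d * x)
    return x

# a 30% drop in the input rate k, with and without the feedback loop
open_loop = (steady_state(5.0), steady_state(3.5))
closed_loop = (steady_state(5.0, feedback=True), steady_state(3.5, feedback=True))

open_shift = 1 - open_loop[1] / open_loop[0]
closed_shift = 1 - closed_loop[1] / closed_loop[0]
print(f"output shift: {open_shift:.0%} open-loop vs {closed_shift:.0%} with feedback")
```

In this cartoon a 30% perturbation to the input passes straight through the open-loop system but is substantially damped by the feedback – the kind of compensation that should swallow variants of very small effect.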

Others have argued precisely the opposite – that robustness permits cryptic genetic variation to accumulate in populations. Cryptic genetic variation has no effect in the context in which it arises (allowing it to escape selection) but, in another context – say in a different environment, or a different genetic background – can have a large effect. This is exactly what robustness allows to happen – indeed, the fact that cryptic genetic variation exists provides some of the best evidence that we have that the systems are robust as it shows directly that mutations in some components are tolerated in most contexts. But is there any evidence that such cryptic variation comprises hundreds or thousands of common variants?

To be fair, proving that this is the case would be very difficult. You could argue from animal breeding experiments that the continuing response of many traits to selection means there must be a vast pool of genetic variation that can affect them, which can be cumulatively enriched by selective breeding, almost ad infinitum. However, new mutations are known to make at least some contribution to this continued response to selection. In addition, in most cases where the genetics of such continuously distributed traits have been unpicked (by identifying the specific factors contributing to strain differences, for example), they come down to perhaps tens of loci showing very strong and complex epistatic interactions (1, 2, 3). Thus, the fact that variation in a trait is multigenic does not mean it is caused by mutations of small individual effect – an effectively continuous distribution can emerge from very complex epistatic interactions between a fairly small number of mutations that have surprisingly large effects in isolation.
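The point about epistasis can be illustrated with a toy model (the weights and locus count are invented): even ten loci with large, tangled pairwise interactions generate so many distinct genotype effects that the resulting trait distribution looks effectively continuous.

```python
import itertools
import random

random.seed(2)
L = 10  # a small number of loci with large, interacting effects

main_effects = [random.gauss(0, 1) for _ in range(L)]
epistasis = {pair: random.gauss(0, 2)          # strong pairwise interactions
             for pair in itertools.combinations(range(L), 2)}

def phenotype(genotype):
    value = sum(m * g for m, g in zip(main_effects, genotype))
    value += sum(w * genotype[i] * genotype[j]
                 for (i, j), w in epistasis.items())
    return value

population = [[random.randint(0, 1) for _ in range(L)] for _ in range(2000)]
values = [phenotype(g) for g in population]
distinct = len({round(v, 6) for v in values})
print(f"{distinct} distinct trait values from only {L} loci")
```

Ten biallelic loci already allow 1,024 genotypes, and with interaction terms each genotype lands at its own trait value – a distribution that would look smooth in any population sample without a single small-effect variant involved.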

(I would be keen to hear of any examples showing real polygenicity on the level of hundreds or thousands of variants).

In the case of genetic modifiers of specific mutations – say, where a mutation causes a very different phenotype in different mouse strains – most of the effects that have been identified have been mapped to one or a small number of mutations which have no effect by themselves, but which strongly modify the phenotype caused by another mutation.

These and other findings suggest that (i) cryptic genetic variation relevant to disease is certainly likely to exist and to have important effects on phenotype, but that (ii) such genetic background effects can most likely be ascribed to one, several, or perhaps tens of mutations, as opposed to hundreds or thousands of common polymorphisms.

This is already too long, but it raises the question: if neurodevelopmental systems are so robust, then why do we ever get neurodevelopmental disease? The paradox of systems that are generally robust is that they may be quite vulnerable to large variation in a specific subset of components. Why certain types of genes are in this set, while others can be completely deleted without effect, is the big question. More on that in a subsequent post…

(Republished from by permission of author or representative)
• Category: Science • Tags: Autism, Complex Disorders, Mutations, Schizophrenia 

A trio of papers in this week’s Nature identifies mutations causing autism in four new genes, demonstrates the importance of de novo mutations in the etiology of this disorder and suggests that there may be 1,000 or more genes in which high-risk, autism-causing mutations can occur.

These studies provide an explanation for what seems like a paradox: on the one hand, twin studies show that autism is very strongly genetic (identical twins are much more likely to share a diagnosis than fraternal twins) – on the other, many cases are sporadic, with no one else in the family affected. How can the condition be “genetic” but not always run in the family? The explanation is that many cases are caused by new mutations – ones that arise in the germline of the parents. (This is similar to conditions like Down syndrome). The studies reported in Nature are trying to find those mutations and see which genes are affected.

These studies are only possible because of the tremendous advances in our ability to sequence DNA. The first human genome cost three billion dollars and took ten years to sequence – we can now do one for a couple of thousand dollars in a few days. That means you can scan through the entire genome of any affected individual for mutated genes. The problem is that we each carry hundreds of such mutations, making it difficult to recognise the ones that are really causing disease.

The solution is to sequence the DNA of large numbers of people with the same condition and see if the same genes pop up multiple times. That is what these studies aimed to do, with samples of a couple hundred patients each. They also concentrated on families where autism was present in only one child and looked specifically for mutations in that child that were not carried by either parent – so-called de novo mutations, that arise in the generation of sperm or eggs. These are the easiest to detect because they are likely to be the most severe. (Mutations with very severe effects are unlikely to be passed on because the people who carry them are far less likely to have children).

There is already strong evidence that de novo mutations play an important role in the etiology of autism – first, de novo copy number variants (deletions or duplications of chunks of chromosomes) appear at a significantly higher rate in autism patients compared to controls (in 8% of patients compared to 2% of controls). Second, it has been known for a while that the risk of autism increases with paternal age – that is, older fathers are more likely to have a child with autism. (Initial studies suggested the risk was up to five-fold greater in fathers over forty – these figures have been revised downwards with increasing sample sizes, but the effect remains very significant, with risk increasing monotonically with paternal age). This is also true of schizophrenia and, in fact, of dominant Mendelian disorders in general (those caused by single mutations). The reason is that the germ cells generating sperm in men continue to divide throughout their lifetime, leading to an increased chance of a mutation having happened as time goes on.
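The paternal-age effect follows from simple accumulation arithmetic. Here is a back-of-the-envelope sketch (the division counts are rough figures of the kind cited in the literature; the per-division error rate is purely hypothetical):

```python
# All figures are rough or hypothetical, for illustration only:
DIVISIONS_BEFORE_PUBERTY = 34     # approximate germline divisions up to puberty
DIVISIONS_PER_YEAR = 23           # commonly cited spermatogonial division rate
MUTATIONS_PER_DIVISION = 0.1      # hypothetical transmissible-error rate

def expected_paternal_mutations(age, puberty=15):
    """Expected de novo mutations passed on by a father of a given age,
    under a simple divisions-accumulate-errors model."""
    divisions = DIVISIONS_BEFORE_PUBERTY + DIVISIONS_PER_YEAR * max(0, age - puberty)
    return MUTATIONS_PER_DIVISION * divisions

for age in (20, 30, 40, 50):
    print(age, round(expected_paternal_mutations(age), 1))
```

Because divisions keep accruing year after year, expected mutation counts rise monotonically with paternal age – in this cartoon a 40-year-old father transmits several times as many replication errors as a 20-year-old.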

The three studies in Nature were looking for a different class of mutation – point mutations or changes in single DNA bases. They each provide a list of genes with de novo mutations found in specific patients. Several of these showed a mutation in more than one (unrelated) patient, providing strong evidence that these mutations are likely to be causing autism in those patients. The genes with multiple hits include CHD8, SCN2A, KATNAL2 and NTNG1. Mutations in the last of these, NTNG1, were only found in two patients but have been previously implicated as a rare cause of Rett syndrome. This gene encodes the protein Netrin-G1, which is involved in the guidance of growing nerves and the specification of neuronal connections. CHD8 is a chromatin-remodeling factor and is involved in Wnt signaling, a major neurodevelopmental pathway, as well as interacting with p53, which controls cell growth and division. SCN2A encodes a sodium channel subunit; mutations in this gene are involved in a variety of epilepsies. Not much is known about KATNAL2, except by homology – it is related to proteins katanin and spastin, which sever microtubules – mutations in spastin are associated with hereditary spastic paraplegia. How the specific mutations observed in these genes cause the symptoms of autism in these patients (or contribute to them) is not clear – these discoveries are just a starting point, but they will greatly aid the quest to understand the biological basis of this disorder.

The fact that these studies only got a few repeat hits also means that there are probably many hundreds or even thousands of genes that can cause autism when mutated (if there were only a small number, we would see more repeat hits). Some of these will be among the other genes on the lists provided by these studies and will no doubt be recognisable as more patients are sequenced. Interestingly, many of the genes on the lists are involved in aspects of nervous system development or function and encode proteins that interact closely with each other – this makes it more likely that they are really involved.
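The logic behind that inference can be made concrete with a simple Poisson model (the mutation and gene counts below are illustrative, not taken from the papers): scatter a fixed number of causal de novo mutations uniformly over N genes and ask how many genes should be hit twice or more.

```python
import math

def expected_repeat_genes(n_genes, n_mutations):
    """Expected number of genes hit two or more times when n_mutations
    causal hits are scattered uniformly over n_genes (Poisson model)."""
    lam = n_mutations / n_genes
    return n_genes * (1 - math.exp(-lam) * (1 + lam))

# illustrative: ~250 causal de novo mutations across candidate gene pools
for n_genes in (200, 1000, 5000):
    print(n_genes, round(expected_repeat_genes(n_genes, 250), 1))
```

With ~250 mutations, a 200-gene target pool would yield dozens of repeat hits, whereas a 5,000-gene pool yields only a handful – much closer to what the studies actually observed, which is why a large target pool is inferred.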

These studies reinforce the fact that autism is not one disorder – not clinically and not genetically either. Like intellectual disability or epilepsy or many other conditions, it can be caused by mutations in any of a very large number of genes. The ones we know about so far make up around 30% of cases – these new studies add to that list and also show how far we have to go to complete it.

We should recognise too that the picture will also get more complex – in many cases there may be more than one mutation involved in causing the disease. De novo mutations are likely to be the most severe class and thus most likely to cause disease with high penetrance themselves. But many inherited mutations may cause autism only in combination with one or a few other mutations.

These complexities will emerge over time, but for now we can aim to recognise the simpler cases where a mutation in a particular gene is clearly implicated. Each new gene discovered means that the fraction of cases we can assign to a specific cause increases. As we learn more about the biology of each case, those genetic diagnoses will have important implications for prognosis, treatment and reproductive decisions. We can aim to diagnose and treat the underlying cause in each patient and not just the symptoms.

• Category: Science • Tags: Autism, Genetics, Mutations 

Finding your soulmate, for a neuron, is a daunting task. With so many opportunities for casual hook-ups, how do you know when you find “the one”?

In the early 1960s, Roger Sperry proposed his famous “chemoaffinity theory” to explain how neural connectivity arises. This was based on observations of remarkable specificity in the projections of nerves regenerating from the eyes of frogs to their targets in the brain. The first version of this theory proposed that each neuron found its target through matching labels expressed on the surfaces of the two cells. He quickly realised, however, that with ~200,000 neurons in the retina, the genome was not large enough to encode a separate connectivity molecule for each one. This led him to the insight that a regular array of connections from one field of neurons (like the retina) across a target field (the optic tectum, in this case) could be readily achieved by gradients of only one or a few molecules.

The molecules in question, Ephrins and Eph receptors, were discovered thirty-some years later. They are now known to control topographic projections of sets of neurons to other sets of neurons across many areas of the brain, such that nearest-neighbour relationships are maintained (e.g., neurons next to each other in the retina connect to neurons next to each other in the tectum). In this way, the map of the visual world that is generated in the retina is transmitted intact to its targets. Actually, maintenance of nearest-neighbour topography seems to be a general property of projections between any two areas, even ones that do not obviously map some external property across them.
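Sperry’s insight – that one matched pair of gradients can encode a whole map – can be sketched in a few lines (the exponential gradients and the set-point rule here are a cartoon, not the actual Ephrin biochemistry):

```python
import math

def retina_to_tectum(x, set_point=math.e):
    """Cartoon of gradient matching: receptor level rises exponentially
    across the retina, ligand level across the tectum; an axon stops
    where receptor * ligand reaches a fixed set point."""
    receptor = math.exp(x)                 # EphA-like gradient along the retina
    # receptor * exp(y) == set_point  =>  y = log(set_point) - x
    y = math.log(set_point) - x
    return min(max(y, 0.0), 1.0)           # clip to the tectum's extent

positions = [i / 10 for i in range(11)]
tectal = [retina_to_tectum(x) for x in positions]
print(list(zip(positions, [round(y, 2) for y in tectal])))
```

Each retinal position maps to a unique tectal position and neighbours stay neighbours (with the map simply reversed) – a single receptor/ligand pair specifies the entire topographic projection.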

But the idea of matching labels was not wrong – they do exist and they play a very important part in an earlier step of wiring – finding the correct target region in the first place. This is nicely illustrated by a beautiful paper studying projections of retinal neurons in the mouse, which implicates proteins in the Cadherin family in this process.

In the retina, photoreceptor cells sense light and transmit this information, through a couple of relays, to retinal ganglion cells (RGCs). These are the cells that send their projections out of the retina, through the optic nerve, to the brain. But the tectum is not the only target of these neurons. There are, in fact, at least 20 different types of RGCs with distinct functions that project from the retina to various parts of the brain.

In mammals, “seeing” is mediated by projections to the visual centre of the thalamus, which projects in turn to the primary visual cortex. But conscious vision is only one thing we use our eyes for. The equivalent of the tectum, called the superior colliculus in mammals, is also a target for RGCs, and mediates reflexive eye movements, head turns and shifts of attention. (It might even be responsible for blindsight – subconscious visual responsiveness in consciously blind patients). Other RGCs send messages to regions controlling circadian rhythms (the suprachiasmatic nuclei) or pupillary reflexes (areas of the midbrain called the olivary pretectal nuclei).

These RGCs express a photoresponsive pigment (melanopsin) and respond to light directly. This likely reflects the fact that early eyes contained both ciliated photoreceptors (like current rods and cones) and rhabdomeric photoreceptors (possibly the ancestors of RGCs and other retinal cells).

So how do these various RGCs know which part of the brain to project to? This was the question investigated by Andrew Huberman and colleagues, who looked for inspiration to the fly eye. It had previously been shown that a member of the Cadherin family of proteins was involved in fly photoreceptor axons choosing the right layer to project to in the optic lobe. Cadherins are “homophilic” adhesion molecules – they are expressed on the surface of cells and like to bind to themselves. Two cells expressing the same Cadherin protein will therefore stick to each other. This stickiness may be used as a signal to make a synaptic connection between a neuron and its target.

The protein implicated in flies, N-Cadherin, is widely expressed in mammals and thus unlikely to specify connections to different targets of the retina. But Cadherins comprise a large family of proteins, suggesting that other members might play more specific roles. This turns out to be the case – a screen of these proteins revealed several expressed in distinct regions of the brain receiving inputs from subtypes of RGCs. One in particular, Cadherin-6, is expressed in non-image-forming brain regions that receive retinal inputs – those controlling eye movements and pupillary reflexes, for example. The protein is also expressed in a very discrete subset of RGCs – specifically those that project to the Cadherin-6-expressing targets in the brain.

The obvious hypothesis was that this matching protein expression allowed those RGCs to recognise their correct targets by literally sticking to them. To test this, they analysed these projections in mice lacking the Cadherin-6 molecule. Sure enough, the projections to those targets were severely affected – the axons spread out over the general area of the brain but failed to zero in on the specific subregions that they normally targeted.
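The logic of homophilic matching, and of the knockout experiment, can be caricatured in a few lines (the cell-type names and all label assignments are hypothetical, except that “Cdh6” marking non-image-forming targets echoes the finding described above):

```python
# All cell-type names and label assignments are hypothetical, except that
# "Cdh6" marking a non-image-forming target echoes the Cadherin-6 result.
rgc_labels = {"rgc_type_1": "Cdh6", "rgc_type_2": "Cdh6", "rgc_type_3": "CdhX"}
target_labels = {"olivary pretectal nucleus": "Cdh6", "other_target": "CdhX"}

def wire(rgcs, targets, knockout=None):
    """Connect each RGC to the first target carrying the same surface label;
    knocking out a label leaves its axons without a match."""
    connections = {}
    for cell, label in rgcs.items():
        if label == knockout:
            connections[cell] = None        # axon spreads, fails to zero in
        else:
            connections[cell] = next(
                (t for t, l in targets.items() if l == label), None)
    return connections

normal = wire(rgc_labels, target_labels)
mutant = wire(rgc_labels, target_labels, knockout="Cdh6")
print(normal)
print(mutant)
```

Removing one label leaves exactly the cells that carried it without a match, while everything else wires up normally – the selective deficit seen in the Cadherin-6 mutants.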

These results illustrate a general principle likely to be repeated using different Cadherins in different RGC subsets and also in other parts of the brain. Indeed, a paper published at the same time shows that Cadherin-9 may play a similar role in the developing hippocampus. In addition, other families of molecules, such as Leucine-Rich Repeat proteins, may act as synaptic matchmakers by promoting homophilic adhesion between neurons and their targets. (Both Cadherins and LRR proteins also have important “heterophilic” interactions with other proteins).

The expansion of these families in vertebrates could conceivably be linked to the greater complexity of the nervous system, which presumably requires more such labels to specify it. But these molecules may be of more than just academic interest in understanding the molecular logic and evolution of the genetic program that specifies brain wiring. Mutations in various members of the Cadherin (and related protocadherin) and LRR gene families have also been implicated in neurodevelopmental disorders, including autism, schizophrenia, Tourette’s syndrome and others. Defining the molecules and mechanisms involved in normal development may thus be crucial to understanding the roots of neurodevelopmental disease.

Osterhout, J., Josten, N., Yamada, J., Pan, F., Wu, S., Nguyen, P., Panagiotakos, G., Inoue, Y., Egusa, S., Volgyi, B., Inoue, T., Bloomfield, S., Barres, B., Berson, D., Feldheim, D., & Huberman, A. (2011). Cadherin-6 Mediates Axon-Target Matching in a Non-Image-Forming Visual Circuit Neuron, 71 (4), 632-639 DOI: 10.1016/j.neuron.2011.07.006

Williams, M., Wilke, S., Daggett, A., Davis, E., Otto, S., Ravi, D., Ripley, B., Bushong, E., Ellisman, M., Klein, G., & Ghosh, A. (2011). Cadherin-9 Regulates Synapse-Specific Differentiation in the Developing Hippocampus Neuron, 71 (4), 640-655 DOI: 10.1016/j.neuron.2011.06.019

• Category: Science • Tags: Autism, Connectivity 

A debate is raging in human genetics these days as to why the massive genome-wide association studies (GWAS) that have been carried out for every trait and disorder imaginable over the last several years have not explained more of the underlying heritability. This is especially true for many of the so-called complex disorders that have been investigated, where results have been far less than hoped for. A good deal of effort has gone into quantifying exactly how much of the genetic variance has been “explained” and how much remains “missing”.

The problem with this question is that it limits the search space for the solution. It forces our thinking further and further along a certain path, when what we really need is to draw back and question the assumptions on which the whole approach is founded. Rather than asking what is the right answer to this question, we should be asking: what is the right question?

The idea of performing genome-wide association studies for complex disorders rests on a number of very fundamental and very big assumptions. These are explored in a recent article I wrote for Genome Biology (referenced below; reprints available on request). They are:

1) That what we call complex disorders are unitary conditions. That is, clinical categories like schizophrenia or diabetes or asthma are each a single disease and it is appropriate to investigate them by lumping together everyone in the population who has such a diagnosis – allowing us to calculate things like heritability and relative risks. Such population-based figures are only informative if all patients with these symptoms really have a common etiology.

2) That the underlying genetic architecture is polygenic – i.e., the disease arises in each individual due to toxic combinations of many genetic variants that are individually segregating at high frequency in the population (i.e., “common variants”).

3) That, despite the observed dramatic discontinuities in actual risk for the disease across the population, there is some underlying quantitative trait called “liability” that is normally distributed in the population. If a person’s load of risk variants exceeds some threshold of liability, then disease arises.

All of these assumptions typically go unquestioned – often unmentioned, in fact – yet there is no evidence that any of them is valid. In fact, the more you step back and look at them with an objective eye, the more outlandish they seem, even from first principles.

First, what reason is there to think that there is only one route to the symptoms observed in any particular complex disorder? We know there are lots of ways, genetically speaking, to cause mental retardation or blindness or deafness – why should this not also be the case for psychosis or seizures or poor blood sugar regulation? If the clinical diagnosis of a specific disorder is based on superficial criteria, as is especially the case for psychiatric disorders, then this assumption is unlikely to hold.

Second, the idea that common variants could contribute significantly to disease runs up against the effects of natural selection pretty quickly – variants that cause disease get selected against and are therefore rare. You can propose models of balancing selection (where a specific variant is beneficial in some genomic contexts and harmful in others), but there is no evidence that this mechanism is widespread. In general, the more arcane your model has to become to accommodate contradictory evidence, the more inclined you should be to question the initial premise.

Third, the idea that common disorders (where people either are or are not affected) really can be treated as quantitative traits (with a smooth distribution in the population, as with height) is really, truly bizarre. The history of this idea can be traced back to early geneticists, but it was popularised by Douglas Falconer, the godfather of quantitative genetics (he literally wrote the book).

In an attempt to demonstrate the relevance of quantitative genetics to the study of human disease, Falconer came up with a nifty solution. Even though disease states are typically all-or-nothing, and even though the actual risk of disease is clearly very discontinuously distributed in the population (dramatically higher in relatives of affecteds, for example), he claimed that it was reasonable to assume that there was something called the underlying liability to the disorder that was actually continuously distributed. This could be converted to a discontinuous distribution by further assuming that only individuals whose burden of genetic variants passed an imagined threshold actually got the disease. To transform discontinuous incidence data (mean rates of disease in various groups, such as people with different levels of genetic relatedness to affected individuals) into mean liability on a continuous scale, it was necessary to further assume that this liability was normally distributed in the population. The corollary is that liability is affected by many genetic variants, each of small effect. Q.E.D.
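Falconer’s transformation itself is just a few lines of normal-distribution arithmetic, easy to reproduce (a sketch; the incidence figures below are illustrative, in the range often quoted for schizophrenia):

```python
from statistics import NormalDist

def liability_h2(pop_incidence, relative_incidence, relatedness=0.5):
    """Falconer's transformation: convert two observed incidences into a
    heritability estimate on the assumed underlying liability scale."""
    nd = NormalDist()
    x = nd.inv_cdf(1 - pop_incidence)         # threshold in the population
    x_r = nd.inv_cdf(1 - relative_incidence)  # threshold among relatives
    a = nd.pdf(x) / pop_incidence             # mean liability of affecteds
    b = (x - x_r) / a                         # liability regression on probands
    return b / relatedness

# illustrative figures: 1% population incidence, ~9% in siblings (relatedness 0.5)
print(round(liability_h2(0.01, 0.09), 2))    # ~0.74
```

The arithmetic is impeccable; the point is that every step of it inherits the unexamined assumption that a normally distributed liability exists in the first place.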

This model – simply declared by fiat – forms the mathematical basis for most GWAS analyses and for simulations regarding proportions of heritability explained by combinations of genetic variants (e.g., the recent paper from Eric Lander’s group). To me, it is an extraordinary claim, which you would think would require extraordinary evidence to be accepted. Despite the fact that it has no evidence to support it and fundamentally makes no biological sense (see Genome Biology article for more on that), it goes largely unquestioned and unchallenged.

In the cold light of day, the most fundamental assumptions underlying population-based approaches to investigate the genetics of “complex disorders” can be seen to be flawed, unsupported and, in my opinion, clearly invalid. More importantly, there is now lots of direct evidence that complex disorders like schizophrenia or autism or epilepsy are really umbrella terms, reflecting common symptoms associated with large numbers of distinct genetic conditions. More and more mutations causing such conditions are being identified all the time, thanks to genomic array and next generation sequencing approaches.

Different individuals and families will have very rare, sometimes even unique mutations. In some cases, it will be possible to identify specific single mutations as clearly causal; in others, it may require a combination of two or three. There is clear evidence for a very wide range of genetic etiologies leading to the same symptoms. It is time for the field to assimilate this paradigm shift and stop analysing the data in population-based terms. Rather than asking how much of the genetic variance across the population can be currently explained (a question that is nonsensical if the disorder is not a unitary condition), we should be asking about causes of disease in individuals:

- How many cases can currently be explained (by the mutations so far identified)?

- Why are the mutations not completely penetrant?

- What factors contribute to the variable phenotypic expression in different individuals carrying the same mutation?

- What are the biological functions of the genes involved and what are the consequences of their disruption?

- Why do so many different mutations give rise to the same phenotypes?

- Why are specific symptoms like psychosis or seizures or social withdrawal such common outcomes?

These are the questions that will get us to the underlying biology.

Mitchell, K. (2012). What is complex about complex disorders? Genome Biology, 13 (1) DOI: 10.1186/gb-2012-13-1-237

Manolio, T., Collins, F., Cox, N., Goldstein, D., Hindorff, L., Hunter, D., McCarthy, M., Ramos, E., Cardon, L., Chakravarti, A., Cho, J., Guttmacher, A., Kong, A., Kruglyak, L., Mardis, E., Rotimi, C., Slatkin, M., Valle, D., Whittemore, A., Boehnke, M., Clark, A., Eichler, E., Gibson, G., Haines, J., Mackay, T., McCarroll, S., & Visscher, P. (2009). Finding the missing heritability of complex diseases Nature, 461 (7265), 747-753 DOI: 10.1038/nature08494

Zuk, O., Hechter, E., Sunyaev, S., & Lander, E. (2012). The mystery of missing heritability: Genetic interactions create phantom heritability Proceedings of the National Academy of Sciences, 109 (4), 1193-1198 DOI: 10.1073/pnas.1119675109

(Republished from by permission of author or representative)

It takes a lot of genes to wire the human brain. Billions of cells, of a myriad different types have to be specified, directed to migrate to the right position, organised in clusters or layers, and finally connected to their appropriate targets. When the genes that specify these neurodevelopmental processes are mutated, the result can be severe impairment in function, which can manifest as neurological or psychiatric disease.


How those kinds of neurodevelopmental defects actually lead to the emergence of particular pathological states – like psychosis or seizures or social withdrawal – is a mystery, however. Many researchers are trying to tackle this problem using mouse models – animals carrying mutations known to cause autism or schizophrenia in humans, for example. A recent study from my own lab (open access in PLoS One) adds to this effort by examining the consequences of mutation of an important neurodevelopmental gene and providing evidence that the mice end up in a state resembling psychosis. In this case, we start with a discovery in mice as an entry point to the underlying neurodevelopmental processes.


In just the past few years, over a hundred different mutations have been discovered that are believed to cause disorders like autism or schizophrenia. In many cases, particular mutations can actually predispose to many different disorders, having been linked in different patients to ADHD, epilepsy, mental retardation or intellectual disability, Tourette’s syndrome, depression, bipolar disorder and others. These clinical categories may thus represent more or less distinct endpoints that can arise from common neurodevelopmental origins.


For a condition like schizophrenia, the genetic overlap with other conditions does not invalidate the clinical category. There is still something distinctive about the symptoms of this disorder that needs to be explained. I have argued that schizophrenia can clearly be caused by single mutations in any of a very large number of different genes, many with roles in neurodevelopment. If that model is correct, then the big question is: how do these presumably diverse neurodevelopmental insults ultimately converge on that specific phenotype? It is, after all, a highly unusual condition. The positive symptoms of psychosis – hallucinations and delusions, for example – especially require an explanation. If we view the brain from an engineering perspective, then we can say that the system is not simply working poorly – it is failing in a particular and peculiar manner.


To try to address how this kind of state can arise we have been investigating a particular mouse – one with a mutation in a gene called Semaphorin-6A. This gene encodes a protein that spans the membranes of nerve cells, acting in some contexts as a signal to other cells and in other contexts as a receptor of information. It has been implicated in controlling cell migration, the guidance of growing axons, the specification of synaptic connectivity and other processes. It is deployed in many parts of the developing brain and required for proper development in the cerebral cortex, hippocampus, thalamus, cerebellum, retina, spinal cord, and probably other areas we don’t yet know about.


Despite widespread cellular disorganisation and miswiring in their brains, Sema6A mutant mice seem overtly pretty normal. They are quite healthy and fertile and a casual inspection would not pick them out as different from their littermates. However, more detailed investigation revealed electrophysiological and behavioural differences that piqued our interest.



Because these animals have a subtly malformed hippocampus, which looks superficially like the kind of neuropathology observed in many cases of temporal lobe epilepsy, we wanted to test whether they had seizures. To do this we attached electrodes to their scalp and recorded their electroencephalogram (EEG). This technique measures patterned electrical activity in the underlying parts of the brain and showed quite clearly that these animals do not have seizures. But it did show something else – a generally elevated level of activity in these animals all the time.
What was particularly interesting about this is that the pattern of change (a specific increase in alpha frequency oscillations) was very similar to that reported in animals that are sensitised to amphetamine – a well-established model of psychosis in rodents. High doses of amphetamine can acutely induce psychosis in humans and a suite of behavioural responses in rodents. In addition, a regimen of repeated low doses of amphetamine over an extended time period can induce sensitisation to the effects of this drug in rodents, characterised by behavioural differences, like hyperlocomotion, as well as the EEG differences mentioned above. Amphetamine is believed to cause these effects by inducing increases in dopaminergic signaling, either chronically or in response to acute stimuli.
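To make the EEG comparison concrete: a band-specific change like the elevated alpha power described above is quantified by decomposing the recorded trace into frequency components and summing the power within each band. The sketch below does this with a plain discrete Fourier transform on a synthetic signal (a strong 10 Hz rhythm plus a weaker 25 Hz one) – an illustration of the general method only, not the analysis pipeline used in the actual study:

```python
# Quantifying band-limited EEG power with a plain DFT (stdlib only).
# The "trace" is synthetic - a strong 10 Hz (alpha-range) rhythm plus a
# weaker 25 Hz (beta-range) one - purely to illustrate the method.
import math

FS = 250                 # sampling rate in Hz (illustrative)
N = int(FS * 2.0)        # 2 seconds of signal

trace = [math.sin(2 * math.pi * 10 * n / FS)
         + 0.3 * math.sin(2 * math.pi * 25 * n / FS) for n in range(N)]

def band_power(x, f_lo, f_hi, fs=FS):
    """Summed DFT power over integer frequencies in [f_lo, f_hi] Hz."""
    n = len(x)
    total = 0.0
    for f in range(f_lo, f_hi + 1):
        k = round(f * n / fs)  # DFT bin closest to frequency f
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        total += (re * re + im * im) / n
    return total

alpha = band_power(trace, 8, 12)   # the 10 Hz rhythm falls in this band
beta = band_power(trace, 13, 30)
print(alpha > beta)                # prints True
```

In practice, a shift like the one we observed shows up as exactly this kind of difference: more power concentrated in one frequency band in the mutants than in their wild-type littermates.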

This was of particular interest to us, as that kind of hyperdopaminergic state is thought to be a final common pathway underlying psychosis in humans. Alterations in dopamine signaling are observed in schizophrenia patients (using PET imaging) and also in all relevant animal models so far studied.


To explore possible further parallels to these effects in Sema6A mutants we examined their behaviour and found a very similar profile to many known animal models of psychosis, namely hyperlocomotion and a hyper-exploratory phenotype (in addition to various other phenotypes, like a defect in working memory). The positive symptoms of psychosis can be ameliorated in humans with a number of different antipsychotic drugs, which have in common a blocking action on dopamine receptors. Administering such drugs to the Sema6A mutants normalised both their activity levels and the EEG (at a dose that had no effect on wild-type animals).


These data are at least consistent with (though they by no means prove) the hypothesis that Sema6A mutants end up in a hyperdopaminergic state. But how do they end up in that state? There does not seem to be a direct effect on the development of the dopaminergic system – Sema6A is at least not required to direct these axons to their normal targets.


Our working hypothesis is that the changes to the dopaminergic system emerge over time, as a secondary response to the primary neurodevelopmental defects seen in these animals.

It is well documented that early alterations, for example to the hippocampus, can have cascading effects over subsequent activity-dependent development and maturation of brain circuits. In particular, it can alter the excitatory drive to the part of the midbrain where dopamine neurons are located, in turn altering dopaminergic tone in the forebrain. This can induce compensatory changes that ultimately, in this context, may prove maladaptive, pushing the system into a pathological state, which may be self-reinforcing.


For now, this is just a hypothesis and one that we (and many other researchers working on other models) are working to test. The important thing is that it provides a possible explanation for why so many different mutations can result in this strange phenotype, which manifests in humans as psychosis. If this emerges as a secondary response to a range of primary insults then that reactive process provides a common pathway of convergence on a final phenotype. Importantly, it also provides a possible point of early intervention – it may not be possible to “correct” early differences in brain wiring but it may be possible to prevent them causing transition to a state of florid psychopathology.


Rünker AE, O’Tuathaigh C, Dunleavy M, Morris DW, Little GE, Corvin AP, Gill M, Henshall DC, Waddington JL, & Mitchell KJ (2011). Mutation of Semaphorin-6A disrupts limbic and cortical connectivity and models neurodevelopmental psychopathology. PloS one, 6 (11) PMID: 22132072


Mitchell, K., Huang, Z., Moghaddam, B., & Sawa, A. (2011). Following the genes: a framework for animal modeling of psychiatric disorders BMC Biology, 9 (1) DOI: 10.1186/1741-7007-9-76


Mitchell, K. (2011). The genetics of neurodevelopmental disease Current Opinion in Neurobiology, 21 (1), 197-203 DOI: 10.1016/j.conb.2010.08.009


Howes, O., & Kapur, S. (2009). The Dopamine Hypothesis of Schizophrenia: Version III–The Final Common Pathway Schizophrenia Bulletin, 35 (3), 549-562 DOI: 10.1093/schbul/sbp006


Unlike in many other animals, injured nerve fibres in the mammalian central nervous system do not regenerate – at least not spontaneously. A lot of research has gone into finding ways to coax them to do so, unfortunately with only modest success. The main problem is that there are many reasons why central nerve fibres don’t regenerate after an injury – tackling them singly is not sufficient. A new study takes a combined approach to hit two distinct molecular pathways in injured nerves and achieves substantial regrowth in an animal model.

Many lower vertebrates, like frogs and salamanders, for example, can regrow damaged nerves quite readily. And even in mammals, nerves in the periphery will regenerate and reconnect, given enough time. But nerve fibres in the brain and spinal cord do not regenerate after an injury. Researchers trying to solve this problem focused initially on figuring out what is different about the environment in the central versus the peripheral nervous system in mammals.

It was discovered early on that the myelin – the fatty sheath of insulation surrounding nerve fibres – in the central nervous system is different from that in the periphery. In particular, it inhibits nerve growth. A number of groups have tried to figure out what components of central myelin are responsible for this activity. Myelin is composed of a large number of proteins, as well as lipid membranes. One of these, subsequently named Nogo, was discovered to block nerve growth. This discovery prompted understandable excitement, especially because an antibody that binds that protein was found to promote regrowth of injured spinal nerves in the rat. (It even prompted a film, Extreme Measures, with Gene Hackman and Hugh Grant – an under-rated thriller with some surprisingly accurate science and some very serious medical malfeasance).

Unfortunately, the regrowth in rats that is promoted by blocking the Nogo protein is very limited. Similarly, mice that are mutant for this protein or its receptor show very minor regeneration. What is observed in some cases is extra sprouting of uninjured axons downstream of the spinal injury site. This can lead to some minor recovery of function but it’s really remodelling, rather than regeneration.

But it does suggest an answer to the question: why would we have evolved a system that seems actively harmful, that prevents regeneration after an injury? Well, first, the selective pressure in mammals to be able to regenerate damaged nerves is probably not very great, simply because injured animals would not typically get the chance to regenerate in the wild. And second, it suggests that the function of proteins like Nogo may not be to prevent regeneration but to prevent sprouting of nerve fibres after they have already made their appropriate connections. A lot of effort goes into wiring the nervous system, with exquisite specificity – once that wiring pattern is established, it probably pays to actively keep it that way.

There are a number of reasons why blocking the Nogo protein does not allow nerves to fully regenerate. First, it is not the only protein in myelin that blocks growth – there are many others. Second, the injury itself can give rise to scarring and inflammation that generates a secondary barrier. And third, neurons in the mature nervous system may simply not be inclined to grow. (Not only that – the distances they may have to travel in the fully grown adult may be orders of magnitude longer than those required to wire the nervous system up during development. There are nerves in an adult human that are almost a metre long but these connections were first formed in the embryo when the distance was measured in millimetres.)

This last problem has been addressed more recently, by researchers asking if there is something in the neurons themselves that changes over time – after all, neurons in the developing nervous system grow like crazy. That propensity for growth seems to be dampened down in the adult nervous system – again, once the nervous system is wired up, it is important to restrict further growth.

Researchers have therefore looked for biochemical differences between young (developing) neurons and mature neurons that have already formed connections. The hope is that if we understand the molecular pathways that differ we might be able to target them to “rejuvenate” damaged neurons, restoring their internal urge to grow. The lab of Zhigang He at Harvard Medical School has been one of the leaders in this area and has previously found that targeting either of two biochemical pathways allowed some modest regeneration of injured neurons. (They study the optic nerve as a more accessible model of central nerve regrowth than the spinal cord).

In a new study recently published in Nature, they show that simultaneously blocking both these proteins leads to remarkably impressive regrowth – far greater than simply an additive effect of blocking the two proteins alone. The two proteins are called PTEN and SOCS3 – they are both intracellular regulators of cell growth, including the ability to respond to extracellular growth factors. The authors used a genetic approach to delete these genes two weeks prior to an injury and found that regrowth was hugely promoted. That is obviously not a very medically useful approach, however – more important is to show that deleting them after the injury can permit regeneration and, indeed, this is what they found. Presumably, neurons in this “grow, grow, grow!” state are either insensitive to the inhibitory factors in myelin or the instructions for growth can override these factors.

They went on to characterise the changes that occur in the neurons when these genes are deleted and observed that many other proteins associated with active growth states are upregulated, including ones that get repressed in response to the injury itself. The hope now is that drugs may be developed to target the PTEN and SOCS3 pathways in human patients, especially those with devastating spinal cord injuries, to encourage damaged nerves to regrow. As with all such discoveries, translation to the clinic will be a difficult and lengthy process, likely to take years, and there is no guarantee of success. But compared to previous benchmarks of regeneration in animal models, this study shows what looks like real progress.

Sun F, Park KK, Belin S, Wang D, Lu T, Chen G, Zhang K, Yeung C, Feng G, Yankner BA, & He Z (2011). Sustained axon regeneration induced by co-deletion of PTEN and SOCS3. Nature, 480 (7377), 372-5 PMID: 22056987


“Scientists discover gene for autism” (or ovarian cancer, or depression, cocaine addiction, obesity, happiness, height, schizophrenia… and whatever you’re having yourself). These are typical newspaper headlines (all from the last year) and all use the popular shorthand of “a gene for” something. In my view, this phrase is both lazy and deeply misleading and has caused widespread confusion about what genes are and do and about their influences on human traits and disease.

The problem with this phrase stems from the ambiguity in what we mean by a “gene” and what we mean by “for”. These can mean different things at different levels and unfortunately these meanings are easily conflated. First, a gene can be defined in several different ways. From a molecular perspective, it is a segment of DNA that codes for a protein, along with the instructions for when and where and in what amounts this protein should be made. (Some genes encode RNA molecules, rather than proteins, but the general point is the same). The function of the gene on a cellular level is thus to store the information that allows this protein to be made and its production to be regulated. So, you have a gene for haemoglobin and a gene for insulin and a gene for rhodopsin, etc., etc. (around 25,000 such genes in the human genome). The question of what the gene is for then becomes a biochemical question – what does the encoded protein do?

But that is not the only way or probably even the main way that people think about what genes do – it is certainly not how geneticists think about it. The function of a gene is commonly defined (indeed often discovered) by looking at what happens when it is mutated – when the sequence of DNA bases that make up the gene is altered in some way which affects the production or activity of the encoded protein. The visible manifestation of the effect of such a mutation (the phenotype) is usually defined at the organismal level – altered anatomy or physiology or behaviour, or often the presence of disease. From this perspective, the gene is defined as a separable unit of heredity – something that can be passed on from generation to generation that affects a particular trait. This is much closer to the popular concept of a gene, such as a gene for blue eyes or a gene for breast cancer. What this really means is a mutation for blue eyes or a mutation for breast cancer.

The challenge is in relating the function of a gene at a cellular level to the effects of variation in that gene, which are most commonly observed at the organismal level. The function at a cellular level can be defined pretty directly (make protein X) but the effect at the organismal level is much more indirect and context-dependent, involving interaction with many other genes that also contribute to the phenotype in question, often in highly complex and dynamic systems.

If you are talking about a simple trait like blue eyes, then the function of the gene at a molecular level can actually be related to the mutant phenotype fairly easily – the gene encodes an enzyme that makes a brown pigment. When that enzyme is not made or does not work properly, the pigment is not made and the eyes are blue. Easy-peasy.

But what if the phenotype is in some complex physiological trait, or even worse, a psychological or behavioural trait? These traits are often defined at a very superficial level, far removed from the possible molecular origins of individual differences. The neural systems underlying such traits may be incredibly complex – they may break down due to very indirect consequences of mutations in any of a large number of genes.

For example, mutations in the genes encoding two related proteins, neuroligin-3 and neuroligin-4, have been found in patients with autism and there is good evidence that these mutations are responsible for the condition in those patients. Does this make them “genes for autism”? That phrase really makes no sense – the function of these genes is certainly not to cause autism, nor is it to prevent autism. The real link between these genes and autism is extremely indirect. The neuroligin proteins are involved in the formation of synaptic connections between neurons in the developing brain. If they are mutated, then the connections that form between specific types of neurons are altered. This changes the function of local circuits in the brain, affecting their information-processing parameters and changing how different regions of the brain communicate. Ultimately, this impacts on neural systems controlling things like social behaviour, communication and behavioural flexibility, leading to the symptoms that define autism at the behavioural level.

So, mutations in these genes can cause autism, but these are not genes for autism. They are not even usefully or accurately thought of as genes for social behaviour or for cognitive flexibility – they are required, along with the products of thousands of other genes, for those faculties to develop.

But perhaps there are other genetic variants in the population that affect the various traits underlying these faculties – not in such a severe way as to result in a clinical disorder, but enough to cause the observed variation across the general population. It is certainly true that traits like extraversion are moderately heritable – i.e., a fair proportion of the differences between people in this trait are attributable to genetic differences. When someone asks “are there genes for extraversion?”, the answer is yes if they mean “are differences in extraversion partly due to genetic differences?”. If they mean the function of some genetic variant is to make people more or less extroverted, then they have suddenly (often unknowingly) gone from talking about the activity of a gene or the effect of mutation of that gene to considering the utility of a specific variant.

This suggests a deeper meaning – not just that the gene has a function, but that it has a purpose – in biological terms, this means that a particular version of the gene was selected for on the basis of its effect on some trait. This can be applied to the specific sequence of a gene in humans (as distinct from other animals) or to variants within humans (which may be specific to sub-populations or polymorphic within populations).

While geneticists may know what they mean by the shorthand of “genes for” various traits, it is too easily taken in different, unintended ways. In particular, if there are genes “for” something, then many people infer that the something in question is also “for” something. For example, if there are “genes for homosexuality”, the inference is that homosexuality must somehow have been selected for, either currently or under some ancestral conditions. Even sophisticated thinkers like Richard Dawkins fall foul of this confusion – the apparent need to explain why a condition like homosexual orientation persists. Similar arguments are often advanced for depression or schizophrenia or autism – that maybe in ancestral environments, these conditions conferred some kind of selective advantage. That is one supposed explanation for why “genes for schizophrenia or autism” persist in the population.

Natural selection is a powerful force but that does not mean every genetic variation we see in humans was selected for, nor does it mean every condition affecting human psychology confers some selective advantage. In fact, mutations like those in the neuroligin genes are rapidly selected against in the population, due to the much lower average number of offspring of people carrying them. The problem is that new ones keep arising – in those genes and in thousands of other genes required to build the brain. By analogy, it is not beneficial for my car to break down – this fact does not require some teleological explanation. Breaking down occasionally in various ways is not a design feature – it is just that highly complex systems carry an associated risk of failure in any of their many components.

So, just because the conditions persist at some level does not mean that the individual variants causing them do. Most of the mutations causing disease are probably very recent and will be rapidly selected against – they are not “for” anything.
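The dynamic described here – selection steadily removing deleterious mutations while fresh ones keep arising – is the population-genetics concept of mutation–selection balance. For a dominant deleterious allele, the equilibrium frequency is approximately the mutation rate divided by the selection coefficient (mu/s). A toy deterministic sketch, with purely illustrative parameter values:

```python
# Toy model of mutation-selection balance for a dominant deleterious allele.
# Each generation, selection removes a fraction s of the allele's frequency
# while recurrent mutation adds mu. Parameter values are illustrative only.
mu = 1e-5   # new mutations per gene per generation
s = 0.5     # 50% reduction in reproductive success for carriers

q = 0.0
for generation in range(200):
    q = q * (1 - s) + mu   # selection removes, mutation replenishes

print(f"equilibrium frequency ~ {q:.1e} (theory mu/s = {mu/s:.1e})")
# prints: equilibrium frequency ~ 2.0e-05 (theory mu/s = 2.0e-05)
```

Each disease-associated allele can thus be individually rare and strongly selected against, while the collective incidence – summed over thousands of target genes – remains substantial.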

Jamain S, Quach H, Betancur C, Råstam M, Colineaux C, Gillberg IC, Soderstrom H, Giros B, Leboyer M, Gillberg C, Bourgeron T, & Paris Autism Research International Sibpair Study (2003). Mutations of the X-linked genes encoding neuroligins NLGN3 and NLGN4 are associated with autism. Nature genetics, 34 (1), 27-9 PMID: 12669065


The fact that the adult brain is very plastic is often held up as evidence against the idea that many psychological, cognitive or behavioural traits are innately determined. At first glance, there does indeed appear to be a paradox. On the one hand, behavioural genetic studies show that many human psychological traits are strongly heritable and thus likely determined, at least in part, by innate biological differences. On the other, it is very clear that even the adult brain is highly plastic and changes itself in response to experience.

The evidence on both sides is very strong. In general, for traits like intelligence and personality characteristics such as extraversion, neuroticism or conscientiousness, among many others, the findings from genetic studies are remarkably consistent. Just as for physical traits, people who are more closely related resemble each other for psychological traits more than people with a more distant relationship. Twin study designs get around the obvious objection that such similarities might be due to having been raised together. Identical twins tend to be far more like each other for these traits than fraternal twins, though the family environment is shared in both cases. Even more telling, identical twins who are raised apart tend to be pretty much as similar to each other as pairs who are raised together. Clearly, we come fairly strongly pre-wired and the family environment has little effect on these kinds of traits.
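The quantitative logic of those twin comparisons can be captured by Falconer's classic formula, which estimates heritability as twice the difference between the identical (MZ) and fraternal (DZ) twin correlations. The correlation values used below are illustrative of the general pattern for such traits, not figures from any particular study:

```python
# Falconer's formula: h^2 ~ 2 * (r_MZ - r_DZ).
# MZ twins share ~100% of segregating variants, DZ twins ~50% on average,
# so the excess MZ similarity, doubled, estimates the genetic contribution.
def falconer_h2(r_mz, r_dz):
    """Rough heritability estimate from twin-pair trait correlations."""
    return 2 * (r_mz - r_dz)

# Illustrative correlations: identical twins more alike than fraternal twins
h2 = falconer_h2(r_mz=0.75, r_dz=0.45)
print(round(h2, 2))  # prints 0.6
```

If the MZ and DZ correlations were equal, the estimate would be zero – shared family environment alone cannot produce the MZ excess, which is the crux of the twin design.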

Yet we know the brain can “change itself”. You could say that is one of its main jobs in fact – altering itself in response to experience to better adapt to the conditions in which it finds itself. For example, as children learn a language, their auditory system specialises to recognise the typical sounds of that language. Their brains become highly expert at distinguishing those sounds and, in the process, lose the ability to distinguish sounds they hear less often. (This is why many Japanese people cannot distinguish between the sounds of the letters “l” and “r”, for example, and why many Westerners have difficulty hearing the crucial tonal variations in languages like Cantonese). Learning motor skills similarly improves performance and induces structural changes in the relevant brain circuits. In fact, most circuits in the brain develop in an experience-dependent fashion, summed up by two adages: “cells that fire together, wire together” and “use it or lose it”.

Given the clear evidence for brain plasticity, the implication would seem to be that even if our brains come pre-wired with some particular tendencies, that experience, especially early experience, should be able to override them.

I would argue that the effect of experience-dependent development is typically exactly the opposite – that while the right kind of experience can, in principle, act to overcome innate tendencies, in practice, the effect is reversed. The reason is that our innate tendencies shape the experiences we have, leading us to select ones that tend instead to reinforce or even amplify these tendencies. Our environment does not just shape us – we shape it.

A child who is naturally shy – due to innate differences in the brain circuits mediating social behaviour, general anxiety, risk-aversion and other parameters – will tend to have less varied and less intense social experience. As a result, they will not develop the social skills that might make social interaction more enjoyable for them. A vicious circle emerges – perhaps intense practice in social situations would alter the preconfigured settings of a shy child’s social brain circuits but they tend not to get that experience, precisely because of those settings. In contrast, their extroverted classmates may, by constantly seeking out social interactions, continue to develop this innate faculty.

This circle may be most vicious in children with autism, most of whom have a reduced level of innate interest in other people. They tend, for example, not to find faces as intrinsically fascinating as other infants. This may contribute to a delay in language acquisition, as they miss out on interpersonal cues that strongly facilitate learning to speak.

A similar situation may hold for children who have difficulties in reading or with mathematics. Dyslexia seems to be caused by an innate difficulty in associating the sounds and shapes of letters. This can be traced to genetic effects during early development of the brain, which may cause interruptions in long-range connections between brain areas. This innate disadvantage is cruelly amplified by the typical experience of many dyslexics. Learning to read is hard enough and requires years of practice and active instruction. For children who have basic difficulties in recognising letters and words, reading remains effortful for far longer and they will therefore tend to read less, missing out on the intensive practice that would help their brain circuitry specialise for reading.

Though less widely known, dyscalculia (a selective difficulty in mathematics) is equally common and shares many characteristics with dyslexia. The initial problem is in innate number sense – the ability to estimate and compare small numbers of objects. This faculty is present in very young infants and even shared with many other animal species, notably crows. Formal mathematical instruction is required to build on this innate number sense but also crucially relies on it. As with reading, mathematics requires hard work to learn and if numbers are inherently mysterious then this will change the nature of the child’s experience, lessen interest and reduce practice. At the other end of the spectrum, those with strong mathematical talent may gravitate towards the subject, further amplifying the differences between these two groups.

Thus, while a certain type of experience can alter the innate tendency, the innate tendency makes getting that experience far less likely. Brain plasticity tends instead to amplify initial differences.

That sounds rather fatalistic, but the good news is that this vicious circle can be broken if innate difficulties are recognised early enough – by actively changing the nature of early experience. There is good evidence that intense early intervention in children with autism (such as Applied Behaviour Analysis) allows them to compensate for innate deficits, leading to improvements in cognitive, communication and adaptive skills. Similarly intense intervention in children with dyslexia has also proven effective. Thus, even if it is not possible to reverse whatever neurodevelopmental differences lead to these kinds of deficits, it should at least be possible to prevent their being amplified by subsequent experience.

Duff FJ, & Clarke PJ (2011). Practitioner Review: Reading disorders: what are the effective interventions and how should they be implemented and evaluated? Journal of child psychology and psychiatry, and allied disciplines, 52 (1), 3-12 PMID: 21039483

Vismara, L., & Rogers, S. (2010). Behavioral Treatments in Autism Spectrum Disorder: What Do We Know? Annual Review of Clinical Psychology, 6 (1), 447-468 DOI: 10.1146/annurev.clinpsy.121208.131151


A new study suggests that a gene known to be causally linked to schizophrenia and other psychiatric disorders is involved in the formation of connections between the two hemispheres of the brain. DISC1 is probably the most famous gene in psychiatric genetics, and rightly so. It was discovered in a large Scottish pedigree, where 18 members were affected by psychiatric disease.
The diagnoses ranged from schizophrenia and bipolar disorder to depression and a range of “minor” psychiatric conditions. It was found that the affected individuals had all inherited a genetic anomaly – a translocation of genetic material between two chromosomes. This basically involves sections of two chromosomes swapping with each other. In the process, each chromosome is broken, before being spliced back to part of the other chromosome. In this case, the breakpoint on chromosome 1 interrupted a gene, subsequently named Disrupted-in-Schizophrenia-1, or DISC1.

That this discovery was made using classical “cytogenetic” techniques (physically looking at the chromosomes down a microscope) and in a single family is somehow pleasing in an age where massive molecular population-based studies are in vogue. (A win for “small” science).

The discovery of the DISC1 translocation clearly showed that disruption of a single gene could lead to psychiatric disorders like schizophrenia. This was a challenge to the idea that these disorders were “polygenic” – caused by the inheritance in each individual of a large number of genetic variants. As more and more mutations in other genes are being found to cause these disorders, the DISC1 situation can no longer be dismissed as an exception – it is the norm.

It also was the first example of a principle that has since been observed for many other genes – namely that the effects of the mutation can manifest quite variably – not as one specific disorder, but as different ones in different people. Indeed, DISC1 has since been implicated in autism as well as adult-onset disorders. It is now clear from this and other evidence that these apparently distinct conditions are best thought of as variable outcomes that arise, in many cases at least, from disturbances of neurodevelopment.

Since the initial discovery, major research efforts of a growing number of labs have been focused on the next obvious questions: what does DISC1 do? And what happens when it is mutated? What happens in the brain that can explain why psychiatric symptoms result?

We now know that DISC1 has many different functions. It is a cytoplasmic protein – localised inside the cell – that interacts with a very large number of other proteins and takes part in diverse cellular functions, including cell migration, outgrowth of nerve fibres, the formation of dendritic spines (sites of synaptic contact between neurons), neuronal proliferation and regulation of biochemical pathways involved in synaptic plasticity. Many of the proteins that DISC1 interacts with have also been implicated in psychiatric disease.

This new study adds another possible function, and a dramatic and unexpected one at that. This function was discovered from an independent angle, by researchers studying how the two hemispheres of the brain get connected – or more specifically, why they sometimes fail to be connected. The cerebral hemispheres are normally connected by millions of axons which cross the midline of the brain in a structure called the corpus callosum (or “tough body” – don’t ask). Very infrequently, people are born without this structure – the callosal axons fail to cross the midline and the two hemispheres are left without this major route of communication (though there are other routes, such as the anterior commissure).

The frequency of agenesis of the corpus callosum has been estimated at between 1 in 1,000 and 1 in 6,000 live births – thankfully very rare. It is associated with a highly variable spectrum of other symptoms, including developmental delay, autistic symptoms, cognitive disabilities extending into the range of mental retardation, seizures and other neurological signs.

Elliott Sherr and colleagues were studying patients with this condition, which is very obvious on magnetic resonance imaging scans (see Figure). They initially found a mother and two children with callosal agenesis who all carried a deletion on chromosome 1, at position 1q42 – exactly where DISC1 is located. They subsequently identified another patient with a similar deletion, which allowed them to narrow down the region and identify DISC1 as a plausible candidate (among some other genes in the deleted region). Because the functions of proteins can be affected not just by large deletions or translocations but also by less obvious mutations that change a single base of DNA, they also sequenced the DISC1 gene in a cohort of callosal agenesis patients and found a number carrying novel mutations that are very likely to disrupt the function of the gene.

While not rock-solid evidence that it is DISC1 that is responsible, these data certainly point to it as the strongest candidate to explain the callosal defect. This hypothesis is strongly supported by findings from DISC1 mutant mice (carrying a mutation that mimics the effect of the human translocation), which also show defects in formation of the corpus callosum. In addition, the protein is strongly expressed in the axons that make up this structure at the time of its development.

The most obvious test of whether disruption of DISC1 really causes callosal agenesis is to look in the people carrying the initial translocation. Remarkably, it is not known whether the original patients in the Scottish pedigree who carry the DISC1 translocation show this same obvious brain structural phenotype. They have, very surprisingly, never been scanned.

This new paper raises the obvious hypothesis that the failure to connect the two hemispheres results in the psychiatric or cognitive symptoms, which variously include reduced intellectual ability, autism and schizophrenia. This seems like too simplistic an interpretation, however. All we have now is a correlation. First, the implication of DISC1 in the acallosal phenotype is not yet definitive – this must be nailed down and replicated. But even if it is shown that disruption of DISC1 causes both callosal agenesis and schizophrenia (or other psychiatric disorders or symptoms), this does not prove a causal link. DISC1 has many other functions and is expressed in many different brain areas (ubiquitously in fact). Any, or indeed all, of these functions may in fact be the cause of psychopathology.

One prediction, if it were true that the lack of connections between the two hemispheres is causal, is that we would expect the majority of patients with callosal agenesis to have these kinds of psychiatric symptoms. In fact, the rates are indeed very high – in different studies it has been estimated that up to 40% of callosal agenesis patients have an autism diagnosis, while about 8% have the symptoms of schizophrenia or bipolar disorder. (Of course, these patients may have other, less obvious brain defects as well, so even this is not definitive).

Conversely, we might naively expect a high rate of callosal agenesis in patients with autism or schizophrenia. However, we know these disorders are extremely heterogeneous and so it is much more likely that this phenotype might be apparent in only a specific (possibly very small) subset of patients. This may indeed be the case – callosal agenesis has been observed in about 3 out of 200 schizophrenia patients (a vastly higher rate than in the general population). Another study, just published, has found that mutations in a different gene – ARID1B – are also associated with callosal agenesis, mental retardation and autism. More generally, there may be subtle reductions in callosal connectivity in many schizophrenia or autism patients (including some autistic savants).

Whether this defect can explain particular symptoms is not yet clear. For the moment, the new study provides yet another possible function of DISC1, and highlights an anatomical phenotype that is apparently present in a subset of autism and schizophrenia cases and that can arise due to mutation in many different genes (of which DISC1 and ARID1B are only two of many known examples).

One final note: formation of the corpus callosum is a dramatic example of a process that is susceptible to developmental variation. What I mean is this: when patients inherit a mutation that results in callosal agenesis, this phenotype occurs in some patients but not all. This is true even in genetically identical people, like monozygotic twins or triplets (or in lines of genetically identical mice). Though the corpus callosum contains millions of nerve fibres, the initial events that establish it involve very small numbers of cells. These cells, which are located at the medial edge of each cerebral hemisphere, must contact each other to enable the fusion of the two hemispheres, forming a tiny bridge through which the first callosal fibres can cross. Once these are across, the rest seem able to follow easily. Because this event involves very few cells at a specific time in development, it is susceptible to random “noise” – fluctuations in the precise amounts of various proteins in the cells, for example. These are not caused by external forces – the noise is inherent in the system. The result is that, in some people carrying such a mutation the corpus callosum will not form at all, while in others it forms apparently completely normally (see figure of triplets, one on left with normal corpus callosum, the other two with it absent). So, an all-or-none effect can arise, without any external factors involved.
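The all-or-none logic described here can be illustrated with a toy Monte Carlo simulation. Everything in this sketch is invented for illustration – the cell numbers, threshold and noise level are arbitrary – the point is only that identical parameters (identical "genomes") plus intrinsic noise yield a binary, probabilistic outcome with no external factors involved.

```python
import random

def callosum_forms(n_cells=20, threshold=0.5, noise_sd=0.25, seed=None):
    """Toy model: a handful of pioneer cells each express a guidance protein
    at a noisy level around the same genetically determined mean; the midline
    bridge forms only if a majority exceed a critical level at the right time.
    All numbers here are arbitrary, chosen purely for illustration."""
    rng = random.Random(seed)
    levels = [rng.gauss(0.55, noise_sd) for _ in range(n_cells)]
    competent = sum(level > threshold for level in levels)
    return competent > n_cells // 2   # all-or-none outcome

# genetically identical "individuals": same parameters, different intrinsic noise
outcomes = [callosum_forms(seed=s) for s in range(1000)]
print(sum(outcomes) / len(outcomes))  # fraction in whom the callosum forms
```

Run repeatedly, the model produces a mixture of fully formed and fully absent outcomes from identical starting conditions – the same qualitative picture as the discordant identical triplets.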

This same kind of intrinsic developmental variation may also explain or at least contribute to the variability in phenotypic outcome at the level of psychiatric symptoms when these kinds of neurodevelopmental mutations are inherited. Even monozygotic twins are often discordant for psychiatric diagnoses (concordance for schizophrenia is about 50%, for example). This is often assumed to be due to non-genetic and therefore “environmental” or experiential factors. If these disorders really arise from differences in brain wiring, which we know are susceptible to developmental variation, then differences in the eventual phenotype could actually be completely intrinsic and innate.

Osbun, N., Li, J., O’Driscoll, M.C., Strominger, Z., Wakahiro, M., Rider, E., Bukshpun, P., Boland, E., Spurrell, C.H., Schackwitz, W., Pennacchio, L.A., Dobyns, W.B., Black, G.C., & Sherr, E.H. (2011). Genetic and functional analyses identify DISC1 as a novel callosal agenesis candidate gene. American Journal of Medical Genetics Part A, 155 (8), 1865-76. PMID: 21739582

Halgren, C., Kjaergaard, S., Bak, M., Hansen, C., El-Schich, Z., Anderson, C.M., Henriksen, K.F., Hjalgrim, H., Kirchhoff, M., Bijlsma, E.K., Nielsen, M., den Hollander, N.S., Ruivenkamp, C.A., Isidor, B., Le Caignec, C., Zannolli, R., Mucciolo, M., Renieri, A., Mari, F., Anderlid, B.M., Andrieux, J., Dieux, A., Tommerup, N., & Bache, I. (2011). Corpus Callosum Abnormalities, Mental Retardation, Speech Impairment, and Autism in Patients with Haploinsufficiency of ARID1B. Clinical Genetics. PMID: 21801163

• Category: Science • Tags: Autism, Development, Genetics, Schizophrenia, Wiring 

There is a common view that the human genome has two different parts – a “constant” part and a “variable” part. According to this view, the bases of DNA in the constant part are the same across all individuals. They are said to be “fixed” in the population. They are what make us all human – they differentiate us from other species. The variable part, in contrast, is made of positions in the DNA sequence that are “polymorphic” – they come in two or more different versions. Some people carry one base at that position and others carry another. The idea is that it is the particular set of such variations that we inherit that makes us each unique (unless we have an identical twin). According to this idea, we each have a hand dealt from the same deck.

The genome sequence (a simple linear code made up of 3 billion bases of DNA in precise order, chopped up onto different chromosomes) is peppered with these polymorphic positions – about 1 in every 1,250 bases. That makes about 2,400,000 polymorphisms in each genome (and we each carry two copies of the genome). That certainly seems like plenty of raw material, with limitless combinations that could explain the richness of human diversity. This interpretation has fuelled massive scientific projects to try and find which common polymorphisms affect which traits. (Not to mention personal genomics companies who will try to tell you your risk of various diseases based on your profile of such polymorphisms).
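As a quick back-of-envelope check of these numbers:

```python
GENOME_SIZE = 3_000_000_000   # ~3 billion bases in one copy of the genome
SPACING = 1_250               # roughly one polymorphic position per 1,250 bases

polymorphic_sites = GENOME_SIZE / SPACING
print(f"{polymorphic_sites:,.0f}")  # 2,400,000
```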

The problem with this view is that it is wrong. Or at least woefully incomplete.

The reason is that it ignores another source of variation: very rare mutations at those positions that are constant across the vast majority of individuals. There is now very good evidence that these kinds of mutations contribute most to our individuality. We each carry hundreds of such rare mutations, and they are far more likely than common polymorphisms to affect a protein’s function or expression – and thus to have a phenotypic impact or contribute to genetic disease.

Indeed, far from most of the genome being effectively constant, it can be estimated that every position in the genome has been mutated many, many times over in the human population. And each of us carries hundreds of new mutations that arose during generation of the sperm and egg cells that fused to form us. New mutations may spread in the pedigree or population in which they arise for some time, depending in part on whether they have a deleterious effect or not. Ones that do will likely be quickly selected against.

A new paper from the 1000 genomes project consortium shows that:

“the vast majority of human variable sites are rare and that the majority of rare variants exhibit, at most, very little sharing among continental populations”.

This is a much more fluid picture of genetic variation than we are used to. We are not all dealt a genetic hand from the same deck – each population, sub-population, kindred, nuclear family has a distinct set of rare genetic variants. And each of these decks contains a lot of jokers – the new mutations that arise each time a hand is dealt.

Why have such rare mutations generally been ignored while the polymorphic sites have been the focus of intense research? There are several reasons, some practical and some theoretical. Practically, it has until recently been almost impossible to systematically find very rare mutations. To do so requires that we sequence the whole genome, which has only recently become feasible. In contrast, methods to survey which bases you carry at all the polymorphic sites across the genome were developed quite some time ago now and are relatively cheap to use. (They rely on sampling about 500,000 such sites around the genome – because of unevenness in the way different bits of chromosomes get swapped when sperm and eggs are made, this sample actually tells you about most of the variable sites across the whole genome). So, there has been a tendency to argue that polymorphic sites will be major contributors to human phenotypes (especially diseases) because those have been the only ones we have been able to look at.

Unfortunately, the results of genome-wide association studies, which aim to identify common variants associated with traits or diseases, have been disappointing. This is especially true for disorders with large effects on fitness, such as schizophrenia or autism. Some variants have been found but their effects, even in combination are very small. Most of the heritability of most of the traits or diseases examined to date remains unexplained. (There are some important exceptions, especially for diseases that strike only late in life and for things like drug responses, where selective pressures to weed out deleterious alleles are not at play).

In contrast, many more rare mutations causing disease are being discovered all the time, and the pace of such discoveries is likely to increase with technological advances. The main message that emerges from these studies has been called by Mary-Claire King the “Anna Karenina principle”, based on Tolstoy’s famous opening line:

“Happy families are all alike; every unhappy family is unhappy in its own way”

But can such rare variants really explain the “missing heritability” of these disorders? Some people have argued that they cannot, but this seems to me to be based on a pervasive misconception of how the heritability of a trait is measured and what it means. According to this misconception, if a trait is heritable across the population, that heritability cannot be accounted for by rare variants. After all, if a mutation only occurs in one or a few individuals, it could only minimally (nearly negligibly) contribute to heritability across the whole population. That is true. However, heritability is not measured across the population – it is measured in families and then averaged across the population.

In humans, it is usually derived by comparing phenotypes between people of different genetic relatedness (identical versus fraternal twins, siblings, parents, cousins, etc.). The values of these comparisons are then averaged across large numbers of pairs to allow estimates of how much genetic variance affects phenotypic variance – the population heritability. While a specific rare mutation may only affect the phenotype within a single family, such mutations could, collectively, explain all of the heritability. Completely different sets of mutations could be affecting the trait or causing the disease in different families.
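The logic of estimating heritability from relatives can be made concrete with Falconer’s classic formula. This is a deliberately simplified sketch – real studies use more elaborate variance-component (ACE) and liability-threshold models – and the twin correlations below are made up purely for illustration.

```python
def falconer_h2(r_mz, r_dz):
    """Falconer's classic estimate of heritability from twin correlations:
    MZ twins share ~100% of segregating variants and DZ twins ~50% on
    average, so doubling the difference estimates the genetic variance."""
    return 2 * (r_mz - r_dz)

def shared_environment_c2(r_mz, r_dz):
    """Under the same simple ACE logic, c2 = 2*r_dz - r_mz."""
    return 2 * r_dz - r_mz

# illustrative (made-up) twin correlations for some quantitative trait
print(falconer_h2(0.80, 0.45))            # ~0.70
print(shared_environment_c2(0.80, 0.45))  # ~0.10
```

Note that the formula is indifferent to *which* variants drive the MZ–DZ difference – a different set of rare mutations in every family produces the same population-averaged estimate as a shared pool of common ones.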

The next few years will reveal the true impact of rare mutations. We should certainly expect complex genetic interactions and some real effects of common polymorphisms. But the idea that our traits are determined simply by the combination of variants we inherit from a static pool in the population is no longer tenable. We are each far more unique than that.

(And if your personal genomics company isn’t offering to sequence your whole genome, it’s not personal enough).

Gravel S, Henn BM, Gutenkunst RN, Indap AR, Marth GT, Clark AG, Yu F, Gibbs RA, The 1000 Genomes Project, & Bustamante CD (2011). Demographic history and rare allele sharing among human populations. Proceedings of the National Academy of Sciences of the United States of America, 108 (29), 11983-11988 PMID: 21730125

Walsh CA, & Engle EC (2010). Allelic diversity in human developmental neurogenetics: insights into biology and disease. Neuron, 68 (2), 245-53 PMID: 20955932

McClellan, J., & King, M. (2010). Genetic Heterogeneity in Human Disease Cell, 141 (2), 210-217 DOI: 10.1016/j.cell.2010.03.032

Mirrored from Wiring the Brain


Hearing voices is a hallmark of schizophrenia and other psychotic disorders, occurring in 60-80% of cases. These voices are typically identified as belonging to other people and may be voicing the person’s thoughts, commenting on their actions or ideas, arguing with each other or telling the person to do something. Importantly, these auditory hallucinations are as subjectively real as any external voices. They may in many cases be critical or abusive and are often highly distressing to the sufferer.

However, many perfectly healthy people also regularly hear voices – as many as 1 in 25 according to some studies, and in most cases these experiences are perfectly benign. In fact, we all hear voices “belonging to other people” when we dream – we can converse with these voices, waiting for their responses as if they were derived from external agents. Of course, these percepts are actually generated by the activity of our own brain, but how?

There is good evidence from neuroimaging studies that the same areas that respond to external speech are active when people are having these kinds of auditory hallucinations. In fact, inhibiting such areas using transcranial magnetic stimulation may reduce the occurrence or intensity of heard voices. But why would the networks that normally process speech suddenly start generating outputs by themselves? Why would these outputs be organised in a way that fits speech patterns, as opposed to random noise? And, most importantly, why does this tend to occur in people with schizophrenia? What is it about the pathology of this disorder that makes these circuits malfunction in this specific way?

An interesting approach to try and get answers to these questions has been to model these circuits in artificial neural networks. If you can generate a network that can process speech inputs and find certain conditions under which it begins to spontaneously generate outputs, then you may have an informative model of auditory hallucinations. Using this approach, a couple of studies from several years ago by the group of Ralph Hoffman found some interesting clues as to what may be going on, at least on an abstract level.

Their approach was to generate an artificial neural network that could process speech inputs. Artificial neural networks are basically sets of mathematical functions modelled in a computer programme. They are designed to simulate the information-processing functions carried out by individual neurons and, more importantly, the computational functions carried out by an interconnected network of such neurons. They are necessarily highly abstract, but they can recapitulate many of the computational functions of biological neural networks. Their strength lies in revealing unexpected emergent properties of such networks.

The particular network in this case consisted of three layers of neurons – an input layer, an output layer, and a “hidden” layer in between – along with connections between these elements (from input to hidden and from hidden to output, but crucially also between neurons within the hidden layer). “Phonetic” inputs were fed into the input layer – these consisted of models of speech sounds constituting grammatical sentences. The job of the output layer was to report what was heard – representing different sounds by patterns of activation of its forty-three neurons. Seems simple, but it’s not. Deciphering speech sounds is actually very difficult as individual phonetic elements can be both ambiguous and variable. Generally, we use our learned knowledge of the regularities of speech and our working memory of what we have just heard to anticipate and interpret the next phonemes we hear – forcing them into recognisable categories. Mimicking this function of our working memory is the job of the hidden layer in the artificial neural network, which is able to represent the prior inputs by the pattern of activity within this layer, providing a context in which to interpret the next inputs.
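The architecture described – an input layer, an output layer, and a hidden layer whose recurrent connections act as a working memory – is essentially an Elman-style recurrent network. A minimal forward-pass sketch might look like the following; the input and hidden layer sizes are made up (only the 43-unit output layer comes from the study), and the weights here are random and untrained.

```python
import numpy as np

rng = np.random.default_rng(0)

# layer sizes: input and hidden are made up; the study's output layer had 43 units
n_in, n_hidden, n_out = 30, 40, 43

W_ih = rng.normal(0, 0.1, (n_hidden, n_in))      # input -> hidden
W_hh = rng.normal(0, 0.1, (n_hidden, n_hidden))  # hidden -> hidden ("working memory")
W_ho = rng.normal(0, 0.1, (n_out, n_hidden))     # hidden -> output

def step(x, h_prev):
    """One time step: the hidden layer combines the current phonetic input
    with its own previous state, providing the context in which the next
    sound is interpreted."""
    h = np.tanh(W_ih @ x + W_hh @ h_prev)
    y = np.tanh(W_ho @ h)
    return y, h

# run the (untrained) network over a short "sentence" of random phonetic frames
h = np.zeros(n_hidden)
for x in rng.normal(size=(5, n_in)):
    y, h = step(x, h)
print(y.shape)  # (43,)
```

The recurrent `W_hh` loop is what lets the hidden state carry a trace of earlier phonemes forward, mimicking the working-memory function described above.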

The important thing about neural networks is they can learn. Like biological networks, this learning is achieved by altering the strengths of connections between pairs of neurons. In response to a set of inputs representing grammatical sentences, the network weights change in such a way that when something similar to a particular phoneme in an appropriate context is heard again, the pattern of activation of neurons representing that phoneme is preferentially activated over other possible combinations.

The network created by these researchers was an able student and readily learned to recognise a variety of words in grammatical contexts. The next thing was to manipulate the parameters of the network in ways that are thought to model what may be happening to biological neuronal networks in schizophrenia.

There are two major hypotheses that were modelled: the first is that networks in schizophrenia are “over-pruned”. This fits with a lot of observations, including neuroimaging data showing reduced connectivity in the brains of people suffering with schizophrenia. It also fits with the age of onset of the florid expression of this disorder, which is usually in the late teens to early twenties. This corresponds to a period of brain maturation characterised by an intense burst of pruning of synapses – the connections between neurons.

In schizophrenia, the network may have fewer synapses to begin with, but not so few that it doesn’t work well. This may however make it vulnerable to this process of maturation, which may reduce its functionality below a critical threshold. Alternatively, the process of synaptic pruning may be overactive in schizophrenia, damaging a previously normal network. (The evidence favours earlier disruptions).

The second model involves differences in the level of dopamine signalling in these circuits. Dopamine is a neuromodulator – it alters how neurons respond to other signals – and is a key component of active perception. It plays a particular role in signalling whether inputs match top-down expectations derived from our learned experience of the world. There is a wealth of evidence implicating dopamine signalling abnormalities in schizophrenia, particularly in active psychosis. Whether these abnormalities are (i) the primary cause of the disease, (ii) a secondary mechanism causing specific symptoms (like psychosis), or (iii) the brain attempting to compensate for other changes is not clear.

Both over-pruning and alterations to dopamine signalling could be modelled in the artificial neural network, with intriguing results. First, a modest amount of pruning, starting with the weakest connections in the network, was found to actually improve the performance of the network in recognising speech sounds. This can be understood as an improvement in the recognition and specificity of the network for sounds which it had previously learned and probably reflects the improvements seen in human language learners, along with the concomitant loss in ability to process or distinguish unfamiliar sounds (like “l” and “r” for Japanese speakers).

However, when the network was pruned beyond a certain level, two interesting things happened. First, its performance got noticeably worse, especially when the phonetic inputs were degraded (i.e., the information was incomplete or ambiguous). This corresponds quite well with another symptom of schizophrenia, seen especially in those who experience auditory hallucinations: sufferers show phonetic processing deficits under challenging conditions, such as a crowded room.

The second effect was even more striking – the network started to hallucinate! It began to produce outputs even in the absence of any inputs (i.e., during “silence”). When not being driven by reliable external sources of information, the network nevertheless settled into a state of activity that represented a word. The reason the output is a word and not just a meaningless pattern of neurons is that the previous learning that the network undergoes means that patterns representing words represent “attractors” – if some random neurons start to fire, the weighted connections representing real words will rapidly come to dominate the overall pattern of activity in the network, resulting in the pattern corresponding to a word.
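The attractor idea can be demonstrated with a toy Hopfield-style network – a deliberately simpler model than the one used in these studies, but one that shows the same two behaviours: a degraded input is pulled back to a learned pattern, and even pure noise (“silence”) settles into a word-like attractor state rather than staying random. All sizes and patterns here are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 64                                     # units in the toy "speech" layer
words = rng.choice([-1, 1], size=(3, n))   # three learned "word" patterns

# Hebbian weights: each stored word becomes an attractor of the dynamics
W = sum(np.outer(w, w) for w in words) / n
np.fill_diagonal(W, 0)                     # no self-connections

def settle(state, sweeps=10):
    """Asynchronously update each unit toward the sign of its weighted
    input until the network sits in a stable pattern (an attractor)."""
    state = state.copy()
    for _ in range(sweeps):
        for i in range(n):
            state[i] = 1 if W[i] @ state >= 0 else -1
    return state

# degraded input: a stored word with a quarter of its units corrupted
degraded = words[0].copy()
degraded[:16] *= -1
print(abs(settle(degraded) @ words[0]) / n)   # close to 1.0: word recovered

# "silence": pure noise nevertheless settles into a word-like attractor
hallucination = settle(rng.choice([-1, 1], size=n))
print(max(abs(hallucination @ w) / n for w in words))
```

Because the learned words dominate the weight matrix, any initial activity is channelled toward one of them – which is the abstract reason the network’s spontaneous output is a word rather than meaningless noise.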

Modeling alterations in dopamine signalling also produced both a defect in parsing degraded speech inputs and hallucinations. Too much dopamine signalling produced these effects but so did a combination of moderate over-pruning and compensatory reductions in dopamine signalling, highlighting the complex interactions possible.

The conclusion from these simulations is not necessarily that this is exactly how hallucinations emerge. After all, the artificial neural networks are pretty extreme abstractions of real biological networks, which have hundreds of different types of neurons and synaptic connections and which are many orders of magnitude more complex numerically. But these papers do provide at least a conceptual demonstration of how a circuit designed to process speech sounds can fail in such a specific and apparently bizarre way. They show that auditory hallucinations can be viewed as the outputs of malfunctioning speech-processing circuits.

They also suggest that different types of insult to the system can lead to the same type of malfunction. This is important when considering new genetic data indicating that schizophrenia can be caused by mutations in any of a large number of genes affecting how neural circuits develop. One way that so many different genetic changes could lead to the same effect is if the effect is a natural emergent property of the neural networks involved.

Hoffman, R., & McGlashan, T. (2001). Neural Network Models of Schizophrenia. The Neuroscientist, 7 (5), 441-454. DOI: 10.1177/107385840100700513

Hoffman, R., & McGlashan, T. (2006). Using a Speech Perception Neural Network Computer Simulation to Contrast Neuroanatomic versus Neuromodulatory Models of Auditory Hallucinations Pharmacopsychiatry, 39, 54-64 DOI: 10.1055/s-2006-931496


• Category: Science • Tags: Connectivity, Dopamine, Schizophrenia 

A couple of recent papers have been making headlines in relation to autism, one claiming that it is caused less by genetics than previously believed and more by the environment and the other specifically claiming that antidepressant use by expectant mothers increases the risk of autism in the child. But are these conclusions really supported by the data? Are they strongly enough supported to warrant being splashed across newspapers worldwide, where most readers will remember only the headline as the take-away message? The legacy of the MMR vaccination hoax shows how difficult it can be to counter overblown claims and the negative consequences that can arise as a result.

So, do these papers really make a strong case for their major conclusions? The first gives results from a study of twins in California. Twin studies are a classic method to determine whether something is caused by genetic or environmental factors. The method asks, if one twin in a pair is affected by some disorder (autism in this case), with what frequency is the other twin also affected? The logic is very simple: if something is caused by environmental factors, particularly those within a family, then it should not matter whether the twins in question are identical or fraternal – their risk should be the same because their exposure is the same. On the other hand, if something is caused by genetic mutations, and if one twin has the disorder, then the rate of occurrence of the disorder in the other twin should be much higher if they are genetically identical than if they only share half their genes, as fraternal twins do.

Working backwards, if the rate of twin concordance for affected status is about the same for identical and fraternal twins, this is strong evidence for environmental factors. If the rate is much higher in monozygotic twins, this is strong evidence for genetic factors. Now to the new study. What they found was that the rate of concordance for monozygotic (identical) twins was indeed much higher than for dizygotic (fraternal) twins – about twice as high on average.

For males: MZ: 0.58, DZ: 0.21
For females: MZ: 0.60, DZ: 0.27

Those numbers are for the diagnosis of strict autism. The rate of “autism spectrum disorder”, which encompasses a broader range of disability, showed similar results:

Males: MZ: 0.77, DZ: 0.31
Females: MZ: 0.50, DZ: 0.36.

These numbers fit pretty well with a number of other recent twin studies, all of which have concluded that they provide evidence for strong heritability of the disorder – i.e., that whether or not someone develops autism is largely (though not exclusively) down to genetics.

So, why did these authors reach a different conclusion and should their study carry any more weight than others? On the latter point, the study is significantly larger than many that have preceded it. This study looked at 192 twin pairs, each with at least one affected twin. However, some recent studies have been comparable or even larger: Lichtenstein and colleagues looked at 117 twin pairs and Rosenberg and colleagues looked at 277 twin pairs. These studies found evidence for very high heritability and negligible shared environmental effects.

Another potentially important difference is in how the sample was ascertained. Hallmayer and colleagues claim that their assessment of affected status was more rigorous than for other studies and this may be true. However, it has previously been found that less rigorous assessments correlate extremely well with the more standardised assessments, so this is unlikely to be a major factor. In addition, there is very strong evidence that disorders like autism, ADHD, epilepsy, intellectual disability, tic disorders and others all share common etiology – having a broader diagnosis is therefore probably more appropriate.

In any case, the numbers they came up with for concordance rates were pretty similar across these studies. So, why did they end up with a different conclusion? That’s not a rhetorical question – I actually don’t know the answer and if anyone else does I would love to hear it. Given the data, I don’t know how they conclude that they provide evidence for shared environmental effects.

The methodology involves some statistical modeling that tries to tease out the sources of variance. However, this modeling is based completely on a multifactorial threshold model for the disorder – the idea that autism arises when the collective burden of individually minor genetic or environmental insults passes some putative threshold. Sounds plausible, but there is in fact no evidence – at all – that this model applies to autism. In fact, it seems most likely that autism really is an umbrella term for a collection of distinct genetic disorders caused by mutations in separate genes, but which happen to cause common phenotypes (or symptoms).

If that is the case, then what the twin concordance rates actually measure is the penetrance of such mutations – if one inherits mutation X, how often does that actually lead to autism? For monozygotic twins, let us assume that the affected proband (the first twin diagnosed) has such a mutation. Because they are genetically identical, the other one must too. The chance that the other twin will develop autism thus depends on the penetrance of the mutation – some mutations are more highly penetrant than others, giving a much higher probability of developing a specific phenotype. If we average across all MZ twin pairs we therefore get an average penetrance across all such putative mutations. Now, if such mutations are dominant, as many of the known ones are, then the chance that a dizygotic twin will inherit it is 50%, while the penetrance should remain the same. So, this model would predict that the rate of co-occurrence in DZ twins should be about half that of MZ twins, exactly as observed. (No stats required).
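That prediction is easy to make concrete. Under this rare dominant-mutation reading (a sketch of the argument above, not the paper's own analysis), the MZ concordance directly estimates the average penetrance, and a DZ co-twin inherits the mutation with probability 0.5 while penetrance stays the same:

```python
def expected_dz_concordance(mz_concordance, transmission=0.5):
    # An MZ co-twin always shares the dominant mutation; a DZ co-twin
    # inherits it with probability 0.5, with penetrance unchanged
    return mz_concordance * transmission

# Male concordances from the study: predicted DZ rate vs observed
print(round(expected_dz_concordance(0.77), 3))  # ASD: 0.385 (observed 0.31)
print(round(expected_dz_concordance(0.58), 3))  # strict autism: 0.29 (observed 0.21)
```

The observed DZ rates sit close to half the MZ rates, as this simple model predicts, without invoking any shared environmental effect.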

The conclusions from this study that the heritability is only modest and that a larger fraction of variance (55%!) is caused by shared environment thus seem extremely shaky. This is reinforced by the fact that the confidence intervals for these estimates are extremely wide (for the effect of shared environment the 95% confidence interval ranges from 9% to 81%). Certainly not enough to overturn all the other data from other studies.

What about epidemiological studies that have shown statistical evidence of increased risk of autism associated with a variety of other factors, including maternal diabetes, antidepressant use, season and place of birth? All of these factors have been linked with modest increases in the risk of autism. Don’t these prove there are important environmental factors? Well, first, they don’t prove causation; they provide statistical evidence for an association between the two factors, which is not at all the same thing. Second, the increase in risk is usually on the order of about two-fold. Twice the risk may sound like a lot, but it amounts to an increase of just one percentage point (from 1% to 2%), compared with some known mutations, which increase risk by 50-fold or more.
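The gap between relative and absolute risk is a couple of lines of arithmetic; the ~1% baseline population risk used below is an assumed round number for illustration:

```python
def absolute_risk(baseline, relative_risk):
    # converts a relative risk into an absolute probability
    return baseline * relative_risk

baseline = 0.01  # assumed ~1% baseline population risk (illustrative)

# a typical epidemiological factor: two-fold relative risk
print(absolute_risk(baseline, 2))   # 0.02 -> still only 2%

# a known high-penetrance mutation: ~50-fold relative risk
print(absolute_risk(baseline, 50))  # 0.5 -> 50%
```

Same multiplication, very different practical meaning: the first barely moves an individual's odds, the second dominates them.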

The main problem with these kinds of studies (and especially with how they are portrayed in the media) is that they are correlational and so you cannot establish a causal link directly from them. In some cases, two different correlated parameters (like red hair and freckles, for example) may actually be caused by an unmeasured third parameter. For example, in the recently published study, the use of antidepressants of the SSRI (selective serotonin reuptake inhibitor) class in mothers was associated with modestly increased risk of autism in the progeny. This association could be because SSRIs disrupt neural development in the fetus (perfectly plausible) but could alternatively be due to the known genetic link between risk of depression and risk of autism. Rates of depression are known to be higher in relatives of autistic people, so SSRI use could just be a proxy for that condition. The authors claim to have corrected for that by comparing rates of autism in the progeny of depressed mothers who were not prescribed SSRIs versus those who were, but one might imagine that the severity of depression would be higher among those prescribed an antidepressant. In addition, the authors are careful to note that their findings were based on a small number of children exposed and that “Further studies are needed to replicate and extend these findings”. As with many such findings, this association may or may not hold up with additional study.

As for season and place of birth, those findings are better replicated and, interestingly, also found for schizophrenia. There is a theory that these effects may relate to maternal vitamin D levels, which can also affect neural development. This also seems plausible enough. However, the problem in really having confidence in these findings and in knowing how to interpret them is that they are population averages with small effect sizes. Overall, it seems quite possible that the environment – especially the prenatal environment – can play a part in the etiology of autism. At the moment, splashy headlines notwithstanding, genetic factors look much more important and genetic studies much more likely to give us the crucial entry points to the underlying biology.

Mirrored from Wiring the Brain.

Hallmayer J, Cleveland S, Torres A, Phillips J, Cohen B, Torigoe T, Miller J, Fedele A, Collins J, Smith K, Lotspeich L, Croen LA, Ozonoff S, Lajonchere C, Grether JK, & Risch N (2011). Genetic Heritability and Shared Environmental Factors Among Twin Pairs With Autism. Archives of General Psychiatry PMID: 21727249

Lichtenstein P, Carlström E, Råstam M, Gillberg C, & Anckarsäter H (2010). The genetics of autism spectrum disorders and related neuropsychiatric disorders in childhood. The American Journal of Psychiatry, 167 (11), 1357-63 PMID: 20686188

Rosenberg, R., Law, J., Yenokyan, G., McGready, J., Kaufmann, W., & Law, P. (2009). Characteristics and Concordance of Autism Spectrum Disorders Among 277 Twin Pairs Archives of Pediatrics and Adolescent Medicine, 163 (10), 907-914 DOI: 10.1001/archpediatrics.2009.98

Croen LA, Grether JK, Yoshida CK, Odouli R, & Hendrick V (2011). Antidepressant Use During Pregnancy and Childhood Autism Spectrum Disorders. Archives of General Psychiatry PMID: 21727247

(Republished from by permission of author or representative)

Deckard: She’s a replicant, isn’t she?
Tyrell: I’m impressed. How many questions does it usually take to spot them?
Deckard: I don’t get it, Tyrell.
Tyrell: How many questions?
Deckard: Twenty, thirty, cross-referenced.
Tyrell: It took more than a hundred for Rachael, didn’t it?
Deckard: [realizing Rachael believes she's human] She doesn’t know.
Tyrell: She’s beginning to suspect, I think.
Deckard: Suspect? How can it not know what it is?

A very discomfiting realisation, discovering you are an android. That all those thoughts and ideas and feelings you seem to be having are just electrical impulses zapping through your circuits. That you are merely a collection of physical parts, whirring away. What if some of them break and you begin to malfunction? What if they wear down with use and someday simply fail? The replicants in Blade Runner rail against their planned obsolescence, believing in the existence of their own selves, even with the knowledge that those selves are merely the products of machinery.

The idea that the self, or the conscious mind, emerges from the workings of the physical structures of the brain – with no need to invoke any supernatural spirit, essence or soul – is so fundamental to modern neuroscience that it almost goes unmentioned. It is the tacitly assumed starting point for discussions between neuroscientists, justified by the fact that all the data in neuroscience are consistent with it being true. Yet it is not an idea that the vast majority of the population is at all comfortable with or remotely convinced by. Its implications are profound and deeply unsettling, prompting us to question every aspect of our most deeply held beliefs and intuitions.

This idea has crept along with little fanfare – it did not emerge all at once like the theory of evolution by natural selection. There was no sudden revolution, no body of evidence proffered in a single moment that overturned the prevailing dogma. While the Creator was toppled with a single, momentous push, the Soul has been slowly chipped away at over a hundred years or more, with most people blissfully unaware of the ongoing assault. But its demolition has been no less complete.

If you are among those who are skeptical of this claim, or who feel, as many do, that there must be something more than just the workings of the brain to explain the complexities of the human mind and the qualities of subjective experience (especially your own), then first ask yourself: what kind of evidence would it take to convince you that the function of the brain is sufficient to explain the emergence of the mind?

Imagine you came across a robot that performed all the functions a human can perform – that reported a subjective experience apparently as rich as yours. If you were able to observe that the activity of certain circuits was associated with the robot’s report of subjective experience, if you could drive that experience by activating particular circuits, if you could alter it by modifying the structure or function of different circuits, would there be any doubt that the experience arose from the activity of the circuits? Would there be anything left to explain?

The counter-argument to this thought experiment is that it would never be possible to create a robot that has human-like subjective experience (because robots don’t have souls). Well, all those kinds of experiments have, of course, been done on human beings, tens of thousands of times. Functional magnetic resonance imaging methods let us correlate the activity of particular brain circuits with particular behaviours, perceptions or reports of inward states. Direct activation of different brain areas with electrodes is sufficient to drive diverse subjective states. Lesion studies and pharmacological manipulations have allowed us to map which brain areas and circuits, neurotransmitters and neuromodulators are required for which functions, dissociating different aspects of the mind. Finally, differences in the structure or function of brain circuits account for differences in the spectrum of traits that make each of us who we are as individuals: personality, intelligence, cognitive style, perception, sexual orientation, handedness, empathy, sanity – effectively everything people view as defining characteristics of a person. (Even firm believers in a soul would be reluctant recipients of a brain transplant, knowing full well that their “self” would not survive the procedure).

The findings from all these kinds of approaches lead to the same broad conclusion: the mind arises from the activity of the brain – and nothing else. What neuroscience has done is correlate the activity of certain circuits with certain mental states, show that this activity is required for these states to arise, show that differences in these circuits affect the quality of these states and, finally, demonstrate that driving these circuits from the outside is sufficient to induce them. That seems like a fairly complete scientific explanation of the phenomenon of mental states. If we had those data for our thought-experiment robot, we would be pretty satisfied that we understood how it worked (and could make useful predictions about how it would behave and what mental states it would report, given enough information about the activity of its circuits).

However, many philosophers (and probably a majority of people) would argue that there is something left to explain. After all, I don’t feel like an android – one made of biological rather than electronic materials, but a machine made solely of physical parts nonetheless. I feel like a person, with a rich mental life. How can the qualities of my subjective experience be produced by the activity of various brain circuits?

Many would claim, in fact, that subjective experience is essentially “ineffable” – it cannot be described in physical terms and cannot thus be said to be physical. It must therefore be non-physical, immaterial or even supernatural. However, the fact that we cannot conceive of how a mental state could arise from a brain state is a statement about our current knowledge and our powers of imagination and comprehension, not about the nature of the brain-mind relationship. As an argument, what we currently can or cannot conceive of has no bearing on the question. The strong intuition that the mind is more than just the activity of the brain is reinforced by an unfortunate linguistic accident – that the word “mind” is grammatically a noun, when really it should be a verb. At least, it does not describe an object or a substance, but a process or a state. It is not made of stuff but of the dynamic relations between bits of stuff.

When people argue that activity of some brain circuit is not identical to a subjective experience or sufficient to explain it, they are missing a crucial point – it is that activity in the context of the activity of the entire rest of the nervous system that generates the quality of the subjective experience at any moment. And those who dismiss this whole approach as scientific reductionism ad absurdum, claiming that the richness of human experience could not be explained merely by the activity of the brain should consider that there is nothing “mere” about it – with hundreds of billions of neurons making trillions of connections, the complexity of the human brain is almost incomprehensible to the human mind. (“If the brain were so simple that we could understand it, then we would be so simple that we couldn’t”).

To be more properly scientific, we should ask: “what evidence would refute the hypothesis that the mind arises solely from the activity of the brain”? Perhaps there is positive evidence available that is inconsistent with this view (as opposed to arguments based merely on our current inability to explain everything about the mind-brain relationship). It is not that easy to imagine what form such positive evidence would take, however – it would require showing that some form of subjective experience either does not require the brain or requires more than just the brain.

With respect to whether subjective experience requires the brain, the idea that the mind is associated with an immaterial essence, spirit or soul has an extension, namely that this soul may somehow outlive the body and be said to be immortal. If there were strong evidence of some form of life after death then this would certainly argue strongly against the sufficiency of neuroscientific materialism. Rather depressingly, no such evidence exists. It would be lovely to think we could live on after our body dies and be reunited with loved ones who have died before us. Unfortunately, wishful thinking does not constitute evidence.

Of course, there is no scientific evidence that there is not life after death, but should we expect neuroscience to have to refute this alternative hypothesis? Actually, the idea that there is something non-physical at our essence is non-refutable – no matter how much evidence we get from neuroscience, it does not prove this hypothesis is wrong. What neuroscience does say is that it is not necessary and has no explanatory power – there is no need of that hypothesis.


A debate has been raging over the last few years over the nature of the genetic architecture of so-called “complex” disorders. These are disorders – such as schizophrenia, epilepsy, type II diabetes and many others – which are clearly heritable across the population, but which do not show simple patterns of inheritance. A new study looking at the profile of mutations in hundreds of genes in patients with epilepsy dramatically illustrates this complexity. The possible implications are far-reaching, especially for our ability to predict risk based on an individual’s genetic profile, but do these findings apply to all complex disorders?

Complex disorders are so named because, while it is clear that they are highly heritable (risk to an individual increases the more closely related they are to someone who has the disorder), their mode of inheritance is far more difficult to discern. Unlike classical Mendelian disorders (such as cystic fibrosis or Huntington’s disease), these disorders do not show simple patterns of segregation within families that would peg them as recessive or dominant, nor can they be linked to mutations in a single gene. This has led people to propose two very different explanations for how they are inherited.

One theory is that such disorders arise due to unfortunate combinations of large numbers of genetic variants that are common in the population. Individually, such variants would have little effect on the phenotype, but collectively, if they surpass some threshold of burden, they could tip the balance into a pathological state. This has been called the common disease/common variant (CD/CV) model.
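The CD/CV idea is easy to simulate. The sketch below (all parameter values invented for illustration) draws a burden of risk alleles for each individual from many small-effect common variants and calls an individual affected only when that burden crosses a threshold:

```python
import random

def risk_burden(n_variants=1000, allele_freq=0.1, rng=random):
    # number of risk alleles carried across all common-variant sites
    return sum(1 for _ in range(n_variants) if rng.random() < allele_freq)

def is_affected(burden, threshold=120):
    # disease occurs only when the collective burden crosses the threshold
    return burden > threshold

rng = random.Random(42)  # fixed seed for reproducibility
population = [risk_burden(rng=rng) for _ in range(10_000)]
prevalence = sum(is_affected(b) for b in population) / len(population)
print(f"simulated prevalence: {prevalence:.1%}")
```

With these invented numbers the mean burden is 100 with a standard deviation of about 9.5, so a threshold of 120 sits roughly two standard deviations out and yields a prevalence of around 1-2%, even though no single variant does more than nudge the total.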

The alternative model is that these “disorders” are not really single disorders at all – rather they are umbrella terms for collections of a large number of distinct genetic disorders, which happen to result in a similar set of symptoms. Within any individual or family, the disorder may indeed be caused by a particular mutation. Because many of the disorders in question are very severe, with high mortality and reduced numbers of offspring, these mutations will be rapidly selected against in the population. They will therefore remain very rare and many cases of the disorder may arise from new, or de novo, mutations. This has therefore been called the multiple rare variants (MRV) model.

Lately, a number of mixed models have been proposed by various researchers, including myself. Even classical Mendelian disorders rarely show strictly Mendelian inheritance – instead the effects of the major mutations are invariably affected by modifiers in the genetic background. (These are variants with little effect by themselves but which may have a strong effect in combination with some other mutation). If this sounds like a return to the CD/CV model, there are a couple of important distinctions to keep in mind. One is the nature of the mutations involved – the mixed model would still invoke some rare mutation that has a large effect on protein function. It may not always cause the disorder by itself (i.e., not every one who carries it will be affected), but could still be called causative in the sense that if the affected individual did not carry it one would expect they would not suffer from the disorder. The other is the number of mutations or variants involved – under the CD/CV model this could number in the thousands (a polygenic architecture), while under the mixed model one could expect a handful to be meaningfully involved (an oligogenic architecture – see diagram from review in Current Opinion in Neurobiology).

The new study, from the lab of Jeff Noebels, aimed to test these models in the context of epilepsy. Epilepsy is caused by an imbalance in excitation and inhibition within brain circuits. This can arise due to a large number of different factors, including alterations in the structural organisation of the brain, which may be visible on magnetic resonance imaging. Many neurodevelopmental disorders are therefore associated with epilepsy as a symptom (usually one of many). But it can also arise due to more subtle changes, not in the gross structure of the brain or the physical wiring of different circuits, but in the way the electrical activity of individual neurons is controlled.

The electrical properties of any neuron – how excitable it is, how long it remains active, whether it fires a burst of action potentials or single ones, what frequency it fires at and many other important parameters – are determined in large part by the particular ion channel proteins it expresses. These proteins form a pore crossing the membrane of the cell, through which electrically charged ions can pass. Different channels are selective for sodium, potassium or calcium ions and can be activated by different types of stimuli – binding a particular neurotransmitter or a change in the cell’s voltage for example. Many channels are formed from multiple subunits, each of which may be encoded by a different gene. There are hundreds of these genes in several large families, so the resultant complexity is enormous.
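To make that concrete, here is a minimal leaky integrate-and-fire sketch, with all parameter values invented and a single "leak" conductance standing in crudely for the cell's whole channel complement. Reducing the conductance lets the same input current drive more spikes, i.e. a more excitable cell:

```python
def count_spikes(g_leak, i_input=1.5, v_rest=0.0, v_thresh=1.0,
                 dt=0.1, steps=1000):
    """Spike count of a leaky integrate-and-fire cell over a fixed window."""
    v = v_rest
    spikes = 0
    for _ in range(steps):
        # membrane equation: dV/dt = -g_leak * (V - V_rest) + I
        v += dt * (-g_leak * (v - v_rest) + i_input)
        if v >= v_thresh:
            spikes += 1
            v = v_rest  # reset after each spike
    return spikes

normal = count_spikes(g_leak=1.0)
reduced = count_spikes(g_leak=0.5)  # a "mutation" halving the conductance
print(normal, reduced)  # the lower-conductance cell fires more spikes
```

Real channelopathies play out across dozens of interacting conductances rather than one, which is exactly why the combined effects discussed below are so hard to predict.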

Many familial cases of epilepsy have been found to be caused by mutations in ion channel genes. However, most epilepsy patients outside these families do not carry these particular mutations. Therefore, despite these findings and despite the demonstrated high heritability, the particular genetic cause of the vast majority of cases of epilepsy has remained unknown. Large genome-wide association studies have looked for common variants that are associated with risk of epilepsy but have turned up nothing of note. The interpretation has been that common variants do not play a major role in the etiology of idiopathic epilepsy (epilepsy without a known cause).

The rare variants model suggests that many of these cases are caused by single mutations in any of the very large number of ion channel genes. A straightforward experiment to test that would be to sequence all these candidate genes in a large number of epilepsy patients. The hope is that it would be possible to shake out the “low hanging fruit” – obviously pathogenic mutations in some proportion of cases. The difficulty lies in recognising such a mutation as pathogenic when one finds it. This generally relies on some statistical evidence – any individual mutation, or such mutations in general, should be more frequent in epilepsy patients than in unaffected controls. The experiment must therefore involve as large a sample as possible and a control comparison group as well as patients.

Klassen and colleagues sequenced 237 ion channel genes in 152 patients with idiopathic epilepsy and 139 healthy controls. What they found was surprising in several ways. They did find lots of mutations in these genes, but they found them at almost equal frequency in controls as in patients. Even the mutations predicted to have the most severe effects on protein function were not significantly enriched in patients. Indeed, mutations in genes already known to be linked to epilepsy were found in patients and controls alike (96% of patients had such a mutation, but so did 67% of controls). Either these specific mutations are not pathogenic or their effects can be strongly modified by the genetic background.

More interesting results emerged from looking at the occurrence of multiple mutations in these genes in individuals. 78% of patients vs 30% of controls had two or more mutations in known familial epilepsy genes. A similar trend was observed when looking at specific ion channel gene families, such as GABA receptors or sodium channels.
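For what it's worth, that difference is statistically robust. Reconstructing approximate counts from the quoted percentages (so treat them as illustrative), a Pearson chi-square on the resulting 2x2 table comes out far beyond the p < 0.05 cutoff of 3.84 at one degree of freedom:

```python
def chi_square_2x2(a, b, c, d):
    # Pearson chi-square statistic for the 2x2 table [[a, b], [c, d]]
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# counts reconstructed from the quoted percentages (approximate)
patients_multi, patients_other = 119, 33  # ~78% of 152 patients
controls_multi, controls_other = 42, 97   # ~30% of 139 controls

chi2 = chi_square_2x2(patients_multi, patients_other,
                      controls_multi, controls_other)
print(f"chi-square = {chi2:.1f}")  # ~67.9, p << 0.001 at 1 df
```

So the group-level difference in load is real; as the next paragraphs argue, the catch is what it does and does not tell you about any individual.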

These data would seem to fit with the idea that an increasing mutational load pushes the system over a threshold into a pathological state. The reality seems more complicated, however, and far more nuanced. Though the average load was lower, many controls had a very high load and yet were quite healthy. It seems that the specific pattern of mutations is far more important than the overall number. This fits very well with the known biology of ion channels and previous work on genetic interactions between mutations in these genes.

Though one might expect a simple relationship between number of mutations and severity of phenotype, that is unlikely to be the case for these genes. It is well known that the effects of a mutation in one ion channel gene can be suppressed by mutation in another gene – restoring the electrical balance in the cell, at least to a degree sufficient for performance under normal conditions. The system is so complex, with so many individual components, that these interactions are extremely difficult to predict. This is complicated further by the fact that there are active processes within the system that act to normalise its function. It has been very well documented, especially by Eve Marder and colleagues, that changes to one ion channel in a neuron can be compensated for by homeostatic mechanisms within the cell that aim to readjust the electrical set-points for optimal physiological function. In fact, these mechanisms do not just happen within one cell, but across the circuit.

The upshot of the study is that, though some of the mutations they discovered are indeed likely to be the pathogenic culprits, it is very difficult to discern which ones they are. It is very clear that there is at least an oligogenic architecture for so-called “channelopathies” – the phenotype is determined by several mutations in each individual. (Note that this is not evidence for a highly polygenic architecture involving hundreds or thousands of genetic variants with tiny individual effects). The important insight is that it is not the overall number or mutational load that matters but the pattern of specific mutations in any individual that is crucial. Unfortunately, given how complicated the system is, this means it is currently not possible to predict an individual’s risk, even with this wealth of data. This will likely require a lot more biological information on the interactions between these mutations from experimental approaches and computational modelling.

What are the implications for other complex disorders? Should we expect a similarly complicated picture for diseases like schizophrenia or autism? Perhaps, though I would argue against over-extrapolating these findings. For the reasons described above, mutations in ion channel genes will show especially complex genetic interactions – it is, for example, even possible for two mutations that are individually pathogenic to suppress each other’s effects in combination. This is far less likely to occur for classes of mutations affecting processes such as neurodevelopment, many of which have been implicated in psychiatric disorders. Though by no means unheard of, it is far less common for the effects of one neurodevelopmental mutation to be suppressed by another – it generally just makes things worse. So, while modifying effects of genetic background will no doubt be important for such mutations, there is some hope that the interactions will be more straightforward to elucidate (mostly enhancing, far fewer suppressing). Others may see it differently of course (and I would be pleased to hear from you if you do); similar sequencing efforts currently underway for these disorders may soon tell whether that prediction is correct.

Klassen T, Davis C, Goldman A, Burgess D, Chen T, Wheeler D, McPherson J, Bourquin T, Lewis L, Villasana D, Morgan M, Muzny D, Gibbs R, & Noebels J (2011). Exome sequencing of ion channel genes reveals complex profiles confounding personal risk assessment in epilepsy. Cell, 145 (7), 1036-48 PMID: 21703448

Kasperaviciute, D., Catarino, C., Heinzen, E., Depondt, C., Cavalleri, G., Caboclo, L., Tate, S., Jamnadas-Khoda, J., Chinthapalli, K., Clayton, L., Shianna, K., Radtke, R., Mikati, M., Gallentine, W., Husain, A., Alhusaini, S., Leppert, D., Middleton, L., Gibson, R., Johnson, M., Matthews, P., Hosford, D., Heuser, K., Amos, L., Ortega, M., Zumsteg, D., Wieser, H., Steinhoff, B., Kramer, G., Hansen, J., Dorn, T., Kantanen, A., Gjerstad, L., Peuralinna, T., Hernandez, D., Eriksson, K., Kalviainen, R., Doherty, C., Wood, N., Pandolfo, M., Duncan, J., Sander, J., Delanty, N., Goldstein, D., & Sisodiya, S. (2010). Common genetic variation and susceptibility to partial epilepsies: a genome-wide association study Brain, 133 (7), 2136-2147 DOI: 10.1093/brain/awq130

Mitchell KJ (2011). The genetics of neurodevelopmental disease. Current Opinion in Neurobiology, 21 (1), 197-203 PMID: 20832285


“We only use 10% of our brain”. I don’t know where that idea originated but it certainly took off as a popular meme – taxi drivers seem particularly taken with it. It’s rubbish of course – you use more than that just to see. But it captures an idea that we humans have untapped intellectual potential – that in each of us individually, or at least in humans in general lies the potential for genius.

Part of what has fed into that idea is the existence of so-called “savants” – people who have some isolated area of special intellectual ability far beyond most other individuals. Common examples of savant abilities include prodigious mental calculations, calendar calculations and remarkable feats of memory. These can arise due to brain injuries, or be apparently congenital. In congenital cases, savant abilities are often encountered against a background of the general intellectual, social or communicative symptoms of autism. (The portrayal by Dustin Hoffman in Rain Man is a good example, based on the late, well-known savant Kim Peek).

A new hypothesis proposes that savantism arises due to a combination of autism and another condition, synaesthesia. Synaesthesia is commonly thought of as a cross-sensory phenomenon, where, for example, different sounds will induce the experience of particular colours, or tastes will induce the tactile experience of a shape. But in most cases the stimuli that induce synaesthesia are not sensory, but conceptual categories of learned objects, such as letters, numbers, days of the week, months of the year. The most common types involve coloured letters or numbers and what are called mental “number forms”.

These go beyond the typical mental number line that most of us can visualise from early textbooks. They are detailed, stable and idiosyncratic forms in space around the person, where each number occupies a specific position. They may follow complicated trajectories through space, even wrapping around the individual’s body in some cases. These forms can be related to different reference points (body, head or gaze-oriented) and can sometimes be mentally manipulated by synaesthetes to examine them more closely at specific positions.

The suggestion in relation to savantism is that such forms enable arithmetical calculations to be carried out in some kind of spatial, intuitive way that is distinct from the normal operations of formal arithmetic – but only when the brain is wired in such a way as to take advantage of these special representations of numbers, as apparently can arise due to autism.

It has been proposed that the intense and narrowly focused interests typical of autism can lead to prolonged practice of these skills, which thus emerge and improve over time. While certainly likely to be involved in the development of these skills, on its own this explanation seems insufficient. It seems more likely that these special abilities arise from more fundamental differences in the way the brains of autistic people process information, with a greater degree of processing of local detail, paralleled by greater local connectivity in neural circuits and reductions in long-range integration.

Local processing may normally be actively inhibited. This idea has been referred to as the tyranny of the frontal lobes (especially of the left hemisphere), which impart top-down expectations with such authority that they override lower areas, conscripting them into service for the greater good. The potential of the local elements to process detailed information is thus superseded in order to achieve optimal global performance. The idea that local processing is actively suppressed is supported by the fact that savant abilities can sometimes emerge after frontal lobe injuries or in cases of frontotemporal dementia. Increased skills in numerical estimation can also, apparently, be induced in healthy people by using transcranial magnetic stimulation to temporarily inactivate part of the left hemisphere.

This kind of focus on local details, combined with an exceptional memory, may explain many types of savant skills, including musical and artistic ones. As many as 10% of autistics show some savant ability. These “islands of genius” (including things like perfect pitch, for example) are typically remarkable only against the background of general impairment – they would be less remarkable in the general population. Really prodigious savants are much rarer – these are people who can do things outside the range of normal abilities, such as phenomenal mathematical calculations. In these cases, the increased local processing typical of autism may not be, by itself, sufficient to explain the supranormal ability.

The idea is that such prodigious calculations may also rely on the concrete visual representations of numbers found in some types of synaesthesia. This theory was originally proposed by Simon Baron-Cohen and colleagues and arose from case studies of individual savants, including Daniel Tammet, an extraordinary man who has both Asperger’s syndrome and synaesthesia.

I had the pleasure of speaking with Daniel recently about his particular talents on the FutureProof radio programme for Dublin’s Newstalk Radio. (The podcast, from Nov 27th, 2010, can be accessed, with some perseverance, here). Daniel is unique in many ways. He has the prodigious mental talents of many savants, for arithmetic calculations and memory, but also has the insight and communicative skills to describe what is going on in his head. It is these descriptions that have fuelled the idea that the mental calculations he performs rely on his synaesthetic number forms.

Daniel experiences numbers very differently from most people. He sees numbers in his mind’s eye as occupying specific positions in space. They also have characteristic colours, textures, movement, sounds and, importantly, shapes. Sequences of numbers form “landscapes in his mind”. This is vividly portrayed in the excellent BBC documentary “The Boy With the Incredible Brain” and described by Daniel in his two books, “Born on a Blue Day” and “Embracing the Wide Sky”.

His synaesthetic experiences of numbers are an intrinsic part of his arithmetical abilities. (I say arithmetical, as opposed to mathematical, because his abilities seem to be limited to prodigious mental calculations, as opposed to a talent for advanced calculus or other areas of mathematics). Daniel describes doing these calculations by some kind of mental spatial manipulation of the shapes of numbers and their positions in space. When he is performing these calculations he often seems to be tracing shapes with his fingers. He is, however, hard pressed to define this process exactly – it seems more like his brain does the calculation and he reads off the answer, apparently deducing the value based at least partly on the shape of the resultant number.

Daniel is also the European record holder for remembering the digits of the number pi – to over 20,000 decimal places. This feat also takes advantage of the way that he visualises numbers – he describes moving along a landscape of the digits of pi, which he sees in his mind’s eye and which enables him to recall each digit in sequence. The possible generality of this single case study is bolstered by reports of other savants, who similarly utilise visuospatial forms in their calculations and who report that they simply “see” the correct answer (see review by Murray).

Additional evidence to support the idea comes from studies testing whether the concrete and multimodal representations of numbers or units of time are associated with enhanced cognitive abilities in synaesthetes who are not autistic. Several recent studies suggest this is indeed the case.

Many synaesthetes say that having particular colours or spatial positions for letters and numbers helps them remember names, phone numbers, dates, etc. Ward and colleagues have tested whether these anecdotal reports would translate into better performance on memory tasks and found that they do. Synaesthetes did show better than average memory, but importantly, only for those items which were part of their synaesthetic experience. Their general memory was no better than non-synaesthete controls. Similarly, Simner and colleagues have found that synaesthetes with spatial forms for time units perform better on visuospatial tasks such as mental rotation of 3D objects.

Synaesthesia and autism are believed to occur independently and, as each only occurs in a small percentage of people, the joint occurrence is very rare. Of course, it remains possible that, even though most people with synaesthesia do not have autism and vice versa, their co-occurrence in some cases may reflect a single cause. Further research will be required to determine definitively the possible relationship between these conditions. For now, the research described above, especially the first-person accounts of Daniel Tammet and others, gives a unique insight into the rich variety of human experience, including fundamental differences in perception and cognitive style.

Murray, A. (2010). Can the existence of highly accessible concrete representations explain savant skills? Some insights from synaesthesia Medical Hypotheses, 74 (6), 1006-1012 DOI: 10.1016/j.mehy.2010.01.014

Bor, D., Billington, J., & Baron-Cohen, S. (2008). Savant Memory for Digits in a Case of Synaesthesia and Asperger Syndrome is Related to Hyperactivity in the Lateral Prefrontal Cortex Neurocase, 13 (5), 311-319 DOI: 10.1080/13554790701844945

Simner, J., Mayo, N., & Spiller, M. (2009). A foundation for savantism? Visuo-spatial synaesthetes present with cognitive benefits Cortex, 45 (10), 1246-1260 DOI: 10.1016/j.cortex.2009.07.007

Yaro, C., & Ward, J. (2007). Searching for Shereshevskii: What is superior about the memory of synaesthetes? The Quarterly Journal of Experimental Psychology, 60 (5), 681-695 DOI: 10.1080/17470210600785208

(Republished by permission of author or representative)
• Category: Science • Tags: Autism 

Review of “Braintrust. What Neuroscience Tells Us about Morality”, by Patricia S. Churchland

The question of “where morals come from” has exercised philosophers, theologians and many others for millennia. It has lately, like many other questions previously addressed only through armchair rumination, become addressable empirically, through the combined approaches of modern neuroscience, genetics, psychology, anthropology and many other disciplines. From these approaches a naturalistic framework is emerging to explain the biological origins of moral behaviour. From this perspective, morality is neither objective nor transcendent – it is the pragmatic and culture-dependent expression of a set of neural systems that have evolved to allow our navigation of complex human social systems.


“Braintrust”, by Patricia S. Churchland, surveys the findings from a range of disciplines to illustrate this framework. The main thesis of the book is grounded in the approach of evolutionary psychology but goes far beyond the just-so stories of which that field is often accused, offering not just a plausible biological mechanism to explain the foundations of moral behaviour, but one with strong empirical support.

The thrust of her thesis is as follows:

Moral behaviour arose in humans as an extension of the biological systems involved in recognition and care of mates and offspring. These systems are evolutionarily ancient, encoded in our genome and hard-wired into our brains. In humans, the circuits and processes that encode the urge to care for close relatives can be co-opted and extended to induce an urge to care for others in an extended social group. These systems are coupled with the ability of humans to predict future consequences of our actions and make choices to maximise not just short-term but also long-term gain. Moral decision-making is thus informed by the biology of social attachments but is governed by the principles of decision-making more generally. These entail not so much looking for the right choice but for the optimal choice, based on satisfying a wide range of relevant constraints, and assigning different priorities to them.

This does not imply that morals are innate. It implies that the capacity for moral reasoning and the predisposition to moral behaviour are innate. Just as language has to be learned, so do the codes of moral behaviour, and, also like language, moral codes are culture-specific, but constrained by some general underlying principles. We may, as a species, come pre-wired with certain biological imperatives and systems for incorporating them into decisions in social situations, but we are also pre-wired to learn and incorporate the particular contingencies that pertain to each of us in our individual environments, including social and cultural norms.

This framework raises an important question, however – if morals are not objective or transcendent, then why does it feel like they are? This is, after all, the basis for all this debate – we seem to implicitly feel things as being right or wrong, rather than just intellectually being aware that they conform to or violate social norms. The answer is that the systems of moral reasoning and conscience tap into, or more accurately emerge from, ancient neural systems grounded in emotion, in particular in attaching emotional value or valence to different stimuli, including the imagined consequences of possible actions.

This is, in a way, the same as asking why pain feels bad. Couldn’t it work simply by alerting the brain that something harmful is happening to the body, which should therefore be avoided? A rational person could then take an action to avoid the painful stimulus or situation. Well, first, that does not sound like a very robust system – what if the person ignored that information? It would be far more adaptive to encourage or enforce the avoidance of the painful stimulus by encoding it as a strong urge, forcing immediate and automatic attention to a stimulus that should not be ignored and that should be given high priority when considering the next action. Even better would be to use the emotional response to also tag the memory of that situation as something that should be avoided in the future. Natural selection would favour genetic variants that increased this type of response and select against those that decoupled painful stimuli from the emotional valence we normally associate with them (they feel bad!).

In any case, this question is approached from the wrong end, as if humans were designed out of thin air and the system could ever have been purely rational. We evolved from other animals without reason (or with varying degrees of problem-solving faculties). For these animals to survive, neural systems are adapted to encode urges and beliefs in such a way as to optimally control behaviour. Attaching varying levels of emotional valence to different types of stimuli offers a means to prioritise certain factors in making complex decisions (i.e., those factors most likely to affect the survival of the organism or the dissemination of its genes).

For humans, these important factors include our current and future place in the social network and the success of our social group. In the circumstances under which modern humans evolved, and still to a large extent today, our very survival and certainly our prosperity depend crucially on how we interact and on the social structures that have evolved from these interactions. We can’t rely on tooth and claw for survival – we rely on each other. Thus, the reason moral choices are tagged with strong emotional valence is because they evolved from systems designed for optimal control of behaviour. Or, despite this being a somewhat circular argument, the reason they feel right or wrong is because it is adaptive to have them feel right or wrong.

Churchland fleshes out this framework with a detailed look at the biological systems involved in social attachments, decision-making, executive control, mind-reading (discerning the beliefs and intentions of others), empathy, trust and other faculties. There are certain notable omissions here: the rich literature on psychopaths, who may be thought of as innately deficient in moral reasoning, receives surprisingly little attention, especially given the high heritability of this trait. As an illustration that the faculty of moral reasoning relies on in-built brain circuitry, this would seem to merit more discussion. The chapter on Genes, Brains and Behavior rightly emphasises the complexity of the genetic networks involved in establishing brain systems, especially those responsible for such a high-level faculty as moral reasoning. The conclusion that this system cannot be perturbed by single mutations is erroneous, however. Asking what does it take, genetically speaking, to build the system is a different question from what does it take to break it. Some consideration of how moral reasoning emerges over time in children would also have been interesting.

Nevertheless, the book does an excellent job of synthesising diverse findings into a readily understandable and thoroughly convincing naturalistic framework under which moral behaviour can be approached from an empirical standpoint. While the details of many of these areas remain sketchy, and our ignorance still vastly outweighs our knowledge, the overall framework seems quite robust. Indeed, it articulates what is likely a fairly standard view among neuroscientists who work in or who have considered the evidence from this field. However, one can presume that jobbing neuroscientists are not the main intended target audience and that both the details of the work in this field and its broad conclusions are neither widely known nor held.

The idea that right and wrong – or good and evil – exist in some abstract sense, independent from humans who only somehow come to perceive them, is a powerful and stubborn illusion. Indeed, for many inclined to spiritual or religious beliefs, it is one area where science has not until recently encroached on theological ground. While the Creator has been made redundant by the evidence for evolution by natural selection and the immaterial soul similarly superfluous by the evidence that human consciousness emerges from the activity of the physical brain, morality has remained apparently impervious to the scientific approach. Churchland focuses her last chapter on the idea that morals are absolute and delivered by Divinity, demonstrating firstly the contradictions in such an idea and, with the evidence for a biological basis of morality provided in the rest of the book, arguing convincingly that there is no need of that hypothesis.

Mirrored from Wiring the Brain.

• Category: Science • Tags: Evolution, Genetics, Morality 

There is a paradox at the heart of behavioural and psychiatric genetics. On the one hand, it is very clear that practically any psychological trait one cares to study is partly heritable – i.e., the differences in the trait between people are partly caused by differences in their genes. Similarly, psychiatric disorders are also highly heritable and, by now, mutations in hundreds of different genes have been identified that cause them.

However, these studies also highlight the limits of genetic determinism, which is especially evident in comparisons of monozygotic (identical) twins, who share all their genetic inheritance in common. Though they are obviously much more like each other in psychological traits than people who are not related to each other, they are clearly NOT identical to each other for these traits. For example, if one twin has a diagnosis of schizophrenia, the chance that the other one will also suffer from the disorder is about 50% – massively higher than the population prevalence of the disorder (around 1%), but also clearly much less than 100%.

What is the source of this extra variance? What forces make monozygotic twins less identical? I have argued previously that random variation in the course of development is a major contributor. The developmental programme that specifies brain connectivity is less like a blueprint than a recipe (a recipe without a cook) – an incredibly complicated set of processes carried out by mindless biochemical algorithms mediated by local interactions between billions of individual components. As each of these processes is subject to some level of “noise” at the molecular level, it is not surprising that the outcome of this process varies considerably, even between monozygotic twins.
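
The way shared genes plus unshared noise can produce exactly this pattern – concordance far above the base rate, yet far below 100% – can be sketched with a toy liability-threshold simulation. This is a standard idealisation, not a model taken from any particular twin study; the heritability figure below is an arbitrary illustration, and the threshold is chosen to give roughly the 1% prevalence mentioned above.

```python
import random

random.seed(0)
h2 = 0.85          # assumed heritability of liability (illustrative, not an estimate)
threshold = 2.326  # cut-off for the top ~1% of a standard normal, i.e. ~1% prevalence
n_pairs = 200_000

affected = concordant = 0
for _ in range(n_pairs):
    g = random.gauss(0, h2 ** 0.5)               # genetic liability: identical in MZ twins
    l1 = g + random.gauss(0, (1 - h2) ** 0.5)    # plus twin-specific developmental noise
    l2 = g + random.gauss(0, (1 - h2) ** 0.5)
    for proband, cotwin in ((l1, l2), (l2, l1)):
        if proband > threshold:
            affected += 1
            concordant += cotwin > threshold

# Shared genes push concordance far above the 1% population rate,
# but the independent noise terms keep it well short of 100%
print(f"probandwise concordance: {concordant / affected:.0%}")
```

Varying `h2` shows the trade-off directly: only at 100% heritability of liability does concordance approach 100%; any independent noise term leaves genetically identical twins partly discordant.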

While such developmental variation can be referred to as “non-genetic”, a new study suggests that one important component of this variation may be genetic after all, just not inherited. Mutations can be passed on from parents to offspring or arise during generation of sperm or eggs and thus be inherited, but they can also arise any time DNA is replicated. So, each time a cell divides as an embryo grows and develops, there is a very small chance of new mutations being introduced. These “somatic” mutations (meaning ones that happen in the body and not in the germline) will be inherited by all the cells that are descendants of that new cell and so will be present in some fraction of the final cells of the individual. Mutations arising earlier in development will be inherited by more cells than those arising later.
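
The clonal logic of that last point – a mutation arising at a given division is carried by exactly the descendants of the daughter cell it arose in – can be sketched in a few lines. The per-division mutation rate here is wildly exaggerated so that the toy run produces something to look at; real somatic mutation rates are minuscule per division.

```python
import random

def grow(divisions=10, mu=0.3, seed=2):
    """Toy embryo: each cell division may introduce a new somatic mutation,
    which is then inherited by every descendant of that daughter cell."""
    random.seed(seed)
    cells = [frozenset()]      # a single mutation-free zygote
    next_id = 0                # unique label for each new mutation
    history = []               # (division number, mutation label)
    for gen in range(divisions):
        daughters = []
        for cell in cells:
            for _ in range(2):
                muts = set(cell)
                if random.random() < mu:   # mu is hugely exaggerated for illustration
                    muts.add(next_id)
                    history.append((gen, next_id))
                    next_id += 1
                daughters.append(frozenset(muts))
        cells = daughters
    return cells, history

cells, history = grow()
# A mutation arising at division g ends up in exactly 1/2**(g+1) of the
# final cells – earlier mutations are carried by more of the body
for gen, label in history[:5]:
    frac = sum(label in c for c in cells) / len(cells)
    print(f"mutation {label} (division {gen}): {frac:.1%} of final cells")
```

The resulting individual is a mosaic: every final cell carries its own particular subset of the accumulated mutations, with the earliest ones the most widely shared.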

Each person will therefore be a mosaic of cells with slightly different genetic make-up. The vast majority of such mutations will not have any effect of course (with the obvious exception of those that cause dysregulation of cellular differentiation and result in cancer). But sometimes a new mutation will affect a trait and cause a detectable difference. The most obvious examples are in genes affecting hair or eye colour – where a patch of hair may be a different colour, or the two eyes may be different colours.

But what if the mutations in question are linked to a psychiatric disorder? If such a mutation arises early in the development of the brain and is therefore inherited by many of the cells in the brain then this could lead to the psychiatric disorder, just as if the mutation had been inherited in a germ cell.

A new study adds to the evidence that such mutations do indeed occur at an appreciable frequency and may help explain the discordance in phenotype between pairs of twins where one has schizophrenia and the other does not. The authors analysed the DNA from blood cells of pairs of twins discordant for schizophrenia and their parents. They were looking for two different kinds of mutation: ones that change the identity of a single base of DNA (one letter of the genetic code to another), called point mutations, and ones that delete or duplicate whole chunks of chromosomes, called copy number variants, or CNVs.

As expected, they were able to detect both inherited mutations (present in one of the parents) and de novo mutations (present in both twins but not in the blood cells of either parent). What is more remarkable, though, is that they also detected de novo mutations present in the blood cells of one twin but not the other – lots of them. About 1,000 point mutations and 2-3 new CNVs not shared by the other twin. The implication is that these mutations arose during the somatic development of one twin. They identified a couple of CNVs in the twins affected by schizophrenia, raising the (very speculative) possibility that those mutations may contribute to the development of the disorder. It will obviously require a lot more work to test that specific hypothesis.

An earlier study also found a high rate of somatic mosaicism for CNVs – this time by analysing the DNA of multiple tissues taken from single (deceased) individuals. Across 34 tissue samples from 3 subjects they identified six CNVs present in one tissue but not others. What this implies is that not only do we carry additional mutations making us even more different from one another, our cells and tissues can also be genetically different from each other.

Time will tell whether such mutations really do contribute to psychiatric disorders, but it certainly seems plausible that they might. This adds to a couple of other potential mechanisms of increasing individual variance: the transposition of mobile DNA elements in somatic tissues, especially neurons, and the “epigenetic” silencing of regions of the genome, which may be clonally inherited in groups of cells and contribute to differences between twins.

This has one immediate and important consequence for clinical genetics. When a mutation in an offspring is not carried by either parent it is usually interpreted as having arisen de novo. The implication is that the risk of another offspring carrying the same mutation is negligible. Clinical geneticists are finding this is not necessarily always the case, however – apparently de novo mutations may have actually arisen at an early stage in the germline and not just at the final division generating the sperm or egg. The parent in question may not actually “carry” the mutation, but their germline does. Great care must therefore be taken when advising parents with one affected child of the risk to future offspring.

Maiti S, Kumar KH, Castellani CA, O’Reilly R, & Singh SM (2011). Ontogenetic de novo copy number variations (CNVs) as a source of genetic individuality: studies on two families with MZD twins for schizophrenia. PloS one, 6 (3) PMID: 21399695

Piotrowski, A., Bruder, C., Andersson, R., de Ståhl, T., Menzel, U., Sandgren, J., Poplawski, A., von Tell, D., Crasto, C., Bogdan, A., Bartoszewski, R., Bebok, Z., Krzyzanowski, M., Jankowski, Z., Partridge, E., Komorowski, J., & Dumanski, J. (2008). Somatic mosaicism for copy number variation in differentiated human tissues Human Mutation, 29 (9), 1118-1124 DOI: 10.1002/humu.20815

Fraga, M. (2005). From The Cover: Epigenetic differences arise during the lifetime of monozygotic twins Proceedings of the National Academy of Sciences, 102 (30), 10604-10609 DOI: 10.1073/pnas.0500398102

Mirrored from the Wiring the Brain blog

• Category: Science • Tags: Epigenetics, Mutation, Schizophrenia, Twins 

Recent evidence indicates that psychiatric disorders can arise from differences, literally, in how the brain is wired during development. Psychiatric genetic approaches are finding new mutations associated with mental illness at an amazing rate, thanks to new genomic array and sequencing technologies. These mutations include so-called copy number variants (deletions or duplications of sections of a chromosome) or point mutations (a change in the code at one position of the DNA sequence). At the recent Wiring the Brain conference, we heard from Christopher Walsh, Guy Rouleau, Michael Gill and others of the identification of a number of new genes associated with neurological disorders, epilepsy, autism and schizophrenia.

The emerging picture is that each of these disorders can be caused by mutations in any one of a large number of genes. Strikingly, many of these genes play important roles in neural development, with mutations affecting patterns of cell migration, the guidance of growing nerve fibres and their connectivity to other cells. Even more remarkable has been the observation that most such mutations predispose to not just one specific illness (such as schizophrenia) but to mental illness in general, with a strong overlap in the genetics of schizophrenia, autism, bipolar disorder, epilepsy, mental retardation, attention-deficit hyperactivity disorder and other diagnostic categories. These different categories may thus represent arguably distinct endpoints arising from common origins in neurodevelopmental insults.

What we do not yet know is why. How does a mutation in a gene controlling say, the formation of connections between specific types of nerve cells, ultimately result in someone having paranoid delusions? (While another person carrying the same mutation may develop the quite different symptoms of autism at a much earlier age). Answering such questions will require much greater integration of efforts across a wide range of disciplines.

These efforts must include neurodevelopmental biologists. Over the past couple of decades, tremendous progress has been made in elucidating the molecular mechanisms underlying nervous system development. In many cases, these advances have been made using fairly simple model systems – fruit flies and nematode worms have been favourites in this field, as well as simple parts of the vertebrate nervous system such as the spinal cord and retina. While more and more researchers are trying to figure out how these mechanisms apply in the vastly more complicated mammalian brain, we are still a long way from understanding how this structure develops. This is especially the case as much of the circuitry of the brain is not prespecified by genetic instructions down to the last synapse, but is strongly affected by patterns of electrical activity within developing circuits. Nevertheless, it has been possible to use animals with mutations in particular genes to figure out what the functions of these genes are in the development of specific brain circuits.

The logic of these approaches is fairly straightforward: in order to discover the normal function of Gene X, mutate it, look at what happens to some part of the brain and work backwards to deduce the cellular processes that have been affected. What is needed now, if neurodevelopmental biologists are to make a contribution to the study of mental illness, is a different approach. We must develop an interest in the phenotypes themselves, not just as tools to elucidate the gene’s normal functions. If mutations in Gene X can cause autism, for example, then a mouse with the same mutation becomes a valuable and informative model of disease. It becomes of interest to analyse not just the direct processes affected by the mutation but all of the knock-on consequences. While these questions may start with neurodevelopmental biologists they rapidly require additional expertise to address.

This will entail a framework to link investigations across levels of analysis typically carried out by researchers in quite different disciplines. For example, if the mutation affects formation of synaptic connections between certain types of cells in certain brain regions, then how does this change the function of the circuits involved? If this changes the activity of the circuit, then how does this affect further activity-dependent development of interconnected regions? How does that affect the information processing capabilities of these networks? What cognitive functions are carried out by these networks and how are they impacted? At what level can we most directly translate findings in animals to humans? Each of these questions requires researchers in different disciplines to work together.

The imperative to do this could not be more stark. Roughly 10% of the world’s population is affected by mental illness at any one time, and over 25% will have some mental health problem over their lifetime. As well as the costs to individuals and their families, the public health and economic burdens from these disorders are massive, as large as that of cancer and cardiovascular disease. In fact, the proportional burden is growing as we are making good progress in treating the latter disorders, while mental illnesses have lagged far behind. This is mainly because we have not been able to apply the tools of molecular genetics to the problem. This is now changing, thanks to the revolutionary advances in psychiatric genetics. The challenge now will be to translate these discoveries into real understanding of disease mechanisms and ideas for novel therapies.

This post is based on a brief article that introduces a thematic series of reviews and primary research papers on the theme of Wiring the Brain. This series will appear across various journal titles of the open access publisher BioMed Central and can be accessed here.

Mitchell KJ (2011). The miswired brain: making connections from neurodevelopment to psychopathology. BMC biology, 9 (1) PMID: 21489316


I have a new post over on the Scientific American Mind Matters website. It describes new research which suggests that tune deafness and face blindness – two examples of conditions known as agnosias, both of which can be genetic – are caused not by a failure of the brain to recognise previously seen faces or detect incongruous musical notes, but a failure to communicate these events to frontal brain regions where conscious awareness is triggered. In essence, your brain knows something but can’t tell you. Read more…

• Category: Science 

If some guy spilt your beer by accident, would you punch him in the face? If he was unapologetic, you might at least consider it – you might in fact feel a pretty strong urge to do it. What stops you? Or, if you’re the type who acts on those urges, what doesn’t stop you? New research has found a mutation in one gene that may contribute to these differences in temperament.

Self-control is the ability to inhibit an immediate course of action in the pursuit of a longer-term goal or to consciously override a base urge. Some people show far more inhibitory control than others. This trait is very stable – indeed, inhibitory control in children, which can be assessed using the famous “marshmallow test”, is predictive of their score on scales of impulsivity as adults. (The marshmallow test must go down as one of the cruellest experiments in psychology – it involves asking four-year-olds not to eat a lovely yummy marshmallow for five minutes, after which they will be given another one to go with it if they have resisted. The videos of these poor kids as they struggle to resist this urge are priceless). Impulsivity is also partly heritable – that is, more closely related people are more similar in this trait.

This is generally true of all personality traits, suggesting they are influenced by genetic variation. However, the specific genes involved are almost entirely unknown. Indeed, a recent study that failed to find any such genes was interpreted by many (e.g., 1, 2) as evidence that either personality was not really genetic or that measures of personality traits were effectively meaningless. In fact, this was a gross misinterpretation of the results of this study. What these researchers did was look for common genetic variants that were associated with differences in personality traits, across a sample of over 5,000 people. Common variants are ancient differences at specific positions in the DNA code, where some proportion of the population carries one base, say a “C”, and the rest carry another base, say an “A”. There are millions of such variable positions across the human genome. Most of them do not do anything – they do not affect the sequence of a protein or how much of it is made. And, it seems, none of them affects personality significantly.

This does NOT mean that these traits are not affected by genetic variation. The genome-wide association analysis could not detect rare variants – ones that only a few people in the population carry. These are mutations that have arisen in the much more recent past and which have been passed on to only a small proportion of the population. In general, such mutations are far more likely to affect a protein and have some influence on the observable traits of an organism (its phenotype). Why? Because usually such effects are not very positive and natural selection pretty rapidly weeds them out – if a variant becomes common it is usually because it does not have any effect. (Not always, but usually).
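
The claim that selection "pretty rapidly weeds out" variants with deleterious effects can be illustrated with a minimal Wright–Fisher simulation. The parameter values below (population size, starting frequency, fitness cost) are arbitrary illustrations, not estimates for any real variant.

```python
import random

def wright_fisher(pop_size=1000, p0=0.01, s=-0.05, generations=200, seed=1):
    """Track one variant's frequency under selection (s < 0: carriers are
    less fit) and genetic drift, in the classic Wright-Fisher model."""
    random.seed(seed)
    p = p0
    for _ in range(generations):
        w = 1 + p * s                      # mean fitness of the population
        p_sel = p * (1 + s) / w            # frequency after selection acts
        # drift: resample the 2N allele copies of the next generation
        copies = sum(random.random() < p_sel for _ in range(2 * pop_size))
        p = copies / (2 * pop_size)
        if p == 0.0:                       # variant lost for good
            break
    return p

# A rare variant with even a modest fitness cost is rapidly driven out
print(wright_fisher())
```

Setting `s = 0` turns off selection and leaves pure drift, under which a variant's frequency simply wanders – which is the flip side of the point above: a variant that has managed to become common is usually one with no effect for selection to act on.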

So, how can these rare variants be found? Well, advances in sequencing technologies now make it possible to sequence the entire genetic code of a person or determine the entire sequence of a specific gene or genes across large numbers of people. This approach will pick up all the genetic differences, whether they are rare or common. This is what researchers from the National Institutes of Health and from Helsinki have done in a new study that led to the identification of a mutation in the Finnish population that apparently affects impulsivity.

They started with the hypothesis that this trait might be affected by variation in genes involved in the synthesis or signalling pathways of the neuromodulators dopamine and serotonin. These molecules act in the brain to alter the responsiveness of neurons to other signals – they set the tone, the internal context that helps determine how the organism will respond to various stimuli at any given moment. Differences in these pathways may also explain why different people will respond differently to the same stimulus (like that guy spilling your pint). There is a good deal of pharmacological evidence implicating these pathways in mood and temperament, as well as some prior genetic evidence for a couple of specific genes.

To look for variation specifically affecting impulsivity, the researchers sequenced fourteen genes involved in the dopamine and serotonin pathways in a sample of the most impulsive people they could find – prisoners who had been convicted of violent, spontaneous crimes. All of these subjects had one of several psychiatric diagnoses that specifically include impulsive behaviour as a core symptom: borderline personality disorder, antisocial personality disorder or intermittent explosive disorder.

The scientists found one mutation that had never been seen in any other population – in the gene HTR2B, which encodes a receptor for serotonin. The mutation completely abolishes the production of the protein, so that people who carry one copy of this mutant version of the gene have only half the normal amount of the receptor protein. The mutant version was found to be greatly over-represented (7.5% frequency) among a set of 228 violently impulsive subjects, compared to 295 controls from the general population (1.2%). Among family members of the violent offenders who carried the mutation there was also an increased rate of the psychiatric disorders listed above, specifically in those relatives who also inherited the mutation.
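The reported frequencies imply a large effect size. A quick odds-ratio calculation from the two carrier frequencies (treating 7.5% and 1.2% as carrier proportions – an assumption for illustration) gives a sense of the over-representation:

```python
# Odds ratio implied by the reported carrier frequencies:
# 7.5% among violent impulsive subjects vs 1.2% among controls.
# Treating these as carrier proportions is an assumption made
# here for illustration.
def odds_ratio(p_case, p_ctrl):
    return (p_case / (1 - p_case)) / (p_ctrl / (1 - p_ctrl))

or_est = odds_ratio(0.075, 0.012)  # roughly a six- to seven-fold increase
```

An odds ratio in this range is enormous by the standards of common-variant studies, where effects of 1.1- or 1.2-fold are typical – which is exactly the pattern expected for a rare, recent, functionally severe mutation.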

These findings therefore suggest that this mutation increases the risk of this kind of violent, impulsive behaviour. It must only be one factor, however, as most of the 1% in the Finnish population who carry it are not violent criminals. Male sex and alcohol abuse are two other likely risk factors. Almost all of the violent impulsive cases had committed crimes under the influence of alcohol, mostly unpremeditated “disproportionate reactions to minor irritations”. (Note the difference with psychopaths, who show much more cold-blooded and goal-directed violence). Two-thirds had also attempted suicide at least once, with an average of over three attempts.

So, does this mutation really affect the personality trait of impulsivity specifically, or is that just one component of a wider and more severe phenotype? The authors did look for effects on cognitive measures across a large Finnish twin sample, identifying significant effects on working memory in males, but do not report a test of association with impulsivity as a trait in this sample. We shall therefore have to wait to see if that more general association holds.

Their case is supported by observations in mice which carry mutations in the same gene – mice with both copies of this gene mutated score higher on a range of tests used to measure impulsivity (yes, mice can be more or less impulsive). Also, the protein encoded by the HTR2B gene, the serotonin receptor 5-HT2B, is the target for the mood-altering drug ecstasy (3,4-methylene-dioxymethamphetamine, MDMA). When this drug binds the 5-HT2B receptor it induces serotonin release in the brain and a subsequent chain of events including dopamine release in the reward area of the brain.

These data naturally lead to the idea that the mutation found in this study has its effect by altering the amount of this receptor protein in the adult brain, thereby altering the tone of serotonin signalling. There is an alternative hypothesis, however, which is that the brain develops differently due to this mutation. There is good reason to think this may be the case as it is known that serotonin plays important roles in brain wiring at early stages of neural development. More on that possibility in a later post.

Whether the mechanism is acute or developmental, these findings emphasise the importance of rare variants – which may occur only in one population, in one kindred or family, or even in a single individual – in determining an individual’s phenotype.

Verweij KJ, Zietsch BP, Medland SE, Gordon SD, Benyamin B, Nyholt DR, McEvoy BP, Sullivan PF, Heath AC, Madden PA, Henders AK, Montgomery GW, Martin NG, & Wray NR (2010). A genome-wide association study of Cloninger’s temperament scales: implications for the evolutionary genetics of personality. Biological psychology, 85 (2), 306-17 PMID: 20691247

Bevilacqua L, Doly S, Kaprio J, Yuan Q, Tikkanen R, Paunio T, Zhou Z, Wedenoja J, Maroteaux L, Diaz S, Belmer A, Hodgkinson CA, Dell’osso L, Suvisaari J, Coccaro E, Rose RJ, Peltonen L, Virkkunen M, & Goldman D (2010). A population-specific HTR2B stop codon predisposes to severe impulsivity. Nature, 468 (7327), 1061-6 PMID: 21179162
