Matt McIntosh @GNXP Blogview

Our adaptive immune system (thought to have evolved around the time of the earliest jawed vertebrates) functions by recognizing things in our bodies that aren’t us and attacking them, which is why transplants and grafts of tissues that are different from our own tend to get rejected by our bodies. But this poses an interesting problem for the evolution of placental mammals (first pointed out by Peter Medawar in 1953): The fetus is genetically different from the mother, so before she can start carrying her progeny around inside her for long stretches of time there would have to be some mechanism in place to prevent her immune system from going into attack mode on it.

There are a few different ways this could plausibly be accomplished, but the one that evolution actually seems to have hit on is pretty neat, I think. One way we know it doesn’t happen is by the mother somehow recognizing that the fetus carries half her genes: if that were the mechanism, IVF blastocysts implanted into surrogate mothers would spontaneously abort. So whatever is going on here is a “kin-blind” adaptation.

A significant chunk of our DNA had its origins as retroviral DNA. Most of these sequences are now inactive, but a tiny portion still appear to code for proteins. It’s been found in mice, sheep and humans (and presumably generalizes to all placental mammals) that a particular kind of endogenous retrovirus is highly expressed in the outermost layer of the blastocyst (see e.g. Venables et al. 1995 for the human example). Furthermore, when you inhibit the expression of these genes the result is uniform spontaneous abortion immediately following implantation (Dunlap et al. 2006).

Most retroviruses are immunosuppressive, the most infamous example being HIV. Connecting the dots, it’s quite plausible that these particular ancient retroviruses have been recruited into the mammalian genome and serve as local immunosuppressors in the uterus during development. In fact, we already know that syncytin, a protein crucial in placenta formation, is the product of a retroviral gene (Knerr et al. 2004), so there’s nothing at all far-fetched about this. (Indeed, talking about these genes as if they were viruses just clouds the issue: The fact that they’re now propagated in exactly the same way as the rest of your nuclear genome means that they’re just as much your genes as any other bit of your DNA.)

The idea that viruses played a crucial role in the evolution of placental mammals is pretty nifty, but this is just the best-investigated case and there’s circumstantial evidence suggesting that retroviruses have been involved in other major evolutionary innovations too. For instance, it turns out that eukaryotic DNA polymerases bear a closer structural resemblance to viral DNA polymerases than they do to those of eubacteria, suggesting that perhaps the genes of DNA viruses were recruited in the evolution of eukaryotic cellular machinery (Villarreal & Filippis 2000).

But around here we’re more interested in human evolution, and there’s some suggestive data on that score: It turns out that human endogenous retroviruses are expressed in a wide range of tissues during development (Andersson et al. 2002; Muir, Lever & Moffett 2004); that retroviral promoters, enhancers & silencers inserted near genes can alter gene expression (Thornburg, Gotea & Makalowski 2006; Dunn, Medstrand & Mager 2003; Ting et al. 1992); and that sequence & phylogenetic analysis suggests they may be responsible for a significant portion of large-scale deletions and insertions in the genome (Hughes & Coffin 2001). We’re used to thinking of predators and parasites as indirect drivers of evolutionary change in organisms, but when the parasites can obtain direct access to their host’s DNA this gets taken to a whole deeper level, one that’s only recently begun to be appreciated.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Disease 

RPM’s slamming of some silly coverage of the C-value enigma got me thinking about the problem of why we see the sorts of variation we do in the amount of non-coding DNA between species. People are right to heckle the questionable assumption that these differences in ncDNA have anything to do with the evolution of phenotypic complexity (though probably a small fraction do), but I think it might still have an interesting functional tale to tell. I’m probably not the first to think of this, but the idea is that variations in quantity of ncDNA are not functional for the organism in themselves but rather the waste product of a particular kind of functional change: gene duplication.

Recall that eukaryotic genomes are regularly bedevilled by selfish transposons. These are rogue genetic elements with a vested interest in creating duplication events, and the basic idea is that every once in a while one of them will succeed wildly at it and in the process end up dragging a whole gene along for the ride (maybe several times). Most of the time this will be bad, but occasionally it’ll be good, and sometimes it’ll be nearly-neutral and you’ll see functional divergence at the copied locus after the initial duplication event. In the cases where a duplicated gene confers a selective benefit, the newly formed transpositional elements hitchhike along on the newly selected gene’s coattails.

The upshot of this is that we should expect cases of adaptive evolution via gene duplication to frequently be accompanied by increases in the amount of transpositional cruft in the genome of the species. This would also neatly account for much of the ncDNA variation between species, since gene duplication seems to play an important role in the emergence of species-specific traits. If this idea is correct, the amount of ncDNA should correlate more highly with how much adaptive gene duplication a lineage has undergone than with phenotypic complexity per se.

This theory should be pretty easy to test: Look at cases of adaptive gene duplication that have happened relatively recently (geologically speaking) and compare the LINEs and such around these loci with those close to the presumed “parent” locus. The further back in time you go the harder it will be to do this comparison due to drift wiping out the traces, but in the cases that are comparable they should have a very similar pattern of nonfunctional repeats. If I have this right. (EDIT: Duh. This isn’t a good test, since you’d probably see the same thing under any sort of duplication. Need to think of something else. Maybe compare lineages of recently duplicated genes: If gene B is a “recent” duplication of gene A, and gene Y is a “recent” duplication of gene X, but genes A and X diverged an extremely long time ago, then the two duplications were probably caused by different retrotransposons and so the LINEs around A and B should tend to be highly similar to each other but very different than those around X and Y, and vice-versa. You’d probably have to compare a bunch of different gene lineages to get a statistically significant result, though, and I don’t know how easy it would be to find enough good candidates.)
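
To make that comparison concrete, here’s a rough sketch in Python of the kind of test I have in mind. The sequences and the similarity measure are stand-ins for illustration; a real analysis would pull annotated LINEs from something like RepeatMasker output and use proper alignment scores:

```python
from difflib import SequenceMatcher
from itertools import combinations

def similarity(a: str, b: str) -> float:
    """Crude sequence similarity in [0, 1]; real work would use
    proper alignment scores rather than difflib."""
    return SequenceMatcher(None, a, b).ratio()

# Hypothetical LINE sequences flanking each gene. B is a recent copy of A,
# Y is a recent copy of X, and the A/X lineages diverged long ago.
flanking_lines = {
    "A": "ACGTTGCA" * 10,
    "B": "ACGTTGCA" * 9 + "ACGTTGCC",  # nearly identical to A's flank
    "X": "TTAGGCAT" * 10,
    "Y": "TTAGGCAT" * 9 + "TTAGGCAA",  # nearly identical to X's flank
}

within_pairs = [("A", "B"), ("X", "Y")]
between_pairs = [p for p in combinations(flanking_lines, 2)
                 if p not in within_pairs]

within = [similarity(flanking_lines[a], flanking_lines[b])
          for a, b in within_pairs]
between = [similarity(flanking_lines[a], flanking_lines[b])
           for a, b in between_pairs]

# The prediction: high similarity within a duplication lineage,
# much lower similarity between lineages.
print(f"mean within-lineage similarity:  {sum(within) / len(within):.2f}")
print(f"mean between-lineage similarity: {sum(between) / len(between):.2f}")
```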

Has anyone actually looked at anything like this? Does this idea hang together? How else could we test it?

Update: Looks like another beautiful hypothesis slain by an ugly fact. I’ll just copy-paste what I said in the comments:

Having looked into it, this doesn’t work the way I thought it would. I knew that LINEs sometimes end up dragging some of the host’s genetic material along in their replications, but now I know that the way this happens is that sometimes the reverse-transcription machinery grabs onto host mRNA that’s floating around and splices it in. So what’s being inserted is automatically a pseudogene since the mRNA has already been processed (i.e. there’s no promoter attached to it). For this idea to work it would need to be an active gene. Rats.

Mind you, DNA transposons could still easily be a major source of gene duplication, since they skip the RNA middleman. But since they’re only a tiny fraction of ncDNA, that means it probably has nothing to do with the C-value enigma.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Speciation 

To my mind, the most compelling evidence in favor of Flynn Effect gains being real is physiological: it’s well known that there have been increases in height concurrent with increases in intelligence in all the countries where the FE has been operative. What’s less well known is that there have also been recorded increases in cranial capacity:

Standard metric data from 885 crania were used to document the changes from 1850 to 1975. Data from 19th century crania were primarily from anatomical collections, and 20th century data were available from the forensic anthropology data bank. Canonical correlation was used to obtain a linear function of cranial variables that correlates maximally with year of birth. Canonical correlations of year of birth with the linear function of cranial measurements ranged from 0.55 to 0.71, demonstrating that cranial morphology is strongly dependent on year of birth. During the 125 years under consideration, cranial vaults have become markedly higher, somewhat narrower, with narrower faces.

. . . and in brain size:

7397 post-mortem records have been studied. These comprehend all 20- to 50-year old men and women who had been autopsied in The London Hospital since 1907. Fresh brain weight, body weight and height were abstracted and analysed statistically according to sex and to year of birth, any person with a cerebral or skeletal abnormality having been excluded. Fresh brain weight in men increased gradually by an average of 0.66 g per year from a mean of 1372 g for those born in 1860 to 1424 g in 1940, a total of 52 g. The weight of the female brain increased by 0.28 g per year from 1242 g to 1265 g over the same period.

Given an increase in brain size and the correlation between IQ and brain size (0.4), it’d be pretty remarkable if there wasn’t any corresponding increase in intelligence. Also, in support of Lynn’s nutrition hypothesis, there have been correlations found in developed countries between IQ and presence of certain micronutrients:

The relationship between nutritional status and intellectual capacity in 6-year-old children was investigated in 83 subjects of medium-high socio-economic status, without any apparent risk of malnutrition and normal or high intellectual capacity. Nutritional status was evaluated by measuring food consumption, anthropometrical measurements and biochemical indicators (iron status, red cell folate and total plasma homocysteine concentration (tHcy)). IQ was evaluated using the WPPSI test. The relationship between nutritional status and IQ was investigated by multiple linear regression analysis adjusting for socio-demographic variables and sex. There was a significant and positive relationship between iron intake and both total and non-verbal IQ. This was also the case for folate intake and both total and verbal IQ. The fact that these observations were made in children from a developed country, in which their energy and education requirements are met, suggests that their cognitive development may benefit from specific preventive nutritional interventions with these nutrients.

Also, there have been a few studies showing that FE gains tend to be disproportionately located in the left half of the curve rather than the right, which is what the nutrition theory would predict, given that less bright people tend to be poorer and thus benefit more from nutritive improvements than the wealthier (who tend to be smarter).

Finally, from a psychometric angle, there’s this paper (though I’ve only read the abstract) which found that the amount of covariance on test items explained by g has been decreasing as the scores have been increasing. This is what you’d expect if the biological fundamentals underlying g had been improving among the lower end of the range: when you decrease the variance of one component, the share of item covariance attributable to other components necessarily increases.
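
A toy simulation (my own illustration, not anything from the paper) shows the logic: make each test item a sum of a common factor g and an item-specific factor, then shrink the variance of g and watch the first factor’s share of the item covariance drop:

```python
import numpy as np

rng = np.random.default_rng(0)

def first_factor_share(g_sd: float, n: int = 5000, items: int = 10) -> float:
    """Share of total variance captured by the first principal
    component of a battery of items, each = loading * g + specific noise."""
    g = rng.normal(0, g_sd, size=n)
    loadings = np.linspace(0.5, 1.0, items)          # arbitrary item loadings
    specifics = rng.normal(0, 1.0, size=(n, items))  # item-specific factors
    scores = g[:, None] * loadings + specifics
    eigvals = np.linalg.eigvalsh(np.cov(scores, rowvar=False))
    return eigvals[-1] / eigvals.sum()

# When the variance of g shrinks (say, because the low end of the
# distribution improves), the first factor explains less of the item
# covariance -- the psychometric signature described above.
print(f"high g variance: {first_factor_share(g_sd=1.0):.2f}")
print(f"low g variance:  {first_factor_share(g_sd=0.5):.2f}")
```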

I think any satisfactory theory of the Flynn Effect has to also take these pieces of evidence into account and unify the whole picture, either explaining them or explaining them away. The only theory on the table that I think does this plausibly is the nutrition-centric hypothesis, though alternative takes are of course welcome.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Flynn Effect 

(Achtung: I’m mostly thinking aloud here and haven’t extensively explored these ideas. Just something I thought I’d throw out there.)

Stretches of DNA can get themselves replicated in one of two ways: They can help an organismal vehicle reproduce more effectively, or they can hijack an organism’s transcriptional and replicative machinery to get themselves copied. In the latter case, if they travel autonomously they’re called viruses and if they quietly hitch a ride in the DNA of some host organism they’re called selfish DNA. Replicating inert pieces of selfish DNA takes up resources, but the cost of the marginal copy is low enough that quite a few can accumulate — over 40% of the human genome, for instance, is estimated to be made up of retrotransposons.

But as long as the cost is non-zero, there’ll still be a kind of fitness optimum for any bit of selfish DNA — one that was too adept at getting copies of itself spliced into the genome would eventually get to the point where the host organism would start taking a fitness hit, and one that wasn’t adept enough would get outcompeted by ones that did a better job.

How can selfish DNA elements be said to compete, if they’re neutral by assumption and hence will just be blown around randomly by drift? While it’s true that at any single locus the dynamics of selfish DNA elements will follow straightforward drift dynamics, retrotranspositional elements are special because identical copies of them can exist at many loci. To quantify this, at any single locus a string of selfish DNA’s odds of sweeping are 1/N, and that goes for all its competitors too. But its expected score in the multi-locus game will be k*(1/N), where k is the number of loci where copies of it are present. It now becomes a game of “whoever has the most loci wins”: if you’re a retrotransposon with 500 identical copies of yourself on a genome, you’ve got an absolute advantage in the replication game over a different one that only has 100 copies of itself.
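
If you want to see that arithmetic in action, here’s a minimal Wright-Fisher-style sketch (all parameters invented for illustration): every locus drifts independently with the same 1/N odds of fixing, so the element with more copies racks up proportionally more wins:

```python
import numpy as np

rng = np.random.default_rng(42)

def loci_fixed(copies: int, pop_size: int) -> int:
    """Count how many of an element's loci drift to fixation, treating
    each locus as an independent neutral allele starting at frequency 1/N."""
    fixed = 0
    for _ in range(copies):
        p = 1.0 / pop_size
        while 0.0 < p < 1.0:
            # One generation of Wright-Fisher binomial resampling.
            p = rng.binomial(pop_size, p) / pop_size
        fixed += int(p == 1.0)
    return fixed

N = 100
# Each locus has the same 1/N odds of sweeping, but expected total wins
# are k/N, so the element with more copies wins proportionally more often.
print(f"500-copy element fixed at ~{loci_fixed(500, N)} loci (expect ~{500 / N:.0f})")
print(f"100-copy element fixed at ~{loci_fixed(100, N)} loci (expect ~{100 / N:.0f})")
```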

What retrotransposons are competing for in this case is the total excess carrying capacity of a population’s genome — i.e. the finite amount of room to expand your numbers before your hosts start getting hurt — and this will vary from species to species. Note that all other things equal, a bit of selfish DNA that gets in early in the game will tend to profit more than a similar one that comes in late. But also note that all other things may not be equal: if population size varies over the course of the game, then the expected payoff for a new piece of selfish DNA that enters the game will be greater when N is lower, since its odds of sweeping to fixation are better.

But what happens when the carrying capacity has been filled? Since the dynamics at all the loci will be governed by mutation and drift, the equilibrium in this game is that nobody really wins because in the long run very few (none?) of the loci will be identical by state.[1] But in the long run we are all dead, and mutation+drift acts slowly enough that it would take a very long time to reach this equilibrium. A string of selfish DNA that controlled many loci would “outlast” one that only controlled a few, since even if another sweep occurred by chance at one of its loci it’d still be dominating a whole lot more.

In the meantime, the circumstances could change: Mutations at other loci or changes in the organism’s environment could increase the carrying capacity of the genome (creating more room for expansion), or conversely the organism could evolve some mechanism for recognizing and snipping out (or suppressing replication of) the most common lineages of selfish DNA, clearing out room for an uncommon variant to take up the newly-freed space.[2] In this latter case we could hypothetically see a discontinuous sort of negative frequency-dependent selection, with variants that were “too successful” getting periodically wiped out.

I just thought this up as a way of demonstrating how a form of natural selection among replicators can occur even in conditions where the loci involved are all effectively “neutral” from the organismal point of view. I have no idea whether something like this has actually happened, but it seems quite possible in principle, unless I’ve gotten something terribly wrong somewhere.

[1] There is a way to escape this endgame, however: A bit of selfish DNA could mutate into something useful for its organismal vehicle, thereby biasing its own odds of being replicated. But in that case it’s no longer playing the drift game and can be modeled by the standard single-locus selective equations that we all know and love.

[2] Remember that the assumption here is that the marginal locus is neutral; but if the same string of selfish DNA were present at a large number of loci, the total fitness hit to the organism could in principle make it worthwhile to evolve some mechanisms to keep the parasitic DNA under control.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Over at his other blog Razib talks about the selective pressures that shaped the modern distribution of skin color. One thing he didn’t emphasize but which I found illustrative is that loss of function plays a prominent role in this story as a special sort of adaptation: sometimes losing something can be good. But this raises a minor problem for interpreting some kinds of tests of selection: When you run, say, the McDonald-Kreitman test on MC1R in Europeans, the verdict returned is “neutral”. But at the same time, the loss of function that turned this locus nearly-neutral is an adaptation!

The fact that a locus has faded into the nearly-neutral background isn’t evidence against adaptation — quite the opposite, in fact. Nothing is metabolically costless, so if the cost is significant and the benefit no longer exists then there’ll be selection for loss of function. Relaxation of constraint is just one way of shifting the fitness peak, and natural selection will respond to that as it always does. This is a convenient example of the general principle that you can’t just take the outputs of canned statistical tests at face value: They require interpretation in the light of theory and history.
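
For readers who haven’t seen it, the McDonald-Kreitman test itself is just a 2x2 contingency table comparing nonsynonymous and synonymous changes between and within species. Here’s a bare-bones sketch with invented counts (not real MC1R data):

```python
from scipy.stats import fisher_exact

# McDonald-Kreitman 2x2 table: nonsynonymous (N) vs. synonymous (S) changes,
# split into fixed differences between species and polymorphisms within one.
# These counts are invented for illustration, not real MC1R data.
Dn, Ds = 7, 17   # fixed differences
Pn, Ps = 2, 42   # polymorphisms

odds_ratio, p_value = fisher_exact([[Dn, Ds], [Pn, Ps]])

# Neutrality index: NI ~ 1 looks neutral, NI < 1 suggests adaptive
# fixation, NI > 1 suggests segregating slightly deleterious variants.
ni = (Pn / Ps) / (Dn / Ds)
print(f"NI = {ni:.2f}, Fisher's exact p = {p_value:.3f}")
```

Whatever the table says, it describes the current selective regime at the locus, not the history of how the locus got there: which is exactly the point above.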

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

There are four kinds of ions involved in the intricate feedback system described in Part I: calcium (Ca2+), potassium (K+), sodium (Na+), and chloride (Cl-). They all serve different functions, but for now all you need to know is that the first three carry a positive charge while the last is negative, and that “at rest” the interior of the neuron carries a negative charge relative to its surrounding environment. With me so far? Onward.

Each type of ion exists in different concentrations within the neuron when it’s “at rest”, and this concentration is governed by how permeable the cell membrane is under conditions of inactivity. In this state, K+ flows pretty freely in and out of the membrane and is close to equilibrium, but the others are all restricted (and thus out of chemical equilibrium) to varying degrees: Na+, Cl-, and Ca2+ are all banging at the gates to get inside. Also, each type of ion channel permits flow at different rates, K+ being the slowest of the four. I mention this now because it becomes important later on.

So let’s say that while a neuron is just minding its own business, some meddling neuroscientist comes along and injects a bunch of Na+ ions into it. Since sodium ions carry a positive charge, this would make the voltage across the cell membrane tend to become less negative—i.e. it would move the neuron closer to electrical equilibrium with its environment (known as “depolarization”). If it had been Cl- ions instead, the opposite would occur: the voltage would become more negative, moving the system further from equilibrium (known as “hyperpolarization”). Or suppose instead that some K+ ions were sucked out; hyperpolarization would tend to occur in this case too. But in each case, the excess ions would gradually be pumped out by the cell’s regulatory system (or more ions allowed to flow in, in the K+ example), bringing it back to the initial set point.

This is exactly what happens in normal neural activity, except instead of a meddling neuroscientist it’s mediated by ion channels. When the synaptic endpoints of the neuron’s dendrites (which I’ll talk more about in a later post) receive a particular neurotransmitter, their ion channels for Na+ or Cl- will open depending on which neurotransmitter it is. We’ll get into neurotransmitters more later, but for now I’ll just introduce two of them: GABA and glutamate.

To simplify it in very crude terms for now, GABA is the signal for “open some Cl- channels” and glutamate is the signal for “open some Na+ channels”, which will cause an influx of these respective ions into the cell, lowering or raising the voltage accordingly. Glutamatergic transmissions from other neurons are considered “excitatory modulation” because they tend to encourage the neuron to fire, while GABAergic transmissions are considered “inhibitory modulation” because they tend to discourage it.

Whether or not a neuron “fires” depends on whether the summed ionic charges inside it cause the neuron to depolarize past a certain threshold: in the short run, if there’s an equal influx of Cl- ions and Na+ ions then the effects will be a wash—the overall voltage doesn’t change and nothing happens. Once the dust clears, the sodium and chloride ions get pumped out and the cell goes back to waiting for the next round of stimulus.

But if the influx of Na+ outweighs the influx of Cl- by a certain threshold (i.e. the excitatory stimulus outweighs the inhibitory stimulus), it sets off a chain reaction which I’ll now describe. Remember that I said in Part I that some (but not all) of the ion channels in the cell were voltage-sensitive (or “voltage-gated“); now two of these become important.

First, there are voltage-gated Na+ channels that start to open up when the cell depolarizes, allowing more sodium ions in and depolarizing it further in a wave of positive feedback that starts in the dendritic region and spreads along the cell body toward the axon. (When Cl- ions are predominant, they act to cut this process off at its starting point by overpowering the positive charge and keeping the downstream sodium gates from opening, halting the chain reaction before it really takes off.) Once the cell depolarizes past a certain point, these gates close again and the Na+ stops flowing in.

As the influx of Na+ spreads through the cell, it triggers the opening of a special set of K+ channels that are normally closed, but open up when the cell depolarizes past a certain threshold, allowing K+ ions to flow out of the neuron and causing the cell’s voltage to start going negative again. As I mentioned a few paragraphs ago, the action of these K+ gates is slower than those for Na+, so you end up with a second wave of re-polarization lagging behind the initial wave of depolarization.

Once the depolarizing wave reaches the axon, the real action begins. Most axons in mammals are coated with a sheath of myelin (basically a special extra-thick layer of phospholipids), which acts as an insulator preventing any ion transfer across the cell membrane and makes the axon the most highly conductive region of the neuron. If it isn’t obvious why, you can think of an unmyelinated axon as a leaky pipe: when a pipe is full of holes, it takes a lot of force to push water all the way through it because the water keeps diffusing outward. Myelin effectively “plugs the holes”, which means it requires less force to push “water” (charge) through the “pipe” (axon) at a faster rate. So myelin speeds up transmission along the length of the axon by easing propagation of electrical charge.[1]

However, there are still only so many ions, so in order to keep the charge from diminishing as it travels the length of the axon, fresh influxes of Na+ are necessary. For this purpose, there are periodic breaks in the myelin sheath about a micron wide, known as nodes of Ranvier.[2] These nodes are each loaded with a whole bunch of hair-trigger Na+ channels that will open up under conditions of depolarization, and their combined effect is to make the current move along the axon in a series of fast hops, basically acting as voltage repeaters.

The K+ channels continue to play catch-up, bringing the charge back to normal negative polarity in the wake of the wave of positive charge. Eventually the ion pumps in the cell will clean everything up and bring all the ion levels back to baseline, but right now the cell is busy just trying to normalize its voltage. This entire process is called an “action potential”, or simply a “spike” because of the spike that appears on a voltmeter monitoring the neuron during an action potential. One important feature of an action potential is that “a spike is a spike is a spike”—the amplitude of the wave is always the same for every single action potential. This is the only part of the neuron that can be considered “digital”: either it fires or it doesn’t, with no grey area. (What can change is the number and frequency of spikes, but that’s for later posts.)
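
If you want to play with the all-or-none behavior yourself, here’s a toy leaky integrate-and-fire model in Python. It’s a drastic simplification of the channel story above (no real Na+/K+ kinetics, just a leak, a threshold and a reset, with made-up parameters), but it captures the “a spike is a spike” property:

```python
import numpy as np

# Toy leaky integrate-and-fire neuron. Units are loose (input drive is in
# mV per ms for simplicity); the point is the all-or-none threshold.
V_REST, V_THRESH, V_SPIKE, V_RESET = -70.0, -55.0, 40.0, -75.0  # mV
TAU = 10.0  # membrane time constant, ms
DT = 0.1    # time step, ms

def simulate(input_drive: np.ndarray) -> np.ndarray:
    v = V_REST
    trace = []
    for drive in input_drive:
        # The leak pulls voltage back toward rest; input pushes it around.
        v += DT * (-(v - V_REST) / TAU + drive)
        if v >= V_THRESH:
            trace.append(V_SPIKE)  # stereotyped spike: "a spike is a spike"
            v = V_RESET            # reset, standing in for the K+ overshoot
        else:
            trace.append(v)
    return np.array(trace)

steps = 1000  # 100 ms of simulated time
weak = simulate(np.full(steps, 1.0))    # sub-threshold drive: never fires
strong = simulate(np.full(steps, 2.0))  # supra-threshold drive: fires regularly
print(f"spikes with weak input:   {int(np.sum(weak == V_SPIKE))}")
print(f"spikes with strong input: {int(np.sum(strong == V_SPIKE))}")
```

Note that what varies between the two runs is whether and how often spikes occur, never their amplitude, which is the “digital” part of the story.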

Once the wave of positive charge reaches the endpoints of the axons (“axon terminals”), sodium’s job is done and voltage-gated calcium channels are waiting to pick up the ball. When the voltage at the axon terminal goes positive, there’s a flood of Ca2+ into the axon terminal, which triggers . . . well, we’ll get into that later after we cover synapses in the next post.

Addendum: There are, of course, many more ways of modulating neural activity than I’ve presented here, but when writing an introduction to the workings of any complex system you have to balance the need for thoroughness against the need to avoid overwhelming the reader with too much at once. For instance, there’s almost no such thing as a neurotransmitter or ion that just does one thing in the brain, but the goal for now is to get the gist across so everyone has a working mental model of neural processes and then build up the complexities from there.

Notes:

[1] Those of you with an interest in Ashkenazi intelligence & diseases will rightly perk up here. Some of the recessive diseases they’re prone to, like Niemann-Pick, screw up the myelin sheath in ways that likely result in faster signal propagation in a heterozygote.

[2] How these guys got a piece of neuroanatomy named after them, I’ll never know.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Neuroscience 

Go read Hawks on Nisbet & Mooney:

This kind of cynical strategy is the province of used car salesmen and other charlatans. And it’s easily exposed by any clever critic who happens to be watching . . .

My point isn’t that these critics are right, but that such criticisms pretty much write themselves! A scientist trying to “frame” in this way is going to end up discredited unless they retreat to the facts anyway. This is, after all, why scientists are typically so cautious in print — because they work in a field where bad arguments are quickly torn apart by their critics. Why in the world would anyone think politics would be any easier?

This is pretty much right, and I just want to add that this is especially bad advice to give to scientists, because scientists wouldn’t be scientists if they were really good salespeople. Spinning is not their comparative advantage, and “fight the enemy on his own turf” is awful tactical advice. Scientists owe whatever respect and deference they’re given to the fact that they’re perceived as being interested primarily in the truth: their reputation for earnestness and lack of guile is a big part of their cred. The best way to get people to regard you as honest is to really be naively honest.

People may be dumb in a lot of ways, but they generally know how to spot when someone’s trying to sell them something, and telling scientists that they should behave more like salespeople will result in them being regarded in much the same way—and they are never going to be better salespeople than professional demagogues. I can think of no better way to erode whatever benefit of the doubt that scientists currently enjoy in our culture. If scientists try to play the political game, they’re going to lose. Better to try to stay above the fray than get dragged in and trampled for sure.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Culture, Politics 

One of the awkward things about GNXP is that we’ve got an audience that’s very mixed in terms of knowledge, so it’s hard to know just how much background information you should assume when writing posts. In the archives we’ve got “basic concepts” posts on population genetics and psychometrics that hopefully help lay-readers follow the more technical posts on these subjects, but so far we don’t have too much “neuroscience 101” stuff. With that in mind, this is going to be the first in a series of posts that try to give the uninitiated reader an adequate background to follow the more advanced posts by Amnestic.

Most people are familiar with terms like “neuron”, “synapse”, and “neurotransmitter” and have a vague notion that the brain operates with electrical impulses and chemical messenger molecules like serotonin and dopamine, but don’t have a clear idea of how these things all fit together—much the same way that I know that there are things called an “alternator” and “transmission” under the hood of a car but don’t really know how these things work together to make the car go. But the basic idea of how neurons work is pretty easy to understand.

Neurons are fundamentally devices for transmitting and storing small amounts of information in an analog format. Most of you have probably seen diagrams of what neurons look like: a fat cell body called the “soma” with a lot of branches radiating from it called “dendrites”, and a big fat tube extending from it called the “axon” which also has a bunch of branches sprouting from it. The dendrites are the input sites, the axon is the output, and synapses are the sites where the endpoints of dendrites and axons meet. A synapse is where the actual communication between two neurons occurs, but before we get to that it’d be better to understand what decides whether they even communicate at all.

Neurons, like all animal cells, have a membrane “skin” composed of two layers of phospholipids (fatty acids attached to a phosphate group). This is what keeps the cell’s insides inside and everything else outside, and it works in a rather ingenious way: the phospholipids are polar molecules which at one end are attracted to fats and at the other end attracted to water. The reason why oil and water don’t mix is the same reason your cells stay cohesive: the lipid molecules cling together and form a collective barrier that most molecules won’t pass through. In the case of a cell membrane, instead of just forming round globules they form a uniform wall.

Of course this membrane isn’t totally impermeable, since a cell needs to allow certain chemicals to enter and exit, just as complex living organisms do. There are a variety of different doors embedded within the wall—specialized channels that permit this or that kind of atom or molecule to bypass the membrane based on their electrical charges and/or shape, like bouncers at a nightclub. This selective permeability is central to cell function in general, and in neurons the most important purpose it serves is to control the difference in electric charge between the inside and outside of the cell via different concentrations of ions.

If you recall your chemistry classes, ions are atoms that have had one or more electrons either removed or added to their outermost electron shell, giving them either a positive or negative charge. Ions are the basis of bioelectricity—the bridge between biochemistry and electricity. When people talk about electrical impulses in your nervous system, they’re talking about changes in the proportions of various ions.

In accordance with the second law of thermodynamics, ions (like everything else) tend to spread out evenly unless impeded. That’s where the cell membrane comes in: by selectively channeling ions in and out of the cell while maintaining a barrier, it can control the difference in concentrations of ions inside and outside the cell. Because of the electric charge of ions, this creates a difference in charge, AKA “potential [electrical] difference”—in a word, voltage. (Voltage is defined as the difference in electrical potential between two points, here the two sides of a resistive barrier.) So cells in general and neurons in particular are always operating out of equilibrium with their environments, chemically and electrically.

The neuron has a negative feedback system that works to maintain a voltage “set point”, just as your thermostat works to keep your house’s atmosphere at a temperature set point. Just what that set point is will vary from neuron to neuron for reasons I’ll explain in later posts; for now all you need to know is what role that system plays in neural activity—arguably, this feedback system is the foundation of all neural function.

Many of the ion channels embedded within the cell membrane are voltage-sensitive and will alter their shape if the voltage gets “too high” or “too low”, thereby closing or opening those channels to the free flow of particular kinds of ions (there are different channels for potassium ions, calcium ions, etc). You can probably see how this works now: when the voltage drops or rises beyond a certain point, some of the gates will open and permit ions to go the way they want to—toward equilibrium, which could be into or out of the cell depending on the charge of the ions and the voltage across the membrane. This will tend to bring the neuron back to its voltage set point.

Ohm’s law governs the relationship between current, voltage and resistance in a conductive medium: voltage equals current multiplied by resistance. In the cell, the impermeability of the membrane corresponds to resistance and ion flow corresponds to current. Holding voltage constant (the set point), changing the resistance (permeability) necessarily means a corresponding change in the current (ion flow). I highlight this because this simple relationship is a helpful way to summarise the whole process I’ve just described, and is easier to remember than the details about the biochemical specifics.
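
A quick back-of-the-envelope illustration of that bookkeeping, with arbitrary round numbers rather than measured values:

```python
# Ohm's law bookkeeping for the membrane: V = I * R. Holding V at the
# set point, halving resistance (doubling permeability) doubles current.
# Numbers here are arbitrary round figures, not measured values.
V_SET = -70e-3  # volts, a typical resting set point

for r_membrane in (200e6, 100e6, 50e6):  # ohms; lower = more permeable
    i = V_SET / r_membrane               # ionic current, amps
    print(f"R = {r_membrane / 1e6:5.0f} Mohm -> I = {i * 1e12:7.1f} pA")
```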

So now that we’ve got the basic mechanism down, next up I’ll discuss the exogenous causes of changes in voltage, which is where dendrites and axons come in.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Neuroscience 

There’s a problematic tendency, particularly prevalent when dealing with emotionally-charged subjects, for people with little or no understanding of a complex subject to handle numbers that come out of that subject in much the same way that Moses handled the Ten Commandments: They are handed down from higher intellectual powers, whose ways are mysterious to us but whose authority is without question. They are the outputs of a black box whose inner workings are completely opaque, but which is quite useful to those looking for a blunt object with which to bash their opponents over the head.

Examples are not hard to find: Creationists making reference to Haldane’s limit, global warming deniers talking about arctic ice cores, price control fetishists banging on about Card & Krueger, race-skeptics quoting Lewontin’s infamous 85/15 figure, etc etc etc. We’ve all seen it, many of us have probably even done it at some point, and it’s a stupid human trick that’s centuries old.

Good old Arty Schopenhauer knew what this one was about:

This is chiefly practicable in a dispute between scholars in the presence of the unlearned. If you have no argument ad rem, and none either ad hominem, you can make one ad auditores; that is to say, you can start some invalid objection, which, however, only an expert sees to be invalid. Now your opponent is an expert, but those who form your audience are not, and accordingly in their eyes he is defeated; particularly if the objection which you make places him in any ridiculous light. People are ready to laugh, and you have the laughers on your side. To show that your objection is an idle one, would require a long explanation on the part of your opponent, and a reference to the principles of the branch of knowledge in question, or to the elements of the matter which you are discussing; and people are not disposed to listen to it. For example, your opponent states that in the original formation of a mountain-range the granite and other elements in its composition were, by reason of their high temperature, in a fluid or molten state; that the temperature must have amounted to some 480 degrees Fahrenheit; and that when the mass took shape it was covered by the sea. You reply, by an argument ad auditores, that at that temperature – nay, indeed, long before it had been reached, namely, at 212 degrees Fahrenheit – the sea would have been boiled away, and spread through the air in the form of steam. At this the audience laughs. To refute the objection, your opponent would have to show that the boiling-point depends not only on the degree of warmth, but also on the atmospheric pressure; and that as soon as about half the sea-water had gone off in the shape of steam, this pressure would be so greatly increased that the rest of it would fail to boil even at a temperature of 480 degrees. He is debarred from giving this explanation, as it would require a treatise to demonstrate the matter to those who had no acquaintance with physics.

There’s always going to be a lot of assumptions underlying any specific truth-claim pertaining to a complex subject. Altering one or more of these assumptions will have an impact on the plausibility of the specific claim in question, and often it takes a trained eye to see what assumptions are being made, which are likely to be sound and which are highly questionable. But the unfortunately large intersection between the set of subjects people have strong feelings about and the set of subjects that take a non-trivial amount of education to adequately comprehend ensures that people are always going to end up arguing beyond their range of competence. So how do we deal with this combination of ignorance and importance?

I propose an informal rule for any sort of argument where the participants involved are not competent to evaluate specific truth-claims: All arguments conducted in a state of relative ignorance must be algebraic. I don’t mean speaking in math — I mean that such arguments should focus on the relations between variables rather than on what specific values to assign them. And if someone does plug specific values into the argument, they have to either A) be unobjectionable, i.e. everyone in the argument can agree that the values are reasonable, or B) be supported with at least a cursory explanation of how the values were produced (or a link or reference to such an explanation).

Anytime you see someone supporting their argument with specific numbers in a discussion with no hint at how they were arrived at, your bullshit detector should go off and you should demand an explanation of the method by which those numbers were produced. If upon being pressed your interlocutor cannot or will not adequately explain this (or provide a link or reference to someone else’s adequate explanation), an argumentative foul has been committed; those specific numbers and whatever parts of his argument require them may be disregarded and the offender’s credibility reduced as punishment. Call it the lamp post rule.

And if this ever becomes as commonly referenced as Godwin, remember, you read it here first.

(x-posted)

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Dear Henry Farrell,

Like many others of your ideological inclinations, you make much of the recent petition in favor of raising the minimum wage which was signed by over 650 economists. I will see you and raise you.

Careful where you point that thing, it’s loaded.

Regards,
Matt

(x-posted)

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science • Tags: Creationism 

DARPP-32 is a regulatory protein with a lot going on that should be of interest to GNXP readers. It acts as a master switch in the brain which regulates the activity of a variety of ion pumps, ion channels, neurotransmitter receptors, neuromodulators and transcription factors. (Paul Greengard won a richly deserved Nobel prize for doing most of the heavy lifting in understanding the many important brain functions it plays a role in.)

DARPP-32 is crucial in the formation and control of the information pathways that carry signals between the striatum and prefrontal cortex. This makes it extremely interesting for several reasons, first and foremost being that it plays a central role in working memory (probably influencing g), motivation, attention, and reward-based learning. Secondly, it appears to be at the nexus of the action of pretty much all classes of psychotropic drugs.

Of course something this important is going to cause a lot of havoc when it doesn’t work right, and as you’d expect it’s been implicated in a variety of disorders — most recently schizophrenia. (That list is getting mighty long.) There has long been speculation about the similarities between schizophrenia and psychotomimetics (hence the name), and DARPP-32 mediation seems to provide a concrete link between the two.

Another interesting thing is that the common allele of DARPP-32 which has been linked to schizophrenia is neither a necessary nor a sufficient condition for exhibiting schizophrenic pathology. Schizophrenia has also been linked to Borna virus, which is likewise neither sufficient nor necessary for schizophrenia to manifest. So what’s most likely going on here is that this allele has historically had some fitness advantage, probably due to a cognitive boost of some sort, but also makes the whole system it regulates more susceptible to damage by environmental insults such as pathogens that can hop the blood-brain barrier (of which Borna virus is one, but probably not the only one). It’s a good illustration of just how complex medical aetiology can get, and of course as Greg Cochran & Paul Ewald will tell you there are probably a lot of other unusual conditions that follow similar causal lines.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Aaron Haspel gives away the recipe for a productive* career in the social sciences.

* Measured the same way Alexandre Dumas got paid.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

(This is the latest in GNXP’s semi-regular “10 questions” feature; links to previous editions can be found along the sidebar or by searching the blog.)

The geneticist J.B.S. Haldane famously remarked that important theories went through four stages of acceptance: “i) this is worthless nonsense; ii) this is an interesting, but perverse, point of view; iii) this is true, but quite unimportant; iv) I always said so.” This process would be quite familiar to Charles Murray, a resident scholar at the American Enterprise Institute who has gained a reputation for staking out controversial positions a decade before they become mainstream. Starting with Losing Ground in 1984, later with Richard Herrnstein in 1994’s The Bell Curve, and most recently with In Our Hands, Murray has made his name as a public intellectual by dropping well-researched bombshells onto policy debates. In between, he’s published shorter books on political philosophy and a thorough historical study of human accomplishment in the arts and sciences.

Below the fold is our e-mail interview with Murray.

1. Let’s talk first about your latest project. You’ve stated that In Our Hands is an attempt to strike a compromise between your libertarian ideals and the current socio-political reality. The biggest worry about your plan from a libertarian point of view is that in practice it would create a large constituency who would vote to raise the grant on a regular basis, leaving the fiscal situation largely unchanged or possibly even worse. How does your plan deal with these kinds of public choice objections?

Mancur Olson and other public-choice theorists taught us that sugar farmers can get sugar subsidies because they care passionately about getting their benefit while no other constituency cares enough about preventing them from getting it. Under the Plan, the grant will be the only game in town (every other transfer is gone), and will affect every adult in the country. Every time Congress debates a change in the grant, it will be the biggest political news story in the country, and a very large chunk of the population–and people holding a huge majority of the monetary resources for fighting political battles–will lose money if it’s raised. Compare the prospects for jacking up the grant with the certain knowledge we have of the trends in spending under the current system. They have sky-rocketed and will sky-rocket, through classic public choice dynamics. The Plan uses the only strategy I can conceive to get out of the public-choice box.

2. One modification to your plan which has been suggested is to index the guaranteed income to GDP instead of inflation. This way everyone benefits from policies that increase economic growth, seemingly a perfect bargain between welfare statists and economic dynamists. What do you think of this idea? Have there been any other suggested modifications to (or criticisms of) your plan which have impressed you thus far? More broadly, how has the reaction so far compared to what you were expecting?

An early draft linked the size of the grant to median earned income, which would have a similar effect. But the real purpose of the book was to put an idea on the table that doesn’t have a prayer of being enacted now, but could become conventional wisdom down the road. To achieve that purpose, I wanted to avoid getting hung up on bits and pieces. If the idea of converting all the transfer programs to a cash grant is a good idea, we can figure out a way to control changes in the size of the grant. Worry about it after we’ve decided what we think of the idea: that’s the logic of the book’s presentation.

As for reaction, I’ve been surprised by the number of libertarians who are attracted to the idea (though perhaps I shouldn’t have been, given that Milton Friedman thought up the negative income tax). Liberals don’t know what to say: I’m proposing a much larger transfer of resources to poor people than they’ve ever dreamed of, which they should like, but they’re obsessed with the people who would waste the money. They really do think that most people aren’t capable of running their own lives without their help. Overall, IOH has accomplished pretty much what I’d hoped in the way of reaction.

3. It’s interesting to consider what kind of downstream social effects your plan might have. For example, it’s likely to encourage people to take greater risks (such as starting their own business at a younger age) or to pursue alternative “low remuneration” paths — academic research, writing, charity work, etc. It would likely remove support for harmful labour regulations like the minimum wage, and one can also think of ways in which this might alter the impact of immigration and illegal labor. How much did you think about these kinds of downstream effects when writing In Our Hands, and what do you think the most significant social impact of the plan would be?

I hadn’t thought about the way it would work against labor regulation, but you’re right. It would. I did discuss other downstream effects–on families, the underclass, and most broadly on what might be called a climate of virtue. As far as I can see, the downstream, unintended effects of the Plan have a strong tendency to be positive, while the unintended effects of conventional social programs are always negative. Why the difference? Because the Plan taps positive human tendencies that are deeply embedded in human nature as it actually exists–self-interest, the innate desire for approbation, the innate tendency to take responsibility to the extent that circumstances require. They set up extremely positive feedback loops. For example, what happens if I squander my monthly deposit? I have to seek help from relatives, friends, or private social service agencies like the Salvation Army. I’m not going to starve–but I’m going to get that help with a whole lot of encouragement–to put it politely–to get my act together. And it won’t be a one-time thing, but a continuous process. Conventional social programs are precisely the opposite. They make assumptions about human nature that are blatantly not true (e.g., bureaucracies are not governed by the self-interest of the people who run them) and the unintended consequences are destructive.

4. In Human Accomplishment, you come to the conclusion that accomplishment has been on a decline roughly since the industrial revolution. How does this square with the exponentially accelerating accumulation of data in the sciences (along with computing power, DNA sequencing, etc.)? Also, how does it square with the Flynn effect? You would think that ceteris paribus an increase in intelligence would result in an increase of genius, but by your reckoning this doesn’t seem to be the case.

The chapter on the decline in accomplishment explicitly deals with that point, so my main answer is: Read the book, or at least chapter 21. The short answer is that, in the sciences, a certain kind of accomplishment–the discovery of basic knowledge about how the universe works–is declining, inevitably. Genetics is a good example. The applications of genetic knowledge are increasing nonlinearly; but the knowledge about the basic workings of genetic transmission has been close to complete for decades. Filling in the details permits all kinds of new applications, but they are details. In large numbers of disciplines–anatomy, for example, or geography–there is little new to learn. They’re effectively closed to new accomplishment as I used the word for science.

As for the Flynn effect, it has nothing to do with the number of geniuses. It appears that the increases have little to do with g (the general mental factor), and that they are concentrated at the low end of the distribution. There is still a lot to be understood about the Flynn effect, but don’t count on it for producing advances in string theory.

5. The decline in individual accomplishments in the arts is prima facie a bad thing, but is it possible that a decline in major discoveries in the sciences could be good thing? If you measure accomplishment by means other than outstanding singular accomplishments, could there be a case for collective, incremental progress?

The distinction is not between singular and collective (I include collective accomplishments in my science inventories), but between acquisition of new knowledge and the application of scientific knowledge to daily life. By the latter measure, accomplishment did not decline after the mid 19th century. It continued to increase very rapidly.

6. One of our contributors has conjectured the existence of “genius germs” to explain the examples of what could be called “pathological genius”. The elegant thing about this hypothesis is that it would explain the decline in individual achievement even in the face of the Flynn effect, which tracks temporally with improvements in hygiene and immunology. What’s your take on this?

Beats the hell out of me. Or, more dignified: I am not competent to comment. Being born on January 8 (along with Elvis, I would point out), the theory intuitively appeals to me.

7. In the wake of the Larry Summers flap, you wrote an article in Commentary revisiting familiar themes concerning differences in intelligence. What was your impression of the response to that article? Were people as venomous as when The Bell Curve came out, or were they more accepting of the fact that group differences exist? More generally, where do you see the public debate on intelligence differences going in the medium- to long-term?

I got no flak for the Commentary article that I can recall (not counting blogs), which may be a straw in the wind. I took a much more aggressive position about the intractability of the B-W IQ difference than Dick Herrnstein and I took in TBC (understandably, given what we’ve learned in the last 12 years), and I said some pretty inflammatory things about sex differences. Perhaps the parsimonious explanation for the lack of flak is that no one reads Commentary. But I think in fact the dialogue is changing. Here’s a quick illustration: In the Commentary article, which appeared in September 2005, I took great pains to present the recent work demonstrating that gene markers produced results corresponding to self-identified ethnicity in 99.9% of a large sample. Later that fall, PBS had a special with people like Oprah Winfrey and Henry Louis Gates (if I remember correctly) talking cheerfully about the precise percentages of their heritages that were sub-Saharan-African versus Caucasian, etc., based on DNA tests using similar gene-marker technology. The times are changing.

8. You and Richard Herrnstein attracted a lot of really thoughtless and absurd criticism, but there were also a few more reasonable voices amid the cacophony. Which of the critics of The Bell Curve do you respect the most as an intellectual opponent, and why?

I thought Howard Gardner treated the book more or less fairly in his review. That’s the only person I can recall who was on the other side who didn’t go nuts. There isn’t much I’d retract in a new version, because Dick and I were so mainstream in our science. We weren’t out on any limbs that could be sawed off, as far as the data are concerned (my favorite line about TBC came from Michael Ledeen: “Never has such a moderate book attracted such immoderate attention.”) But I would write a major expansion of our discussion of cognitive stratification. Living as we do in rural Maryland, my wife and I have been struck by the number of bright kids in our local high school who still go to nearby colleges and return to live where they grew up. I don’t know how this anecdotal evidence translates into macro data, but I’d like to explore it. There may be an interesting interaction between urbanization and stratification–it’s just an hypothesis, but perhaps stratification is much more severe in urban areas than in small town and rural areas.

9. Any scholar with a sincere devotion to seeking the truth is bound to have their own beliefs, expectations and prejudices falsified on occasion. Can you tell us about occasions on which you’ve discovered something which profoundly altered your beliefs?

My epiphany came in Thailand in the 1960s, when I first came to understand how badly bureaucracies dealt with human problems in the villages, and how well (with qualifications) villagers dealt with their own problems given certain conditions. I describe that epiphany at some length in In Pursuit. The turnaround that led to TBC occurred in 1986, when Linda Gottfredson and Robert Gordon asked me to be on an American Psychological Association panel discussing their two papers on the relationship of IQ to unemployment and IQ to crime respectively, both of which discussed the B-W difference. The bibliographies astonished me–I had no idea that so much scholarly work had been done in these fields that so decisively contradicted what I had assumed (taught by the New York Times) to believe. If you want to see how far I moved: in Losing Ground, published in 1984, I cite The Mismeasure of Man approvingly.

My other movement has been less dramatic, but has been intensifying–and will not please the founders and probably most of the readers of Gene Expression. I have been an agnostic since my teens. But I am increasingly drawn to the proposition that of all the hypotheses about God, simple atheism is the least probable. That to be a confident atheist is the silliest of intellectual positions. That thinking about spiritual issues, despite all the difficulties, must be part of being a grown-up.

10. It has seemed to some of us that you regard libertarianism as really a procedural means to an all-important substantive end: the promotion and preservation of the Good Life as embedded in human wisdom and experience over many generations. Yet those of us with a futuristic orientation see a shadow looming over this project. If science and technology continue to advance unfettered, and individual liberty remains upheld more or less in its current form, then sooner or later we will achieve the means to alter the very substrate of human nature itself. Do you feel this shade as well? Among the many values now held dear by this or that faction of the human race–the pursuit of scientific knowledge, the fellow feeling of families and nations, etc.–which do you think should be actively maintained by our unimaginably evolved descendants of the perhaps not-so-distant future?

I am conflicted. I think human beings are hard-wired to find certain institutions satisfying. E.g., in a libertarian state established immediately (before the hard-wiring is changed), I am confident that traditional marriage would flourish, because a good marriage with children provides such a deeply satisfying form of intimate human contact, far superior to any other arrangement such as serial cohabitation, and is also such a good way to provide for one’s security. A libertarian state would do nothing to prevent people from taking other routes. Absent a welfare state, stable marriage with children would be the voluntarily preferred choice of the vast majority of people.

I am also confident that we will learn how to change the wiring, in many ways, including ones that might tweak the sources of our deepest satisfactions. That’s in our future. It’s also right to be worried. I am not confident that we are competent to make the right choices. For example, it is possible that increasing longevity dramatically–which is the primary goal of many, many people, including many scientists–will be inimical to human happiness, for reasons that science fiction writers have explored persuasively. But we don’t have the option of choosing especially wise humans who can guide the science to the right paths. Long-term, I’m an optimist. We’ll muddle through. Short-term, I think the coming technology for fiddling with human nature will produce some awful mistakes.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

A friend and I were discussing the eventual ubiquity of lie detection technology, and what sorts of social ramifications it would have. I made the point that this would immediately create greater selection pressure for self-deception, since “lie detectors” are better referred to as “sincerity detectors,” and as Trivers points out, the liar who believes his own bull is the most effective kind. (I now notice that one of Parker’s commenters at the link above made the same point, but I thought of it on my own, really!) My friend then mused thusly:

I’d like to see a study of self-deception v. g and some correlation with careers. If self-deception is highest among those with the lowest g, it might not be worth it because the cost of deception is likely to be rather low. I don’t think it’s a completely implausible hypothesis: religious belief is correlated negatively with intelligence. While it might be somewhat offensive, much of religious belief does seem predicated on self-deception…

My initial reaction was that while plausible, this could go either way. Ceteris paribus, the smarter you are, the easier it is to concoct believable bullshit stories on the fly. If there *is* a negative correlation between g and self-deception, it’s probably because, on the other side of things, increased intelligence also makes it harder to fool yourself. There could be something of an internal arms race, and the net effect could be a wash.
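To make the “wash” intuition concrete, here is a toy simulation under invented assumptions: g feeds both the ability to generate convincing confabulations and the ability to see through one’s own. None of the coefficients are estimates of anything; the point is only that the two channels can cancel:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000

    g = rng.standard_normal(n)  # standardized general intelligence

    # Invented opposing channels: g helps you spin believable stories,
    # but also helps you catch yourself spinning them.
    confabulation = 0.5 * g + rng.standard_normal(n)
    self_scrutiny = 0.5 * g + rng.standard_normal(n)
    self_deception = confabulation - self_scrutiny  # net tendency to believe your own bull

    print(np.corrcoef(g, self_deception)[0, 1])  # ~0: the two channels cancel here

Tilt either coefficient and the correlation goes positive or negative, which is exactly why this needs data rather than armchair reasoning.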

So, does anybody here know of any studies that have looked for correlations between intelligence and self-deception? Alternatively, any ideas about neural mechanisms behind self-deception that might also have some effects on intelligence? Propensity for self-deception isn’t easy to measure in itself, but this seems like too interesting a question to leave unexplored.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

From the abstract:

OBJECTIVE: The goal was to determine the relationship between the parental use of sunscreen products and the skin color of children in first grade.

METHODS: Data from the National Longitudinal Study of Youth were analyzed. Families with complete data on parental sunscreen use and child skin color were included in the analysis. Sunscreen use was categorized into “High, Medium, Low, None” by quartiles. Skin color was a continuous variable assessed by computer analysis of skin images. Multivariate logistic regression analysis was used to evaluate the relationship between parental use of sunscreen products and child skin color in first grade, controlling for gender, race, maternal education, income/needs ratio, marital status, parental income, and child behavioral problems.

RESULTS: A total of 2742 children, 81.3% white, were included in the analysis. Children of High sunscreen-using parents (n=1652) had an increased risk of being light-skinned, compared with children of No sunscreen-using parents (n=241, odds ratio: 45.2, CI 33.4-63.8). The same association held, to a lesser degree, between less extreme sunscreen categories. Of the covariates, only race was significant, but a substantial effect between parental sunscreen use and child skin color remained.

CONCLUSIONS: Among the 4 sunscreen use categories, High sunscreen use was associated with the highest risk of being light-skinned among young children. Understanding the mechanisms through which parental use of sunscreen is associated with skin-color risk may lead to the development of more comprehensive and better-targeted interventions.

It’s funny because it’s true.
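In case anyone wants the joke dissected: the “risk” is pure confounding. Here is a minimal simulation with invented numbers, in which parental skin color drives both sunscreen use and child skin color while sunscreen does nothing causal, and the naive odds ratio comes out well above 1 anyway:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 2742  # same n as the "study", purely for flavor

    parent_skin = rng.standard_normal(n)  # lighter = higher, invented scale

    # Light-skinned parents burn easily, so they buy sunscreen...
    high_sunscreen = parent_skin + rng.standard_normal(n) > 0
    # ...and, sunscreen or not, they tend to have light-skinned children.
    child_light = parent_skin + rng.standard_normal(n) > 0

    # Naive 2x2 odds ratio that ignores the confounder entirely:
    a = np.sum(high_sunscreen & child_light)
    b = np.sum(high_sunscreen & ~child_light)
    c = np.sum(~high_sunscreen & child_light)
    d = np.sum(~high_sunscreen & ~child_light)
    print(a * d / (b * c))  # well above 1 despite zero causal effect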

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Thought this might be of interest: Brandon Berg, one of my co-conspirators at Catallarchy, wrote a post a while ago sketching out a possible way in which the decline in fertility rates might be overstated in the statistics that are commonly bandied about. The basic idea is that if the mean age of motherhood is increasing each year, the stats are likely to be skewed because of the way they’re calculated (simple summing of the fertility rates for each age group). Now he’s written a short follow-up demonstrating how sensitive the fertility numbers are to this effect. He suggests that the European fertility rates could realistically be understated by about 0.1-0.3, which is not huge but still significant.
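For those who want the mechanics: the standard demographers’ version of this correction is the Bongaarts–Feeney tempo adjustment. If the mean age at childbearing is rising by r years per year, the observed period TFR is deflated by roughly a factor of (1 − r). A minimal sketch, with invented inputs rather than Berg’s actual figures:

    def tempo_adjusted_tfr(observed_tfr, mean_age_rise_per_year):
        """Bongaarts-Feeney tempo adjustment: the observed period TFR is
        deflated by (1 - r) when mean age at childbearing rises r years
        per year."""
        return observed_tfr / (1.0 - mean_age_rise_per_year)

    # E.g. a reported TFR of 1.4 while mean age at birth rises 0.15 years/year:
    print(tempo_adjusted_tfr(1.4, 0.15))  # ~1.65, an understatement of ~0.25

On those invented inputs the understatement is about 0.25, squarely in the range Berg suggests.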

Update: Related: Population Fallacies Part 2

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

I think I’m probably breaking some rule of blogger etiquette by performing the dreaded fact-check maneuver on Colby Cosh just after he linked to my other blog. But it always makes me wince when people I respect stake out strong positions where they’re demonstrably wrong on the facts, so I’m afraid I cannot let this pass:

Africans aren’t helpless animals–they know what works against malaria. Unfortunately, what works against malaria is DDT. But any country that proposes a program of household DDT application faces starvation at the hands of European bureaucrats and consumers. The nets are an unnecessarily expensive and epidemiologically phony sauve-qui-peut measure, a work-around for what could be described as the greatest ongoing mass murder ever perpetrated.

This is one of those strange memes that gets into the air and becomes part of the conventional wisdom, despite the fact that if you dig deep enough the whole thing turns out to be baseless. (Other examples include what Everybody Knows about the industrial revolution or the great depression.)

Cosh seems like he usually has pretty good bullshit detectors, and I would have thought that alarm bells would have started ringing when (if?) he noticed that one of the articles he cites was by Dr Rutledge Taylor, the man who directed the film 3 Billion and Counting. Hello there, lying with statistics! The film’s site claims that “Africa loses nearly 3000 women and children on a daily basis … to malaria alone”. That’s 100 million per year, which is pretty impressive considering that according to the CIA world factbook total world death rate is closer to 60 million per year (9/1000*6.5 billion). [See edits below.]

But let that pass, and let’s look at the allegation made against the EU. Let’s see what the EU ambassador to the US has to say on the matter (emphasis mine):

The European Union has no objection to the safe spraying of houses with DDT for malaria control, but it does have concerns about illegal agricultural uses. The E.U., like the United States and 149 other countries that signed the Stockholm Convention on Persistent Organic Pollutants in 2001, believes that the use of DDT in agriculture should be phased out.

Nations have the right to use DDT for public health protection, and the convention includes an exemption to allow such uses. It even sets out conditions for the safe use of DDT in malaria control — a use unlikely to leave residues in crops.

It is up to Uganda how to fight malaria, and DDT is one tool in that fight. The European Union continues to assist Uganda and other affected countries in efforts to combat malaria and contributes almost $100 million to this cause annually.

Health protection should not, however, provide an alibi for illegal use in agriculture. The European Union has granted $30 million to developing countries to strengthen infrastructures and encourage the sharing of best practices — a program singled out for praise by the World Bank.

The “ban” on DDT is and always has been one on its agricultural use; household use is perfectly allowed. Here (PDF) we find, plain as day on the 4th page, that the “WHO recommends indoor residual spraying of DDT for malaria vector control.” And there is a perfectly good reason for this partial ban: basic evolutionary logic suggests that injudicious use of a pesticide like DDT will tend to hasten the adaptation of the pests to the chemical, just as improper use of antibiotics has hastened the evolution of resistance in bacteria. And indeed this is exactly what has happened in some areas (PDF).
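To put rough numbers on that evolutionary logic, treat resistance as a single allele and compare strong, indiscriminate selection (blanket agricultural spraying) with weak, targeted selection (indoor residual spraying). The selection coefficients below are invented; the qualitative lesson, that harder selection fixes resistance much faster, is the general one:

    def generations_to_resistance(p0, s, threshold=0.5):
        """One-locus haploid selection: resistant allele frequency p
        updates as p' = p(1 + s) / (1 + p*s), where s is the survival
        advantage under spraying. Returns generations until p passes
        the threshold."""
        p, gens = p0, 0
        while p < threshold:
            p = p * (1 + s) / (1 + p * s)
            gens += 1
        return gens

    # Invented intensities: blanket spraying vs targeted indoor spraying.
    print(generations_to_resistance(1e-4, s=1.0))  # strong selection: resistance fast
    print(generations_to_resistance(1e-4, s=0.1))  # weak selection: many times slower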

Saying that DDT “works” is like saying penicillin or erythromycin “works” — yeah, for a while. But if you’re not careful about it you can end up pushing your enemy to evolve faster. Like Derek Lowe says: “It’s life or death for them. Just like it is for us.”

Addendum: Tim Lambert has been covering this beat for a while, and most of my info on the subject comes by way of him. Anyone curious should read his full archives on the subject here.

Edit: I am a blithering idiot and somehow misread that as 300,000. I have no defense other than that this doesn’t seem to square with the 3 billion figure that makes the title, but then they don’t specifically say that those are deaths. This is an excellent lesson in the perils of firing off posts in a rush.

Edit the second: Aha! Now I know where I got that idea in my head. Apparently they did originally say what I thought they said, but then later changed the site after having this, er, discrepancy pointed out.

Edit the third: Lambert has more on the effectiveness of nets, which was the subject of Colby’s original post.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

When trying to get across the gist of a complex technical subject to a layperson, sometimes a good metaphor can do more pound-for-pound explanatory work than any amount of jargon-laden detail.

For example, Gary Marcus provides a lucid explanation of gene expression by analogizing each gene to a conditional statement in a software program. Each gene has a set of conditions under which it’s expressed (IF/WHILE), and a specific protein it generates when activated (THEN/DO). While this is most readily intelligible to people with experience in computer programming, it can be understood by intelligent non-programmers with a little coaxing, and it can clear up several confusions and help order one’s thoughts (a toy sketch follows the list below):

  • So-called “junk” DNA is just dead code — code whose execution conditions are never met. Piles of this stuff can accumulate during the development of sufficiently large and complex software programs, and that’s with intelligent programmers watching over it. Small wonder if orders of magnitude more happen to accumulate under the guidance of a completely blind process.
  • The oft-quoted trope that we’re 98% similar to chimpanzees at the genetic level loses significance; even a 2% change in the source code of a program, particularly on the conditionals, can have massive effects on how the program executes.
  • The silliness of opposing genes versus environment or “innateness” versus “plasticity” becomes evident — these things are orthogonal to each other rather than in conflict. A program can be specified to execute multiple different ways given different inputs, and even if the brain starts with a highly specified innate structure, it could be programmed to rewire itself based on environmental inputs.
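Here is the toy sketch promised above. Everything in it is a cartoon; the regulatory conditions are loosely inspired by the E. coli lac operon, not a model of it:

    # A cartoon of the gene-as-conditional metaphor: a "gene" is an
    # IF (regulatory conditions) THEN (produce this protein) rule.
    # Loosely inspired by the lac operon; not real biochemistry.

    def lac_gene(cell_state):
        # IF lactose is present AND glucose is scarce...
        if cell_state.get("lactose") and not cell_state.get("glucose"):
            return "beta-galactosidase"  # ...THEN express the enzyme
        return None  # otherwise this is dead code for the current cell state

    genome = [lac_gene]  # a genome as a pile of such conditionals

    def express(genome, cell_state):
        return [p for gene in genome if (p := gene(cell_state)) is not None]

    print(express(genome, {"lactose": True, "glucose": False}))  # ['beta-galactosidase']
    print(express(genome, {"lactose": True, "glucose": True}))   # [] -- conditions unmet

On this picture the bullets above come for free: “junk” is a rule whose IF never fires, and a 2% edit to the conditionals can reroute the whole execution.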

And so forth. Another good example for the numerate is the definition of race that got Steve Hsu’s comments deleted by Brad DeLong: represent each individual’s genome as a point in a space of extremely high dimension, and define a race as a set of points whose distance from each other is less than some radius. These clusters map onto intuitive self-identified race with a very high degree of accuracy.
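Here is a quick sketch of that geometric picture, with Gaussian blobs standing in for genotypes. The per-marker shift of 0.3 is invented (real allele-frequency differences are far smaller, but there are vastly more markers); the point is only that small shifts across many dimensions aggregate into reliable separation:

    import numpy as np

    rng = np.random.default_rng(1)
    dim, n = 1_000, 100  # many markers, two populations of n individuals each

    pop_a = rng.standard_normal((n, dim))
    pop_b = rng.standard_normal((n, dim)) + 0.3  # small invented shift on every marker

    pts = np.vstack([pop_a, pop_b])

    # Pairwise Euclidean distances via |a-b|^2 = |a|^2 + |b|^2 - 2ab:
    sq = (pts * pts).sum(axis=1)
    dist = np.sqrt(np.maximum(sq[:, None] + sq[None, :] - 2 * pts @ pts.T, 0.0))

    same = np.zeros((2 * n, 2 * n), dtype=bool)
    same[:n, :n] = same[n:, n:] = True
    print(dist[same & (dist > 0)].mean(), dist[~same].mean())  # within < between

    np.fill_diagonal(dist, np.inf)
    nearest = dist.argmin(axis=1)
    own = (nearest < n) == (np.arange(2 * n) < n)
    print(own.mean())  # ~0.95+: nearest neighbor is nearly always same-population

Shrink the shift or the number of markers and the clusters dissolve; crank the dimensionality and they sharpen, which is the intuition behind the gene-marker classification results mentioned above.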

A third example (which I first encountered from Henry Harpending [PDF]) is analogizing intelligence to size. Many opponents of IQ testing start from the correct point that there is no single definition, measurement or task that completely captures everything we consider “intelligence,” but then slide from there into saying that IQ is meaningless and intelligence can’t be measured.

If you substitute “size” for “intelligence” it becomes apparent just how silly the argument is. Height is one dimension of a person’s size, as is shoulder breadth, weight, the length and circumference of one’s limbs and so forth. There are short guys who are built like tanks and tall guys who’d blow away in a strong wind, but nobody takes exception to the concept of “size” on their account or argues that size can’t be measured.

This cuts both ways: it also suggests that attempting to reify g is a category mistake. Like the concept of a center of gravity, g is an abstractum, a theorist’s fiction — but one that is well-behaved and has a causal “reality” to it all the same. We can predict that someone with a high g will perform well on all kinds of cognitively challenging tasks just as we can predict that a chair tipped on its back legs past a certain angle will fall over.
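That “well-behaved abstraction” claim can be demonstrated by construction: simulate a battery of test scores that all draw on a single latent factor, and the first principal component, one classic way a g-like factor gets extracted, recovers it. All loadings and sample sizes below are invented:

    import numpy as np

    rng = np.random.default_rng(7)
    n_people, n_tests = 5_000, 8

    latent = rng.standard_normal(n_people)      # the unobserved "general factor"
    loadings = rng.uniform(0.5, 0.8, n_tests)   # invented: every test taps it somewhat
    scores = latent[:, None] * loadings + rng.standard_normal((n_people, n_tests))

    # First principal component of the test correlation matrix:
    z = (scores - scores.mean(axis=0)) / scores.std(axis=0)
    eigvals, eigvecs = np.linalg.eigh(np.corrcoef(z, rowvar=False))
    pc1 = z @ eigvecs[:, -1]  # eigh sorts eigenvalues ascending, so take the last

    print(abs(np.corrcoef(pc1, latent)[0, 1]))  # ~0.9: the abstractum tracks the factor

Nothing in the construction requires g to be a physical organ; like a center of gravity, it is a summary that earns its keep by predicting.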

Since GNXPers are used to dealing in complex subjects that are widely misunderstood and commonly regarded with perplexity (if not suspicion or outright hostility) by laymen, I thought it might be worthwhile to solicit whatever other metaphors y’all have found useful in both understanding and explaining technical concepts. They don’t have to be biology-related, though I expect most will be. Discuss amongst yourselves.

Update: John Wilkins responds thoughtfully to the brief treatment of race here, but I think he and PZ Myers, both professed Lewontinites, ultimately miss the boat. John gives it away thusly: “So, do I think there are races in biology as well as culture? No.”

Asking whether race exists in this way is a category mistake, albeit an all too common one. Race is another abstractum, like the general intelligence factor. How we define it will be wholly a matter of convention, though not totally arbitrary, because some definitions obviously have more utility than others. Returning to the initial representation I used, you could shrink the race-radius right down to the point where each individual (plus his twin, if he had one) was his own distinct “race” if you wanted to, but this wouldn’t be interesting. If all that one means by “race is a social construct” is that one can twiddle the granularity of racial categories virtually however one likes, then this is perfectly true and perfectly beside the point.

Because at bottom, all this abstraction and definition is based on a real molecular substrate. Piles of rock and dirt don’t change their height based on whether one chooses to call them hills or mountains. Hypertension doesn’t suddenly stop being more frequent in African men based on how one decides to classify them. Genetic variation between populations doesn’t become any less real. Eppur si muove.

Update the 2nd: GMTA. Razib goes on at greater length and depth, as usual.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Since nobody else has noted it on here yet I figured I might as well: Griffe’s poked his head back up for the first time since the Summers flap, with an essay on “Politics, Prison and Race”. Nothing particularly new here — using methods similar to those in his past essays, he posits a model to explain the ironic fact that black/white incarceration ratios in the US correlate positively with the “progressiveness” of states and their governments.

The short answer is the same as with sex differences in the mathematical sciences — due to the difference at the tails of the distributions, the higher you set the threshold the greater the disparity will be. More “progressive” states tend to set higher thresholds for incarceration, ergo the higher B/W imprisonment ratio. It’s straightforward, and given the crudeness of the model it fits the data passably well.
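The tail arithmetic is easy to reproduce with nothing but the normal CDF. The one-standard-deviation gap below is an invented illustration, not an estimate of any real group difference:

    from statistics import NormalDist

    group_hi = NormalDist(mu=1.0, sigma=1.0)  # higher-mean "propensity" distribution
    group_lo = NormalDist(mu=0.0, sigma=1.0)

    for threshold in (1.0, 2.0, 3.0, 4.0):
        ratio = (1 - group_hi.cdf(threshold)) / (1 - group_lo.cdf(threshold))
        print(f"threshold {threshold:.0f} SD: ratio {ratio:.1f}")

    # The ratio climbs from ~3x to ~40x as the threshold rises: same two
    # distributions throughout, bigger disparity, purely from tail geometry.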

I know I’m not the only poster here who finds this sort of “ideal gas model” approach remarkably frustrating in its limitations. Suggestive though it may be, one could sit and poke holes in it all day, and ultimately the macro model persuades nobody until you’ve given it microfoundations.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Matt Hogan pointed me to this Chris Roach post on the music used in al Qaeda propaganda videos. The post is worth a read, and Roach links to some exemplars for those who haven’t listened to any of the stuff. It got me wondering about the actual neuroscience and psychology of music. This is an area where the research is fairly thin, but I found two papers of particular interest, which together hint at an interesting picture.

Peretz et al (1998) (PDF) studied a patient who had sustained lesions in both of her temporal lobes, which left her unable to recognize once-familiar melodies, discriminate between musical sequences, or sing more than a single pitch, despite having been able to sing well before the damage occurred. Her other cognitive capabilities were otherwise totally unimpaired. Now here’s the weird part: she claimed to still be emotionally affected by music even though she couldn’t articulate why. Subsequent study bore this out: her emotional responses to music correlated very highly with those of a fully-functional control group. After controlled experiments which you can read about in the paper, Peretz et al concluded that there was a dissociation between structural and emotional cognition in music.

This in itself isn’t really surprising, but it neatly underscores the “under the hood” nature of so much of our cognition. Subtle cues can have deep effects that we’re not consciously aware of.

The second paper is by Blood and Zatorre (2001) (PDF), who found using PET scans that the “shiver down the spine” effect of pleasurable pieces of music correlated with increased activity in the emotion & arousal centers, and also the reward/motivation centers of the brain. These are much the same areas that get activated by addictive drugs, and it’s plausible that the association of these sensations with videos glorifying terrorist attacks can have subtle psychological effects.

Recall that per Marc Sageman (see Razib’s earlier post on Sageman’s book here, and overviews of Sageman’s work here and here), in the vast majority of cases he studied, social bonds arose before ideological commitment. The common pattern seems to be that young disaffected men form isolated cliques, one or more of them starts taking an interest in Islamic extremism (usually through visits to extremist mosques), and draws the rest of the group in as well. This dynamic bears a resemblance to the role of social circles in the formation of drug addictions.

No real culminating point to all this, other than to gesture at the vague outline of what psychological role jihadi music might play in the phenomenon of Islamic terrorism.

Addendum from Razib: It seems to me that too much of the public and foreign policy discourse operates with the assumption of Rationality(Culture) = Behavior. That is, inferences based on cultural axioms are the way in which we operate. In the current conversation about Iran I am a bit disturbed at the tendency to take the rhetoric of the radical political leaders at face value, or to interpret it through our own world-views (i.e., instead of positing rational inferences from the axioms, the nutsoness of a set of axioms or behaviors in the light of our own values allows us to quickly deduce that the Other is insane and inscrutable). A less gross, but nevertheless overly simple, representation might be Rationality(Mind(Culture)) = Behavior. That is, our behavior is a function of the architecture of the mind channeling culture and guided by a few basic rational principles. If you are to kill an enemy it often behooves one to invade their house and map the lay of the land so you might wait in ambush. In argumentation I have found it far easier to convince individuals to take the knife and cut their own throat because their beliefs demand it rather than moving earth and sky to show them my truth and wield the knife myself. There is more than one leash that ties man in this world, and it is important that we manipulate all of them.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 