“There are places I remember / all my life / though some have changed.”

Step right up to see the tiny horse. The tiny seahorse. The brain structure mildly resembling a seahorse. The relatively large brain structure mildly resembling a ram’s horn. Every time we think we get the hippocampus nailed down it squirms out of grasp. Anatomically and functionally it can be lumped in with a larger structural formation referred to as the hippocampal formation, or it can be divided into hippocampal subregions based around curving cell layers: the CA (cornu ammonis) fields, 1 and 3 being the clearest and best understood, and the dentate gyrus. There is too much to know about the hippocampus. It is studied for its pattern completing and separating computational properties, its rhythmic electrical field potential oscillations, its deceptively simple trisynaptic wiring diagram, its role in modulation of the stress response, the birth of new neurons in the adult dentate gyrus, the plasticity of its synapses, the response of its cells to spatial information, and its role in autobiographical memory of events. I will focus on the last two: spatial representation and episodic memory. Both aspects are areas of ongoing research generating controversy and debate within the field, and they have significant overlap. It may become apparent as we dig deeper that they are two sides of the same coin and that investigating each provides insight into the other.

Temporal Lobes and Temporal Gradients

The contemporary era of hippocampus research really kicked off in the 60s and 70s. There were hints that the hippocampus might be related to memory before then, but a breakthrough came in 1957, when patient HM, who had suffered from intractable epilepsy for 11 years, had large portions of his medial temporal lobe removed on both sides to eliminate the epileptic foci. If you picture the brain as a boxing glove, the temporal lobe is where the thumb would be. The medial temporal lobe is the part of the temporal lobe nearer to the middle of the brain, where the thumb touches the rest of your brain-fist. The hippocampus runs along the inner side of this lobe from thumbtip to thenar and, depending on the species, might curve up to the first joint of the index finger. When HM had his medial temporal lobe removed, his epilepsy improved, but his memory was severely disrupted. Specifically, he could remember new experiences for a short time so long as he wasn’t distracted, and he could remember experiences from his remote past. He was not capable of storing new long-term memories or recalling recent ones.

Two major aspects of the hippocampus’ relation to memory arose from studies of HM and other patients like him. One was a memory taxonomy that singles out memories recalled as ‘mental time travel,’ a return to the spatiotemporal context where the experience happened. This type of memory is referred to as episodic memory and contrasts with, for instance, procedural memory, in which motor or coordination tasks are learnt. HM would probably be able to learn to ski, but he would not remember the process of learning. Memory taxonomy is not yet settled. A lot of terminology has been invented, but because recollection is a subjective experience it is hard to assay accurately in humans and especially hard to test for in animals. The second idea arising from HM’s deficits was that of a temporally graded retrograde amnesia. Memories acquired more recently, in the years just prior to surgery, were much more severely affected than those from HM’s childhood. This led to the idea that recently acquired memories spend some time stored in the hippocampus as a waystation before they take off to permanent storage sites in the synapses between various cortical neurons.

The literature nowadays is conflicted and disconnected regarding the idea of a temporal gradient. It has taken a while to discover memory tasks that test the sort of memories we expect the hippocampus to handle. A favorite is contextual fear conditioning, but it is still frustratingly complex. A rat is supposed to learn about a context by combining all the information about its surroundings coming from different sensory processing areas into an index, such that re-exposure to just a partial set of the initial context cues can reactivate the memory of the whole experience. To be concrete: if the context smells like acetic acid, the light is red, and there is a fan blowing in the background, the rat might need just the smell to bring all the rest of the context flooding back. In contextual fear conditioning, a rat learns to associate a context with shock, presumably via hippocampus-to-amygdala connections. A temporal gradient for contextual fear conditioning was reported in 1992 by Kim and Fanselow, now a classic, highly cited paper. They trained rats in this task and then lesioned the dorsal hippocampus at various times afterwards (retrograde lesions). The rats that received lesions a day after training forgot to be scared in the context, whereas rats that received lesions a month after training retained their memory. This is a perfect analog for the story of HM in rat years. Recently acquired memories are more vulnerable to hippocampus lesions.
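That index-plus-partial-cue story is, computationally, pattern completion. Here is a minimal sketch of the idea using a toy Hopfield-style autoassociative network; the network size, pattern, and cue fraction are all invented for illustration, and this is a cartoon of the computation, not of hippocampal wiring.

```python
import numpy as np

# Toy Hopfield-style autoassociative network illustrating pattern completion:
# store one "context" as a binary pattern, then recover the whole thing from
# a partial cue. A cartoon of the computation, not of hippocampal circuitry.

rng = np.random.default_rng(0)
n = 100                                      # number of units
context = rng.choice([-1, 1], size=n)        # full context (smell + light + fan)

W = np.outer(context, context) / n           # Hebbian storage of the pattern
np.fill_diagonal(W, 0)

cue = context.copy()
cue[30:] = rng.choice([-1, 1], size=n - 30)  # keep only 30% of the cues

state = cue.astype(float)
for _ in range(5):                           # iterate until the state settles
    state = np.sign(W @ state)
    state[state == 0] = 1.0

print("overlap with stored context:", (state == context).mean())  # ~1.0
```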

The theory that the hippocampus is only a temporary memory storage site is called “systems consolidation.” Remember how the hippocampus is supposed to pull together information from different sensory processing areas? In systems consolidation theory, the memory is rehearsed with those same areas until they get wired together and can reactivate the memory independently of the hippocampus. There are many reasons to like systems consolidation theory, but there are complexities. Here is one example: Kim and Fanselow didn’t lesion the whole hippocampus. An alternative story to systems consolidation is the multiple trace theory. Rather than the memory moving out of the hippocampus, the memory representation could become distributed within the hippocampus, up and down the dorsal-ventral axis, making it more resistant to small lesions. Nadel and Moscovitch have been major proponents of the multiple trace theory and recently reviewed the human memory literature, even questioning the original interpretation of HM’s deficits. One issue is how to ascertain whether a memory is truly episodic or merely semantic (memory for facts that doesn’t require autobiographical recollection; think rote memorization). For instance, I can tell you from my semantic memory what Neil Armstrong said after his first steps on the moon, but when he tells you the same thing he is re-living an episode from his life in a way that requires a type of processing that the hippocampus is especially good at. Using tests more sensitive to this distinction, HM has recently been reevaluated and found to have amnesia for episodic memory across his entire life.

This is a potentially devastating blow for systems consolidation theory, but the discussion continues. New papers have reported temporal gradients for retrograde amnesia in other hippocampus-dependent tasks in animal models. Studies using activation markers and multi-electrode recordings are providing new evidence of post-training hippocampal-cortical communication and coordination, especially during sleep. Part of the reason the new multi-unit recording studies are so provocative is that they allow a strong link between our understanding of the hippocampus as a memory storage device and another major theory of the hippocampus: cognitive map theory.

Grid cells and Gridlock

Perhaps the best way to understand the cognitive map theory of the hippocampus is to look at a place field plot. This is a two-dimensional map of an environment that a rat is exploring. A line tracks the path taken by the rat during the recording session, and the plot is marked red wherever the firing of a particular cell rises above some threshold. Note the bright red hotspot in the upper right-hand corner; you would call this the place field for the cell being recorded in this experiment.
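For the analysis-minded, here is a rough sketch of how such a rate map is typically computed from tracking and spike data: bin the arena, count spikes per bin, and normalize by occupancy time. The data below are simulated stand-ins; the bin count, frame interval, and field location are arbitrary choices, not details from any particular study.

```python
import numpy as np

# Sketch of building a place-field "rate map": bin the arena, count spikes
# per bin, and normalize by the time the rat spent in each bin.

def rate_map(pos_x, pos_y, spike_idx, arena=1.0, bins=20, dt=0.02):
    """pos_x/pos_y: tracked position each video frame (dt seconds apart);
    spike_idx: frame indices at which the recorded cell fired."""
    occupancy, _, _ = np.histogram2d(pos_x, pos_y, bins=bins,
                                     range=[[0, arena], [0, arena]])
    spikes, _, _ = np.histogram2d(pos_x[spike_idx], pos_y[spike_idx],
                                  bins=bins, range=[[0, arena], [0, arena]])
    with np.errstate(invalid="ignore", divide="ignore"):
        return spikes / (occupancy * dt)   # firing rate (Hz) per spatial bin

# Fake data: a random walk and a cell that fires mostly in the upper right.
rng = np.random.default_rng(1)
steps = rng.normal(scale=0.01, size=(5000, 2))
path = np.clip(np.cumsum(steps, axis=0) + 0.5, 0, 1)
in_field = (path[:, 0] > 0.7) & (path[:, 1] > 0.7)
spike_idx = np.where(in_field & (rng.random(5000) < 0.3))[0]

rm = rate_map(path[:, 0], path[:, 1], spike_idx)
print("peak rate (Hz):", np.nanmax(rm))
```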

O’Keefe and Dostrovsky provided the first observation of place cells in 1971. When they recorded electrical signals from the hippocampus of a rat as it explored an environment, they found that some cells had a fairly low basal firing rate but would fire much more rapidly while the rat was in a specific part of the environment. There is a cell in your hippocampus that fires when you are standing at the foot of your bed and another for when you are in your shower. Actually, the cells remap if the environment is distinguishable as an entirely new context, so you might have two separate maps for the bedroom and the bathroom. O’Keefe and Nadel built on the idea that the hippocampus is responsible for encoding spatial information to create the theory of the hippocampus as a cognitive map. They proposed that our perceptions help produce, and are represented in relation to, a spatial framework in the hippocampus, which we use to figure out where we are and where we’re going. In humans, the cognitive map is the framework for storing memory of events in a spatiotemporal context. You could imagine, for instance, that your location is the anchor for your memory of events. What’s the most important thing to establish at the beginning of a work of fiction? The setting.

The cognitive map theory is not without its critics and controversies, but I don’t want to get bogged down in them. These are exciting times for place cell researchers. A great deal of enthusiasm has surrounded the discovery of grid cells by the Moser lab in Trondheim, Norway. Grid cells are very similar to place cells, but they serve a different purpose. They are found in the entorhinal cortex, which is the last stop for sensory information being funneled down to the hippocampus. Rather than firing in one specific place field, grid cells have a receptive field shaped like a triangular grid. Once again, it is easier to show than say.
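To give a feel for what that triangular grid looks like, here is a common idealization (not the Mosers’ data): sum three cosine gratings oriented 60 degrees apart and rectify. The spacing, phase, and arena size below are arbitrary.

```python
import numpy as np

# Idealized grid-cell firing map: sum three cosine gratings 60 degrees apart,
# giving the triangular (hexagonal) lattice of firing fields. A cartoon model,
# not fitted to any recorded cell.

def grid_rate(x, y, spacing=0.3, phase=(0.0, 0.0)):
    k = 4 * np.pi / (np.sqrt(3) * spacing)          # wavevector magnitude
    rate = 0.0
    for theta in (0, np.pi / 3, 2 * np.pi / 3):     # three orientations
        kx, ky = k * np.cos(theta), k * np.sin(theta)
        rate += np.cos(kx * (x - phase[0]) + ky * (y - phase[1]))
    return np.maximum(rate, 0)                      # rectify to a firing rate

xs, ys = np.meshgrid(np.linspace(0, 1, 100), np.linspace(0, 1, 100))
rates = grid_rate(xs, ys)
print("fraction of arena above half-max:", (rates > rates.max() / 2).mean())
```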

Grid cells solve the problem of navigation in a novel environment. Place cells are consistent within a familiar environment, but it takes a little while for them to form their preferences upon exposure to a new room. How do you keep track of position when you don’t have a map yet? Zork fans know the answer. If you went two spaces north and two spaces east, you can go two spaces southwest and get back to the room where the elf stole your jeweled key or whatever. It’s called path integration. You sum the vectors of your movements to get the total displacement from your starting point. To turn this into a map of the white house with a boarded front door, you might start with a fresh piece of graph paper. The regularly repeating structure of spatial representation in grid cells serves the same purpose. It provides a framework for you to build a map inside.
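Path integration itself is just vector bookkeeping. A minimal sketch, with made-up headings and pace counts in the spirit of the Zork example:

```python
import numpy as np

# Path integration: keep a running sum of movement vectors; the homing vector
# is just the negative of that sum.

headings = {"N": (0, 1), "E": (1, 0), "S": (0, -1), "W": (-1, 0)}

def integrate(moves):
    pos = np.zeros(2)
    for direction, paces in moves:
        pos += np.asarray(headings[direction]) * paces
    return pos

displacement = integrate([("N", 2), ("E", 2)])
print("displacement:", displacement)        # [2. 2.]
print("homing vector:", -displacement)      # 2 south + 2 west, i.e. 2 spaces SW
```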

A memory is more than just where you were, though. It also includes a sequential order of events. Simultaneous recording of large numbers of place cells in the hippocampus has allowed the investigation of the sequential/temporal aspect of memory by way of patterns of spatial experience. I will provide one example from earlier this year. Foster and Wilson recorded from many place cells as a rat moved down a linear track like you or I would walk down a hallway. The rat moves at a relatively constant rate in one direction and eats some food at the end. The place cells that represented location during the previous trek down the hallway can fire action potentials while the rat is chillin’ out at the end, eating his reward, and they do it in a rapid, meaningful sequence. The cell closest to the end of the hallway kicks it off, and the cell from the beginning of the hallway fires last; the sequence replays in reverse. These and other sequentially organized patterns in the hippocampus might bring the firing into close enough proximity to be characterized as coincidental. Coincidental firing is the first sign to the nervous system that maybe these two neurons ought to be hooked up to each other, so later, when the memory of one portion of the hallway is recalled, you are able to traverse synaptic bridges to the rest of the house. There are also hints that this sort of sequential replay may occur during sleep and coincide with activation of neocortical areas, providing links between sleep, memory, and systems consolidation theory.
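A common way to quantify such an event is a rank-order correlation between each cell’s place field position on the track and its firing order during the candidate replay. This is a sketch in the spirit of those analyses, with invented numbers, not Foster and Wilson’s actual statistics:

```python
import numpy as np

# Rank-order replay test: order cells by where their place fields sit along
# the track, then ask whether their firing order during a candidate event
# runs forward or backward along the track.

field_pos = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # field centers along the track
event_order = np.array([4, 3, 2, 1, 0])           # cell IDs in order of firing

order_idx = np.arange(len(event_order))           # 1st spike, 2nd spike, ...
pos_rank = np.argsort(np.argsort(field_pos[event_order]))  # rank of each field
r = np.corrcoef(order_idx, pos_rank)[0, 1]
print("rank correlation:", r)   # -1.0 here: a clean reverse replay
```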

Another popular example of the special relationship between space and the hippocampus is the story of the London cabbies. London cabbies have to memorize the spatial layout of the city to an incredible level of detail. Imaging studies have revealed that the hippocampus of a London cabbie is shaped differently than the average person’s; some parts are bigger and some are smaller. Greater volume differences are associated with more years of experience as a taxi driver. The obvious implication is that learning and using all those map details really works out their space muscles, growing more cells or larger cells in the hippocampus and reorganizing according to the demand. This isn’t entirely implausible, since the hippocampus is one of only two confirmed sources of newborn neurons in the adult brain. There are alternatives, though: the simple act of driving in the city, with the attendant stress, motor planning, and cognitive demands, might affect the hippocampus. A recent update attempted to control for these issues by comparing cabbies, who have to really know the maps, to bus drivers, who do the same amount of driving without the memorization requirements. The volume differences hold up, and what’s more, taxi drivers had more trouble learning new spatial information. This finding hints that there may actually be a limit to storage capacity for spatial information, which is not something we run up against day-to-day. Of course, the caveat that correlation is not causation remains. People with funny-shaped hippocampi might just have a predisposition to become cabbies and stick with it, but the study authors point to evidence that initial spatial capabilities don’t correlate with hippocampus shape, so this alternative isn’t strongly supported.

Memory for places or places for memory?

The discovery of place cells clearly indicated that the hippocampus represents where we are, but the discovery of grid cells in the entorhinal cortex reallocates a good-sized portion of that responsibility. Why then are the cells that should be handling memory wasting their time representing spatial location at all? One possibility is that location is the framework in which we embed our memories. A recent issue of the journal Hippocampus contained a collection of articles suggesting a strong correspondence, if not isomorphism, between the characteristics of place cells and memory. Sheri Mizumori provided an excellent introductory article profiling areas of overlap and distinguishing features of these two processes. For instance, certain drugs affect memory and place field representation in the same direction, and place fields become more refined as rats gain more experience in an environment. As a more concrete example, a study in 2003 reported changes in the response properties of place cells after fear conditioning. After establishing the place field representation during exploration, the study’s authors delivered a tone paired with an aversive stimulus. Afterwards, place cells became responsive to both tone and location. The cells of the hippocampus are capable of responding to more than just place, but place still held a primary role: a cell could only fire in response to the tone if the rat was in its particular preferred location.

Exploration of the hippocampus’ abilities continues. I have not scratched the surface even in this mega-post. The theories of the hippocampus as a memory storage device and a cognitive map are not mutually exclusive and may turn out to be one and the same. I came to an interest in the hippocampus via a more philosophical route: considering the role of memory in the definition of the self and such, but there are plenty of noble non-navel-gazing reasons to delve deep into this structure as well. The hippocampus shows signs of damage and deterioration early on in Alzheimer’s disease and other forms of dementia, and the same recurrent excitatory connectivity so useful for memory storage may also provide the substrate for the electrical storms underlying epilepsy. I hope you now have some more of the conceptual framework necessary for reading and evaluating new hippocampus articles, and we can talk about it some more when good studies come out. Now make like a hippocampectomized rat and get lost!


For the die-hards. I taped the Buzsaki and Wilson lectures at the Visualizing and Recording Large-Scale Ensembles short course at SFN. Quality isn’t really great and a lot of it doesn’t make sense without the slides, but, hey, it’s there if you want it:

Buzsaki Intro and Lecture
(mostly about multi-unit recording and unit isolation)
Wilson Lecture (more about the same, but focus on tetrodes and some data)
Buzsaki vs. Wilson (Breakout group with some questions, interesting to hear them converse and joke. Best sound quality I think.)


Fly pointed out this paper in the new issue of Science. The “T allele” of the protein KIBRA is associated with performance on certain human episodic memory tasks, first in a Swiss cohort and then in two other populations, controlling for attention effects and stratification effects. The “good memory” allele frequency varies across populations: Asian (75%) > African (~50%) > European (25%). fMRI during a memory task showed higher activation in the hippocampus and other related memory structures in people with the “bad memory” allele, suggesting that the memory structures were having to work harder or less efficiently to encode the information. So far KIBRA is known to interact with dendrin, protein kinase C zeta, and protein kinase M zeta. You may recognize the name PKM zeta from a Science paper a few weeks ago implicating PKM zeta in memory maintenance. I don’t know what dendrin does yet. The initial SNP discovered is in an intron and doesn’t seem likely to affect function, but there are other polymorphisms really close by, in neighboring exons, that are also significantly associated with the memory phenotype. It will be interesting to see in which part of the KIBRA protein these differences lie.


Shame I gotta do white labels to keep my life stable. – Common

I checked out a couple sets of posters dealing with spine dynamics at SFN. One bundle from the Hayashi lab showed that the beta subunit of CaMKII serves an actin-bundling role in dendritic spines. Get used to hearing about CaMKII. It makes up two percent of brain protein and has several interesting properties that make it intriguing as a memory-related signaling molecule. CaMKII at the synapse takes the form of 12-subunit donut-like structures (two interlocked 6-subunit rings). There is a family of CaMKII subunits. I don’t know all of their properties, but CaMKII alpha is the one everyone usually concentrates on because it is synthesized following activity and has nifty self-activating features that might make it good for maintaining a synaptic biochemical state. CaMKII beta is the boring wallflower subunit, but Hayashi may now have given it a chance to shine. They showed that actin polymerization (forming into long chains or scaffolds that provide structure for the spine) causes more beta-CaMKII to show up in dendritic spines. Dendritic spines are little knobs that stick off of dendrites where most excitatory synapses occur (here’s some pics). The beta subunit slows down the actin dynamics responsible for spines wobbling around. Wobbling around is immature behavior for a spine, indicative of a weak synapse or perhaps of searching for a new synaptic connection. Experimental reduction of beta-CaMKII makes spines look more like filopodia: long, thin, wobbly growths off the dendrite (presumed to be the initial stages of spine formation). Expressing the beta-CaMKII binding domain can rescue this phenotype and make the spines look more stable and mature. Hayashi suggests a model in which actin is initially bundled in a fairly weak spine. Activity can destabilize this bundling and allow the spine to wobble some and perhaps rearrange the post-synaptic protein formation. This might be mediated by increasing the alpha subunit content. I’m forming an alpha=unstable, beta=stable dichotomy in my mind right now. After the activity-induced destabilization, more beta-CaMKII can come in and re-stabilize/re-bundle the actin filaments in a new (maybe stronger) configuration.

Sutton and Schuman had some posters continuing their examination of synaptic homeostasis and mini EPSPs (miniature excitatory post-synaptic potentials). Refresher: minis are caused by spontaneous neurotransmitter release even when there is no activity driving action potentials. We know from previous work that blocking minis can cause synapses to start inserting new receptors and growing in strength. In one poster they used a GFP flanked by the regulatory elements of the alpha-CaMKII mRNA to assay a certain signaling pathway’s role in mini-mediated regulation of dendritic protein synthesis. The experimental techniques get convoluted quickly because every manipulation involves inhibition of two processes to discover the effect of one. I’ll skip those; you’re welcome. In general they found that minis tend to increase phosphorylation of eEF2 and that action potentials tend to decrease it. eEF2 phosphorylation is thought to decrease global translation during the elongation step. I mentioned it a couple weeks ago in reference to alpha-CaMKII synthesis as one of those counterintuitive regulatory paths in which turning down translation increases certain gene products. Sutton and Schuman didn’t find anything like that. Phospho-eEF2 (and thus minis) was associated with suppression of the reporter signal. Inhibiting the kinase that phosphorylates eEF2 (something that action potentials, i.e., non-spontaneous activity, might do) led to more protein synthesis and more reporter signal. Sutton didn’t seem overly concerned that the direction of regulation for alpha-CaMKII was opposite to that reported in Scheetz et al. They focused more on the idea that it is relatively uncommon to think of translation being regulated at the elongation step instead of the initiation step. I’ll give ’em that, but I thought the Scheetz story was cool, so I wish they would examine it a little more closely.

The other poster from Sutton and Schuman examined fast and slow spine dynamics in relation to minis and action potentials. We are now getting a dichotomy between minis and action potentials as stabilizing and destabilizing forces, respectively. Fast dynamics were defined as movement of the spine during a 5-minute monitoring window. Slow dynamics usually involved spine growth (in the more stable widening-of-the-head-and-neck manner) over about an hour or two. Minis tended to reduce fast dynamics and increase spine growth (slow dynamics), and action potentials did the opposite. Blocking minis or action potentials over an extended period of time produced effects not just on the size and shape of individual spines but also on the number of spines, period. Initially, letting minis run wild led to an increase in spine number, but over days this dropped off below baseline. The effect of action potentials appears to be an initial drop in the number of spines followed by stabilization, maybe as neurons adapt to the new, higher activity level. Everything I have written has to be taken with the caveat that you can’t actually isolate the effect of action potentials. All of the effects attributed to action potentials are really taken from experiments blocking minis AND action potentials at the same time, basically blocking any neural signals at all. The effects I attributed to minis are more directly related to actually comparing the responses of neurons with and without minis.

So with just a few posters we can build up a couple categories: stabilizing forces versus destabilizing forces.

  • Stabilizing: mini EPSPs, beta-CaMKII, actin polymerization, eEF2 phosphorylation, inhibition of translation at the elongation step, inhibition of global protein synthesis.
  • Destabilizing: activity, action potentials, alpha-CaMKII, protein synthesis upregulation.

I’m thinking that memory may be encoded by passing through a destabilizing phase to a new, stronger stabilized phase. This would allow an interesting explanation for the phenomenon of reconsolidation. Reconsolidation is complicated and a matter of debate, so keep in mind that the story is more complicated than I will present it now. Reconsolidation is demonstrated by allowing an animal to retrieve an established memory and then performing some manipulation to screw with its memory consolidation capabilities. For instance, you can give a strong electric shock or certain drugs. The animal then seems to have forgotten that particular reactivated memory in subsequent testing. Maybe the initial destabilizing force of synaptic activity for those memories opens a window of time in which, if the pattern of activity becomes erratic, the memory will not be put back together right during the stabilizing phase.
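To make the hypothesis concrete, here is a toy state machine for that destabilize-then-restabilize window. It is purely an illustration of the idea in the paragraph above, not a model from the reconsolidation literature:

```python
# Toy state machine for the reconsolidation idea sketched above: retrieval
# reopens a labile window; if consolidation is disrupted inside that window,
# the memory fails to restabilize. Purely illustrative of the hypothesis.

def reconsolidate(memory_state, retrieved, disrupted):
    if memory_state == "stable" and retrieved:
        memory_state = "labile"                           # retrieval destabilizes
    if memory_state == "labile":
        memory_state = "lost" if disrupted else "stable"  # restabilize or fail
    return memory_state

print(reconsolidate("stable", retrieved=True, disrupted=False))  # stable again
print(reconsolidate("stable", retrieved=True, disrupted=True))   # lost
print(reconsolidate("stable", retrieved=False, disrupted=True))  # stable: no window
```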


The Devil sat behind me on the plane the night before last. I tried blogging from SFN last year. It is a silly idea. I will SFN-blog after the fact from now on. One of the most remarkable posters I saw this week was from the LeDoux lab. The amygdala is one of the clearest cases for localizing a particular memory trace to a structure (conditioned fear). The poster examined a protein marker for activation/plasticity one hour after training in either paired or unpaired tone-fear conditioning. This means that the animals all received the same amount of all stimuli, but the paired group received the tone just prior to the shock, which is the condition necessary to produce an association. They used a statistical technique called stereotypy to look for patterns of neuronal activation in the amygdala. There were more marked neurons in paired vs. unpaired animals, but the more remarkable thing is that it appears to be the same neurons across animals (for 6 animals). Meaning that there may be a neuron that we can name (P1 and P2 were names they were using) whose hardwired function is to handle the association between that particular tone and shock. Yes, I realize that this is probably not what it is “hardwired” to do, but you get the point. Part of the case for calling these the same neurons is that P1 (and others) has the same orientation, dendrite outgrowth, and position across animals. They are currently doing electrophysiology to strengthen the case. There is a subset of neurons that are activated with paired but not unpaired conditioning that are being referred to as AANs (Associatively Activated Neurons? I think that’s right). As far as I could see there wasn’t any particular anatomical locus you could stuff them in. They looked sort of like Cassiopeia, if I remember correctly.


Almost done with ATL and the Society for Neuroscience conference. To gel all this info in my head I’ll try to write some of it down. A friend and I went to a short course on Friday organized by Gyorgy Buzsaki entitled “Visualizing large-scale patterns of activity in the brain: optical and electrical signals.” I am currently reading Buzsaki’s book “Rhythms of the Brain” (recently reviewed in Science). Buzsaki is a leader in his field and is making substantial contributions to the understanding of electrical signals in the brain and the organization of neuronal assemblies. I recorded all the lectures on my cell, but somehow in the midst of lots of BootCamping I have (I hope temporarily) fried my phone’s ability to establish a relationship via Bluetooth. The upshot is that I will try to provide you with decent-quality audio of the lectures when I get organized again.

Buzsaki did the intro and the first talk, which is basically a reiteration of Cycle 3 in his new book. Here’s the deal. Neurons produce electrical signals that correlate with stimuli and behavioral states. We would like to be able to speak this language so we can read neurons and know what they are talking about while an animal is awake and behaving. The possibility exists in the future of using this information to speak to neurons directly with electrical stimulation. Currently, the best way to listen to neurons is with extracellular unit recording. With this technique you mainly capture short large-amplitude changes in the potential difference across the neuronal membrane. Most of these are action potentials, in which positively-charged sodium rushes into the cell creating a negativity outside the cell near the cell body (soma, a current sink). It is becoming increasingly clear that to understand some very important brain functions we need to be able to do more than sample individual neurons. We need to be able to look at the covariation in the “noise” of single-unit firing. To do this you need contemporaneous recording from a large number of neurons. So techniques have been developed for inserting multiple extracellular electrodes into, say, a rat hippocampus, and analyzing the temporal relationships between firing patterns of individual neurons.

If you just put one electrode in, you can’t tell whether all the spikes you record are coming from a unique source. That electrode might capture activity from ten neurons that you can’t tell apart. For instance, if the electrode were placed directly in between two neurons with similar firing properties, you might confuse them with one neuron that fires a whole lot. To get around this you need to triangulate using multiple recording sites close together in a known spatial configuration. Now each neuron produces its own amplitude profile across the sites: neuron B might come through slightly weaker than neuron A on one electrode and slightly stronger on another, but all the sites pick up the signal, and you can do the math to figure out that A and B are separate sources. Most of Buzsaki’s talk was on refinements of this process, because it is really, really difficult to be sure you are uniquely identifying a unit. Action potential sizes and signatures change, especially during really interesting complex burst-firing patterns. More recording sites buy you more information, but at the cost of tissue damage.
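Here is a cartoon of the triangulation idea as a toy spike-sorting exercise: simulate two units with distinct amplitude profiles across two recording sites and separate them with a bare-bones 2-means clustering. All the numbers are invented, and real sorting pipelines are far more elaborate than this sketch.

```python
import numpy as np

# Each neuron produces a characteristic amplitude profile across nearby
# recording sites (closer site = bigger spike), so clustering spikes in
# "amplitude space" separates units that one electrode alone could not.

rng = np.random.default_rng(2)
# Unit A sits nearer site 1, unit B nearer site 2: distinct amplitude profiles.
unit_a = rng.normal(loc=[80, 30], scale=5, size=(200, 2))  # uV on (site1, site2)
unit_b = rng.normal(loc=[35, 75], scale=5, size=(200, 2))
spikes = np.vstack([unit_a, unit_b])

# Minimal 2-means clustering on the amplitude profiles.
centers = np.array([spikes[0], spikes[-1]])  # init with one spike from each end
for _ in range(10):
    labels = np.argmin(((spikes[:, None] - centers) ** 2).sum(-1), axis=1)
    centers = np.array([spikes[labels == k].mean(0) for k in range(2)])

print("cluster centers (uV):\n", centers.round(1))  # recovers the two profiles
```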

Most multi-unit recording is performed using “tetrodes”: four small wires wrapped around each other to give you four recording sites near each other. They can be implanted in arrays in the ballpark of 30-100 tetrodes. These are still very much in use and are producing some beautiful data. Buzsaki is developing an alternative: Micro-Electro-Mechanical System (MEMS)-based recording devices. These are silicon probes created using the techniques established for creating microchips, integrated circuits, etc. (I don’t know how this works; I assume some of you do). These have the advantages of being thinner, producing less tissue damage, giving more refined control over the spatial configuration of the recording surfaces, and allowing some on-chip processing (i.e., signal amplification).

I won’t dazzle you with my poor understanding of the algorithms he is developing to sort out the unit isolation and identification problems. One of the major issues, as far as I could tell, is that extracellular potentials don’t always come from the axon hillock at the cell soma; they can instead be generated by activity spikes in the dendrites, so algorithms that treat units as a point source miss the complexity of electrical signals coming from a ramified, polar cell type. Another issue is the classification of cell types. You can begin to guess at an inhibitory connection when you see a spike from one unit followed almost immediately by decreased activity in a separate unit. Those sorts of things. There is incredible diversity, especially in the inhibitory neuron population. Neurons can be classified in terms of their spiking patterns and forms and the effect of their spikes on other neurons. This last classifier is only achievable using multi-unit techniques.

The book is recommended. I’ll hit you back with the mp3s soon I hope. I have nothing to report about ATL. No playas playin, no ridin on them thangs.. Looks mostly like rainin on my thangs, like everyday.


Another aspect of the Raab-Graham et al. paper that might be missed in the flurry of different pharmacological agents and signaling pathways is that the synthesis of Kv1.1 is actually increased in response to a manipulation that normally inhibits translation. Rapamycin is the drug of interest here. It inhibits a protein called mTOR (mammalian target of rapamycin). There is still some debate about what mTOR does exactly, but suffice it to say that it activates other signaling molecules downstream. The end result of mTOR signaling is to promote cap-dependent translation. Most mRNAs have a structure on the head end called a 5′ cap. It is basically a nucleotide put on backwards, but it can be recognized by proteins involved in the initiation of protein synthesis. Recognition of the cap by these proteins is usually necessary to get the mRNA hooked up with the ribosomes. So rapamycin should be inhibiting this process and thus inhibiting translation initiation for most cellular mRNAs. Note that other protein synthesis inhibitors that act further downstream do decrease the rapamycin-induced increase in Kv1.1.

We don’t get an explanation for this counterintuitive regulation mechanism. The simplest explanation would be that Kv1.1 has an internal ribosomal entry site (IRES). IRESs are common in viral transcripts. Their mechanism is not fully understood, but they somehow allow ribosomes to jump in and bind mRNAs without having to deal with caps or cap-binding proteins. When cap-dependent translation is inhibited, transcripts capable of cap-independent translation get all the goodies; RNAs with IRESs will be at an advantage. It is not unprecedented for important neuronal mRNAs to carry these elements. For instance, Pinkstaff et al. found functional IRESs in five dendritic mRNAs (including Arc and CaMKII). Unfortunately for this hypothesis, Raab-Graham et al. found no evidence for an IRES in Kv1.1 and showed that the 5′ UTR (where IRESs usually hang out) was not necessary for the rapamycin regulation. This suggests that rapamycin is having its effect through some signaling pathway besides the one responsible for cap-dependent translation.

Rather than dwell on Kv1.1, since we don’t know the answer, I thought I’d show you a couple more examples where translation is inhibited but certain RNAs do extremely well. I think the mRNA for CaMKII must contain every type of regulatory element and must be controlled by every possible pathway. In this paper, Scheetz et al. used a fairly unusual in vitro preparation, one that should mostly just carry the synaptic compartments of dendrites, to study the effects of NMDA receptor activation on protein synthesis. As you may know, the classic story is that when two neurons fire at the same time, NMDA receptors are activated on the post-synaptic neuron, allowing calcium into the cell and initiating signaling that will lead to plasticity. Scheetz et al. found that in the first two minutes following NMDAR stimulation there was actually a drop in global protein synthesis, accompanied by an increase in activity of a protein called eEF2 kinase and an increase in CaMKII synthesis. The activation of eEF2 kinase slows down the elongation phase of translation. The idea here is that mRNAs that are good at initiating translation may actually be at a disadvantage during this state because they get stuck further down the road. This would shift the balance toward mRNAs that have a hard time with initiation, potentially like CaMKII. One cause of initiation difficulty may be complicated secondary structures near the beginning of the transcript that make it hard for ribosomes to scan down to the start codon (the codon that signals the first amino acid in the newly forming protein).

Speaking of start codons, one further example where global protein synthesis inhibition can actually be good for specific translation is that of the GCN4 upstream open reading frames. This mechanism hasn’t been shown to play a role in plasticity-related protein synthesis yet. Most of the details have been worked out in yeast. In response to amino acid starvation, an enzyme called GCN2 is activated. It in turn phosphorylates (sticks an electronegative, function-altering phosphate group on) a protein called eIF2alpha. When eIF2alpha is phosphorylated, it inhibits production of a key component in translation initiation (called the ternary complex), so it is harder to initiate translation. The ternary complex carries the first amino acid of every protein, methionine, which matches up to the start codon, AUG. The mRNA for GCN4 has several AUGs that are not the start codon for the actual protein. These AUGs are associated with upstream open reading frames (uORFs), sections of mRNA sequence that code for little chunks of protein that don’t do any good. These are like 8- or 9-amino-acid peptides we’re making here. When amino acids are around and all is well with the cell, ribosomes start at the cap of GCN4, find uORF1, translate it, find another uORF downstream, translate that, and fall off. They don’t make it to the actual protein-coding part of the transcript. When initiation is inhibited, they can’t get a new ternary complex in time to translate the later uORFs; they finally get a new ternary complex in time to read the actual open reading frame of the GCN4 gene. So once again, global protein synthesis is turned down, but an mRNA with a funky 5′ end gets the advantage.
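The scanning-and-reinitiation logic is mechanical enough to sketch in code. This toy model is my own gloss on the description above; the AUG positions and recharge distances are invented, and real reinitiation control is quantitative rather than all-or-none:

```python
# Toy model of the GCN4 upstream-ORF logic: after finishing a uORF, a scanning
# ribosome must pick up a new ternary complex before it can initiate again.
# When ternary complex is scarce (starvation), ribosomes scan PAST the later
# uORFs and finally reinitiate at the real GCN4 start codon.

AUGS = [(50, "uORF1"), (150, "uORF4"), (350, "GCN4")]  # invented positions (nt)

def scan(recharge_dist):
    """recharge_dist: nucleotides scanned before a new ternary complex arrives.
    Small = amino acids plentiful; large = starvation (eIF2alpha phosphorylated)."""
    ready_at = 0                  # position at which the ribosome can initiate
    for pos, name in AUGS:
        if pos >= ready_at:       # ternary complex on board: initiate here
            if name == "GCN4":
                return "GCN4 translated"
            ready_at = pos + recharge_dist  # translated a uORF; must recharge
        # else: not recharged yet, so scan right past this AUG
    return "fell off after the uORFs"

print("fed     :", scan(recharge_dist=50))   # reinitiates at uORF4, misses GCN4
print("starved :", scan(recharge_dist=250))  # skips uORF4, initiates at GCN4
```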

I find it interesting to consider that dendritic protein synthesis following synaptic activity might actually have two components. In the first couple minutes after stimulation, global synthesis might be downregulated while certain RNAs, with special attributes that normally weaken their translation, are upregulated. In the minutes to hours afterward, global synthesis is upregulated and more general synapse-building proteins are manufactured. The synthesis could come in waves coordinated by complex secondary structure and ribosome-obstructing elements in the 5′ untranslated regions of certain key mRNAs. Strangely, the intriguing Scheetz et al. finding, released back in 2000, has not been followed up. One wonders if others have tried and been unsuccessful, or what.


Nature made a genetics podcast. I can’t tell if this is a special feature just for the ASHG meeting or if it will continue.


The recent Science paper by Raab-Graham et al. that rosko linked shows dendritic synthesis of a protein modulated by various pharmacological agents. A review by Sutton and Schuman came out in Cell on the same day and serves as a good guide to the importance of dendritic protein synthesis in synaptic plasticity and memory. Raab-Graham et al. speaks to one of the issues in the S&S review, namely, how to identify proteins that are actually being synthesized in dendrites. What follows is the Readers’ Digest guide to key issues identified by S&S and an interjection showing where the Science paper fits in.

Why are we concerned with local/dendritic protein synthesis? Well, first, why are we concerned with protein synthesis at all? Memories appear to require protein synthesis for permanent storage. People have studied this in numerous learning paradigms by injecting protein synthesis inhibitors into slugs or rodents at different times after training sessions. Without going into specifics and caveats, you can generally take it that these inhibitors are only effective in disrupting the memory if they are administered within the first couple or three hours after training. The same goes for cellular models of learning and memory (i.e. long-term potentiation, LTP). So in order to stabilize a nascent memory new proteins must be synthesized.

The end result of all of this protein synthesis must be the modification of synapses. We know that not all of the synapses on a neuron are potentiated in response to stimulation, so we need the new proteins to act only at certain synapses, presumably synapses that are in some way related to the ones that drove the protein synthesis. The proteins could either be generated in the soma and trafficked to the proper portions of the dendrite, or they could be synthesized from preexisting mRNAs in the dendrites. You can see some advantages to the latter proposal, as all of the action can happen in one little area without expending energy to carry around all those proteins and figure out a zipcode system to make sure they end up at the right inputs. In some ways it is similar to the contrast between using snailmail to ask for a reprint and receiving the PDF via email so you can print it off yourself next to your desk.

Not to mention, as S&S lay it out, polyribosomes (protein printers) were shown in dendrites by Steward and Levy using electron microscopy. It remains possible that they are there performing a housekeeping role and that plasticity-related synthesis is carried out in the soma. We do know that dendritic synthesis can do the job of maintaining LTP into the protein synthesis-dependent phase. Some of the best demonstrations involve microsurgical dissection where the dendrites are physically separated from the cell body prior to LTP induction. Others have shown that restricting protein synthesis inhibition to the dendrites still blocks late-phase LTP.

It has been more difficult to show the necessity of dendritic protein synthesis in live, behaving animals. The closest anyone has come was a transgenic mouse carrying a mutated version of the alpha subunit of calcium/calmodulin-dependent protein kinase II (CaMKII), one of the leading candidates for the plasticity-induced, synapse-altering protein. The mouse lacked a chunk of the CaMKII mRNA that doesn’t code for protein but instead carries a dendritic localization signal. So there can be no local synthesis of this really important synaptic protein, because the mRNA isn’t there locally to synthesize from. These mice had impaired memory. But alas, this mouse and all the other mice that have been used to study protein synthesis and memory are second-generation transgenics, meaning that the genetic manipulation is not tightly restricted in the temporal domain. Compensation is the rule, not the exception, so if we screw up the system, I expect lots of things besides CaMKII mRNA localization were altered in this mouse. In memory research, timing is everything. Not only could the mouse just be globally weird, but the manipulation could be affecting the acquisition, storage, maintenance, or retrieval of the memory.

S&S suggest a number of areas where basic knowledge related to the issue of local protein synthesis is lacking.

1) What mRNAs are in dendrites? Should we be identifying each one individually, or will microarrays using samples that ostensibly carry only dendritic or synaptic mRNAs do the trick? It’s a classic quantity vs. quality issue in my mind. It would be nice if you could do a neuronal cell line array like people have been doing with yeast arrays. I can imagine a library of mammalian cells (either a neuronal line or a stem cell line treated with neuronal differentiation cues) containing MS2 tags in each mRNA and an MS2-binding protein-GFP hybrid. Sorry, I’m indulging myself. There is probably a more efficient solution than what immediately pops into my head.

2) Which proteins are actually synthesized in dendrites? This is the part that Raab-Graham et al. is getting at. It is not good enough to show which proteins can be found in dendrites; you have to show that they appear there in response to synaptic activity. The CaMKII mouse mentioned above provides one way to examine the issue. Restricting the mRNA to the soma led to a large decrease in synaptic CaMKII. This is reasonable evidence that synaptic CaMKII comes from dendritic CaMKII mRNA. Also, the amount of CaMKII in dendrites far from the cell body jumps up following stimulation, so fast that it can’t have been transported from the soma. This protein, and now Kv1.1, are the only ones that have undergone this type of analysis. The CaMKII studies didn’t require any photoconvertible protein hybrid, though, so I’m a little uncertain as to why Raab-Graham et al. had to go through all that technical trouble. Also, if my understanding is correct, they could have reached the same conclusions using FRAP (fluorescence recovery after photobleaching). Rather than looking for a new green signal after converting everything to red, they could have simply looked for a new green signal after bleaching out all existing fluorescence. Paging Dan Dright.

3) How specific is LTP? While many use the terminology, S&S note that we haven’t shown that LTP is synapse-specific. When ribosomes are found in the dendrites, they are found near synapses, at the bases of dendritic spines, not in synapses. Synaptic tagging experiments have shown that proteins synthesized due to strong stimulation at one input can be ‘captured’ by other synapses receiving weak stimulation. So the locally synthesized proteins aren’t all that faithful to the inputs that led to their creation. Some have used two-photon glutamate uncaging to try to stimulate individual synapses, but still it is not clear that the changes are restricted to just those receiving stimulation. Perhaps multiple nearby synapses are affected at once. There is some evidence that synapses near each other might at least share input modality, so you could potentially still encode a specific association between stimuli using clustered plasticity.

4) One last issue, and I’ll keep it short. Protein synthesis inhibitors usually only have effects in the first few hours after training at best, while memory persists for weeks, months, or years. Proteins turn over, though. We really just don’t know how fast most synaptic proteins are degraded. They could last long enough to do the job of maintaining memory. If not, perhaps the memory can be maintained for the amount of time it takes for the inhibitors to wear off. You can’t really address this question by extended protein synthesis inhibition because you will eventually end up harming the cells irreversibly. So we need to measure some protein half-lives.

A few of these questions seem like they would be trivial for people outside of the neuroscience community, people who have expertise in high-throughput biology. It’s not clear to me why their wild, ever-shifting attention hasn’t landed on synaptic plasticity yet, but if any of you are reading this, maybe you could just get some physiologist to hand you tissue (preferably synapse-enriched) on a time course after LTP. Run a microarray. Do some mass spec proteomics. Tell us which proteins there are more or less of before and after. That’s not that big a deal, right? I’m pretty sure I can find you the tissue if you are having a hard time.


Pasko Rakic (cited in the Lahn paper that JP just linked) has studied the role of programmed cell death (apoptosis) in determining the complexity of the mammalian neocortex. The neocortex is the outer shell of your brain; it has a six-layered structure and has ten times the surface area in humans compared to macaques. When brains get that much surface area they have to become convoluted and get all those gyri and fissures that make the brain look like a bowl of noodles. Mouse brains are, by contrast, smooth. However, when Rakic and colleagues knocked out the caspase-3 and caspase-9 genes in mice (the former being identified as *hot* by Lahn), their brains got all noodly. There is obviously more to it than this, however, because the mice weren’t supergeniuses. I think they died pretty quickly, actually.

This all relates back to Rakic’s radial unit hypothesis of cortical development and evolution. During corticogenesis, cells that are destined to be neurons are produced by asymmetric division of radial glia cells. You might describe the fated neurons as budding off the radial glia mother cell. Radial glia are named thusly because they have long processes that radiate out from the neuronal birth area, closer to the middle of the developing brain, and attach to the outer areas where the neocortical cell layers will form.

When the neuron is born it grabs onto the radial glia and starts climbing. It climbs past previously born neurons, so the neocortex develops in an inside-out fashion. Neurons that climbed up the same radial glia end up as part of the same functional unit, perhaps even acting as one big electrical component, since they are more likely to be connected by gap-junctions.

Rakic proposed that the size of the neocortex is determined by the number of radial units and thus the key developmental step for producing one of the most noticeable differences between humans and other primates will be radial glial proliferation. There are several factors that could affect the number of radial glia. I’ll let him tell you what they are. Here is the free paper, skip to the last section:

A comparative embryological analysis of telencephalic development led to the proposal of the radial unit hypothesis of the ontogenetic and phylogenetic expansion of the cerebral neocortex at the cellular level (Rakic, 1988). According to this hypothesis, the expansion of the surface of the cerebral cortex is accompanied not only by the proportional increase in the number and length of RG cells, but also by the earlier onset of their differentiation, as well as an increase in duration of the G1 phase of the cell cycle and their longevity during individual embryonic development (Kornack and Rakic, 1998).

Recent experiments on embryonic brains in which the number of founder cells is manipulated by either the reduction of programmed cell death (Kuida et al., 1996; Haydar et al., 1999) or an increase in cell proliferation (Chenn and Walsh, 2002, 2003) support this hypothesis. In both cases the number of founder cells in the early VZ increases, resulting in a larger number of proliferative units that generate a corresponding number of radial minicolumns that enlarge the cortex in surface and create convolutions in the normally smooth (lissencephalic) mouse brain. Thus, an increase in the number of founder cells leading to the larger number of radial units in the neocortex is clearly correlated with the evolution of RG scaffolding.


I always say I don’t want to have kids because I don’t want that much of an anchor keeping me from doing important things, and I don’t want my significant other’s attention drawn away. Plus, I’m giving the finger to those genes for trying to manipulate me. I was arguing with my friend about whether this is rational as far as life satisfaction goes. Looking for data to shore up my side, I found a report (pdf) about Marriage, Children, and Subjective Well-being. I’m not sure my side got shored, but it’s interesting data:

Thus we find that:
(i) age exhibits the U-shaped relationship with life satisfaction found in multivariate research employing large samples (with life satisfaction lowest at around 44 years for men and 42 years for women);
(ii) life satisfaction is lowest for persons from a non-English speaking background, especially women, and especially if their English language speaking ability is poor;
(iii) the effects of education on life satisfaction, while relatively small, are negative, possibly the result of high aspirations that have yet to be met (Clark and Oswald 1994);
(iv) levels of life satisfaction are strongly affected by the presence of health conditions and disabilities that limit activity;
(v) persons not in employment but who are actively looking for work (that is, the unemployed) express the highest levels of dissatisfaction, while the most satisfied are persons who are neither employed nor looking for work (so long as this situation is not the result of poor health);
(vi) the presence of persons other than immediate family members in the household has predictable effects, with children enhancing satisfaction of men but reducing it for women, and adults enhancing satisfaction of women but not men;
(vii) satisfaction levels rise with household income per head, though the magnitudes of the estimated coefficients suggest that the effect is relatively small and that very large increases in income are required to raise life satisfaction scores by even one point on the scale;
(viii) homeowners tend to be more satisfied with their lives than renters;
(ix) religion tends to be an influence that enhances life satisfaction;
(x) persons who are more forward looking in their financial planning and savings behaviour are more satisfied, though the effect is only pronounced among women; and
(xi) more stable home environments when young (as represented by living with two parents at age 14) are associated with greater levels of life satisfaction.

SUMMARY
• Couples are, on average, much more satisfied with their lives than single persons.
• Any difference in life satisfaction between married couples and cohabiting couples is small and confined to long-standing relationships.
• Differences in life satisfaction between formerly married persons and other single persons are only marked for women, and even then the reported life satisfaction scores of most of these women have almost completely recovered to the level of other single women by the time divorce is finalized.
• Remarriage appears to benefit men more than women, with the life satisfaction of married men rising with each subsequent marriage. In contrast, women are no more (or less) happy in a second marriage.
• Life satisfaction declines with the number of dependent children living at home but rises with the number of adult children who have left home.
• Dependent children who live elsewhere have a depressing effect on life satisfaction (though large standard errors mean relatively little confidence can be attached to this result).
• The negative effects of young dependent children are very large for single parents but non-existent for married mothers.


In case you’re not enjoying it already, allow me to bring your attention to Married to the Sea. I find the more juvenile stuff the funniest, but here’s some science/evolution related to justify the post. Personal favorite.


The Daily Transcript reports on a PNAS study showing that transcription occurs in bursts. Transcription factors (regulators of when a gene is “on” or “off”) are often characterized as ‘activators’ or ‘repressors.’ The paper suggests activators may instead be stabilizers. Genes are always flipping back and forth between different levels of on and off states, and when transcription factors bind they can hold one state steady. When upregulation happens, a gene isn’t ‘turned on’; it is just kept on. He’s got a nice little def. of transcription on the left there. You are eventually gonna need to know what transcription and translation are. Might as well be now.
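A toy simulation makes the stabilizer picture concrete. Below is a minimal two-state (“telegraph”) gene in which the activator’s only job is to lengthen the dwell time of the on state; all rates are invented, and this illustrates the concept, not the PNAS study’s actual model:

```python
import numpy as np

# Two-state ("telegraph") gene: the promoter flips between ON and OFF, and
# transcripts are made only while ON. A "stabilizer" does not raise the ON
# transcription rate; it just makes the ON state last longer (smaller k_off).

rng = np.random.default_rng(3)

def simulate(k_off, k_on=0.5, make_rate=10.0, t_total=200.0):
    """Gillespie simulation; k_off is the rate of leaving the ON state."""
    t, on, mrna = 0.0, False, 0
    while t < t_total:
        rate = (k_off + make_rate) if on else k_on
        t += rng.exponential(1.0 / rate)        # time to the next event
        if not on:
            on = True                           # promoter flips on
        elif rng.random() < k_off / rate:
            on = False                          # promoter flips off
        else:
            mrna += 1                           # fire one transcript
    return mrna

print("unstabilized:", simulate(k_off=2.0))
print("stabilized  :", simulate(k_off=0.2))    # same ON rate, longer ON dwells
```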

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

The spotlight this week: epigenetics[1] and non-coding RNAs. I encourage everyone to check out the new Cell and find “the first genome-wide high-resolution mapping of DNA methylation and the first systematic analysis of the role of DNA methylation in regulating gene expression for any organism” and a new microRNA target prediction method (rna22) that makes big (perhaps too radical?) predictions about the percentage of mRNAs regulated by this pathway.

I haven’t even had time to read the methylation paper yet. It looks like tiling arrays are used, which makes this the third instance of that technique I’ve come across in the past week. Basically, this is an array that contains little chunks of sequence making up all of the non-repetitive parts of the genome. You can then wash some sample over it and see which chunks of sequence on the array get sample stuck to them through hybridization. It looks like in this paper they separated the methylated from the unmethylated DNA and hybridized each to the tiling array. Scanning the paper, I note that when methylation occurs within the coding portion of a gene, the gene is likely to be expressed, whereas when methylation occurs in the promoter region, the gene is likely to be controlled in a tissue-specific manner. Also, they present some evidence downplaying the role of microRNAs in transcriptional regulation through guided methylation, which was getting some buzz two or three months ago. BTW, this study was performed in a plant genome, Arabidopsis thaliana to be exact.
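If the array logic is opaque, here is a toy Python sketch of the per-probe comparison involved: hybridize the methylated and unmethylated fractions, then score each probe by the log ratio. The probe names, intensities, and threshold are all invented.

import math

# Hypothetical per-probe intensities: (methylated fraction, unmethylated
# fraction) hybridized to the same tiling-array probe.
probes = {
    "chr1:1000-1025": (950.0, 210.0),
    "chr1:1025-1050": (880.0, 240.0),
    "chr1:1050-1075": (130.0, 890.0),
}

for name, (meth, unmeth) in probes.items():
    ratio = math.log2(meth / unmeth)           # enrichment of methylated DNA
    call = "methylated" if ratio > 1.0 else "unmethylated"
    print(f"{name}  log2 = {ratio:+.2f}  -> {call}")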

On the other hand, microRNAs in translational regulation are still getting played up as a major force. I won’t pretend to understand all of the pros and cons of rna22’s algorithm, but the authors do some false-positive and sensitivity analysis and predict that it will find 1 false-positive binding site per 10,000 nucleotides and will discover 83% of real binding sites. With those rates in mind, consider the number of binding sites their algorithm predicts in the human genome. Conventional wisdom is that microRNAs are most likely to bind to the 3′ untranslated region of mRNAs, so that is where you should look for binding sites. 92.3% of 3′ UTRs in the human genome contain one or more “target islands” according to rna22. Even better, 99% of coding sequences in the human genome do the same. That result almost seems outlandish to me, but I don’t really have the expertise to evaluate it.
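To put those error rates in perspective, a quick back-of-envelope calculation in Python (the genome size is my round number, not the paper’s):

genome_nt = 3.2e9        # rough human genome size
fp_per_nt = 1 / 10_000   # the paper's estimated false-positive rate
sensitivity = 0.83       # fraction of real sites recovered

print(f"false positives genome-wide: ~{genome_nt * fp_per_nt:,.0f}")
print(f"real sites missed: {1 - sensitivity:.0%}")
# ~320,000 spurious sites: the 92.3% (UTR) and 99% (coding) figures
# have to be weighed against a background this large.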

When the Schratt et al. paper came out earlier this year I was totally hyped on it. One microRNA (miR-134) was found to control LIMK-1 expression and thus dendritic spine morphogenesis. The translational repression of LIMK-1 was released in response to brain-derived neurotrophic factor (BDNF, associated with LTP and memory and all that jazz). I thought, “Maybe miR-134 has multiple synapse-related targets that are co-upregulated by release from microRNA inhibition in the face of synaptic activity.” I was thinking in the 50–100 range. This paper predicts 2,318 targets for miR-134. There are not that many dendritically localized RNAs, so my little theory is at best incomplete.

Finally, everything you knew about microRNA-target hybridization is wrong. Many heuristic-based approaches have focused on the “seed region” of miRNAs for target recognition. The idea is that it is particularly important for the first 7 or so nucleotides of a miRNA to match its target sequence and bind effectively. This paper says good seed-binding can still lead to crappy translational repression, and poor seed-binding (weird base pairs or nucleotides with no binding partner at all) can still lead to strong repression. So everything right is wrong again and we can all go back to the drawing board, but at least now we have IBM on our side.
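For reference, the seed heuristic these results undermine is simple enough to sketch in a few lines of Python; the sequences below are toy examples chosen so that the match succeeds.

COMPLEMENT = {"A": "U", "U": "A", "G": "C", "C": "G"}

def seed_site(mirna):
    """Reverse complement of miRNA positions 2-8, i.e. the canonical
    'seed match' expected in a target mRNA (all sequences 5'->3')."""
    seed = mirna[1:8]
    return "".join(COMPLEMENT[nt] for nt in reversed(seed))

mirna = "UGAGGUAGUAGGUUGUAUAGUU"          # let-7-style toy sequence
utr   = "AGCACUAUACAACCUACUACCUCAGGA"     # toy 3' UTR
site = seed_site(mirna)
print("expected seed match:", site)
print("found in UTR:", site in utr)       # the heuristic the paper questions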

[1] I’m a little concerned that people won’t understand the connection between epigenetics and DNA methylation. Lately, when I see the term, it seems to refer to any sort of heritability that can’t be directly attributed to DNA sequence. When I first learned the term, it was primarily in relation to the molecular modifications that can occur around DNA. For instance, DNA can be methylated, and histones (the proteins that DNA wraps around to condense) can be acetylated, methylated, or phosphorylated. People were very concerned with the methylation states of chromosomes and how they were modified by paternal and maternal imprinting. Of late, there seems to be increasing focus on potential environmental effects on the germline genome, which are epigenetic in the broad sense but may or may not be in the DNA-methylation sense. Maybe I’m the only one who finds this distinction necessary/troublesome.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

If the 24-hour science news cycle hasn’t knocked it out of your memory banks yet, recall that there was some hubbub about a non-coding RNA (ncRNA) being a candidate for the fastest-evolving human gene. There was some discussion about whether ncRNAs might be something to keep an eye on and why RNAs might be particularly good at evolving quickly. This review from March had some factoids worth mulling over.

There are more ncRNAs than you thought:

  • Half of the “full-length long Japan” library of human cDNA clones appear to be non-coding. Anti-jargon: cDNA (complementary DNA) is sequence read off of RNA backwards. This group tried to take a very large scale unbiased picture of the RNAs floating around in human cells and did bioinformatics to guess whether they coded for protein or not.
  • The FANTOM 3 consortium sez that 62% of the mouse genome is transcribed. Half of these transcripts are non-coding.
  • MicroRNAs are a subset of ncRNAs that regulate other RNAs. 70% of microRNAs can be found in the brain. There’s a big section in the review on particular brain-specific miRNAs and potential roles in development.
  • There are seven brain-specific small nucleolar RNAs (snoRNAs). These are RNAs that guide the enzymatic chemical modification of nucleotides in other RNAs. There is a particularly provocative connection between snoRNAs and serotonin receptor subtype processing in Prader-Willi Syndrome, a form of mental retardation arising from deletion of a chunk of chromosome 15.
  • LINE-1 retrotransposons make up 17% of the human genome. These are the so-called “jumping genes”. They code for the proteins that reverse transcribe them back into the genome, so it’s relatively easy for them to proliferate. Technically they aren’t non-coding, but the RNA is used in a nontraditional way, serving as the template for copying the element back into the genome. There is some evidence for a role for active LINE-1s in neural differentiation.

The authors say ncRNAs might evolve more quickly because they can be duplicated with less probability of harm than coding RNAs. I’m not so sure about their argument. I’ll buy the first point: ncRNAs are generally smaller and can be duplicated with greater fidelity. But they then state that their small size also decreases their likelihood of disrupting existing genes, and it seems to me that size shouldn’t matter. If a chunk of DNA gets copied into the middle of some gene, the gene should be disrupted regardless of how big the chunk is. Maybe the consequences could be worse in the specific case where the duplication lands between the regulatory elements and the transcription start site, so that regulation is more disrupted the further the promoter is pushed from the gene, but this seems like it would be uncommon.
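Here is a toy Python simulation of that counterpoint, with invented genome numbers: under a simple point-insertion model, the chance of disrupting a gene tracks the genic fraction of the genome and never sees the insert’s size.

import random

random.seed(1)
genome = 1_000_000
genes = [(s, s + 10_000) for s in range(0, genome, 50_000)]   # 20% genic

def disrupted(insertion_point):
    return any(lo <= insertion_point < hi for lo, hi in genes)

for insert_size in (100, 10_000):        # note: size never enters the math
    trials = 10_000
    hits = sum(disrupted(random.randrange(genome)) for _ in range(trials))
    print(f"{insert_size:>6}-nt insert: gene disrupted {hits / trials:.1%} of the time")
# Both sizes come out near 20%: only the landing site matters here,
# which is the weakness in the "small is safer" claim.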

I stated a while back that one reason for expecting a lot from RNAs over protein is that RNAs used to do the whole job back in the good ole days of the RNA World. Surprisingly, I was not the first brilliant, original thinker to arrive at this conclusion. In fact, these folks end by referencing Sean Eddy, who wrote a whole review about the importance of ncRNAs and how the RNA World isn’t over. From his review:

The discovery of RNA catalysis and the “RNA world” hypothesis for the origin of life provide a seductive explanation for why rRNA and tRNA are at the core of the translation machinery: perhaps they are the frozen evolutionary relic of the invention of the ribosome by an RNA-based ‘riboorganism’. Other known ncRNAs have also been proposed to be ancient relics of the last riboorganisms [123–125]. The romantic idea of uncovering molecular fossils of a lost RNA world has motivated searches for new ncRNAs. However, as these searches start to succeed, more and more ncRNAs are being found to have apparently well-adapted, specialized biological roles. The idea that ncRNAs are a small and ragged band of relics looks increasingly untenable. The tiny stRNAs and miRNAs, for example, seem to be highly adapted for a world in which RNAi processing and developmentally regulated mRNA targets exist.

Therefore, consider an alternative idea – the “modern RNA world”. Many of the ncRNAs we see in fact have roles in which RNA is a more optimal material than protein. Non-coding RNAs are often (though not always) found to have roles that involve sequence-specific recognition of another nucleic acid. (The choice of examples in Figs 1, 2 and 4 is deliberate, showing how snoRNAs, miRNAs and E. coli riboregulatory RNAs all function by sequence-specific base complementarity.) RNA, by its very nature, is an ideal material for this role. Base complementarity allows a very small RNA to be exquisitely sequence specific. Evolution of a small, specific complementary RNA can be achieved in a single step, just by a partial duplication of a fragment of the target gene into an appropriate context for expression of the new ncRNA.
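Eddy’s specificity point is easy to quantify: the chance that a given k-mer matches a random site is 4^-k, so expected chance matches fall off steeply with length. A quick Python sketch, using a ballpark transcriptome size of my own choosing:

transcriptome_nt = 4e8          # ballpark target space, for illustration

for k in (7, 12, 17, 22):       # 7 ~ a miRNA seed, 22 ~ a whole miRNA
    expected = transcriptome_nt * 4.0 ** -k
    print(f"k = {k:2d}: ~{expected:.3g} chance matches expected")
# A 7-nt seed matches tens of thousands of sites by chance alone,
# while a full-length complement is effectively unique.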

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Hit it up if you have access. For the less fortunate:

Were you surprised at the reaction to the book?

Yes. The human past is a touchy subject because many people use it – quite misguidedly in my view – to reason from what was to what ought to be. You mustn’t say people practised cannibalism in the past because that would justify cannibalism today. Despite its absurdity, this argument makes almost every attempt to reconstruct the past controversial.

In my book I tried to let the facts speak for themselves, a somewhat more original idea than it may sound, because some writers about the deep past, like the otherwise very readable Jared Diamond, start with explicitly political premises and adduce facts to support them. I cannot see that this is a justifiable scientific procedure, the popularity of Guns, Germs and Steel notwithstanding. Having compiled my apolitical account, I figured the conclusions that had emerged would be about equally vexatious to the right and the left. But so far, and I hadn’t expected this, the book has had more attacks from the left, particularly for the lèse-majesté of saying our recent ancestors, far from being noble savages, were a lot more savage than we are.

What has been the reaction of the scientific community?

Many people have been kind enough to tell me they liked the book, though I wasn’t sure how to interpret the comment of one biologist who said he read it on nights when he couldn’t get to sleep. I’ve been a little disappointed it hasn’t received more reviews from scientific journals because it has enough references for scientists to follow the technical background. Both Nature and Science assigned the book for review, but to dreary ideologues who assailed my failure to discover that political correctness has been evolution’s guiding principle all along, though fortunately they managed to find no other errors. I think these journals would have served their readers better with apolitical reviews.

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Techniques like this on a larger scale might help get a handle on nervous system evolution. They took several mouse lines expressing fluorescent proteins under the control of different promoters and isolated individual cells from brain regions like the cingulate cortex, somatosensory cortex, hippocampus, and amygdala. They then used a microarray to look at the levels of ~13,000 mRNAs within each cell type and measured the distances between the expression profiles to generate a tree. The major distinction between excitatory and inhibitory neurons (glutamatergic vs. GABAergic) was the first split. Cortical pyramidal neurons (the bulk of cortical neurons) were the most closely related by this measure. There are pyramidal neurons in the hippocampus as well, and these weren’t too far from the cortical version. It is interesting to see how closely the neocortex stuck to the template of allocortex cell types as it evolved. Lots of interesting tidbits. The genes that are expressed at different levels in different cell types tend to be part of gene families. The authors claim this provides support for the notion of gene duplication loosening the clamp of selection a little bit. I’ve been curious about how much one can generalize experimental results from hippocampal to neocortical pyramidal cells and vice versa. I’ll tell you why if I ever find time to discuss these papers.
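The tree-building step is standard hierarchical clustering on distances between expression profiles. A minimal Python sketch with simulated data standing in for the paper’s measurements (the labels and numbers are mine):

import numpy as np
from scipy.cluster.hierarchy import linkage
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
labels = ["ctx_pyramidal", "CA1_pyramidal", "interneuron", "amygdala_nrn"]
profiles = rng.poisson(lam=20.0, size=(4, 13_000)).astype(float)
profiles[1] = 0.8 * profiles[0] + 0.2 * profiles[1]   # pyramidal cells alike

dist = pdist(profiles, metric="correlation")  # 1 - Pearson r between profiles
tree = linkage(dist, method="average")        # agglomerative clustering
print(tree)   # each row merges two clusters; scipy's dendrogram() draws it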

Neuronal cell-type dendrogram below the fold:

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

I read Endless Forms, and I read Coming to Life by Christiane Nüsslein-Volhard. But I still don’t feel like I’ve got anything like a grip on development. Endless Forms avoided specific terminology too much IMHO. I don’t think it necessarily helps anyone’s understanding to make up a new term like “tool-kit genes” in place of “transcription factor.” So I didn’t learn all that much from Mr. Carroll. Coming to Life was a little less lay-book-ey, but it still left out the details, and I felt there wasn’t an overarching theme. It read like a list of facts: a description, at a somewhat general level, of what happens during development.

I think Dr. Davidson’s new book, The Regulatory Genome: Gene Regulatory Networks In Development And Evolution, might be the solution I’m looking for. But since it is more like a textbook it comes at more of a textbook price: $70 new, ~$60 on Amazon. Dunno how quick I can get it at a library. Anyway, here is some from the book review in Nature Genetics:

The book begins with an introduction of his idea of the regulatory genome or the regulatory apparatus encoded in the genome. In the second chapter, he explains in detail cis-regulatory modules and the structural and functional basis of regulatory logic. Here the author emphasizes that comparison of genome sequences of different animals (or species) is helpful to identify highly conserved cis-regulatory modules. After a brief explanation in chapter 3 of animal development as a process of regulatory state specification, Davidson argues persuasively that cis-regulatory modules act as networks and that the gene regulatory networks are the key to understanding embryonic development (chapter 4) and evolutionary construction of various animal forms (chapter 5). It is also emphasized that computational and systems biological approaches are essential to create the networks. Every step of his logic is presented with examples to explain his idea, with beautiful, well-designed color figures. His concept of the significant roles of gene regulatory networks in development and evolution can be clearly understood using the aforementioned key terms. Namely, animal embryogenesis seems to be established by a complex combination of input and output linkages, plug-ins and differentiation gene regulatory batteries, usually in this order. The diversity of animal forms may be explained in terms of core kernels, alteration in deployment of plug-ins, input and output linkages and differentiation gene regulatory batteries. This order is important for understanding animal evolution at the level of phylum, class, order and family, respectively.
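To make the vocabulary concrete, here is a toy Boolean network in Python, in the spirit of (though not taken from) the book: a self-maintaining kernel deploys a plug-in, and the two together switch on a differentiation gene battery.

rules = {
    "kernel_A":  lambda s: s["kernel_A"],                     # self-maintaining core
    "plugin_B":  lambda s: s["kernel_A"],                     # deployed by the kernel
    "battery_C": lambda s: s["kernel_A"] and s["plugin_B"],   # differentiation genes
}

state = {"kernel_A": True, "plugin_B": False, "battery_C": False}
for step in range(4):
    print(step, state)
    state = {gene: rule(state) for gene, rule in rules.items()}
# The battery only fires after the kernel has deployed its plug-in,
# echoing the "usually in this order" cascade described in the review.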

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

Here is a repository of links to nerdy nerdy rapping. Seems like most of the expert nerdiness is computer-related. MC++ and Frontalot are probably the best known. Frontalot has a recent incredibly nerdy take on Wagner in collaboration with Baddd Spellah (I suppose the track is attributed vice versa.)

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 

The September issue of Hippocampus is a special on Hippocampal Place Fields and Episodic Memory. Episodic memory is the type of memory that you immediately think of if you are not a memory researcher. It is what Proust is doing while Remembrancing (which is part of why he is such a favorite quotee among the memory community). EM is supposed to have key elements that distinguish it from other memory types, one of which is this experience of “mental time travel”. This is an idea almost entirely based on introspection. It is not clear how to ascertain whether a patient is replaying the experience subjectively or merely retrieving facts without the VR-like aspect. The subjective nature of this type of memory retrieval makes it well-nigh impossible to study in rodents. But it may be that people who are mostly concerned with the mental time travel issue are not so much interested in memory per se as in our ability to “fill in the gaps” and create continuous narratives. From the second article of the issue, by Ferbinteanu et al.:

Furthermore, though intuition also suggests that our memories are veridical (an accurate reproduction of past events), empirical data indicate that autobiographical memories are in fact reconstructed by active processes sensitive to systematic errors based upon inattention, suggestion, expectancy, and familiar cognitive scripts (Schacter, 1999; e.g., Conway, 2001b). Even completely false memories are acquired easily (Loftus, 1997, 2004) and activate the same neural network involved in true memories (Okado and Stark, 2005). These memory distortions show that rather than “traveling down the memory lane” to re-experience past events, memories for episodes are reconstructed representations based on fragmentary data fit together using heuristics (Schacter, 1999; Conway and Pleydell-Pearce, 2000).

So, you don’t really remember. You confabulate based on the information you have at hand, the “fragmentary data”. I have gotten so paranoid about confabulation that I will hardly ever state one of my memories of a past event as fact. I would be a horrible murder-trial witness. I notice errors all the time among friends of mine who are better storytellers. We will be reminiscing about some event from 4 or 5 years ago and place some individual there whom we didn’t even know at the time. Everyone knows we have memory errors. I wonder if better storytellers are better at “smoothing the curve”. We have a bunch of data points stored up, but it may take our narrative-building ability (which may be uniquely human) to experience “mental time travel”.

This habit humans have of telling ourselves stories seems pervasive. I am reminded of the hyperactive agency detection device, proposed as an explanation for our need to make gods. Also, Gazzaniga’s experiments with split-brain patients:

Studies on split-brain patients have dominated Dr. Gazzaniga’s work ever since. In the 1970s, he and his colleagues reported that the left hemisphere acts as an interpreter, creating theories to make sense of a person’s experiences.

Their first clue came from an experiment Dr. Gazzaniga carried out with Dr. Joseph LeDoux, now at New York University. A patient called P.S. was shown a picture, and was then asked to choose a related image from a set of other pictures. What P.S. didn’t know was that he was being shown a different image in each eye.

Dr. Gazzaniga and Dr. LeDoux showed P.S. a picture of a chicken claw in his right eye and a snow-covered house in the left eye. P.S. pointed to a chicken with his right hand and a snow shovel with his left.

“I’ll never forget the day we got around to asking P.S., ‘Why did you do that?’” said Dr. Gazzaniga. “He said, ‘The chicken claw goes with the chicken.’ That’s all the left hemisphere saw. And then he looks at the shovel and said, ‘The reason you need a shovel is to clean out the chicken shed.’”

Dr. Gazzaniga hypothesized that P.S.’s left hemisphere made up a story to explain his actions, based on the limited information it received. Dr. Gazzaniga and his colleagues have carried out the same experiment hundreds of times since, and the left hemisphere has consistently acted this way.

“The interpreter tells the story line of a person,” Dr. Gazzaniga said. “It’s collecting all the information that is in all these separate systems that are distributed through the brain.” While the story feels like an unfiltered picture of reality, it’s just a quickly thrown-together narrative.

The point of the Hippocampus article is that we can set this business aside for now while we study how the hippocampus brings together the information that we actually have remembered rather than confabulated. This process is amenable to animal studies, which is great because we are getting really good at multi-electrode recording in the rat hippocampus and gaining lots of insight into place cells, which may provide the spatial context for remembered events to happen in. Which brings up an interesting question: do you ever remember an event without remembering where you were during that event? Or does the term “event” imply a spatial setting?
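Since place cells keep coming up, it is worth noting that the standard rate-map computation behind those recordings is simple: firing rate per spatial bin is spike count divided by occupancy time. A Python sketch with simulated data (all parameters arbitrary):

import numpy as np

rng = np.random.default_rng(0)
track_len, n_bins, dt = 100.0, 20, 0.02            # cm, bins, s per sample
pos = rng.uniform(0, track_len, size=50_000)       # sampled animal positions

# Simulate a cell with a Gaussian "place field" centered at 60 cm.
rate = 8.0 * np.exp(-((pos - 60.0) ** 2) / (2 * 5.0 ** 2))   # Hz
spikes = rng.poisson(rate * dt)                    # spike counts per sample

edges = np.linspace(0, track_len, n_bins + 1)
occupancy, _ = np.histogram(pos, bins=edges)                 # samples per bin
spike_count, _ = np.histogram(pos, bins=edges, weights=spikes)
rate_map = spike_count / (occupancy * dt)          # Hz: spikes / time in bin
print(np.round(rate_map, 1))                       # peaks near the 60 cm bin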

(Republished from GNXP.com by permission of author or representative)
 
• Category: Science 