Organismic Complexity Is Just Duct Tape

The Pith: Biological complexity may be a particular evolutionary path taken due to random acts of nature, not because there is a selective advantage to complexity.

The title above basically describes the message of evolutionary biologist Mike Lynch, from what I can gather. His basic argument is outlined in long form in The Origins of Genome Architecture, though the outline of the thesis was evident over 10 years ago (see Preservation of Duplicate Genes by Complementary, Degenerative Mutations). Verbally, I think the easiest way to explain Lynch’s framework is that in species with small effective population sizes the creativity of stochastic forces in generating non-adaptive structure and complexity tends to overwhelm the power of natural selection to prune this tendency toward the baroque. I reviewed a paper last year which argued that Lynch’s observation of an inverse relation between effective population size and genome size was an artifact, and that once you controlled for phylogenetic history it disappeared. Suffice it to say this is an area of dispute and active research, so we shouldn’t take any individual’s word for it. This is science on the broadest canvas. Extraordinary general claims need to be backed by a generation of publication, I’d think.

Lynch is now a co-author on a new letter to Nature (which is open access, so read it!), Non-adaptive origins of interactome complexity. Imagine if you took biochemistry, specifically the nearly impenetrable language of protein interactions, and crossed it with evolutionary genomics. This is what you’d get.


The gist of the letter is in the figure to the left. You see 36 species in order of relative “interactome complexity.” An interactome is basically a network of biochemical interactions, in this case at the level of protein chemistry. Organisms which we presume to be complex, such as H. sapiens, have a more complex interactome than organisms which we presume to be less complex, such as prokaryotes.

Why does interactome complexity matter? Let me quote from the abstract: “This leads to the hypothesis that the accumulation of mildly deleterious mutations in populations of small size induces secondary selection for protein–protein interactions that stabilize key gene functions. By this means, the complex protein architectures and interactions essential to the genesis of phenotypic diversity may initially emerge by non-adaptive mechanisms.”

You probably know about neutral theory, whereby most evolution on the molecular level is due to substitutions which have neither positive nor negative selective effects. That is, they’re not adaptive. A related model is the nearly neutral theory, where a substantial fraction of substitutions are ever so mildly deleterious. So mild, in fact, that selection does not “see” these mutations as harmful enough to “purge” them from the genome. This is related to effective population size, which measures the number of individuals that matter for the purposes of genetics within a population (if a population numbers 100, but only 10 breed, then there’s a huge difference between census and effective population size). As effective population size declines, the power of random genetic drift becomes more evident. This is simply sample variance, which converges upon zero as N approaches infinity. If you flip a coin 100 times you expect it to deviate proportionally from expectation (0.50 for each side) more than if you flipped the coin 1000 times. Low effective population sizes are like a swell of noise against which natural selection is attempting to “make music.”
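To make the coin-flip intuition concrete, here is a minimal sketch (my own illustration, not anything from the paper) of an allele frequency drifting under binomial sampling. The per-generation variance of the frequency change is p(1 − p)/2N, so the small population wanders far from its starting point while the large one barely moves:

```python
import random

def wright_fisher(n_effective, p0=0.5, generations=200, seed=42):
    """Drift an allele frequency by binomial sampling each generation.

    The variance of the per-generation change is p(1 - p) / (2N),
    so drift is stronger when the effective population size is small.
    """
    rng = random.Random(seed)
    p = p0
    for _ in range(generations):
        copies = 2 * n_effective  # gene copies in a diploid population
        successes = sum(rng.random() < p for _ in range(copies))
        p = successes / copies
    return p

for n in (10, 100, 10_000):
    print(f"N_e={n:6d}: frequency after 200 generations = {wright_fisher(n):.3f}")
```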

In this paper the authors seem to be suggesting that these broader population genetic dynamics result in suboptimal functionality on the molecular level as deleterious mutations build up. Complex molecular interactions then emerge through secondary natural selection pressures as a way to keep the whole Rube Goldberg system from collapsing in on itself. Wheels within wheels. The implication, then, is that complex organisms evolved not because they were better in a reproductively fit sense in relation to simple organisms, but because organismic complexity is simply a way for collections of simple organisms not to fall apart when subject to stochastic forces which increase the mutational load.
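To put a number on “mildly deleterious,” here is a back-of-the-envelope calculation (mine, using Kimura’s standard diffusion approximation; the letter itself contains no such code) of how often a new mutation with selection coefficient s fixes, relative to the neutral expectation of 1/(2N). The same mutation that is nearly invisible to selection in a small population is purged almost without fail in a large one:

```python
import math

def fixation_probability(s, n_effective):
    """Kimura's diffusion approximation for a new mutation.

    s < 0 means deleterious; a neutral mutation fixes with
    probability 1/(2N). Selection only beats drift when |s|
    is large compared to 1/(2N).
    """
    if s == 0:
        return 1.0 / (2 * n_effective)
    return (1 - math.exp(-2 * s)) / (1 - math.exp(-4 * n_effective * s))

s = -1e-4  # a mildly deleterious mutation
for n in (1_000, 10_000, 1_000_000):
    relative = fixation_probability(s, n) / (1.0 / (2 * n))
    print(f"N_e={n:>9,}: fixation probability vs. neutral = {relative:.3f}")
```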

Back in the day my background was in biochemistry, but there’s a reason I don’t talk about it much in this space. I’m not too excited about the prospect of visualizing the shape and character of a protein and its various subunits (I should have realized something was up when I found that I preferred physical chemistry to biochemistry!). But I’d be curious to hear from readers who are versed and fluent in the biochemistry on how they evaluate the claims within. After the criticism of the genome size–effective population size correlation I’m a touch wary of an argument which relies on just 36 species. I also haven’t totally given up on the idea that one could introduce a fitness landscape model here, where organismic complexity may initially have been a response to suboptimal fitness, but after crossing the fitness “valley” the species could then ascend a new “peak.”
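For what that musing is worth, here is a toy valley-crossing simulation (entirely my own construction, with made-up fitness values, not a model from the letter): a small haploid population starts on a “simple” peak, must pass through a slightly deleterious “complex but unrefined” intermediate, and can end up fixed on a higher “complex” peak that deterministic selection alone would never approach:

```python
import random

def generations_to_new_peak(n=100, fitnesses=(1.0, 0.97, 1.08),
                            mu=1e-3, max_gens=50_000, seed=7):
    """Haploid Wright-Fisher walk across a fitness valley.

    Genotype 0 is the simple ancestor, 1 a slightly deleterious
    intermediate (the valley), 2 the higher peak; the fitness
    values are illustrative only. Mutation is one-way (0 -> 1 -> 2),
    and drift lets a small population tolerate the intermediate
    long enough for the second mutation to arise and sweep.
    """
    rng = random.Random(seed)
    pop = [0] * n
    for gen in range(1, max_gens + 1):
        # Selection: resample individuals weighted by fitness.
        pop = rng.choices(pop, weights=[fitnesses[g] for g in pop], k=n)
        # Mutation: each individual may step one genotype forward.
        pop = [min(g + 1, 2) if rng.random() < mu else g for g in pop]
        if all(g == 2 for g in pop):
            return gen  # fixed on the higher peak
    return None  # never crossed within the time limit

print("fixed on the higher peak at generation", generations_to_new_peak())
```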

Citation: Ariel Fernández & Michael Lynch (2011). Non-adaptive origins of interactome complexity. Nature. doi:10.1038/nature09992

(Republished from Discover/GNXP by permission of author or representative)
 
• Category: Science • Tags: Biology, Genome Complexity, Genomics, Mike Lynch 
5 Comments
  1. Biological complexity may be a particular evolutionary path taken due to random acts of nature, not because there is a selective advantage to complexity.

    Sounds like MS Windows.

    And also Gould’s spandrels and exaptation.

  2. miko says:

    Yes, we are exactly in the Windows kernel situation. But I don’t think it’s news that the robust properties of networks (and chaperones, the UPR, and pretty much all cellular quality-monitoring systems) buffer variation, or that most variation is deleterious when unmasked. This was Susan Lindquist’s point with hsp90, and Waddington’s long before. These guys seem to have just put it in the new(ish) network jargon of systems biology.

    I haven’t looked at the data sources in their paper, but most protein-protein interaction databases are a freaking disaster. The false positive rates for the common methods are astounding, particularly when you are measuring interactions for multicellular eukaryotes (where the interaction network of every tissue, cell type, organelle, and subcellular compartment can be distinct) in some heterologous system like yeast two-hybrid. Co-IPs are even worse. Lots of proteins stick to each other under a given set of conditions; it does not mean they have biologically meaningful “interactions” in normal, living cells. But figuring out how proteins work one by one is messy and boring and would require “systems biologists” to spend time in actual labs instead of sitting at computers drinking Coke, writing Perl scripts and inventing new words for old concepts.

    OMG. I just skimmed the paper…I think even their interaction data is “theoretical,” based on PDB structures to predict the relative likelihood of homologous proteins participating in protein-protein interactions. It seems like a very complicated, fraught, and weak way to make a point that has been already demonstrated using, like, living organisms.

  3. DK says:

    I would like to just second everything miko wrote. The paper is a weak illustration of what everyone already knew long ago anyway. “The complex protein architectures and interactions essential to the genesis of phenotypic diversity may initially emerge by non-adaptive mechanisms.” Really? Jeez, what news.

    And to call yeast two-hybrid and co-IPs (as far as high throughput is concerned) a freaking disaster is a huge understatement.

  4. But figuring out how proteins work one by one is messy and boring and would require “systems biologists” to spend time in actual labs instead of sitting at computers drinking Coke, writing Perl scripts and inventing new words for old concepts.

    lol. i did get that sense. if u want to write a theoretical paper, write it 🙂 doesn’t seem like a letter to nature gave them enough to really get anything out of the synthesis.

  5. Anonymous says:

    If you guys think there’s nothing new in the paper, then you clearly misunderstood it. For one thing, biochemists are seldom even trained in population genetics, let alone in applying it to their work, so the blend of biochemistry and population genetics is already quite a novel thing, even if it does seem obvious.

    The key point of the paper is that relaxed selection associated with reduction in effective population size (eg in cumbersome stupid things like animals…) is consistently associated with an increase in complexity of protein interactions (even at finer levels than just unicellular vs. multicellular), and one mechanism that could explain that is that reduced selective efficiency allows more biochemically unstable and less efficient proteins to be tolerated. A major contributor to protein instability is exposure of the polar backbone, so more exposed proteins tend to be subject to more errors, just because the optimal state for the organism doesn’t have as much of a thermodynamic edge over suboptimal foldings (given the same sequence). Thus, as their data show, species with lower effective population sizes have more poorly wrapped proteins on average.

    This ties in with interaction networks in that exposed backbone sites have an affinity for recruiting secondary proteins for greater stability. Over time, these proteins become dependent on each other as more deleterious mutations are now compensated for and therefore allowed (see “presuppression”, e.g. Stoltzfus 1999 J Mol Evol), and more components are required for a system of roughly the same functionality/adaptive value. Eventually, some of these interactions may be exapted to something useful, but that’s not *why* they stuck together initially.

    That’s a pretty new model. I haven’t come across anything similar anyway, and I’m quite obsessed with non-adaptive evolution, so I’d probably have heard of it by now if it were such “common knowledge”. Clearly the paper wasn’t particularly well read by some critics…

    Oh, and I blogged about it here, FYI: http://skepticwonder.fieldofscience.com/2011/05/sticky-proteins-complexity-drama-and.html
