My own recollections from the last 50 years:
“astronautics hasn’t advanced much since the last moon shot in 1972”
The manned space program is extremely risk averse and only uses flight-proven equipment, i.e., old stuff. NASA’s unmanned space program is much more innovative, e.g., the Mars explorers. Also the military is strongly pushing RPV’s and robotics. Non-government astronautics has been progressing.
“chemistry seems to have frozen in place except for medical uses”
Computer modeling of chemical structures has significantly improved, e.g., quantum mechanics, transition states, and charge distributions, enabling smart material design. Chemistry plays a critical role in nanotech. Nanotech improves surface characteristics for new catalysts. The materials in modern products are stronger, lighter, and more durable than in prior decades…that is largely due to chemistry.
“Without access to very expensive apparatus, experimental and theoretical physicists have been marking time for much of the last 30 years.”
Physics has multi-billion dollar particle accelerators and fusion experiments that employ teams of hundreds of scientists. Some of the most expensive computers are largely devoted to physics. If progress in physics has been slow it is because the problems are extremely difficult and each generation of accelerator or fusion reactor costs far, far more.
There has been significant progress in optics, nanotech, superconductivity, quantum mechanics, and cosmology.
“Nuclear engineering has stagnated”
There are many new, innovative designs. The problem has been a public that fears nuclear power. That is changing.
“unconventional energy schemes such as tidal dams and OTEA have dropped from sight”
The Hawaiian OTEC research station was never able to produce competitively priced electricity. There are many ongoing tidal dam projects around the world. Likewise for geothermal. These projects have major engineering problems, which is why windmills and solar power dominate the news.
“huge delays in engineering projects of sorts have become commonplace”
The political/legal environment slows down big projects.
In the 1950’s the US didn’t even have an interstate highway system. Today cities have massive infrastructure that requires maintenance and hampers new development, e.g., digging a ditch means cutting through existing pavement, plumbing, power lines, and communication lines.
“virtually all forms of modern day engineering seem to have atrophied into mere shells of what they once were”
Better materials, better design tools, etc. Buildings must withstand earthquakes and be energy efficient. Compare a modern bullet train to a 1950’s train. Compare modern mining equipment to what was available in 1950.
“Something else — in say 1950, engineers and scientists were in the top 10% of the population in income and educational level (and dare I say, social prestige). They were up there with lawyers and doctors and high ranking military officers and civil servants;”
In 1950 engineers and scientists weren’t at the top. The heyday followed the Sputnik scare, after which the US began a major push to promote science education and the space program began in earnest. The media was very favorable toward scientists and engineers (I Dream of Jeannie, Star Trek). This peaked in the late 1960’s and early 1970’s. By the late 1970’s aerospace engineers were out of work and the salaries and prestige of scientists were declining. The media turned against technology. The Sputnik period was atypical for US history.
JEB: “There are thousands of genes that affect intelligence, and even the smartest people have no more than (say) 52 percent of the “good” alleles,…”.
This model assumes random mating, with thousands of common additive variants of very small effect determining the genetic component of a person’s intelligence. Compare an IQ 150 person to an IQ 50 person: a 4% difference in “good” alleles would have to account for 50 of the 100 IQ point difference (assuming heritability of 0.5). If the difference between the good and bad variants caused a 0.1 point difference in IQ (genome-wide association tests have established an upper limit on the effect size of common variants), there would have to be 12,500 common variants each causing a 0.1 point IQ difference. However, it is even worse, because the variant effect sizes are likely to follow a power law, with only a few common variants producing even a 0.1 point difference in IQ. So many more IQ-altering variants would be required.
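A quick script makes the arithmetic explicit (a back-of-envelope sketch; every input below is an assumption taken from the paragraph above, not a measured value):

```python
# Back-of-envelope check of the variant-count argument above. All inputs are
# the text's stated assumptions, not measured values.
iq_gap = 100              # IQ 150 vs. IQ 50
heritability = 0.5        # fraction of the gap attributed to genetics
allele_frac_diff = 0.04   # 52% vs. 48% "good" alleles
effect_per_variant = 0.1  # upper limit on a common variant's effect (IQ points)

genetic_gap = iq_gap * heritability                    # 50 IQ points
variants_differing = genetic_gap / effect_per_variant  # 500 differing variants
total_variants = variants_differing / allele_frac_diff

print(total_variants)  # 12500.0 common variants required
```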
Spread out over the functional areas of the human genome this would imply virtually all regions have some minor impact but no regions have significant impact. This doesn’t match the gene expression patterns seen throughout the body. Protein expression in the brain directly affects brain function and some proteins are far more important than others. Brain endophenotypes such as regional differences in gray matter and white matter volumes correlate highly with IQ and are heritable. I.e., the patterns of biological tissue differences don’t match this model of genetic intelligence.
So what is happening?
1) There could be many different rare variants of large effect. Or SNP association tests may be missing common causative genetic factors. If so, only a modest number of variants might largely determine intelligence in each person.
2) Assortative mating tends to concentrate good variants and bad variants so the estimate that “the smartest people have no more than (say) 52 percent of the “good” alleles” is wrong.
3) The inheritance models only fit the usual IQ ranges between 70 and 130. I.e., studies of very high IQ populations might show different patterns. E.g., stochastic development factors may play a larger role in the very high IQ. Or non-additive genetic factors may play a larger role. Or the tests used to measure very high IQ might not be very reliable (more noise at the top or the “g” factor begins to separate into non-correlating intelligence subtypes).
4) I don’t know.
Note that even within families there can be considerable IQ variance. E.g., my IQ is over 160 while my brother’s IQ is around 115 and we share over half our genetic variants.
“…so culling those people out will have very little impact on overall gene frequency, except in the very long run”
Most extreme retardation in white people is organic, e.g., due to problems during the birth process or the result of an injury or disease. Keeping such people from breeding will have no eugenic effect.
However when genetics is the cause, eugenics would work. Assortative mating concentrates good variants and eugenics would then increase the good variants.
Consider embryo selection for intelligence. Suppose that two embryos are tested and a genetic IQ potential for each could be accurately predicted. The lower IQ embryo is then discarded. How much would IQ increase each generation? On average siblings differ by around 12 IQ points and about half of that is due to genetics. So IQ could be raised by about 3 points per generation until genetic variance was exhausted. (Testing 100’s of embryos for each implantation could increase IQ by 15 points per generation.)
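A minimal Monte Carlo sketch of that estimate, assuming sibling genetic IQ potentials are normally distributed with the spread set so two siblings differ by about 6 genetic points on average (the figures above):

```python
import math, random, statistics

# Sibling genetic IQ potential modeled as N(0, sd^2); sd is chosen so that
# E|X - Y| = 2*sd/sqrt(pi) equals the ~6 genetic points assumed in the text.
sd = 6 * math.sqrt(math.pi) / 2

def expected_gain(n_embryos, trials=50_000):
    """Average genetic-IQ gain from implanting the best of n embryos."""
    return statistics.mean(
        max(random.gauss(0, sd) for _ in range(n_embryos))
        for _ in range(trials)
    )

print(round(expected_gain(2), 1))    # ~3 points per generation
print(round(expected_gain(100), 1))  # ~13 points when testing 100 embryos
```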
In my opinion this is moot. Within a few decades brain function and the genetics of intelligence will be well understood. It should be possible to intervene with nutrition, training, drugs, gene-engineered stem cell transplants, computer implants, and wireless access to smart applications running on an Internet cloud. Society will make geniuses instead of breeding them.
The transition from nations, races, and social classes to a global society where all people can alter their traits as desired will be disruptive. My preference is to publicly acknowledge scientific facts while avoiding political policies likely to increase societal strife. However it is difficult to predict the policies that would minimize strife over the short and long terms.
“Students below roughly 115 IQ won’t remember it just due to that.”
Does anyone have a good link that discusses the relationship between IQ and episodic or semantic memory? My recollection is that intelligence does not correlate strongly with long term memory. (Intelligence does correlate with working memory.)
When I understand a subject well then I see patterns between the facts. I also see mappings between what I already know and what I am learning. I can then remember specific facts by recalling the pattern or the mapping. So intelligence helps me remember structured information. However, I don’t think intelligence helps me remember a phone number. (Intelligence would help me learn a method for memorizing phone numbers.)
“On the other hand, whole swaths of factory work will be eliminated, as robots are more reliable and can work in more hazardous conditions, without benefits.”
KIVA warehouse robots: http://www.spectrum.ieee.org/jul08/6380
MIT greenhouse robots: http://gadgets.softpedia.com/news/MIT-Student-Develop-Robotic-Gardeners-2026-01.html
In the last twenty years processors, sensors, and software have gotten much better and much less expensive. The time is now ripe for automation of many unskilled jobs. We are presently in a transition stage where unskilled humans are still needed for most jobs. However, many of those jobs are being restructured so that robots work together with unskilled laborers. One unskilled laborer + 5 robots can do the work formerly done by several unskilled laborers while also reducing response time and errors. In another decade the robots will be far better and cheaper and more jobs will have been restructured for robots.
Personally I have no idea how a young person should prepare for a modern career.
Plumbers and handymen should still be needed. However, the entry barriers to such jobs are low and my guess is that as low skilled workers lose jobs in manufacturing and agriculture they will move into other low skill jobs and reduce wages. E.g., suppose I’m a skilled plumber with my own business. In the new job market I can hire many unskilled laborers. With their help I can complete far more plumbing jobs. These workers will take on more and more of the skilled work. Eventually I will be managing many low paid semi-skilled plumbers. My low cost plumbing business will lower the wages paid to other plumbers. (This already happened in the landscape business.) So low wages should spread from one low skilled occupation to the next.
I’m not sure doctors will fare much better. Suppose a doctor specializes in breast radiology. Medical advances lead to molecular diagnostics that detect early stage cancer from protein markers in the breath or blood. Advanced chemo or immune therapy then cures the cancer. The $250,000 a year breast imaging specialist is no longer needed.
Within a few decades no one, no matter how smart or how well educated, will be able to keep up in a job market where old occupations rapidly disappear or drastically change.
“All you really learned you learned on your own.”
If you have an IQ over 130 then you can learn non-science material better on your own than in a classroom. If your IQ is over 140 then you can learn basic science and math better on your own. If your IQ is over 150 you can learn all undergraduate university subjects better on your own. People with very high IQ’s pretty much assume that school is worthless because it didn’t help them.
The US military might be an excellent model for US education. They track by ability. They only teach topics that are important for career success. They have rigorous standards for success. They have monitoring and feedback to improve training.
Hmmm, here is a modest proposal. Let the grade schools operate as they do now. Let the US military educate students with IQ’s less than 130 for grades 7-12. Students with IQ’s over 130 would teach themselves under the guidance of experts in various fields. Those students would take online tests to demonstrate competence in the basic subjects. At 18, the average students would have a good basic education and be prepared to enter a trade or begin training for a profession. The very bright students would begin an apprenticeship with a practicing expert, much as graduate students work with their advisor. The expert could be a lawyer, doctor, engineer, scientist, business manager, etc. and the student would learn by doing.
re: Selection signature
Biologists Discover How ‘Silent’ Mutations Influence Protein Production
http://www.physorg.com/news158506251.html
“synonymous mutations determine mRNA folding and thereby the eventual protein level”
This is cool.
Step 1: Look for weak associations between a trait and common variants to get a list of potentially important DNA regions.
Step 2: Closely examine those DNA regions in people with extremes in that trait looking for rare variants with strong effect.
Step 3: Use the rare variants of strong effect to identify the important genes, proteins, and molecular pathways underlying a trait.
Step 4: Tie it all together to get a good DNA-protein-system model of the trait.
This would be a fast track method for predicting phenotype from genotype.
Sequence a person’s genome and identify variant DNA. Some of the variants will be common and have a known effect. Some of the variants will be in regions that are known to have little effect on the trait. Some of the novel variants will lie in DNA regions known to be important for that trait. Combined with a good model connecting the genotype through molecular mechanism to phenotype, one then predicts how the novel variant will affect the trait. E.g., a novel variant causes a change in a critical part of a protein. Knowing the protein’s function in the biological system, the doctor then uses a model to predict the effect of that DNA variant on the trait.
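As a sketch, the triage logic might look like the following. Every name here (Variant, KNOWN_EFFECTS, IMPORTANT_REGIONS, model_predict) is an invented placeholder for illustration, not a real bioinformatics API:

```python
from dataclasses import dataclass

@dataclass
class Variant:
    id: str       # variant identifier
    region: str   # DNA region the variant falls in

KNOWN_EFFECTS = {"rs123": -0.4}          # common variants with measured effects
IMPORTANT_REGIONS = {"GENE_A_PROMOTER"}  # regions known to matter for the trait

def model_predict(variant):
    """Stand-in for a mechanistic DNA -> protein -> system model of the trait."""
    return 0.0  # a real model would score the protein-level consequence

def triage(variants):
    for v in variants:
        if v.id in KNOWN_EFFECTS:
            yield v.id, KNOWN_EFFECTS[v.id]  # common variant, known effect
        elif v.region not in IMPORTANT_REGIONS:
            yield v.id, 0.0                  # region irrelevant to the trait
        else:
            yield v.id, model_predict(v)     # novel variant in a key region

for vid, effect in triage([Variant("rs123", "SOME_REGION"),
                           Variant("novel1", "GENE_A_PROMOTER")]):
    print(vid, effect)
```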
bbartlog: “And why wouldn’t it be a viable treatment for HIV?”
For now it is too dangerous; about 1/3 die during the transplant. Finding a compatible donor who is also protected against HIV would also be hard.
HIV persists in non-blood cells. However, the protected immune system should hold the disease in check.
I think bone marrow is a primary source for most stem cell types. I.e., stem cells released from the bone marrow can migrate, take up residence in other tissues, and then differentiate and integrate into the tissue. Combined with local growth factors and factors that increase cell turn-over, this could be used to replace most original tissues (replacing too many neurons might cause memory and skill loss).
The bone marrow transplant could be done gradually. Inject new stem cells into the blood each day combined with a drug that increases stem cell migration. Gradually, all of the old stem cells would be replaced. (No radiation or chemotherapy and no loss of immune function during the transition.) (Autoimmune diseases might require wiping out immune “memory cells” before reconstructing the immune system.)
The new stem cells could correct genetic diseases such as sickle cell anemia, fight persistent diseases such as HIV, cure cancer (see cancer resistant “PAR” mouse) or cardiovascular disease, or enhance a person by improving genotype.
The technology needs to improve:
1) Safety – host/graft rejection. Scientists need better technology for controlling and training the immune system.
2) Better technology for isolating, growing, and differentiating the stem cells.
3) Better control of the migration, differentiation, and integration of stem cells into target tissues.
4) Scaffolding technology to support growth or renewal of critical body structures, e.g., heart valves, joints, and nerves.
Significant progress is being made in all these areas. Within twenty years I expect to be a chimera.
PS Such technology is one reason I tend to discount concerns about the aging population or dysgenic demographic trends.
Razib: “we really don’t know the correlation between # of del. alleles & fitness, do we?”
If you could rank the “fitness penalty” of specific mutations you would likely see a power law distribution: a few mutations are fatal, more are moderately harmful, far more are slightly harmful, and the vast majority are essentially neutral. Evolution through natural selection is strongly weighted toward removing the most harmful mutations, i.e., selection preferentially removes the mutations lying toward the harmful end of that distribution. More selection pressure results in the less harmful mutations also being removed. Flies occupy the peaks of the fitness landscape while humans wander around the foothills. (E.g., human regulatory sequences tend to be less conserved than mouse regulatory sequences.)
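A toy simulation shows the weighting (both the power-law cost distribution and the purge probability are arbitrary illustrative assumptions):

```python
import random

# Mutation fitness costs drawn from a power law; each generation a mutation
# is purged with probability roughly equal to its cost s. Illustrative only.
random.seed(1)
costs = [random.paretovariate(1.5) * 1e-4 for _ in range(10_000)]

for generation in range(100):
    costs = [s for s in costs if random.random() > min(s, 1.0)]

# The harmful tail is purged first; survivors are overwhelmingly near-neutral.
print(len(costs), round(max(costs), 4), round(sum(costs) / len(costs), 6))
```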
P-ter: “I’m not sure this (a large number of strong selective sweeps) is the same situation considered by Haldane;”
Right. I didn’t intend to diss Haldane.
In a large, single, connected, panmictic, stable population in an unchanging environment the species should be highly adapted to that environment. In this case there would be few beneficial mutations of even moderate selective advantage in the population (harmful mutations are quickly removed by selection). The probability that one animal would have several more good variants than a competing animal would be small (sexual selection and thresholding are thus less efficient at propagating the good variants). There would be few opportunities for beneficial haplotypes to recombine to form even better haplotypes. Instead, a good mutation would slowly sweep the population at a rate determined by the selection advantage of that trait. In that case, Haldane’s Limit applies.
P-ter: “If you consider a single selected allele at a selection coefficient of 1%, that 1% is relative to all individuals not carrying that allele (depending on how you model selection). But those individuals are also carrying alleles that, in this model, are highly beneficial. So if this allele had arisen on its own (ie. not in a poppulation with thousands of other selected alleles), I think we have to conclude its selection coefficient would have been orders of magnitude larger?”
P-ter: In my mental model I separate the early stage from the middle and late stages.
In the early stage there are thousands of slightly beneficial variants with selection coefficients of around 0.1% compared to wild type (large population in changing environment generates the variants). These variants are rare so there isn’t much interference between variants. In parallel, all the variants slowly increase in frequency. At this stage, the variants neither slow nor accelerate the sweeping of other variants.
In the middle stage, the frequency of the variants has increased so that the variants are beginning to interact. No one variant is common, but with thousands of good variants each animal may have several good variants. At this stage, sexual selection and thresholding begin to be more efficient at replacing the wild types with the good variants, i.e., each “selection death” promotes several good variants. Also, good haplotypes recombine to form even better new haplotypes with increased selection advantage. As the new haplotypes replace the wild haplotypes, several good variants replace the old wild variants. A few of the new haplotypes will have selection coefficients of 1% or more compared to wild type and will begin sweeping rapidly.
In the last stage, the new haplotypes containing the variants are now common. They interfere with each other, and they merge and split, creating new haplotypes that are optimal for each specific environmental niche. Some sweep to fixation, some stabilize under balancing selection. Some lose their selective advantage because the environment changes.
With that simple mental model in mind, I then imagine all of the stages occurring simultaneously in populations with complex substructure, spread over a large geographic area with diverse environments.
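For the single-variant intuition behind the early stage, the standard deterministic recursion p' = p + s·p(1−p) captures the shape of a sweep (illustrative numbers; interference between variants is ignored):

```python
# One variant sweeping deterministically: slow while rare (early stage),
# fastest at intermediate frequency (middle), saturating late. s and p0
# are illustrative; interference and recombination are ignored.
def sweep(p0=0.001, s=0.001, generations=20_000, report_every=5_000):
    p = p0
    for g in range(generations + 1):
        if g % report_every == 0:
            print(f"gen {g:6d}: frequency {p:.4f}")
        p += s * p * (1 - p)

sweep()
```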
“What is not clear to me is what happens if there are a thousand such genes in the population at once. Is the process additive? Does each advantageous gene increase in frequency independent of the others, as if it was the only advantageous gene in the population? Or does the process get clogged up? If everyone has a random selection of 200 genes each giving them a one percent advantage, then does anyone actually have an advantage over anyone else? How does this work?”
See David’s GNXP article: http://www.gnxp.com/blog/2006/04/haldanes-dilemma-should-we-worry.php
Haldane makes assumptions that don’t apply to real world populations.
Sexual selection. The most successful stag will likely have many better alleles compared to his rival. That stag will produce far more offspring than would be predicted by considering the relative fitness value of each allele. In one generation, one stag may propagate hundreds of beneficial alleles.
Nonlinear thresholds where many harmful variants combine in the same animal. The animal dies, is infertile, or fails in mate competition and many inferior alleles are removed.
Chromosome recombination during meiosis alters the fitness coefficient. Suppose “A” and “B” are nearby beneficial variants of a chromosome with wild types “a” and “b”. At first haplotypes “Ab” and “aB” will have a modest selective advantage over haplotype “ab”. Eventually there may be a recombination event creating a new haplotype “AB” which has significant selective advantage over haplotype “ab” and modest selective advantage over haplotypes “Ab” and “aB”. The new haplotype will sweep faster and will replace two wild types at the same time. Hence, nearby beneficial variants on the same autosome could accelerate sweeps.
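A deterministic two-locus recursion illustrates this (a sketch: the fitnesses, where each beneficial allele adds 2%, and the recombination fraction r are assumptions; the update is the standard selection-plus-recombination recursion on haplotype frequencies):

```python
# Two-locus sketch of the "Ab" + "aB" -> "AB" argument above.
FIT = {"ab": 1.00, "Ab": 1.02, "aB": 1.02, "AB": 1.04}
x = {"ab": 0.98, "Ab": 0.01, "aB": 0.01, "AB": 0.0}    # haplotype frequencies
r = 0.01                                               # recombination fraction

for gen in range(1, 1001):
    wbar = sum(x[h] * FIT[h] for h in x)
    x = {h: x[h] * FIT[h] / wbar for h in x}           # selection
    D = x["AB"] * x["ab"] - x["Ab"] * x["aB"]          # linkage disequilibrium
    x = {"AB": x["AB"] - r * D, "ab": x["ab"] - r * D,
         "Ab": x["Ab"] + r * D, "aB": x["aB"] + r * D} # recombination
    if gen % 250 == 0:
        print(gen, {h: round(f, 3) for h, f in x.items()})

# "AB" is created by recombination, then outpaces "Ab" and "aB".
```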
Many thousands of existing variants persist and new variants arise in large populations. The changing environment gives some variants a selective advantage. In parallel, the frequencies of these rare variants increase. Recombination eventually produces superior haplotypes with even greater selective advantage. Those haplotypes sweep the population. Haldane’s Limit is left in the dust.
Human populations are even more complicated. Population substructure matters. Migration patterns matter. Inheritance of family wealth, power, or prestige matters. Reality is complex, with diverse and changing environmental niches, new haplotypes continually arising, balancing selection, partial sweeps, stochastic historical events, etc. Imagine trying to model the spread of Genghis Khan’s Y-chromosome.
A person centric model of gene migration would suggest small traveling bands with only a few women surviving.
A gene centric model gives a different picture. Consider humanity as being spread continuously along a migration path. Genes are transported along that path as men and women move between tribes. The female gene carriers remain close to home while the male carriers range much farther. The effective population size for the male carried genes would be greater than the effective population size for the female carried genes.
Trans variation should be common due to copy number variation. Multiple copies of the DNA that codes for the trans regulatory elements should change the expression levels of the regulatory elements. Extra copies would also be free to mutate and take on new functional roles. Fast track adaptation.
“Do the extra years between 70 to 80 in expected life span matter that much?”
If you are 60 then the difference between 70 and 80 is large and meaningful. If you are 20 then you should expect very significant developments in medical technology that make 50 year projections moot. For longevity it would be wiser for a young person to focus on the factors that are more likely to kill in the next couple of decades, i.e., driving while talking on a cell phone or while drunk.
David, this was interesting.
“By far his most common example is that of a quantitative trait controlled by several loci where the selective optimum for the trait is at an intermediate value, i.e. neither the highest nor the lowest that can be produced by the various possible combinations of alleles. In this situation it is likely that the optimum intermediate value of the trait can be produced by different allele combinations. The effect of an allele on fitness (not necessarily on the quantitative trait itself) is epistatic, i.e. dependent on the combination of other genes in the genotype. Which of the relevant alleles are favoured by selection may then depend on the accident of which allele at a locus happens to be most frequent when selection begins, with all other alleles at the locus being driven to extinction.”
The above example is particularly interesting. Consider gene expression controlled by trans-acting transcription elements. Being short, such regulatory elements should remain functional long after a duplication event. Thus many functional copies could be spread around the genome. The total number of active copies would be maintained by selection but no individual copy would be preserved. This mechanism would tend to preserve diversity in gene expression and support fast adaptation to new environments.
If the population consisted of many small tribes with limited gene flow then drift would favor “robust combinations”, i.e., trait values would be kept near the optimum even when the genetic background shifted due to drift. If so, long term evolutionary pressure should produce modularization of traits that needed to vary with the environment, e.g., changing tooth shape to match diet without harming other traits.
“(did i just use a word with the syllable “ortho” in it three times???)”
Word choice is strongly affected by “mental priming”. Your “ortho” frequency will be unusually high today…wear a raincoat.
How the brain thinks is more interesting than what the brain thinks…unless the brain is thinking about how it thinks.
Punnett Square: “Do you have a link for that? I’ve never heard that Schiz or any other common mental disorder is that heritable.”
Here is one representative link. I don’t view such studies as conclusive evidence of high heritability, I only conclude that schizophrenia is a complex disease that is strongly influenced by both genetic and environmental factors.
http://www.ncbi.nlm.nih.gov/pubmed/14662550
“By using a multigroup twin model, we found evidence for substantial additive genetic effects - the point estimate of heritability in liability to schizophrenia was 81% (95% confidence interval, 73%-90%). Notably, there was consistent evidence across these studies for common or shared environmental influences on liability to schizophrenia - joint estimate, 11% (95% confidence interval, 3%-19%). CONCLUSIONS: Despite evidence of heterogeneity across studies, these meta-analytic results from 12 published twin studies of schizophrenia are consistent with a view of schizophrenia as a complex trait that results from genetic and environmental etiological influences.”
Punnett Square: “Scientists know today that most cases of Schiz are the result of environmental damage.”
From the ScienceDaily link you provided:
“Since schizophrenia and autism have a strong (though elusive) genetic component, there is no absolute certainty that infection will cause the disorders in a given case, but it is believed that as many as 21 percent of known cases of schizophrenia may have been triggered in this way. The conclusion is that susceptibility to these disorders is increased by something that occurs to mother or fetus during a bout with the flu.
Now, researchers have isolated a protein that plays a pivotal role in that dire chain of events. A paper containing their results, “Maternal immune activation alters fetal brain development through interleukin-6,” will be published in the Oct. 3 issue of the Journal of Neuroscience.
Surprisingly, the finger of blame does not point at the virus itself. Since influenza infection is generally restricted to the mother’s respiratory tract, the team speculated that what acts as the mediator is not the mother’s infection per se but something in her immune response to it.”
I.e., genes and environment together produce disease.
The brain is an exceedingly complex organ. Brains can fail from a vast number of distinct causes. Factors may combine to increase the probability of failure. Schizophrenia is a type of brain failure defined by a fuzzy set of symptoms. Specific combinations of genetic factors may increase the probability of schizophrenia…twin studies show heritability over 80%. Specific environmental damage may increase the probability of schizophrenia. Genes and environment interact to produce phenotype. For complex traits such as mental function there will be many genetic and environmental causes.
Matt: “You’d need >1000 different ways to cause schizophrenia in order to get a 1% mutation-selection equilibrium frequency.”
Your estimate depends on the rate of harmful mutations as well as the number of ways that failure may occur. Some types of mutation occur far more frequently than the genome average for SNP’s.
E.g., during meiosis recombination can cause duplications/deletions at DNA “hot spots” where similar DNA sequences occur multiple times along the same strand. This type of mutation occurs frequently. I’ve seen estimates that up to 5% of severe retardation cases are due to recent “hot spot” CNV’s in DNA that codes for brain proteins. The children seldom reproduce so the mutations are quickly removed from the gene pool. However the deletions occur so frequently that they account for a significant fraction of mental retardation cases.
Fragile X retardation occurs in 1 in 4000 boys and is caused when a CGG segment exceeds around 200 copies. Normally a person has between 5 and 40 copies. This type of mutation occurs frequently.
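The standard mutation-selection-balance arithmetic behind this exchange is short (illustrative numbers; for a dominant deleterious variant the equilibrium frequency is roughly mu/s):

```python
# Equilibrium frequency ~ mu/s per locus for a dominant deleterious variant;
# summing over n independent "ways to fail" approximates the disorder's
# population frequency. All values are illustrative.
mu = 1e-5   # per-locus harmful mutation rate (hotspot CNVs run much higher)
s = 0.5     # selection coefficient against carriers
n = 1000    # distinct loci / failure modes

print(n * mu / s)   # 0.02 -> a ~1% disorder needs many loci or elevated mu
```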
Genetic tests are refining disease categorization. E.g., in breast cancer, pathologists using cell morphology or staining can’t distinguish between two major classes of ductal carcinoma. One class has only a small chance of remission and doesn’t require chemotherapy. The other cancer class is likely to return unless the patient undergoes chemotherapy. A gene expression test, MammaPrint, can reliably determine what type the patient has and whether chemotherapy is appropriate.
Better genetic testing will lead to refined diagnosis which will lead to improved treatment. This will become very important when patients routinely have their genome scanned. That should begin happening in about five years.
“But his main point is that there will be selection in favour of closer linkage between favourable gene combinations on the same chromosomes, and it is therefore a puzzle why recombination is as frequent as it is. I think this remains a problem.”
I think it is only a problem if adaptation is slow and new good alleles are rare. Here is how I see it:
The advantage of recombination combining new good alleles on the same chromosome segment outweighs the disadvantage of recombination breaking old good linkages. I.e., when two good alleles are present at low frequencies on competing chromosome segments, a high recombination rate increases the chance of a new chromosome segment with both good alleles. The new chromosome segment with both good alleles begins sweeping the populace, picking up more good alleles along the way. (I use “chromosome segment” rather than chromosome since the human recombination rate is high enough that the unit of inheritance is typically smaller than a chromosome. There are typically one to a few crossovers for each chromosome pair during meiosis, a few dozen across the whole genome.)
A high rate of recombination favors adaptation when many good alleles are circulating in the populace on competing chromosome segments. This should occur when the environment changes rapidly or when a large population is generating many new good alleles. E.g., the last fifty thousand years of human history.
PS I second the “Bravo!”. Excellent post.
gc: “1) SNPs are a priori going to be of small effect as they change only one bp. CNVs, by contrast, are changing kilobases or megabases at a time (i.e. entire genes or exons).”
Relevant links:
http://microarraybulletin.com/community/article.php?p=284&page=1
http://www.bio-medicine.org/medicine-news/Genetics-Of-Mental-Retardation-13248-1/
5% of the retardation cases are likely due to recent CNV’s in brain genes. As far as I know all races are equally likely to experience such hotspot CNV’s. These retardation CNV’s are so harmful that they are rapidly eliminated by selection. I doubt they are the source of genetic group differences.
Other CNV’s with less drastic effects are likely common. It would be interesting to compare the hotspot distribution differences between racial groups. Looking at CNV hotspots might make identifying CNV variation more efficient.
SNP’s in regulatory DNA could significantly affect gene expression. Since there is twice as much conserved regulatory DNA as coding DNA in the human genome there are a lot of potential variants in the human population.
re: Neural development of the cortex.
Kaleidoscopik, clearly your concerns are justified. My reply is more of an intuition and a hope than a prediction.
The brain may have repair mechanisms separate from the developmental program. I.e., when a neuron dies, brain stem cells and progenitor cells migrate to the wound site and differentiate into the proper neural type based on the chemical, mechanical, and electrical signals in the local environment. Perhaps many new neurons would be produced and only those that made proper connections would survive. (I suspect that is how the new neurons generated in the dentate gyrus of the hippocampus help in the formation of new memories.) The mouse brain continually replaces neurons lost in the olfactory bulb (where they are exposed to a harsh environment). The new cells must migrate significant distances and properly integrate into functional tissue. So at least some parts of the mouse brain already self-repair. I also believe there is evidence for minor levels of repair in other brain tissues. If so, it might be relatively simple to renew brain tissues by temporarily increasing the neuron production rate and decreasing the new neuron apoptosis rate while targeting senescent cells for destruction.
I suspect that mammalian brains are more plastic than insect brains. I.e., brain circuitry in drosophila is genetically controlled down to the low level structures, while only the upper level architecture is fixed in mice. If so, then the repair would just have to be in the right ballpark and then training would optimize the resulting tissue. If this is the case then we might not need to replicate the developmental path that creates cortical minicolumns. Instead, cells would migrate to a region that was sending signals indicating repair is needed. There they would divide and differentiate into the appropriate neurons. The local tissue environment would provide the signaling needed to organize the cells into a functioning cortical minicolumn.
“another concern is the surgical precision it would take to stick a dot of neural stem cells in the right spot”
Yes, my hope is that other cell migration methods already exist in the brain and that we only need to supply stem cells to the cerebrospinal fluid and increase the rate of cell turnover. If instead we have to directly insert the right cells in the right location and directly control the local signaling environment, then rejuvenation becomes much harder. In that case we would need robotic micro devices to do the implanting. That would take several more decades of technological advance.
Statins have unexpected effect on pool of powerful brain cells:
http://www.eurekalert.org/pub_releases/2008-07/uorm-shu070208.php
“Scientists found that both compounds, when used at doses that mimic those that patients take, spur glial progenitor cells to develop into oligodendrocytes. For example, in one experiment, they found about five times as many oligodendrocytes in cultures of human progenitor cells exposed to pravastatin compared to cultures not exposed to the substance. Similarly, they found that the number of progenitor cells was just about one-sixth the level in cultures exposed to simvastatin compared to cultures not exposed to the compound.”
“These are the cells ready to respond if you have a region of the brain that is damaged due to trauma, or lack of blood flow like a mini-stroke,” said Sim, assistant professor of Neurology. “Researchers need to look very carefully at what happens if these cells have been depleted prematurely.”
“Glial progenitor cells are distributed throughout the brain and, according to Sim, make up about 3 percent of our brain cells. While true stem cells that can become any type of cell are very rare in the brain, their progeny, progenitor cells, are much more plentiful. They are slightly more specialized than stem cells but can still develop into different cell types.”
bbartlog: “…you’re glossing over the fact that a lot of higher-level structures are the result of a developmental process, and the results are not going to be changed by this maintenance or piecemeal replacement therapy”
Yes, I’m glossing over a lot of stuff. There are significant questions regarding how plastic adult bodies really are. That is why I provided the link to “curing” the autistic mouse. There are traits that one would have thought were set during development that can be changed by rather crude genetic engineering on adult cells. I’d love to see mouse experiments that explored the plasticity of the adult animal, especially the adult brain.
I believe that large structural remodeling will soon be achieved. E.g., regrowing a finger or repairing a spinal cord injury. I believe present medical research will lead to rebuilding brain structures that have been destroyed due to injury or disease. It should be possible to enlarge the skull with surgery, allowing the brain to expand. I tend to believe that gradual changes in brain size and structure wouldn’t be too disruptive of existing memory and skills. Old people can recover from stroke and young children show significant plasticity in rewiring around damaged areas.
I don’t really know what to do about restoring nerve fibers that connect different brain regions. (Researchers are working on restoring the optic nerve connection. http://www.scienceblog.com/cms/node/7070 ) The original connections depended on a carefully orchestrated sequence of cellular events and signaling combined with external environmental feedback and pruning. Duplicating that process would likely destroy memories and skills. For the next few decades we might have to support the existing neurons with their axon connections by rejuvenating oligodendrocytes, astrocytes, and brain stem cells. That would limit potential intelligence improvements. (Brain rejuvenation is difficult.)
Ben G.: “.. taking it would be (wrongly or not) labeled by many group Y members as an admittance of intellectual inferiority[1] (many don’t want to contribute to the headline “Group Y seeks out controversial IQ gene in record numbers”)
When the technology is commonly available I think such concerns would be marginalized.
The individual would mainly be focused on the difference it would make in their own life. Consider the popularity of cosmetic surgery and then imagine a treatment that made you younger, improved your looks, and made you smarter. Once people saw the results of the treatment on friends and family they would demand access for themselves.
The early adopters would be old, rich people. There would be strong public pressure to make the treatments available to everyone as a matter of social justice.
Those people that are committed to identity politics would want to make certain that “their people” didn’t get left behind. There might be racial pride initiatives to mobilize the group members to rapidly use the treatments.
I do see potential problems. The first treatments might go awry in numerous ways. People might demand access to the treatment before all the bugs are worked out. Also, what would the parent of an autistic child do?
Finally, I may be naive but I believe most people do want a better world for everyone. I suspect many liberals completely reject biological determinism because it provides no solution to social injustice. Likewise many conservatives reject “socially progressive” programs because they believe the programs don’t work and often make the problems worse. An effective technology would win converts from both groups.
Keith: “Wouldn’t implanted stem cells get rejected by the immune system?”
If you just injected adult stem cells from an immune incompatible donor without first destroying the patient’s immune system, the transplanted cells would be rejected. As long as the donor cells are immune compatible there is no rejection. There are multiple solutions to the rejection problem:
Choose a donor cell line that is immune compatible.
Do minor genetic engineering on the donor cell line so that it is compatible with all humans.
As part of the rejuvenation program, rebuild the immune system. This has the advantage that the new immune system might attack the senescent cells of the old type. Immune suppressant drugs could be used to control the rate at which the old cells were attacked.
Also, biotech for manipulating the immune system will have advanced to the stage where rejection can be controlled or stopped. I believe this level of immune system technology will be available before we have the technology for tissue rejuvenation. (Advanced immune system technology should also reduce the threat of cancer.)
“wouldn’t it become a point of group pride not to take the “other guy’s gene’s” pill?”
I believe there are enough exceptional members in all large groups that a person could choose a donor from their own group. It would be up to the individual whether to change groups or not.
An exceptional individual with an IQ over 200 and demonstrated accomplishment might prefer to keep his own genes. Such an individual might convince others to use him as a donor. I imagine becoming a donor would confer high status.
I don’t really know how most people would react to this choice. For myself, it would be an easy one. Grow old and deteriorate, or renew myself in a better body? If the renewal process caused me to lose many important memories, the decision would be harder. I’d probably defer it until I was very old.
Also, I don’t really think of this as gene engineering, i.e., selecting and combining specific alleles. The stem cells would be from individuals known to have superior phenotype. Stem cell transplantation should be easier than genetic engineering of superior phenotypes. It would be more like cloning a superstar.
Gradually replace existing stem cells in all body niches with stem cells cultured from the best genotypes of each race. Use drugs to increase senescent cell turn-over. Use growth factors and drugs to increase the rate of tissue replacement. Within a few years this would remodel the body and brain. Some differences fixed during development would remain, but much should improve. “Cosmetic surgery” for the masses.
DNA would no longer be destiny.
Continuing the thought…
The fitness cost for a trait often follows a “bathtub” curve, e.g., body temperatures that are too low or too high lower fitness.
Ultimately traits are determined by the amount of specific proteins in specific cells. There are many ways that protein levels are regulated. E.g., gene transcription, mRNA splicing, mRNA translation, protein recycling. There are many biological processes that can affect each method of regulation. E.g., for gene transcription, promoters and inhibitors, copy number, etc.
So for a specific trait there may be hundreds of genetic factors working together to produce that trait’s fitness “bathtub” curve. In an individual the fitness value of a particular genetic factor depends on all the other factors that are present. If, due to genetic drift, the diversity of some genetic factor is reduced and the population average for the trait is no longer at a fitness optimum, then the frequencies of the other genetic factors will change through selection to compensate. (The genetic factors are simultaneously affecting many different traits, each having its own fitness curve. So there will be ripple effects throughout the biological system.)
Over time, drift in sub populations “tests” many combinations of genetic factors. Robust genetic factor combinations that maintain homeostasis under a wide range of environments and genetic backgrounds evolve.
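A toy simulation of the compensation idea (a sketch, not a calibrated model: the trait is a sum of additive 0/1 loci, fitness is Gaussian around an intermediate optimum, and all parameters are arbitrary):

```python
import math, random
from itertools import accumulate

random.seed(3)
N, LOCI, OPT = 1000, 20, 10.0   # individuals, additive 0/1 loci, trait optimum

def next_gen(pop):
    # Gaussian stabilizing selection on the trait (= sum of loci), then free
    # recombination between two fitness-weighted parents.
    cw = list(accumulate(math.exp(-0.5 * (sum(g) - OPT) ** 2) for g in pop))
    out = []
    for _ in range(N):
        a, b = random.choices(pop, cum_weights=cw, k=2)
        out.append([random.choice(pair) for pair in zip(a, b)])
    return out

pop = [[random.randint(0, 1) for _ in range(LOCI)] for _ in range(N)]
for _ in range(50):
    pop = next_gen(pop)
print(sum(map(sum, pop)) / N)   # ~10: population sits at the optimum

for g in pop:                   # "drift" wipes out locus 0 entirely
    g[0] = 0
for _ in range(50):
    pop = next_gen(pop)
print(sum(map(sum, pop)) / N)   # back near 10: the other loci compensate
```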
“complete lack of neuroscience and biochemistry”
Derbyshire mentioned several neuroscience talks including a couple at the biochemical level. Based on the few details he provided, I would have enjoyed the conference. (I would have skipped the talks on dog telepathy and the Mayan calendar.)
Razib, thanks for the link.
PS Did they have to use the term, “psi field”…the concept described is reasonable but the terminology just begs for abuse by wackos.
More thoughts:
Total population genetic diversity depends upon population size. Diversity in the species genome provides the raw genetic material by which selection produces adaptation to new environments. This could be important if the environment is rapidly changing. Also, genetic diversity provides protection against pathogens. Small populations that survived for thousands of years could be wiped out when contact with other people introduced new pathogens faster than the population could adapt.
To model the effect of small population size on a given trait you would also need to know what DNA affects that trait. Traits that depend upon only a few short DNA segments will behave differently compared to traits that depend on thousands of DNA segments of varying size spread across the genome. In a small population, selection might be sufficient to maintain the trait quality in the first case but not in the second. (E.g., consider the consequence of mutational meltdown in the trait, fertility.)
David, with regard to your questions here are some thoughts that may be relevant:
By “selection to work” I assume you mean that genome information is maintained. I.e., selection converts noise into information as fast as mutation + drift + changing_environment converts information into noise.
Mating patterns and number of offspring affect the amount of selection a population experiences. When only the best males mate then many slightly harmful mutations will be removed at each generation.
Due to crossover during meiosis, good alleles can accumulate on chromosome segments. Such segments will have a significant fitness advantage over competing segments. In this way good alleles can “cooperate” to eliminate harmful alleles. With larger populations there is a higher probability of fortuitous crossovers producing high fitness chromosome segments. It is also more likely that linkage between a very beneficial allele and a harmful allele will be broken before the beneficial allele sweeps the population. I.e., harmful mutations aren’t spread as far by genetic draft (hitchhiking). Thus large populations are more efficient at removing moderately harmful mutations.
A changing environment can reduce genomic information as good adaptations become bad.
Species with large populations and many offspring can maintain high amounts of genomic information. However, the importance of the genomic information differs across the genetic elements (a power law?). As harmful mutations accumulate, the species becomes less adapted to its environment until a new balance is reached and the total genomic information is again maintained by selection in the population. Only the more important genomic information is preserved by selection. If a population was a very successful competitor in an environmental niche, a bottleneck might significantly lower species adaptation without causing extinction. However, if the species were barely surviving then a bottleneck event might be the end.
Depending on the species’ biology, the environment, and the species’ competitors, a reduction in genome information could lead to extinction.
“What if it’s advantageous in winter, and disadvantageous in summer?”
With their large populations and short generations, bacteria quickly adapt to their environment. I have wondered if bacteria with 20-minute doubling rates might adapt to daily variations in the environment. E.g., variants that thrive in high humidity would proliferate with early morning dew, other variants would thrive in the afternoon.
Perhaps fly populations would show seasonal allele frequency changes? Or maybe even mice?
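A quick sketch of seasonally alternating selection on one allele, using the standard one-locus haploid update p' = p(1+s)/(1+sp) with illustrative numbers:

```python
# +s for 60 generations (winter), -s for 60 (summer): fast-breeding organisms
# can oscillate in allele frequency rather than fixing. Numbers illustrative.
def season(p, s, generations):
    for _ in range(generations):
        p = p * (1 + s) / (1 + s * p)   # one-locus haploid selection update
    return p

p = 0.5
for year in range(3):
    p = season(p, +0.05, 60)
    print(f"year {year}, after winter: {p:.2f}")   # rises toward ~0.95
    p = season(p, -0.05, 60)
    print(f"year {year}, after summer: {p:.2f}")   # falls back near 0.5
```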
http://hscweb3.hsc.usf.edu/health/now/?p=397
“When human umbilical cord blood cells (UCBC) were injected into aged laboratory animals, researchers at the University of South Florida (USF) found improvements in the microenvironment of the hippocampus region of the animals’ brains and a subsequent rejuvenation of neural stem/progenitor cells.”
http://www.sciencedaily.com/releases/2008/02/080212131257.htm
“The chip, which is made of a plastic-like substance and covered with a glass lid, features a system of channels and wells that allow researchers to control the flow of specific chemical cocktails around single nerve cells.”
“How is it that chimeras don’t have autoimmune problems?”
The adaptive immune system learns what is “self”. T-cells mature in the thymus and those that react to proteins in the local environment are destroyed. Cells introduced into a blastocyst produce proteins that are used in training T-cells in the thymus and are viewed as “self”.
Tissue taken from an early stage in fetal development may be transplanted into an adult without triggering rejection. (At least in pigs.) So early fetal tissue may actively reset the immune response.
In a mouse, type 1 diabetes has been cured by destroying the immune system and then transplanting healthy pancreatic beta islets and bone marrow. The reconstructed immune system tolerated both the transplanted tissue and the animal.
Rejection isn’t a problem with brain tissue. The brain produces chemical signals that repel most immune cells. (Some cancers can do the same trick.)
In the intestines, cells release chemical signals that repress immune response to the foods we eat. The placenta represses the mother’s immune system.
Interesting. A new mutation could spread over large geographical areas quickly with seeding. What size of seed would be optimum? A small tribe in which the mutation is already at high frequency might move deep into “unconquered” territory and act as a new wave center. (Presumably this would occur most often along rivers and coasts where long distance travel is easy, e.g., vikings.) Individual travelers/migrants would be less successful at establishing new “spreading” centers.
Reaction times of 0.15 seconds are common. That is the total time required for each neural stage to process the information and pass it on to the next stage. Detecting and responding involves neural circuits traversing the entire brain and passing through many neural layers. My guess is that internal processing within a layer and transmission to the next layer takes on the order of 0.01 seconds. With 15 stages you get a 0.15 second RT. “Firing rate” signaling would then have to be several times faster than 100 Hz.
Perhaps the spatial resonance frequency differences in the dendritic arbor act to add and subtract frequencies so that the signal transmission rate between layers can be much faster than the dendritic resonance frequencies? (Much as a fast information stream can ride upon a much slower frequency radio wave.)
(Another possibility is that “learned behaviors” tested in reaction time experiments bypass slow hippocampus neural circuits.)
While 3-20 Hz seems too slow to transmit new information, it does seem likely that such frequencies could serve to synchronize spatially separated brain regions.
“i was just thinking that if it is rare alleles of large effect, looking at high IQ families might be do the trick.”
Yes, several generations of assortative mating should have concentrated rare beneficial alleles. Use the children of high IQ parents as subjects.
Thoughts on the genetics of IQ’s > 145:
For moderately high IQ’s, I thought heritability increased with IQ. Would the same trend continue with very high IQ’s?
For high IQ, how much is due to stochastic events in development or the environment?
In the past I’ve attributed the fat right tail to the distribution being a sum of distributions from subpopulations with different means and variances, and to assortative mating. (A quick numerical check of the mixture idea appears below.)
How would epistasis affect an IQ distribution?
One model might not explain all cases. I.e., families of geniuses vs. wild card genius from an average family.
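The mixture idea mentioned above is easy to check numerically (a sketch; the mixture weights, means, and SDs are arbitrary):

```python
import random

# A mixture of normals with different means/variances puts far more mass in
# the far right tail than a single normal of comparable spread.
random.seed(5)
single  = [random.gauss(100, 15) for _ in range(1_000_000)]
mixture = [random.gauss(100, 13) if random.random() < 0.95
           else random.gauss(115, 20) for _ in range(1_000_000)]

print("single :", sum(x > 160 for x in single))   # roughly a few dozen
print("mixture:", sum(x > 160 for x in mixture))  # hundreds: a fat right tail
```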
“isn’t that what linkage studies are good at detecting? QTLs of large effect, low frequency (rare) in families?”
I can see that linkage studies should provide better statistics than association studies for correlating a haplotype to a trait. Being able to follow a rare allele from parents to offspring and only having to consider the genome segments that differ helps. But I think in a genome wide study you would still need multiple families with the same low frequency allele in order to establish a significant correlation with a complex trait such as intelligence. For alleles with frequencies around 1%, my guess is that you would need thousands of test subjects.
Even with thousands of subjects there might be problems. Population substructure or other types of non-causal correlation might mask weak signals.
“i recall seeing weird heritability numbers at 3+ sigs. this could be low N, but it could also be that the genetic architecture is just bizarro once you deal outside of the 97% intervals.”
I’ve never seen a discussion of heredity for IQ’s > 145. Anyone have a link?
Re: Granularity of trait differences: Low frequency alleles that cause large differences vs. high frequency alleles that cause small differences.
Is it possible that in the general population there are large numbers of low frequency alleles each of which causes a significant difference in IQ?
A genome wide study would require tens of thousands of subjects to reliably detect such low frequency alleles.
It would explain how siblings can differ widely in intelligence. (E.g., the three sigma difference between my brother and me.) Such sibling differences are very improbable if the typical allele accounts for no more than a fraction of an IQ point.
“In addition, I even wonder if there are genes really associated with ‘cognitive performance.’”
“Those are the sorts of comments made by people who, deep in their hearts, believe that human thought is a consequence of souls rather than neural computation.”
More likely the scientist is reflecting his knowledge that genome wide association and linkage studies have found no loci that account for more than a fraction of a percent of population intelligence variation. That may imply that several hundred or even thousands of loci of very small effect determine intelligence. If so, many of those genes will have far more direct effect on traits other than intelligence. Suppose a gene strongly affects body size. In doing so, an allele of the gene produces a slightly larger brain that is slightly more intelligent. Should that allele be viewed as a “body size” allele, an intelligence allele, or neither?
(PS the author’s swipe at Watson was unjustified. My guess is that Watson’s beliefs are based on knowledge of the genetic basis for mental traits combined with knowledge from heredity studies combined with knowledge of failed attempts to environmentally intervene to improve intelligence, combined with empirical evidence from many different societies around the world.)
“Underlying this health/ mortality differential are significant social class differences in average IQ.”
Consider this toy model for an average community:
Over 90% were low class who were often starving before winter ended. They would have been malnourished and sickly.
The upper class would never starve and would regularly eat meat. They would be healthy and strong.
High status men would regularly father bastards in the lower classes. Also upper class families would expand and displace the lower classes.
Alleles would regularly flow from the upper class to the lower class. Strong selection pressure in the lower class would eliminate the low fitness alleles. In this model the lower class would average more high fitness alleles than the upper class. However, due to differences in nutrition and health, the upper class phenotype would be significantly better.
In this toy model, selection for intelligence might occur within the upper class but the between class genetic differences should have little effect on long term community genetics.
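The toy model is simple enough to run: one-way gene flow from the upper class plus class-specific selection on a single low-fitness allele (all rates below are illustrative assumptions):

```python
# Alleles flow one way (upper -> lower class via bastards and displacement)
# while harsher selection in the lower class removes low-fitness alleles.
def simulate(generations=200, flow=0.05, lower_sel=0.10, upper_sel=0.01):
    upper = lower = 0.10   # starting frequency of a low-fitness allele
    for _ in range(generations):
        lower += flow * (upper - lower)   # one-way gene flow from the top
        lower *= (1 - lower_sel)          # strong selection among the starving
        upper *= (1 - upper_sel)          # weak selection among the well-fed
    return upper, lower

u, l = simulate()
print(f"upper {u:.4f}, lower {l:.4f}")  # lower class ends with fewer bad alleles
```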
In a society with abundant, nutritious food and high social mobility, class differences may be largely genetic. It is not clear to me that this was commonly true in the Middle Ages.
Whoops, “Once the cold node acquired the mutation then it would in turn become hot.” should read, “Once a downstream hot node acquires the mutation its link-weight-sum changes and it becomes cold.”
Razib, I’m not sure I understand what you are getting from these graphs…
It appears that these graphs represent population-substructure advantages not related to genetics. E.g., over time the children of the town’s wealthiest family may displace the children of poorer families. (See model F in figure 2.) The link weights represent the advantage of being born into the wealthy family. Then any mutation (not just a beneficial mutation) that originates in the wealthy family is more likely to fixate than one that originates in a poor family.
Other examples of such non-symmetric flow would be noble vs serfs, town vs. countryside, trade-route-nexus village vs. nearby villages, inhabitants of fertile land vs. non-fertile land.
I don’t really see how the graph weights could directly represent positive selection. The link weights would change as soon as a highly beneficial mutation occurred; a hot node under the old weights would become a cold node. Once the cold node acquired the mutation then it would in turn become hot.
What if we consider link weights that have a fixed substructure component and a variable “fitness” component? Suppose a beneficial mutation occurs at one of the nodes. The selective advantage of the allele would temporarily alter the link weights, i.e., nodes that have the allele would be slightly more likely to displace the offspring of those that don’t have the allele. Even if a node is hot and so is often replaced, the beneficial allele should still pass into a cold node and then sweep that cold node. I do see that when a beneficial mutation occurs in a wealthy family it has a much greater chance of surviving stochastic elimination than if it arose in a poor family.
The more complex “amplifying” graphs don’t seem realistic. I doubt substructures with many one-way flows have been common in human history. Even a little reverse flow would allow beneficial alleles to introgress.
These graph models seem more appropriate for studying drift (where the link weights don’t change) than for studying selection.
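To make my objection concrete, here is a toy Moran process on a weighted digraph (Python; the three-node “wealthy family” weights are invented, and this is my reading of such models, not the paper’s actual algorithm):

```
import numpy as np

rng = np.random.default_rng(1)

def fixation_prob(W, r, start, trials=10_000):
    # Moran process on a weighted digraph: W[i, j] is the relative chance
    # that i's offspring replaces j. A single mutant of relative fitness r
    # starts at `start`; return the fraction of runs where it takes over.
    n = W.shape[0]
    fixed = 0
    for _ in range(trials):
        mutant = np.zeros(n, dtype=bool)
        mutant[start] = True
        while 0 < mutant.sum() < n:
            fit = np.where(mutant, r, 1.0)
            i = rng.choice(n, p=fit / fit.sum())      # who reproduces
            j = rng.choice(n, p=W[i] / W[i].sum())    # whom they replace
            mutant[j] = mutant[i]
        if mutant.all():
            fixed += 1
    return fixed / trials

# Node 0 ("wealthy family") sends offspring out freely but is rarely replaced.
W = np.array([[0.0, 1.0, 1.0],
              [0.2, 0.0, 1.0],
              [0.2, 1.0, 0.0]])
print("neutral mutant arising in node 0:", fixation_prob(W, 1.0, 0))
print("neutral mutant arising in node 1:", fixation_prob(W, 1.0, 1))
```

Even a neutral mutation fixes far more often when it arises in the rarely-replaced node, which is the substructure effect I describe above; nothing about the weights needs to represent selection.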
Guess I need to read the paper.
My mind kept nagging me about the term “supergene”. “Supergene” usually refers to a group of nearby genes that share an epistatic relationship. When considering long term stable genetic patterns this makes sense as the epistatic relationship keeps the linkage from breaking. My usage is slightly different.
The linkage is only maintained while the component genes have a fitness advantage over wild type. Once the supergene sweeps the populace there is no longer a fitness advantage in maintaining the linkage. New beneficial mutations can replace component genes without damaging an epistatic relationship. This type of “supergene” is a temporary “selection” structure, not an epistatically reinforced, stable structure.
“do you think the rate of recombination is high enough? you’re basically talking about a supergene, right?”
Yes, supergene. “Super chromosome” is misleading since each chromosome will have multiple cross-over events during gamete formation. Loci that are widely separated on a chromosome are essentially inherited independently. The loci should be far enough apart that they will likely be united by an occasional rare recombination event but not so far apart that there is a high probability that common recombination events will separate the loci.
I’m guessing that the recombination rate is high enough to generate high fitness supergenes (when many simultaneous mutations with modest benefit are present in the population) and low enough that the supergenes will be fairly long. (Long supergenes would permit the combination of many modest benefit mutations into one high benefit supergene.) I’d like to see simulations of this process with different recombination rates and different levels of mutations of modest benefit.
Note that a mutation of moderate effect might act as the seed of a supergene. The supergene would slowly spread in a sub population and pick up nearby beneficial mutations of small effect. Over time the fitness of the supergene would increase and it would spread more rapidly. (This same process would filter out slightly harmful mutations.)
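Here is the kind of simulation I have in mind, as a rough sketch (Python; the population size, locus count, selection coefficient, and recombination rate are placeholders, not estimates):

```
import numpy as np

rng = np.random.default_rng(2)
N, L, s, r, gens = 2000, 10, 0.02, 0.01, 300    # placeholders, not estimates

# ten beneficial alleles, each starting at 5% frequency on random chromosomes
pop = np.zeros((N, L), dtype=np.int8)
for locus in range(L):
    pop[rng.choice(N, size=N // 20, replace=False), locus] = 1

for g in range(gens):
    w = (1 + s) ** pop.sum(axis=1)                        # multiplicative fitness
    parents = rng.choice(N, size=(N, 2), p=w / w.sum())   # two parents per child
    switch = rng.random((N, L)) < r                       # crossover in each gap
    use_b = np.cumsum(switch, axis=1) % 2 == 1            # which parent per locus
    pop = np.where(use_b, pop[parents[:, 1]], pop[parents[:, 0]])

counts = pop.sum(axis=1)
print("mean beneficial alleles per chromosome:", counts.mean())
print("best chromosome carries:", counts.max(), "of", L)
```

Rerunning with r = 0 shows the contrast I’d want quantified: without recombination the best chromosome stays stuck near the best starting haplotype (clonal interference), while even a little recombination lets selection stack the beneficial alleles onto one “supergene”.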
“Does evolution actually show a tendency to reduce such adaptations to one of the various gene regulation methods you mentioned?”
(I’m going by personal recollection, so I welcome corrections from knowledgeable readers.)
I believe there is evidence that in animals duplicate coding genes rarely retain function over long evolutionary time spans. They either evolve separate functions, one becomes a pseudogene, or the extra gene is lost through a deletion event. (There are exceptions. Some coding genes have backups that can take over function if the primary gene is knocked out.) (I’m mainly thinking about the patterning genes.)
I believe plants often retain duplicate functional coding genes. I don’t know if this is because plants experience far more duplication events or if something tends to preserve the functionality of duplications that do occur.
Fruit fly coding genes tend to have fewer introns than human genes. I recall that there is evidence that the flies have lost introns over hundreds of millions of years. Alternative splicing isn’t conserved as much as primary splicing. I don’t know that there is a consistent pattern. If species generation time and population size changed significantly then the adaptations might reverse direction.
As for other regulatory mechanisms…I don’t know. Many have only been discovered in the last decade. I suspect there are others yet to be discovered.
I would also expect different patterns for different types of genes in the human genome. Housekeeping genes that have been conserved for many hundreds of millions of years may be regulated differently compared to “adaptive” genes that determine skeletal structures such as the jaw or teeth.
I wonder if our ability to track “single gene” or “few gene” traits distorts our perception of how most human adaptation occurs. A “single gene” trait that provides a significant fitness advantage would appear as a “super wave” spreading across a lake of chaotic ripples. In such cases a “single mutation” simulation might be pretty good. However, many traits will depend on hundreds or thousands of small beneficial mutations. In such cases the “single mutation” model will be inadequate.
A high rate of beneficial mutations may generate many simultaneous, beneficial mutations on the same type chromosome in a population. Recombination together with selection should produce “super fit” chromosomes that have accumulated multiple beneficial alleles. Such “super fit” chromosomes should spread beneficial alleles faster than predicted by “single mutation” models.
From your description this paper only looks at a single mutation model. (Or it assumes mutations are inherited independently.) How would the simulation results change if multiple mutations with recombination were included? Does anyone do that kind of simulation?
“Was the 2s limit reached relatively early on in the course of our history so that as R.A. Fisher might contend we can ignore substructure?”
My guess is that accelerated adaptation together with recombination means that substructure is important. With high levels of selection activity, population substructure and migration should be more important.
Also, it is not clear to me how important fixation time is when looking at a highly dynamic genome. More interesting is the probability of a beneficial allele attaining a moderate frequency in a sub population. At that point recombination and migration become important.
“I write computer programs for fun. I can recognize a kludge when I see one.”
Focus optimization on the constraining factor. Selection does that.
Here is an example: Your team is building the latest and greatest game. Your team decides to code in assembly language for optimal efficiency. It takes five years to develop and debug your game. Your competitors code in C. Their code is only half as fast but it only takes them one year to finish their game. By the time your game hits the market the game machine industry has moved on. The new consoles have far more power, sufficient to run inefficient C code just fine, but your assembly language code must now be completely rewritten for the new console. The constraining factor wasn’t code efficiency but rather time to market.
Gene duplication might improve time-to-market adaptation. If so, the “kludge” is actually smart programming.
“My intuitive understanding is that there are gene regulation factors that could change the way a gene is expressed that could result in a phenotypical change with only very small genetic alterations, ones that likely don’t require more resources to be dedicated to genome upkeep.”
Yes. There is gene silencing, histone remodeling, transcription promoters and inhibitors, mRNA splicing, mRNA interference, mRNA bundling in complexes, mRNA recycling, control of ribosome translation into proteins, protein recycling, etc.
“And that are also very specific, and so difficult for the evolutionary process to develop.”
I believe this is a key point. Relatively small DNA segments are involved and the precise DNA sequence is important. The probability of any specific beneficial change is very low. The probability that a copying error duplicates a gene or a regulatory element is much higher.
That is why I think duplication of regulatory elements may be important. They occur fairly frequently and, being short, require little “selection cost” to remain functional. There are many points in the DNA->RNA->Protein path where they could exert their influence.
“But exceptionally inefficient methods…”
The “method” is only inefficient if the “environmental constraint” persists unchanged for long periods of time. If the environment (including the rest of the genome) is changing rapidly then the “best method” is one that can keep up with the changes.
“copying the gene seems awfully crude and wasteful”
Metabolic cost of gene copying isn’t the problem. (Animal genomes aren’t that optimized.) The “cost” is that selection is necessary to remove disabling mutations. If the extra copy doesn’t significantly increase fitness it will most likely become a non-functional pseudogene. (More rarely the mutated gene acquires a new function.)
Duplications are fairly common so copy number variation might be a quick way to adjust gene expression.
I’ve wondered if CNV of regulatory elements is common. Being fairly short DNA sequences, they would rarely be disabled by mutations. A number of functional copies could be spread throughout the genome. Selection could rapidly adjust the number of copies of the regulatory element in response to environmental changes. Thus gene expression could adapt quickly.
“One point bothers me: if the 99.9% estimate needs to be adjusted downwards because copy variation was overlooked, shouldn’t there be a similar downward adjustment for the 98.5% estimate of human-chimp genetic identity?”
Peter, I read a Venter interview in which he stated that the “human-chimp genetic identity” estimate should be lowered to 95% when CNVs are included. (I tried Googling for the article but didn’t find it.) I wouldn’t read too much into the numbers since they are extrapolations from exactly one human diploid genome.
For those interested in this topic, these are just estimates from complete genome sequencing. There are also papers that look at protein differences, gene differences, gene expression differences, etc.
LEO CHALUPA
http://www.edge.org/q2008/q08_9.html#chalupa
“Here is a real puzzle to ponder: Every cell in your body, including all 100 billion neurons in your brain is in a constant process of breakdown and renewal. Your brain is different than the one you had a year or even a month ago, even without special brain exercises. So how is the constancy of one’s persona maintained? The answer to that question offers a far greater challenge to our understanding of the brain than the currently in vogue field of brain plasticity.”
More on human-chimp differences…
(Here is my take for what it’s worth.)
The above estimates say nothing about whether the observed SNP human-chimp genome differences cause phenotype differences.
The neutral mutation rate with a 3×10^9 bp genome is about 30 mutations per generation. 5,000,000 years is 250,000 generations, or 7.5 million mutations per lineage. So, without considering selection, humans and chimps should differ by about 15 million SNPs (both lineages diverging). My guess is that less than 5% of the genome matters, so fewer than 750,000 of those SNPs could be significant. Many of those SNPs would reduce fitness and so be eliminated. The divergence rate calculated from neutral SNPs doesn’t say much about human-chimp differences.
If accelerated adaptation results in 0.5 beneficial mutations sweeping the human populace per year, then over 100,000 years positive selection produced 50,000 additional adaptive human mutations. Compared to 15 million SNPs it doesn’t sound like a lot, but the adaptive mutations cause phenotype differences. So the neutral rate of divergence between chimps and humans hasn’t changed much, but the meaningful divergence has drastically accelerated.
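The arithmetic, spelled out (Python; the 20-year generation time and the 5% functional fraction are my assumptions):

```
mu = 30                       # neutral mutations per birth (3e9 bp genome)
years, gen_time = 5_000_000, 20
gens = years // gen_time                  # 250,000 generations per lineage
per_lineage = gens * mu                   # 7,500,000 neutral mutations
snp_diffs = 2 * per_lineage               # 15,000,000: both lineages diverge
functional_cap = int(0.05 * snp_diffs)    # 750,000 if at most 5% of the genome matters
print(gens, per_lineage, snp_diffs, functional_cap)
```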
“I was completely astonished to see Pagels’ claim that revised analysis shows that the genetic gap between different human peoples is roughly 1/3 the size of the genetic gap separating people from chimps.”
The old estimate of 99.9% was based on SNP variation in the HapMap populations. Venter’s genome showed significant copy number variation between his chromosome pairs. When both SNPs and CNVs are included, Venter’s chromosome pairs are only about 99.5% identical.
I liked Donald Hoffman’s essay,
http://www.edge.org/q2008/q08_1.html#hoffman
“I now think that perception is useful because it is not veridical. The argument that evolution favors veridical perceptions is wrong, both theoretically and empirically. It is wrong in theory, because natural selection hinges on reproductive fitness, not on truth, and the two are not the same: Reproductive fitness in a particular niche might, for instance, be enhanced by reducing expenditures of time and energy in perception; true perceptions, in consequence, might be less fit than niche-specific shortcuts.”
His conclusion may be generalized to human reasoning and beliefs.
AG: “I always wonder about why DNA code for each amino acid is same for all living creature on the earth.”
While the gist of your statement is right, it isn’t literally correct. Biology is full of exceptions.
http://en.wikipedia.org/wiki/Mitochondrion#Genome
“While slight variations on the standard code had been predicted earlier,[44] none were discovered until 1979 when researchers studying human mitochondrial genes discovered they used an alternative code.[45] Many slight variants have been discovered since,[46] including various alternative mitochondrial codes.[47] ”
Also some life forms use more than the standard 20 amino acids.
http://en.wikipedia.org/wiki/Genetic_code#Variations_to_the_standard_genetic_code
“In certain proteins, non-standard amino acids are substituted for standard stop codons, depending upon associated signal sequences in the messenger RNA: UGA can code for selenocysteine and UAG can code for pyrrolysine as discussed in the relevant articles. Selenocysteine is now viewed as the 21st amino acid, and pyrrolysine is viewed as the 22nd. A detailed description of variations in the genetic code can be found at the NCBI web site.”
“Despite the variations that exist, the genetic codes used by all known forms of life on Earth are very similar. Since there are many possible genetic codes that are thought to have similar utility to the one used by Earth life, the theory of evolution suggests that the genetic code was established very early in the history of life and meta-analysis of transfer RNA suggest it was established soon after the formation of earth.
One can ask the question: is the genetic code completely random, just one set of codon-amino acid correspondences that happened to establish itself and be “frozen in” early in evolution, although functionally any of the many other possible transcription tables would have done just as well? Already a cursory look at the table shows patterns that suggest that this is not the case.
There are three themes running through the many theories that seek to explain the evolution of the genetic code (and hence the origin of these patterns).[7] One is illustrated by recent aptamer experiments which show that some amino acids have a selective chemical affinity for the base triplets that code for them.[8] This suggests that the current, complex translation mechanism involving tRNA and associated enzymes may be a later development, and that originally, protein sequences were directly templated on base sequences. Another is that the standard genetic code that we see today grew from a simpler, earlier code through a process of “biosynthetic expansion”. Here the idea is that primordial life ‘discovered’ new amino acids (e.g. as by-products of metabolism) and later back-incorporated some of these into the machinery of genetic coding. Although much circumstantial evidence has been found to suggest that fewer different amino acids were used in the past than today,[9] precise and detailed hypotheses about exactly which amino acids entered the code in exactly what order has proved far more controversial.[10][11] A third theory is that natural selection has led to codon assignments of the genetic code that minimize the effects of mutations.[12].”
I think all three themes played a role in reducing the number of genetic codes. In addition, gene-swapping may have been important. Lifeforms that could share genes gained an edge. As with the VHS vs. Betamax format wars, the market would support only one winner.
David Boxenhorn: “I’ve been bringing up the importance of chaos, every once in a while, for years now. I don’t think anyone’s ever noticed…”
People often use “chaos” to imply that no useful modeling can be done. So I usually refer to nonlinear dynamic systems. Such systems can be difficult to model since initial errors in measurement can rapidly increase in an unbounded manner. In the real world there are often dispersive forces or “statistical” properties that constrain the system. Yes, in theory, the beat of a butterfly’s wing could cause a hurricane. In practice, very, very few wing beats even disturb pollen drifting a few feet away. The human brain is incredibly chaotic but we can model another person’s thoughts well enough to communicate.
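A tiny illustration of that error growth (Python), using the logistic map as a stand-in for any nonlinear dynamic system:

```
# two trajectories of the chaotic logistic map starting 1e-12 apart
x, y, r = 0.4, 0.4 + 1e-12, 3.9
for step in range(1, 61):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(step, abs(x - y))
```

The separation grows from 10^-12 to order one within a few dozen iterations, yet the long-run statistics of the trajectory remain perfectly modelable, which is the point about dispersive forces and statistical constraints.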
“…it must be evolution mainly in genetic variants of pretty small effect. There is a very low upper bound on mutations that are both beneficial and of moderate or large effect, and as genetic evolution becomes more rapid it must pass a point at which it is increasingly reflective of smaller effect variants.”
What is your reasoning? I know of yeast culture adaptation experiments that show modest adaptation and an exponential decrease in effect size with time. However microbial populations in the wild are far less limited when developing resistance. (Larger populations or gene sharing within the microbial community?)
Unlike the laboratory yeast experiments, the human population has grown so fast, expanded into so many different climates, and experienced so many cultural changes that I wouldn’t expect the human genome to be close to any local adaptation peak.
My intuition is that for some traits your statement, “evolution mainly in genetic variants of pretty small effect”, is right. Here is a prior comment that explains my reasoning:
How many different types of mutation/trait relationships exist? These types should have different adaptive landscapes. Here are some example types:
Constrained by physical laws: Improves one trait but at a cost. (E.g., brain size vs. metabolic cost.)
Balanced: Improves one trait, harms a different trait. Balance point is determined by environment. (E.g., vitamin D production vs. UV protection.) In some cases if the traits are sufficiently important then the traits may eventually de-couple.
Unconstrained: For rapid adaptation to the environment, the trait is relatively free to change without affecting other traits. (E.g., some immune system genes. Skeletal adaptations such as jaw or tooth structure.)
Network constraints: The trait depends on a complex regulatory gene network, metabolic pathway, or peripheral nerve-brain-hormone feedback control. Many other traits are affected by the same systems. The trait will depend on hundreds or thousands of “nodes” and hundreds of other traits are likewise affected. The biological system has evolved to be robust and canalized. Trait values are constrained by multiple, redundant feedback controls. Genetic variation is often masked, not obviously appearing in the phenotype. Some highly connected or highly important nodes are severely constrained, conserved for hundreds of millions of years.
I find the network constraints type most interesting. Suppose having more intelligence increases fitness. There are potentially thousands of mutations that could increase intelligence. But a mutation of large effect might well disrupt other important traits. Instead I’d expect to see a long series of mutations of modest effect that slowly push the network in a direction that increases the desirable trait. Now consider the recent paper on accelerated human adaptation and imagine large numbers of mutations of modest effect arising and sweeping local populations. What does this suggest with regard to DNA variants associated with intelligence variance? Or groups whose mental abilities, personalities, and behaviors were shaped by different environments for thousands of years? (Gene flow between groups will have mitigated this process somewhat.)
Here are some questions I have about the acceleration paper.
1) Does the LDD test miss copy number selection events?
Based on the Venter diploid genome, Europeans differ by about 1%. So there is more copy number variation than SNP variation. Potentially adaptation could be significantly higher than they claim. (They do claim that their estimate is conservative.)
2) How does this paper fit with the failure of large association and linkage studies to find loci of even modest effect for intelligence? Perhaps there are many good intelligence alleles of small effect that have partially swept different sub populations.
3) Could accelerated adaptation contribute to the Flynn Effect?
Assume that prior to fifty years ago, intelligence increased fitness. Based on the new paper there should be many new mutations that increase intelligence, in various stages of sweeping different subpopulations. Assume that about fifty years ago the widespread use of birth control by upper- and middle-class women, a feminist movement that devalued motherhood, and a welfare state that encouraged poor women to have more children together reversed the fitness benefit of intelligence.
Note that over many generations good alleles tend to accumulate on the same chromosome.
E.g., suppose that on the same homologous chromosome allele A1 has a fitness of 1.1 compared to wild allele a1, and allele A2 has a fitness of 1.05 compared to wild allele a2. These alleles will increase at rates 1.1 and 1.05 until they reach a significant allele frequency. At some point a single person will carry A1 and A2 on homologous chromosomes, which recombine to produce a new chromosome with both the A1 and A2 alleles. This chromosome will have a fitness advantage of approximately 1.15 (1.1 × 1.05) compared to a1a2 chromosomes, so the new A1A2 chromosome will sweep even faster. In this manner, good alleles tend to accumulate on sweeping chromosomes.
Now suppose A1 increases fertility and A2 increases intelligence. Up until fifty years ago the chromosome A1A2 would sweep at rate 1.15 and might be at an allele frequency of 50% in some sub population. However, in the last fifty years intelligence became a disadvantage (assume A2 now has fitness 0.95), so the A1A2 chromosome now has a fitness of about 1.05. It would continue to sweep, and average intelligence would rise, until recombination finally breaks the link between A1 and A2. So it is theoretically possible that many intelligence-raising alleles became linked to other sweeping beneficial alleles, with the net effect being an average increase in IQ even as the brightest people have fewer kids.
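A deterministic two-locus sketch of that story (Python; the recombination rate and generation counts are arbitrary, the fitnesses are the ones above):

```
import numpy as np

# haplotype frequencies in the order [A1A2, A1a2, a1A2, a1a2]
def step(x, w, r):
    x = x * w / (x * w).sum()                      # selection
    D = x[0] * x[3] - x[1] * x[2]                  # linkage disequilibrium
    return x + r * D * np.array([-1.0, 1.0, 1.0, -1.0])  # recombination

r = 0.01                                           # arbitrary
x = np.array([0.01, 0.0, 0.0, 0.99])               # A1A2 arose on one chromosome
w_old = np.array([1.1 * 1.05, 1.1, 1.05, 1.0])     # intelligence still beneficial
w_new = np.array([1.1 * 0.95, 1.1, 0.95, 1.0])     # A2 now costs fitness

for g in range(40):                                # partial sweep, old regime
    x = step(x, w_old, r)
print("at regime change: freq(A2) =", round(x[0] + x[2], 3))
for g in range(1, 201):                            # after the reversal
    x = step(x, w_new, r)
    if g % 50 == 0:
        print("gen", g, ": freq(A2) =", round(x[0] + x[2], 3))
```

Under these made-up numbers the A2 frequency keeps climbing for a while after its fitness turns negative, riding A1’s coattails, and only declines once recombination has generated enough A1a2 chromosomes.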
(My guess is that the Flynn Effect isn’t due to increasing frequencies of good intelligence alleles. The fitness penalty for intelligence in developed nations is just too great to be overcome by the genetic draft of other beneficial alleles. To really know what is happening we will have to identify the genome variants that contribute to g variation and see how their frequencies have changed over time in the population.)
An informative exchange between two experts, P’ter and Hawks, can be disrupted if too many others jump in. For that reason I’ve been reluctant to give my opinions or ask questions on P’ter’s threads. My contributions wouldn’t improve that discussion.
Here is my overview of the exchange…
P’ter agrees that the theoretical arguments based on population genetics, demographic history, and cultural history are very strong. P’ter is also very familiar with genetic research showing evidence of fairly recent selection in the human genome.
P’ter’s criticisms focus on the statistical analysis of the HapMap data. He is not claiming that accelerated adaptation didn’t occur, only that there are problems with the evidence that the authors presented.
Here are the issues that P’ter raises that I find most interesting:
1) The method used to detect selection events depends critically on a few parameters. I believe Hawks when he says that their results are fairly insensitive to algorithm settings, but including a parameter sensitivity analysis in the appendix would make the paper stronger. Run a few simulations and show how the results change with different settings.
2) The HapMap data used 30 “trios”, i.e., mother-father-child. Thus the data already included familial correlations, which may have distorted the LDD test results. This doesn’t mean the authors are wrong, but I think they need to address this issue.
3) P’ter isn’t satisfied with the way the authors treated varying recombination rates across the genome. The authors excluded genome regions with the slowest recombination rates but treated all other regions as having the same, constant rate of recombination. This is important because the LDD test depends strongly on the local recombination rate. P’ter would like to compare their results to a null-hypothesis simulation that models varying recombination rates across the genome. I think this would be a useful exercise; perhaps someone will do it.
4) P’ter feels the demographic model was too simplistic. E.g., a more realistic model would include varying population sizes, bottlenecks, founder effects, and population substructure that match historical data. Hawks justified their simple model by noting that such factors affect the entire genome whereas “selection events” affect specific loci. P’ter, based on his experience in statistically detecting “selection” signals in the HapMap data, believes that stochastic processes can generate “false positive” selection signatures that may distort the LDD test results.
I believe I understand P’ter’s concerns with regard to issues 3 and 4. However, I have a different take. The paper strongly contradicts conventional wisdom about recent human adaptation. Keeping the computer models simple and the simulations to a minimum makes the paper more accessible to a wide audience. Following papers will better model reality and give more precise estimates of how adaptation has changed over time. (This is an emotional issue. Suppose P’ter believes the claims but doesn’t believe that their statistical analysis proves their claims. If he spends considerable effort on simulation work to “fix” the flaws he gets little credit. Why should he do the work and then get no credit? It may not be fair but I believe that is the way science works.)
In summary, the theoretical arguments are strong and very likely to correctly predict accelerated human adaptation. Issue 1 could be dealt with by adding a parameter sensitivity study to the appendix. Issue 2 needs a substantive response from the authors. Issues 3 and 4 are worthwhile criticisms but I think it is a judgment call as to whether or not the authors used sufficiently realistic simulations. (Both P’ter and Hawks have far more experience and are far better qualified to make this judgment call than I.) Other researchers can now use more complex models to extend or refute their claims.
Does the LDD test miss copy number selection events?
re: “99.9 per cent of human DNA is shared by all”
Based on the Venter diploid genome, Europeans differ by about 1%.
“Within the human genome there are several different kinds of DNA variants. The most studied type is single nucleotide polymorphisms or SNPs, which are thought to be the essential variants implicated in human traits and disease susceptibility. A total of 4.1 million variants covering 12.3 million base pairs of DNA were uncovered in this analysis of Dr. Venter’s genome. Of the 4.1 million variations between chromosome sets, 3.2 million were SNPs. This is a typical number expected to be found in any other human genome, but there were at least 1.2 million variants that had not been described before. Surprisingly, nearly one million were different kinds of variants including: insertion/deletions (“indels”), copy number variants, block substitutions and segmental duplications.
While the SNP events outnumbered the non-SNP variants, the latter class involved a larger portion (74%) of the variable component of Dr. Venter’s genome. This data suggests that human-to-human variation is much greater than the 0.1% difference found in earlier genome sequencing projects. The new estimate based on this data is that genomes between individuals have at least 0.5% total genetic variation (or are 99.5% similar). The researchers suggest that much more research needs to be done on these non-SNP variants to better understand their role in individual genomics.”
http://www.jcvi.org/press/news/news_2007_09_03.php
Related GNXP post on copy number variation.
http://www.gnxp.com/blog/2006/11/hapmap-and-copy-number-variation.php
“I prefer books over new technologies like Kindle because they’re actually a bit easier on the eyes; if I’m staring at a screen too long it gets too uncomfortable to continue.”
The display will eventually be better than print. For some types of print it already is. The display can also change or magnify the font for easier reading. Eventually the book would have a camera that tracked your eye movements so that it could tell when you were tired or when some text held special interest. It could keep an outline of important topics for later referral or sharing with friends.
“I like being able to stick my thumb in a page and flip back & forth or pull another book off the shelf and compare the two at the same time,”
Use a split screen with a different page displayed in each panel. Or buy multiple displays that act as a virtual desktop, each displaying a different book, page, diagram, or web page but all controlled by a virtual desktop OS that knows what’s going on in all the displays.
“The problem is that paper and ink are accessible by anyone…”
The book is only accessible if you have it with you. I’ve thrown away all of my old math texts…got tired of hauling heavy boxes of books each time I moved. Even when you have the book it may be hard to find the info in the book that you want. And if you want to share the info it is hard to make it accessible to others.
“books would still be accessible for rebuilding civilization”
Like those that were in the Library of Alexandria when it burned? Old books rot, old pictures fade. Digital books are stored redundantly in several different formats and in many different locations. After super AI’s wipe out the human race, the digital information will live on.
There is a downside. Garbage lives forever and pops up when least desired. My PhD thesis is now available online. I would have sworn that only my adviser would have access to that boring shit. The only copies were safely buried in my closet and the university library shelf.
re: Reading habits
In my youth I was a voracious reader…scifi, textbooks, science mags, news mags, newspapers, backs of cereal boxes. Lots of reading, most of it trash. Now I read at sites where the garbage has been filtered. Those sites lead to others. In the comments I can read dissenting views and follow links to related material. And there is Google search and Wikipedia. I read more now and the information quality is much higher.
For the intellectually curious the online world is a vast improvement.
Shakey: “In a completely amoral world where eugenics became the rule of law, theoretically the black populations could be bred to produce only high IQ members, and over time, average racial differences would cease.”
If a foundation offered young women access to a sperm bank and offered to pay child support if the donor father had an IQ over 130, would that be wrong? The women could choose the race, height, appearance, athletic ability, etc. The program would be available to women of all races and IQ’s, married or single. Would many women choose this option? If it became popular, would the state pick up the tab?
(Science and technology are advancing rapidly. The discovery of genes that influence “g” will lead to effective nutritional and drug programs. Gene engineering and stem cell treatments to enhance adult “g” will follow. In addition, cybernetic enhancement could augment anyone’s mental performance. The new gap will be between those who use the technologies and those who don’t.)
The media coverage of a potential genetic link between breastfeeding and IQ is good for scientists studying the genetics of intelligence. Researching gene-environment interactions is wise. Most people would love to find an environmental fix for the black-white IQ gap. If genetic research into IQ differences is seen to further that agenda it will gain far more support.
I just want the research done. I want to know how genes, cells, and brain structures interact with the environment to produce human thinking.
And there will be many benefits to that knowledge…preventing and healing mental diseases, improved mental performance due to better nutrition, drugs, and training.
Marc: “Even high-IQ individuals from low-IQ populations will have children that regress to their population’s mean.”
I believe children regress to their “breeding” population mean. The mean IQ of the Brahmin caste is high. The children of South Asian immigrants in the US seem to be doing well. Assortative mating that leads to class stratification might also segregate the breeding populations of a nation.
David B: “I don’t see how (or why!) anyone would visit the site 7 times a day on average over a 30-day period.”
Perhaps some people have GNXP as their homepage. Or they regularly use the links on the sidebar. E.g., I often go through the “GNXP Forum” and “John Hawks” links.
Money needn’t be a prime motivator. Nor sex. Nor power. Nor fame. Intellectual curiosity can suffice. There is also community. Sharing of powerful ideas. Intelligent peers. Being part of something greater than oneself.
The importance of stochastic processes is probably most obvious when modeling cancer cell evolution in tumors.
Looc: “A stochastic trigger could never be proven nor disproven. If a theory can’t be falsified (is that the right word?) it exists outside the realm of rational inquiry.”
If a stochastic process is responsible for significant variation then it would be irrational to ignore the stochastic contribution.
Genetically identical flatworms raised in the same environment show significant variance in lifespan. Models that only look at genetic or environmental differences won’t accurately model flatworm aging.
Phenotype variation due to stochastic development is the source of a calico cat’s coloring. Stochastic gene silencing in cell lines during development leads to different skin patches expressing different pigment genes. So in this example backtracking from skin coloring to cell line differences showed a stochastic cause.
How could a stochastic contribution be shown? Suppose that when a scientist clones a homosexual mouse and implants the embryos in genetically identical mothers that 20% of the offspring are homosexual and 80% are heterosexual. The scientist could then look for brain structural differences that underlie the behavioral difference. He might then trace back the developmental differences that led to different brain structures. Ultimately the cell line differences might be tracked back to a stochastically dependent development pathway.
Clearly genetic and environmental differences cause phenotype differences, but I don’t think stochastic processes should be ignored.
Looc: “Genetic or Environmental?”
Or stochastic. Chance events early in development might account for the phenotype difference in MZ twins. Different numbers of cells. Partial differentiation before the split. Different chemical gradients. Different access to nutrients in the womb.
Also, to the baby the womb is the environment, but the womb depends on the mother’s genetics. If a mother’s immune system increasingly attacks “male” proteins with each successive male pregnancy, should that be considered environmental or should it be viewed as the genetics of the female immune system?
Looc: “2% of the general population also sounds much too high to be the result of a genetic mutation”
If a trait depends on one gene and that gene undergoes strong fitness-reducing mutations at a rate of one in ten thousand generations, then 2% would be very high. Selection would keep the non-fit allele frequency very low. (This assumes the “non-fit” allele really does reduce fitness. In the case of homosexuality the loss of male reproductive success might be balanced by increased female reproductive success.)
However, if the trait depends on 100 genes, each accruing mutations at that same rate, then the non-fit phenotype could appear much more frequently. The non-fit allele frequencies would still be very low but the non-fit trait could be relatively common.
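In numbers (Python; the per-gene mutation rate is the “one in ten thousand” above, and the fitness cost s = 0.5 is my assumption):

```
mu = 1e-4      # strongly deleterious mutations per gene per generation (from above)
s = 0.5        # assumed fitness cost of a disabling allele
q = mu / s     # mutation-selection balance, genic approximation: 2e-4

print("1-gene trait, carrier frequency:", q)                   # 0.02%
print("100-gene trait, any gene hit   :", 1 - (1 - q) ** 100)  # ~2%
```

So a phenotype frequency around 2% is roughly what mutation-selection balance allows once a hundred genes can each break the trait.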
My perspective:
Interactions vary in importance. I expect that genotype->phenotype interactions follow a power law, i.e., relatively few genetic changes are responsible for most phenotype variation. Incomplete models based on large- and medium-effect genetic differences should be good enough.
Important cellular systems are highly conserved. In many ways yeast cells are similar to human cells. Much of what is learned studying single-celled life will apply to humans.
Human cells seem to function just fine when transplanted into mice. So the regulatory systems haven’t diverged too much in 100 million years. What is learned studying mice will apply to humans.
Yes, inter-mixture of regulatory mechanisms yields a high dimensional space that may be impossible to model completely. However relatively slow mutation accumulation means that only a small subset of that space needs to be modeled when comparing humans to chimpanzees. There hasn’t been enough time to generate too many differences. Mapping what has changed to phenotype differences should be difficult but possible.
This modeling process does not require great insight or technological leaps. It takes lots of work. (Obviously better tools would make that work go faster.)
“presumably this developmental switch happens for a reason”
Yes. I’d like to know what the trade offs are. Perhaps too much adaptation to new surroundings hampers long term skill development in adult mice. High level learning might depend on the stability of lower level learning.
It would be nice to have a “smart” pill that temporarily increased learning rate.
Cell lines accumulate DNA mutations during development. Look at all the differences between the two brothers’ DNA samples. A few of those differences should also be present in the child and will indicate the father.
Make both brothers pay full child support and put half the money in an investment account. When sequencing has become cheap, determine the father and give the other brother his accumulated investment.
By combining the average disk structure data with dynamic tracking of individual molecules in the synapse (including molecule configuration changes) it might be possible to infer the dynamic behavior of the disk structure.
Very interesting.
“Why should mRNAs get expressed in response to learning anyway? Why involve the nucleus when the action is out at the synapses? I’m convinced that it’s important, but for my money, local protein synthesis from pre-existing RNAs seems a more specific response to changes in input patterns. Does anyone reading have an explanation for the need to produce new RNAs so far from the locus of plasticity?”
Perhaps the nucleus integrates total neuron signals to control back-propagation of electrical signaling through the dendritic arbor? (First, dendritic synapses are stimulated and left in an activated state. When the neuron “decides” that synaptic connections should be strengthened, an electrical signal back-propagates to the dendrites, causing the activated synapses to strengthen. The new mRNAs might temporarily alter the back-propagation signaling.)
Mike, I find it interesting for two reasons:
First, a single gene coded for all the components needed to build a functional system.
Second, this “swappable” light-harvesting system is very common in the bacterial world.
To me this indicates that some bacterial genes may have evolved as independent units that have co-evolved with bacterial “containers”. I’m thinking of modular software. The “light harvesting” module evolved to become a single, swappable functional unit. The bacterial “container” evolved so that it could use such swappable units. Thus the survival of the functional unit depends on all bacterial strains that can use the unit. The survival of the bacterial “container” depends on all the swappable units that the strain can use.
Re: UV vision in birds
http://www.bio.bris.ac.uk/research/vision/4d.htm
“Bird colour vision differs from that of humans in two main ways. First, birds can see ultraviolet light. It appears that UV vision is a general property of diurnal birds, having been found in over 35 species using a combination of microspectrophotometry, electrophysiology, and behavioural methods. So, are birds like bees? Bees, like humans, have three receptor types, although unlike humans they are sensitive to ultraviolet light, with loss of sensitivity at the red end of the spectrum. This spectral range is achieved by having a cone type that is sensitive to UV wavelengths, and two that are sensitive to “human visible” wavelengths. Remember, because ‘colour’ is the result of differences in output of receptor types, this means that bees do not simply see additional ‘UV colours’, they will perceive even human-visible spectra in different hues to those which humans experience. Fortunately, as any nature film crew knows, we can gain an insight to the bee colour world by converting the blue, red and green channels of a video camera into UV, blue and green channels. Bees are trichromatic, like humans, so the three dimensions of bee colour can be mapped onto the three dimensions of human colour. With birds, and indeed many other non-mammalian vertebrates, life is not so simple. As well as seeing very well in the ultraviolet, all bird species that have been studied have at least four types of cone. They have four, not three, dimensional colour vision. Recent studies have confirmed tetra-chromacy in some fish and turtles, so perhaps we should not be surprised about this. It is mammals, including humans, that have poor colour vision! Whilst UV reception increases the range of wavelengths over which birds can see, increased dimensionality produces a qualitative change in the nature of colour perception that probably cannot be translated into human experience. Bird colours are not simply refinements of the hues that humans, or bees, see, these are hues unknown to any trichromat.”
Is the bird lens more transparent to UV than the human lens?
Re: UV and IR vision in butterflies
“Depending on the species, small butterflies have either apposition eyes or some similar type of eye optics. Butterflies of various colors and designs need to see a wide spectrum of colors in order to find food, survive and multiply. Butterflies may have up to four different pigments in their eyes, as compared to two or three in many other insects. As a result, some butterflies have wide-spectrum color vision allowing them to see UV light reflected from specific flowers. Others can also see near-infrared-light beyond human color vision limits. They seem to respond to image color more than image detail, but their eyes have enough resolution to see fine patterns in flowers, and to see other butterflies in order to fly together. Some butterflies can see 30 micron (.03mm) details on objects, while the human eye can see details only in the range of 100 microns (.1mm). One possible reason for this variation is the large difference in eye focal lengths. The butterfly’s eye, with short focal length, is able to focus closer than the human eye. Normally, human eyes can focus better at a longer distance, over a wider field, than butterfly eyes.”
Nickelplate: “Hm, does this mean we could insert a gene into humans giving us infrared and/or UV vision? I would totally go for that!”
My guess is that gene engineering could be used to give humans IR or UV vision. However, as ChairmanK points out, there would be tradeoffs. Instead of gene engineering human vision I’d opt for electronically enhanced vision that improved resolution, contrast, and magnification or frequency shifting for IR or UV vision.
Most interesting to me is the demonstration of mammal brain plasticity. Adaptable brain processing might permit biological engineering for enhanced mental abilities such as expanded working memory.
Bo,
I don’t know why the comment disappeared; it may have been a problem with the blog’s comment software. As far as I’m aware none of the GNXP posters deleted the comment. I thought it was an informed comment.
bo: “weren’t there three post here yesterday?”
Yes. There was a criticism of the statistical relevance of the CHRM2 result.
Here is a related criticism of medical studies:
http://www.economist.com/science/displaystory
A scientist can observe whether a subject is awake or asleep, or when the subject first becomes aware of a stimulus. The subject’s self-reported internal experience is associated with observable brain states (but is not an accurate representation of real events). The desktop is a useful analogy for explaining how a human’s internal representation of self may provide a simplified illusion that aids high-level attention and executive control.
The separate issue, the “hard problem”, is whether the “OS desktop” is self-aware. I believe progress will be made on this problem. Observable brain states will be correlated with a subject’s description of his internal experience. Eventually this will lead to a model that matches biological brain processes to self-awareness. I expect some biological processes will be replaced by microprocessors and self-awareness will continue. Scientists may not discover why certain forms of mental processing correlate with self-awareness but I suspect they will be able to artificially duplicate those processing patterns and there will be strong evidence that the resulting artificial beings are self-aware.
What if you could completely monitor and alter a subject’s brain function while the subject reported his internal experience? Perhaps you could bypass vocal communication and communicate directly with the right and left hemispheres. Perhaps the communication between the hemispheres could be temporarily suppressed. Would two conscious minds arise? Could you explore what brain regions support awareness? How many separately aware entities can exist in the same brain? How much communication between entities is required for the entities to merge into a single aware entity? Could a newly merged entity remember the prior separated states? Or would the separate memories be re-interpreted and integrated into a single shared experience?
A new evidence-based science would describe how awareness depends on brain computation. The range and variety of “awareness” states could be explored. Eventually science might predict whether an insect or a computer is self-aware based on computational patterns.
Interesting link. Thanks.
Signalling sequences and labels are interesting. In the abstract, they provide addressing for information processing. In the physical, they are the keys and attachment points for entry, modification, and transportation. In evolution, they are subunits that can recombine or mutate to generate new functions.
The Real Richard Sharpe: “This has not been so in my lifetime, and I am 51 and was born and grew up in Australia.”
I’m 54 and grew up in the USA. By the early 70’s the legal barriers were gone but there were still many institutional barriers. There was also a social culture that reinforced the idea that women couldn’t or shouldn’t enter certain professions. Change was in the air but attitudes evolve slowly. (In some cases the old guard had to retire.)
My PhD thesis advisor was a female mathematician. She told me about her experiences being the only woman in a math department in the 70’s.
By the mid 80’s in the aerospace industry overt bias was rare. I recall a couple of instances in which an older manager didn’t give a brilliant young woman full responsibility. (The male PhD’s confronted him.)
Qrious: “it’s funny how we find this convergent evolution in light skin (between Europeans and Asians), in lactose tolerance (in East Africans and Europeans), but we never assume there are any convergent genes related to brain size and intelligence.”
There have been many posts and comments on GNXP concerning IQ and racial hybrids. In particular the possibility of getting the best alleles from different racial groups has been discussed. (E.g., IQ studies of Japanese-European children in Hawaii.)
There has also been discussion that different racial groups and different genders have evolved different brain architecture and function that result in differing mental strengths and weaknesses. (fMRI studies that show men and women use different brain regions to solve the same problem.)
The Real Richard Sharpe: “Since statistically, she is less likely than he, and most people who comment on this site understand the larger male variance at the tails of the IQ distribution, one wonders at the use of the PC pronoun.”
Women and minorities were once excluded from many professions. Language helped maintain traditional roles. I believe biology is more important than culture in determining social roles, but that does not mean that culture isn’t significant. So using “she” instead of a generic “he” may reduce gender barriers. Likewise, using “he” when referring to a caregiver, a secretary, or a nurse.
I don’t favor quotas that ignore the biological reality of racial and gender differences. I do favor modest language alterations that help minimize stereotypes.
I wouldn’t go too far in ignoring statistical reality. Referring to a generic mother as “he” or to a generic rapist as “she” would be absurd.
“30% abort rate”
Perhaps many Downs babies are born without screening? Of the parents who do test for Downs, perhaps 90% choose to abort their Downs fetus and those aborted fetuses represent 30% of potential Downs babies?
“the proportion of superbrights is shrinking.”
Increased assortative mating over the last few decades should have significantly increased the proportion of superbrights.
Better nutrition and less disease should significantly increase the average IQ in nations such as China and India. (E.g., less iodine deficiency in China.)
The diets of pregnant women are improving (e.g., vitamin and omega 3 oil supplements) and baby food formulas are improving.
Fewer environmental contaminants such as lead.
Advanced communications has enriched the intellectual environment for people. Connected by the Internet, intelligent people can interact more often. I suspect this culturally enriched environment produces more superbrights. (In the past a superbright Chinese peasant had little opportunity for intellectual growth.)
Until recently there has been a 3-point average increase per decade in measured IQ. (Flynn Effect)
I believe that dysgenics is decreasing the frequency of good “g” genes. (Based on female fertility rates compared to education level.) If the trend continued for many generations it would be worrisome. However, there are offsetting nutritional and cultural factors. I think that competition for the top slots at elite universities has significantly increased and that may indicate that the proportion of superbrights is increasing.
In the coming decades stem cell treatments will become common. In some cases donor cells will be used that are genetically distinct from the patient. As the technology matures, the donor cells will be genetically modified for enhanced effectiveness. Artificial chromosomes may be added. The typical human may become a chimera, composed of cells with different genetic lineages. Some cell types may be optimized for muscles and others for nerves. Original genetic material will determine less of who we are.
In the longer term, it will be possible to rebuild tissues. Neural pathways will be laid out, plasticity restored, and senses trained. Such technology will derive from procedures used to repair and rehabilitate brain damage. This should eventually lead to cures for those who are born deaf or blind.
The technology to remake humans should make eugenics less important.