The Unz Review - Mobile
A Collection of Interesting, Important, and Controversial Perspectives Largely Excluded from the American Mainstream Media
 James Thompson Archive
The Woodley Effect
Add fertilizer and yields are boosted, up to a plateau; ignore the quality of the seed and yields slowly decline.

Everyone knows about the Flynn Effect, but very few know about the Woodley Effect.
When Woodley was working on his 2013 paper “Were the Victorians cleverer than us? The decline in general intelligence estimated from a meta-analysis of the slowing of simple reaction time”, I wrote to Charles Murray about his findings, and in his reply he asked: “So when are we going to get a reconciliation of the Flynn Effect and the Woodley Effect?” Thus Murray named both effects: the apparent environmental rise in intelligence, and the possible fall in underlying genetic intelligence.

By analogy with agriculture, we could say that the Flynn Effect is about adding fertilizer to the soil, whereas the Woodley Effect is about noting the genetic quality of the plants. In my last post I described the current situation thus: The Flynn Effect co-exists with the Woodley Effect. Since roughly 1870 the Flynn Effect has been stronger, at an apparent 3 points per decade. The Woodley effect is weaker, at very roughly 1 point per decade. Think of Flynn as the soil fertilizer effect and Woodley as the plant genetics effect. The fertilizer effect seems to be fading away in rich countries, while continuing in poor countries, though not as fast as one would desire. The genetic effect seems to show a persistent gradual fall in underlying ability.

Woodley’s claim is based on a set of papers written since 2013, which have been recently reviewed by Sarraf.

https://drive.google.com/file/d/0B3c4TxciNeJZaEY0UjluV1djOG8/view?usp=sharing

The review is unusual, to say the least. It is rare to read so positive a judgment of a young researcher’s work, and it is extraordinary that one researcher has changed the debate about ability levels across generations, all within a few years of his first publications in psychology.

The table in that review which summarizes the main findings is shown below. As you can see, the range of effects is very variable, so my rough estimate of 1 point per decade is a stab at a median. It is certainly less than the Flynn Effect in the 20th century, though it may now be part of the reason for the fading of that effect, often referred to as a “negative Flynn effect”.

[Table: summary of the main Woodley effect findings, from Sarraf’s review]

You can now see that calculating the rate of decline is somewhat difficult. Perhaps a median would be “less than 1 point per decade”. The time spans vary, and so do the measures, though the latter variation is an advantage, in that declines across different measures suggest a general underlying cause. However, the range of estimated decline is very large, from 0 to 4.8 points per decade.

Here are the findings, which I have arranged by generational decline (taking one generation as 25 years).

  • Colour acuity, over 20 years (0.8 generations): 3.5 points/decade drop.
  • 3D rotation ability, over 37 years (1.5 generations): 4.8 points/decade drop.
  • Reaction times, females only, over 40 years (1.6 generations): 1.8 points/decade drop.
  • Working memory, over 85 years (3.4 generations): 0.16 points/decade drop.
  • Reaction times, over 120 years (4.8 generations): 0.57-1.21 points/decade drop.
  • Fluctuating asymmetry, over 160 years (6.4 generations): 0.16 points/decade drop.
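The generation counts above are simply the studies’ time spans divided by 25 years. As a quick sketch of that conversion (the per-decade drop figures themselves are as reported in the individual studies):

```python
# Convert each study's time span to generations, taking one
# generation as 25 years, as in the list above.
spans_years = {
    "colour acuity": 20,
    "3D rotation ability": 37,
    "reaction times (females)": 40,
    "working memory": 85,
    "reaction times": 120,
    "fluctuating asymmetry": 160,
}

for measure, years in spans_years.items():
    print(f"{measure}: {years} years = {round(years / 25, 1)} generations")
# colour acuity: 20 years = 0.8 generations
# 3D rotation ability: 37 years = 1.5 generations
# reaction times (females): 40 years = 1.6 generations
# working memory: 85 years = 3.4 generations
# reaction times: 120 years = 4.8 generations
# fluctuating asymmetry: 160 years = 6.4 generations
```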

Either the measures are considerably different and do not tap the same underlying loss of mental ability, or, if they do, drops this fast are unlikely to be caused by dysgenic decrements from one generation to another: bar a massive dying-out of populations, changes do not come about so quickly between generations. The drops in ability are real, but the reasons for the falls are less clear. Gathering more data sets would probably clarify the picture, and there is certainly cause to argue that on various real measures there have been drops in ability. Whether this is dysgenics or some other insidious cause is not yet clear to me.

Sarraf ends on a glowing note:

Ultimately, I cannot give “Historical variability in heritable general intelligence” a higher recommendation. Not since The Bell Curve (Herrnstein & Murray, 1994) has a single work offered such immense psychometric revelations about advanced human societies and their pasts and futures.

My view is that whereas formerly the debate was only about the apparent rise in ability, discussions are now about the co-occurrence of two trends: the slowing down of the environmental gains and the apparent loss of genetic quality. In the way that James Flynn identified an environmental/cultural effect, Michael Woodley has identified a possible genetic effect, and certainly shown that on some measures we are doing less well than our ancestors.

How will they be reconciled? Time will tell, but here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.

 
• Category: Science • Tags: Flynn Effect, IQ, Woodley Effect 
147 Comments to "The Woodley Effect"
  1. Bruce Charlton writes a lot about this, as an example of the mouse utopia experiment.

    • Replies: @pyrrhus
    The Woodley effect is almost certainly caused by increasing mutational load, with probably some dysgenic breeding effects tossed in. The Flynn effect is caused, as Dr. Thompson suggests, by better growing conditions, and also, as Flynn speculated, increasing familiarity with the Raven IQ test and such. Strangely enough, standardized testing in the US indicates that the Flynn effect died 50 years ago, and has not been resuscitated....All broad based standard testing has shown significant declines, especially the college entrance tests, the SAT and ACT.
  2. Sampling bias.

    • Replies: @Michael A. Woodley of Menie
    Not an argument
    , @Anonymous
    No one who's actually bothered to read Woodley's papers and is minimally intelligent would think sampling bias explains away much of the g decline; I'm certain it could account for none of the decline on vocabulary measures sampled across literally billions of written words over a century and a half. Perhaps you'd care to systematically demonstrate that sampling bias can explain all or a significant proportion of the apparent loss of g?

    Jayman, I know you like to merely play scientist on your fatuous little blog, which has no impact on any field of study, and have no experience doing actual research, but you'll find that those in the >115 IQ range are not looking at bodies of high-level, well-validated psychometric work exclusively through the lens of elementary statistical shortcomings. That you do little but this should clue you in to something.
  3. Compelling viewing: Woodley with Molyneux

    Why Civilizations Rise and Fall | Michael Woodley of Menie and Stefan Molyneux
    December 12, 2016
    VIDEO (1h33m)

    • Replies: @FKA Max
    Another great interview with Mr. Woodley [45min]:

    Are we getting smarter or dumber, or both? Frank Salter interviews Michael A. Woodley

    https://www.youtube.com/watch?v=Taw5O7-VKks

    Published on Jul 14, 2016

    HNN001 - According to the "Flynn Effect" humans are getting smarter and smarter. We know more than we ever did and score higher on IQ tests than our parents. But the number of geniuses is falling, as is mental speed, as measured by response-tests. What gives? Dr. Michael Woodley, interviewed here by Frank Salter, finds evidence that the English were smarter 100 years ago than they are today, based on response-test data collected from 1904. Dr Woodley concludes that our genetic potential is falling, perhaps due to the relaxation of Darwinian selection over the last century.


    The evolutionary reason for this may lie with the theory that geniuses have insights that advance the general population. “It’s paradoxical because you think the idea of evolution is procreation, and that might be true in a lot of cases,” he explains. “But what if the way you increase your genes is by benefitting the entire group, by giving them an innovation that allows them to grow and expand and colonise new countries?”

    The lack of common sense is in keeping with the idea that a genius exists as an asset to other people, and so: “They need to be looked after,” he says. “They are vulnerable and fragile.”
     
    - http://www.telegraph.co.uk/news/science/11232300/Why-do-geniuses-lack-common-sense.html
  4. JayMan! You show a lot of moxie, sticking your head out of your own threads. You do know you that here you can’t simply delete comments you don’t like, right?

  5. @JayMan
    Sampling bias.

    Not an argument

    • Replies: @JayMan

    Not an argument
     
    How is ensuring the representativeness of your samples not of utmost importance, especially considering that you essentially have an impossible result?
    , @FKA Max
    Off topic:

    Mr. Woodley are you in any way connected to the Menie Estate in Scotland, now owned by Donald Trump?

    Menie House is a grand 14th-century country property surrounded by over 200 acres (0.81 km2) of private land, collectively known as the Menie Estate. The house was designed by the Aberdeen architect John Smith for George Turner around 1835. It is listed as category B by Historic Scotland.[5]
    [...]
    American billionaire Donald Trump purchased a large part of the estate in 2006.
     
    - https://en.wikipedia.org/wiki/Balmedie#Menie_Estate
  6. Anonymous says:
    @JayMan
    Sampling bias.

    No one who’s actually bothered to read Woodley’s papers and is minimally intelligent would think sampling bias explains away much of the g decline; I’m certain it could account for none of the decline on vocabulary measures sampled across literally billions of written words over a century and a half. Perhaps you’d care to systematically demonstrate that sampling bias can explain all or a significant proportion of the apparent loss of g?

    Jayman, I know you like to merely play scientist on your fatuous little blog, which has no impact on any field of study, and have no experience doing actual research, but you’ll find that those in the >115 IQ range are not looking at bodies of high-level, well-validated psychometric work exclusively through the lens of elementary statistical shortcomings. That you do little but this should clue you in to something.

    • Agree: Bill
    • Replies: @JayMan

    Perhaps you’d care to systematically demonstrate that sampling bias can explain all or a significant proportion of the apparent loss of g?

    Jayman, I know you like to merely play scientist on your fatuous little blog, which has no impact on any field of study
     
    Perhaps you'd like to review your extensive contributions.
  7. @Michael A. Woodley of Menie
    Not an argument

    Not an argument

    How is ensuring the representativeness of your samples not of utmost importance, especially considering that you essentially have an impossible result?

  8. @Anonymous
    No one who's actually bothered to read Woodley's papers and is minimally intelligent would think sampling bias explains away much of the g decline; I'm certain it could account for none of the decline on vocabulary measures sampled across literally billions of written words over a century and a half. Perhaps you'd care to systematically demonstrate that sampling bias can explain all or a significant proportion of the apparent loss of g?

    Jayman, I know you like to merely play scientist on your fatuous little blog, which has no impact on any field of study, and have no experience doing actual research, but you'll find that those in the >115 IQ range are not looking at bodies of high-level, well-validated psychometric work exclusively through the lens of elementary statistical shortcomings. That you do little but this should clue you in to something.

    Perhaps you’d care to systematically demonstrate that sampling bias can explain all or a significant proportion of the apparent loss of g?

    Jayman, I know you like to merely play scientist on your fatuous little blog, which has no impact on any field of study

    Perhaps you’d like to review your extensive contributions.

    • Replies: @Anonymous
    Unlike you, I have not made criticizing my cognitive betters while completely and obviously lacking the expertise to do so a hobby.

    You have entirely dodged the substance of my reply. That aside, since you've suggested it's clear that Woodley makes use of biased samples, why not demonstrate this? Why not submit the analysis to a psychometrics journal? People doubtlessly more accomplished in psychometrics than you, James Thompson, for example, are not dismissing the dysgenics research out of hand. So you're implying that you have some profound insight about this apparent phenomenon that some highly and relevantly learned people are missing. Why not show it? In light of your execrable blog posts, it's quite clear that you do not have the ability to do any such thing. Hence why you limit yourself to snotty, one-sentence responses. For God's sake, you tried to refute dysgenics with bar charts. To all relevantly informed persons you've irredeemably humiliated yourself, time and again. Stop, for your own sake.
  9. though I do not know the cause of it.

    Might spending more time indoors, especially as children, be the cause and explanation?

    The Sun Is the Best Optometrist

    http://www.nytimes.com/2011/06/21/opinion/21wang.html

    Researchers suspect that bright outdoor light helps children’s developing eyes maintain the correct distance between the lens and the retina — which keeps vision in focus. Dim indoor lighting doesn’t seem to provide the same kind of feedback. As a result, when children spend too many hours inside, their eyes fail to grow correctly and the distance between the lens and retina becomes too long, causing far-away objects to look blurry.

    One study published in 2008 in the Archives of Ophthalmology compared 6- and 7-year-old children of Chinese ethnicity living in Sydney, Australia, with those living in Singapore. The rate of nearsightedness in Singapore (29 percent) was nearly nine times higher than in Sydney. The rates of nearsightedness among the parents of the two groups of children were similar, but the children in Sydney spent on average nearly 14 hours per week outside, compared with just three hours per week in Singapore.

    This could surely be a plausible explanation for the drops in reaction times, color acuity, etc.

    Similarly, a 2007 study by scholars at Ohio State University found that, among American children with two myopic parents, those who spent at least two hours per day outdoors were four times less likely to be nearsighted than those who spent less than one hour per day outside.

    Parents concerned about their children’s spending time playing instead of studying may be relieved to know that the common belief that “near work” — reading or computer use — leads to nearsightedness is incorrect. Among children who spend the same amount of time outside, the amount of near work has no correlation with nearsightedness. Hours spent indoors looking at a screen or book simply means less time spent outside, which is what really matters.

    This leads us to a recommendation that may satisfy tiger and soccer moms alike: if your child is going to stick his nose in a book this summer, get him to do it outdoors.

    • Replies: @FKA Max
    Comparison of visual reaction time [VRT] in myopic subjects with emmetropic subjects
    http://www.ejmanager.com/mnstemps/28/28-1471505995.pdf


    Materials and Methods:
    The study was carried out among 112 first year medical students in the age group 18 to 20. 60 emmetropic subjects and 52 myopic subjects were involved in the study. The study was carried out with the help of discriminatory and choice reaction time apparatus. VRT was measured in milliseconds. For myopic subjects, VRT was taken before and after correction of their refractive error. Subjects were presented with two visual stimuli, red and green.
    Result:
    VRT is found to be significantly more in uncorrected myopic subjects as compared to emmetropic subjects for both red and green light stimuli. VRT is found to be significantly less in emmetropic subjects as compared to myopic subjects even after correcting the refracting error.
    Conclusion:
    The myopic people have greater reaction time than emmetropic people even though when their refractive error is corrected. This adds refractive error as a new member in the row of factors that affects the VRT.
     
    , @Anonymous
    Actually "near work" like reading and computer is the cause of nearsightedness. The reason time outdoors is associated with less nearsightedness is because the visual stimulus comes from light reflected off objects much further away when you're outside, and this light is much brighter and a more intense stimulus. The focal point inside your eye when light from further away passes through your lens is much closer to your lens and the front of the eye, and there is a natural feedback mechanism whereby the eye's ciliary muscles and extraocular muscles flex and adjust to maintain the normal spherical shape of the eye that meets the focal point.

    By contrast, when you're indoors, the furthest objects in your visual field are usually the walls of a room, which is much closer than the horizon and other far flung objects when outside. Furthermore, when you're inside, you're often doing near work like reading and using the computer and focusing on objects which are even closer. This puts the focal point of the image in your eye further back from the front of your eye, and the eye's natural feedback mechanism uses the eye muscles to elongate the eyeball and meet the focal point. Moreover, this is compounded by the use of glasses or contact lenses for myopes, which are prescribed so that the focal point of images infinitely far away meet the back of the eye. In practice, most myopes don't use glasses or contact lenses merely to look at images far away, but use them while doing near work, and these lenses project the focal point even further back, kicking off the eye's natural feedback mechanism again and elongating the eye further. This is what causes the progression of nearsightedness and stronger prescriptions over time.

    This understanding of the cause of nearsightedness allows for a simple cure which involves just reversing the process by moving the focal point closer to the front of the eye.

    https://www.youtube.com/watch?v=x5Efg42-Qn0
    , @Emil Kirkegaard
    Are children of myopic parents randomly selected to play inside or outside? No, of course not. This is mostly a function of these children's dispositions, including genetic dispositions. So, you have a confound.

    This is the problem with all of these supposedly environmental findings. They might just reflect heritability choices in the environments people seek out or end up in.

    https://www.cambridge.org/core/journals/psychological-medicine/article/div-classtitlegenetic-influences-on-measures-of-the-environment-a-systematic-reviewdiv/76ECA7D8F0F92906DBB2AAFBED720F0C
  10. (1) 3 points Flynn / -1 points Woodley per decade seems high. My impression is that it was more 1.3 points Flynn / -0.3 points Woodley per decade.

    As per Armstrong (and Woodley!), the 3 points Flynn / decade figure is largely based on tests with high rules dependence (which are substantially independent from general intelligence, which has also risen, but at a slower pace).

    (2) I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.

    I agree.

    And without a major tech breakthrough, it will logically lead to the age of Malthusian industrialism.

  11. Anonymous says:
    @JayMan

    Perhaps you’d care to systematically demonstrate that sampling bias can explain all or a significant proportion of the apparent loss of g?

    Jayman, I know you like to merely play scientist on your fatuous little blog, which has no impact on any field of study
     
    Perhaps you'd like to review your extensive contributions.

    Unlike you, I have not made criticizing my cognitive betters while completely and obviously lacking the expertise to do so a hobby.

    You have entirely dodged the substance of my reply. That aside, since you’ve suggested it’s clear that Woodley makes use of biased samples, why not demonstrate this? Why not submit the analysis to a psychometrics journal? People doubtlessly more accomplished in psychometrics than you, James Thompson, for example, are not dismissing the dysgenics research out of hand. So you’re implying that you have some profound insight about this apparent phenomenon that some highly and relevantly learned people are missing. Why not show it? In light of your execrable blog posts, it’s quite clear that you do not have the ability to do any such thing. Hence why you limit yourself to snotty, one-sentence responses. For God’s sake, you tried to refute dysgenics with bar charts. To all relevantly informed persons you’ve irredeemably humiliated yourself, time and again. Stop, for your own sake.

    • Replies: @Steel T Post
    "irredeemably"

    Are we deplorable too?
    , @JayMan
    Anonymous says:

    To all relevantly informed persons you’ve irredeemably humiliated yourself, time and again.
     
    (Italics mine)

    The above says it all. I'd thank you for your input but I don't see the point.
  12. @Thrasymachus
    Bruce Charlton writes a lot about this, as an example of the mouse utopia experiment.

    The Woodley effect is almost certainly caused by increasing mutational load, with probably some dysgenic breeding effects tossed in. The Flynn effect is caused, as Dr. Thompson suggests, by better growing conditions, and also, as Flynn speculated, increasing familiarity with the Raven IQ test and such. Strangely enough, standardized testing in the US indicates that the Flynn effect died 50 years ago, and has not been resuscitated….All broad based standard testing has shown significant declines, especially the college entrance tests, the SAT and ACT.

    • Replies: @Santoculto
    The Woodley effect is almost certainly caused by increasing mutational load, with probably some dysgenic breeding effects tossed in.

    There are more mentally ill people these days than in the past*

    People are taller than before, even though they are also fatter...
    , @RW
    "Strangely enough, standardized testing in the US indicates that the Flynn effect died 50 years ago, and has not been resuscitated"

    Cite?
  13. @FKA Max

    though I do not know the cause of it.
     
    Might spending more time indoors, especially as children, be the cause and explanation?

    The Sun Is the Best Optometrist

    http://www.nytimes.com/2011/06/21/opinion/21wang.html

    Researchers suspect that bright outdoor light helps children’s developing eyes maintain the correct distance between the lens and the retina — which keeps vision in focus. Dim indoor lighting doesn’t seem to provide the same kind of feedback. As a result, when children spend too many hours inside, their eyes fail to grow correctly and the distance between the lens and retina becomes too long, causing far-away objects to look blurry.

    One study published in 2008 in the Archives of Ophthalmology compared 6- and 7-year-old children of Chinese ethnicity living in Sydney, Australia, with those living in Singapore. The rate of nearsightedness in Singapore (29 percent) was nearly nine times higher than in Sydney. The rates of nearsightedness among the parents of the two groups of children were similar, but the children in Sydney spent on average nearly 14 hours per week outside, compared with just three hours per week in Singapore.
     

    This could surely be a plausible explanation for the drops in reaction times, color acuity, etc.

    Similarly, a 2007 study by scholars at Ohio State University found that, among American children with two myopic parents, those who spent at least two hours per day outdoors were four times less likely to be nearsighted than those who spent less than one hour per day outside.
     


    Parents concerned about their children’s spending time playing instead of studying may be relieved to know that the common belief that “near work” — reading or computer use — leads to nearsightedness is incorrect. Among children who spend the same amount of time outside, the amount of near work has no correlation with nearsightedness. Hours spent indoors looking at a screen or book simply means less time spent outside, which is what really matters.

    This leads us to a recommendation that may satisfy tiger and soccer moms alike: if your child is going to stick his nose in a book this summer, get him to do it outdoors.
     

    Comparison of visual reaction time [VRT] in myopic subjects with emmetropic subjects

    http://www.ejmanager.com/mnstemps/28/28-1471505995.pdf

    Materials and Methods:
    The study was carried out among 112 first year medical students in the age group 18 to 20. 60 emmetropic subjects and 52 myopic subjects were involved in the study. The study was carried out with the help of discriminatory and choice reaction time apparatus. VRT was measured in milliseconds. For myopic subjects, VRT was taken before and after correction of their refractive error. Subjects were presented with two visual stimuli, red and green.
    Result:
    VRT is found to be significantly more in uncorrected myopic subjects as compared to emmetropic subjects for both red and green light stimuli. VRT is found to be significantly less in emmetropic subjects as compared to myopic subjects even after correcting the refracting error.
    Conclusion:
    The myopic people have greater reaction time than emmetropic people even though when their refractive error is corrected. This adds refractive error as a new member in the row of factors that affects the VRT.

    • Replies: @Matthew Sarraf
    This is an interesting finding. It does nothing to bring into doubt Dr. Woodley of Menie's dysgenic theory, however.

    A paper published shortly after my review, "Selection against variants in the genome associated with educational attainment," finds a substantial decrease of an educational attainment polygenic score in Icelanders over time (the study examines genetic data from 129,808 Icelanders born between 1910 and 1990; this is without question a representative sample): http://www.pnas.org/content/114/5/E727.abstract. Dr. Woodley of Menie notified me of this research and pointed out, as I anticipated upon first hearing of the paper, that the equation that the authors use to convert the polygenic score decline to a per decade IQ point decline, 0.038 x (30/3.74) = 0.30 IQ points, assumes an unrealistically low additive heritability of IQ: 30%. The adult additive heritability of IQ is typically pegged at 80-85%, with the additive heritability of g likely at 85-87%. Thus the Icelandic data in fact indicate a genotypic g decline of 0.81-0.88 points per decade (on an IQ scale; I am using 80% as a conservative estimate of the additive heritability of g and 87% as a realistic estimate to arrive at the 0.81-0.88 range). While already quite close to Dr. Woodley of Menie's estimated g decline of 1-1.5 points per decade, this is only the decline in g from genetic selection. Once we include Dr. Woodley of Menie and Mr. Fernandes' estimated decline in g from mutation accumulation and other sources of damage to developmental stability (in the paper cited as Woodley of Menie & Fernandes, 2016b in my review), 0.16 points per decade, the overall per decade g decline rises to 0.97-1.04 points. Particular demographic changes may add another 0.25 points of g lost per decade, bringing the overall estimated decline in g to 1.22-1.29 points per decade, entirely consistent with what Dr. Woodley of Menie has been saying for years. 
    In any case, the decrease in g due to genetic selection, the reality of which is confirmed in the Iceland paper about as directly as possible, is nearly a full point alone. So we find a diminution of g in the 1-1.5 points per decade range without availing ourselves of reaction time data, and a loss of g per decade nearly in that range even if we assume that only genetic selection is depressing g. Assume, arguendo, that the decadal reduction of g has been the mere 0.81 points per decade arrived at above with the 80% heritability estimate. Ignore all other possible contributory factors. 0.81 points of g lost a decade from 1850 to 2010 would amount to a total reduction of 12.96 points -- quite alarming for a very conservative estimate!

    I have not yet been able to read the study on myopia and visual reaction time (VRT) in detail. But even if myopia goes with longer VRT and myopia is becoming more prevalent (which it is), this would have no bearing on the secular trend toward greater auditory reaction time that Dr. Woodley of Menie and his colleagues have found. I doubt if the changing prevalence of myopia can explain more than a small fraction of the increase in VRT that Dr. Woodley of Menie and his colleagues have documented. Even when significant slowing is added to Galton's VRT samples, the remaining retardation of VRT indicates a g loss of ~10 points (on an IQ scale). As I argue in my review, attacking the dysgenic theory by picking at individual data sets and indicators is unlikely to bear fruit -- the nomological net of evidence for the theory is very robust, especially now that we have the aforementioned genetic selection data, and so is not likely to be undone without a parsimonious alternative explanation of declines in the various indicators that together seem to have nothing in common apart from a relation to g. If myopia decreases color acuity, increasing rates of myopia may explain why the estimate of dysgenesis on g from color acuity is much too high. But I am optimistic that some of the decline in color acuity is due to temporal reduction of g. Note that I do not suggest in my review that Dr. Woodley of Menie's research has made certain the precise magnitude of declines in g, only that it has shown that significant declines in g have been almost certainly occurring. With good genetic selection data now at hand, we are moving in on a more concrete estimate, which is probably in the 1-1.5 points of g lost per decade range that, as previously stated, Dr. Woodley of Menie predicted years ago.
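Sarraf's heritability rescaling in the comment above can be checked directly. The following is an editorial sketch using only the constants quoted in his comment (the 0.038 per-decade polygenic-score decline, the paper's 3.74 divisor, and the 30%/80%/87% heritability figures); it simply reproduces his arithmetic:

```python
# Re-check the arithmetic in the comment above: the Icelandic polygenic
# score decline converted to an IQ-point decline per decade, rescaled
# from the paper's assumed 30% additive heritability to 80% and 87%.
def iq_decline_per_decade(h2_percent):
    # The paper's conversion: 0.038 x (h2 / 3.74) IQ points per decade.
    return 0.038 * (h2_percent / 3.74)

print(round(iq_decline_per_decade(30), 2))  # 0.3  (the paper's own figure)
print(round(iq_decline_per_decade(80), 2))  # 0.81
print(round(iq_decline_per_decade(87), 2))  # 0.88

# Add mutation accumulation (0.16/decade) and demographic change (0.25/decade):
total_low = iq_decline_per_decade(80) + 0.16 + 0.25
total_high = iq_decline_per_decade(87) + 0.16 + 0.25
print(round(total_low, 2), round(total_high, 2))  # 1.22 1.29

# Conservative selection-only total over 1850-2010, i.e. 16 decades:
print(round(0.81 * 16, 2))  # 12.96
```

The numbers line up with the 0.81-0.88, 1.22-1.29, and 12.96 figures quoted in the comment.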
  14. Anonymous says:
    @FKA Max

    though I do not know the cause of it.
     
    Might spending more time indoors, especially as children, be the cause and explanation?

    The Sun Is the Best Optometrist

    http://www.nytimes.com/2011/06/21/opinion/21wang.html

    Researchers suspect that bright outdoor light helps children’s developing eyes maintain the correct distance between the lens and the retina — which keeps vision in focus. Dim indoor lighting doesn’t seem to provide the same kind of feedback. As a result, when children spend too many hours inside, their eyes fail to grow correctly and the distance between the lens and retina becomes too long, causing far-away objects to look blurry.

    One study published in 2008 in the Archives of Ophthalmology compared 6- and 7-year-old children of Chinese ethnicity living in Sydney, Australia, with those living in Singapore. The rate of nearsightedness in Singapore (29 percent) was nearly nine times higher than in Sydney. The rates of nearsightedness among the parents of the two groups of children were similar, but the children in Sydney spent on average nearly 14 hours per week outside, compared with just three hours per week in Singapore.
     

    This could surely be a plausible explanation for the slowing of reaction times, the decline in color acuity, etc.

    Similarly, a 2007 study by scholars at Ohio State University found that, among American children with two myopic parents, those who spent at least two hours per day outdoors were four times less likely to be nearsighted than those who spent less than one hour per day outside.
     


    Parents concerned about their children’s spending time playing instead of studying may be relieved to know that the common belief that “near work” — reading or computer use — leads to nearsightedness is incorrect. Among children who spend the same amount of time outside, the amount of near work has no correlation with nearsightedness. Hours spent indoors looking at a screen or book simply means less time spent outside, which is what really matters.

    This leads us to a recommendation that may satisfy tiger and soccer moms alike: if your child is going to stick his nose in a book this summer, get him to do it outdoors.
     

    Actually, “near work” like reading and computer use is the cause of nearsightedness. The reason time outdoors is associated with less nearsightedness is that outside, the visual stimulus comes from light reflected off objects much further away, and this light is much brighter and a more intense stimulus. When light from further away passes through your lens, the focal point inside your eye is much closer to your lens and the front of the eye, and there is a natural feedback mechanism whereby the eye’s ciliary muscles and extraocular muscles flex and adjust to maintain the normal spherical shape of the eye that meets the focal point.

    By contrast, when you’re indoors, the furthest objects in your visual field are usually the walls of a room, much closer than the horizon and other far-flung objects outside. Furthermore, when you’re inside you’re often doing near work like reading and using the computer, focusing on objects that are even closer. This puts the focal point of the image further back in your eye, and the eye’s natural feedback mechanism uses the eye muscles to elongate the eyeball to meet the focal point. This is compounded by the use of glasses or contact lenses by myopes, which are prescribed so that the focal point of images infinitely far away meets the back of the eye. In practice, most myopes don’t use glasses or contact lenses merely to look at distant images, but wear them while doing near work; these lenses project the focal point even further back, kicking off the eye’s natural feedback mechanism again and elongating the eye further. This is what causes the progression of nearsightedness and stronger prescriptions over time.

    This understanding of the cause of nearsightedness allows for a simple cure which involves just reversing the process by moving the focal point closer to the front of the eye.
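    The focal-geometry argument above can be illustrated with the standard thin-lens equation. This is my own sketch, not the commenter's: the ~17 mm effective focal length of the relaxed eye is a textbook approximation, and the 250 mm reading distance is an assumed typical value.

```python
# Thin-lens sketch of the accommodation argument above: for a fixed focal
# length, nearer objects push the image plane farther back in the eye.
# The 17 mm relaxed-eye focal length is a textbook approximation, and the
# 250 mm reading distance an assumed value, not figures from the comment.
def image_distance_mm(object_distance_mm, focal_length_mm=17.0):
    # 1/f = 1/d_o + 1/d_i  =>  d_i = 1 / (1/f - 1/d_o)
    return 1.0 / (1.0 / focal_length_mm - 1.0 / object_distance_mm)

far = image_distance_mm(10_000_000.0)  # effectively at infinity: ~17.0 mm
near = image_distance_mm(250.0)        # typical reading distance: ~18.2 mm
print(round(near - far, 2))            # image plane sits ~1.24 mm farther back
```

    On these assumed numbers, near work shifts the focal point more than a millimetre behind its far-vision position, which is the shift the feedback-elongation story above appeals to.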

  15. @FKA Max
    Comparison of visual reaction time [VRT] in myopic subjects with emmetropic subjects
    http://www.ejmanager.com/mnstemps/28/28-1471505995.pdf


    Materials and Methods:
    The study was carried out among 112 first year medical students in the age group 18 to 20. 60 emmetropic subjects and 52 myopic subjects were involved in the study. The study was carried out with the help of discriminatory and choice reaction time apparatus. VRT was measured in milliseconds. For myopic subjects, VRT was taken before and after correction of their refractive error. Subjects were presented with two visual stimuli, red and green.
    Result:
    VRT is found to be significantly longer in uncorrected myopic subjects as compared to emmetropic subjects for both red and green light stimuli. VRT is found to be significantly shorter in emmetropic subjects as compared to myopic subjects even after correction of the refractive error.
    Conclusion:
    Myopic people have greater reaction time than emmetropic people even when their refractive error is corrected. This adds refractive error as a new member to the row of factors that affect VRT.
     

    This is an interesting finding. It does nothing to bring into doubt Dr. Woodley of Menie’s dysgenic theory, however.

    A paper published shortly after my review, “Selection against variants in the genome associated with educational attainment,” finds a substantial decrease of an educational attainment polygenic score in Icelanders over time (the study examines genetic data from 129,808 Icelanders born between 1910 and 1990; this is without question a representative sample): http://www.pnas.org/content/114/5/E727.abstract. Dr. Woodley of Menie notified me of this research and pointed out, as I anticipated upon first hearing of the paper, that the equation the authors use to convert the polygenic score decline to a per-decade IQ point decline, 0.038 × (30/3.74) = 0.30 IQ points, assumes an unrealistically low additive heritability of IQ: 30%.

    The adult additive heritability of IQ is typically pegged at 80-85%, with the additive heritability of g likely at 85-87%. Thus the Icelandic data in fact indicate a genotypic g decline of 0.81-0.88 points per decade (on an IQ scale; I am using 80% as a conservative estimate of the additive heritability of g and 87% as a realistic estimate to arrive at the 0.81-0.88 range). While already quite close to Dr. Woodley of Menie’s estimated g decline of 1-1.5 points per decade, this is only the decline in g from genetic selection.

    Once we include Dr. Woodley of Menie and Mr. Fernandes’ estimated decline in g from mutation accumulation and other sources of damage to developmental stability (in the paper cited as Woodley of Menie & Fernandes, 2016b in my review), 0.16 points per decade, the overall per-decade g decline rises to 0.97-1.04 points. Particular demographic changes may add another 0.25 points of g lost per decade, bringing the overall estimated decline in g to 1.22-1.29 points per decade, entirely consistent with what Dr. Woodley of Menie has been saying for years.

    In any case, the decrease in g due to genetic selection, the reality of which is confirmed in the Iceland paper about as directly as possible, is nearly a full point alone. So we find a diminution of g in the 1-1.5 points per decade range without availing ourselves of reaction time data, and a loss of g per decade nearly in that range even if we assume that only genetic selection is depressing g. Assume, arguendo, that the decadal reduction of g has been the mere 0.81 points per decade arrived at above with the 80% heritability estimate. Ignore all other possible contributory factors. 0.81 points of g lost per decade from 1850 to 2010 (16 decades) would amount to a total reduction of 12.96 points: quite alarming for a very conservative estimate!
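    The heritability rescaling described above can be checked in a few lines. This is only a sketch: the constants 0.038 and 3.74 are taken directly from the conversion equation quoted in the comment, not independently verified against the PNAS paper.

```python
# Sketch of the heritability rescaling described above. The constants 0.038
# and 3.74 come from the comment's quoted conversion equation; they are not
# independently verified against the paper itself.
def g_decline_per_decade(additive_heritability_pct):
    return 0.038 * (additive_heritability_pct / 3.74)

assert round(g_decline_per_decade(30), 2) == 0.30   # the paper's own figure
low, high = g_decline_per_decade(80), g_decline_per_decade(87)
print(round(low, 2), round(high, 2))                # 0.81 0.88

# At the conservative 0.81 points/decade, the 16 decades from 1850 to 2010:
print(round(0.81 * 16, 2))                          # 12.96
```

    The 0.81-0.88 range and the 12.96-point cumulative figure in the comment both fall out of the same substitution.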

    I have not yet been able to read the study on myopia and visual reaction time (VRT) in detail. But even if myopia goes with longer VRT, and myopia is becoming more prevalent (which it is), this would have no bearing on the secular trend toward greater auditory reaction time that Dr. Woodley of Menie and his colleagues have found. I doubt that the changing prevalence of myopia can explain more than a small fraction of the increase in VRT that they have documented. Even when significant slowing is added to Galton’s VRT samples, the remaining retardation of VRT indicates a g loss of ~10 points (on an IQ scale). As I argue in my review, attacking the dysgenic theory by picking at individual data sets and indicators is unlikely to bear fruit: the nomological net of evidence for the theory is very robust, especially now that we have the aforementioned genetic selection data, and it is not likely to be undone without a parsimonious alternative explanation of declines in the various indicators, which together seem to have nothing in common apart from a relation to g. If myopia decreases color acuity, increasing rates of myopia may explain why the estimate of dysgenesis on g from color acuity is much too high. But I am confident that some of the decline in color acuity is due to a secular reduction of g. Note that I do not suggest in my review that Dr. Woodley of Menie’s research has established the precise magnitude of declines in g, only that it has shown that significant declines in g have almost certainly been occurring. With good genetic selection data now at hand, we are closing in on a more concrete estimate, which is probably in the range of 1-1.5 points of g lost per decade that, as previously stated, Dr. Woodley of Menie predicted years ago.

    • Replies: @Wizard of Oz
    Thank you for that and I invite your attention to #36,
    , @utu

    "as I anticipated upon first hearing of the paper, that the equation that the authors use to convert the polygenic score decline to a per decade IQ point decline,

    0.038 x (30/3.74) = 0.30
     
    IQ points, assumes an unrealistically low additive heritability of IQ: 30%."
     
    And did it not occur to you that the equation cannot possibly be correct? Heritability, say 30%, is expressed as a fraction of variance. Variance is in different units than a mean. The trend, say ∆IQ, is measured as a change of means: you compare means at different times to get ∆IQ, then divide by the time interval. There is no way a linear proportion linking mean and variance could express the trend.
    , @Craken
    My understanding is that additive (i.e. narrow-sense) heritability is about .6 and broad-sense heritability is about .8-.85.

    Your "conservative estimate" of a decline of 12.96 IQ points would, putting aside the Flynn effect, produce a 92% decline in people with IQs over 145. The decline in people with IQs over 160 would be about 96%. This strains credibility on its face, for several reasons. First, the Flynn effect mainly affected the lower echelons of IQ. Both math and science continue to progress. Also, this rate of decline does not accord with demographic records of differential fertility among different IQ strata.

    On the other hand, if this sort of hyperbole functions to crack the Western establishment out of its nihilistic slumbers, it may be that the end justifies the means...
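    The tail arithmetic in Craken's reply above can be checked against the normal model. A sketch, assuming a fixed sd of 15 and taking the 12.96-point mean shift from the earlier comment:

```python
from statistics import NormalDist

# Checking the "92% decline above IQ 145" arithmetic above: compare the
# share of the population beyond a threshold before and after a 12.96-point
# drop in the mean (sd held at 15; 12.96 is the earlier comment's estimate).
def share_above(threshold, mean, sd=15.0):
    return 1.0 - NormalDist(mean, sd).cdf(threshold)

for threshold in (145, 160):
    before = share_above(threshold, 100 + 12.96)  # older, higher mean
    after = share_above(threshold, 100)           # current mean
    print(threshold, round((1 - after / before) * 100))  # 145 -> 92, 160 -> 96
```

    The 92% and 96% figures quoted in the reply are reproduced, on the stated assumptions, by this tail-ratio calculation.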
  16. Reaction times slowing over the last 120 years. People no longer riding horses?

  17. @Anonymous
    Unlike you, I have not made criticizing my cognitive betters while completely and obviously lacking the expertise to do so a hobby.

    You have entirely dodged the substance of my reply. That aside, since you've suggested it's clear that Woodley makes use of biased samples, why not demonstrate this? Why not submit the analysis to a psychometrics journal? People doubtless more accomplished in psychometrics than you, James Thompson, for example, are not dismissing the dysgenics research out of hand. So you're implying that you have some profound insight about this apparent phenomenon that some highly and relevantly learned people are missing. Why not show it? In light of your execrable blog posts, it's quite clear that you do not have the ability to do any such thing. Hence you limit yourself to snotty, one-sentence responses. For God's sake, you tried to refute dysgenics with bar charts. To all relevantly informed persons you've irredeemably humiliated yourself, time and again. Stop, for your own sake.

    “irredeemably”

    Are we deplorable too?

    Victorians were smarter than “us”, present-day Englishmen, but they were badly neglected by their government, especially the poorest.

    It is as if you once had sturdy plants growing on hostile terrain, and nowadays you have weaker plants growing on fertile ground.

    If intelligence is largely genetic and runs within families, more in some than in others, it would be interesting and obvious to do genealogical/biographical research, especially on individuals who score high on IQ tests today [including their fertility rates, of course].

    I do not know how intelligent my ancestors were, but I know that my mother, who is a very intelligent woman (high general and verbal intelligence) and came from a poor (but not very poor) family, married my father, who came from a more middle-class family, full of siblings with intelligence at least above average. How might the end of arranged marriages in the West have affected, for better or for worse, the level and quality of people’s intelligence?

    Earlier, people in the West seem to have married more along the lines of social castes. Nowadays people marry whomever they are interested in, and also tend to divorce with great frequency. Or I’m completely wrong and I do not know what I’m talking about (most likely).

    It is clear that better infrastructure, a much more generous welfare state, and more leftist ideologies have practically created an ‘African’, ‘savannah’ environment: more promiscuous, more disorganized, less Darwinian. This ”r-environment” selects for r-people, without any order.

    It’s as if you had a plant and always cut the weeds before they grew and took it over. Now you no longer cut the weeds, and they have a parasitic nature…

  19. @pyrrhus
    The Woodley effect is almost certainly caused by increasing mutational load, with probably some dysgenic breeding effects tossed in. The Flynn effect is caused, as Dr. Thompson suggests, by better growing conditions, and also, as Flynn speculated, by increasing familiarity with the Raven IQ test and such. Strangely enough, standardized testing in the US indicates that the Flynn effect died 50 years ago, and has not been resuscitated… All broad-based standardized testing has shown significant declines, especially the college entrance tests, the SAT and ACT.

    The Woodley effect is almost certainly caused by increasing mutational load, with probably some dysgenic breeding effects tossed in.

    There are more mentally ill people these days than in the past*

    People are taller than before, even as they are fatter…

  20. @Matthew Sarraf

    Thank you for your detailed comment.

    • Replies: @Wizard of Oz
    At least 15 years ago I did a back-of-the-envelope + Excel calculation based on a 100 average IQ, sd 15, in which all women under 85 IQ have three children at age 21 and all women over 115 IQ have 2 children at age 33, with no allowance for older fathers' mutations and no other relevant fertility figures. On standard heritability assumptions (which I leave unstated because my memory of the other figures may be very slightly astray, and it doesn't really affect my point) I calculated that there would be a loss of 2 points of IQ in 100 years, which, absent some sort of caste system, would mean a significant loss of people with IQs over 140. So, to me it is just common sense, as it was to J. M. Keynes and other members of the Malthusian Society, that modern peoples have been breeding dysgenically for g and any other heritable traits that are conducive to material success or achievement of social status. It is alarming to read here of plausible estimates which are much more damaging to what most would regard, however euphemistically expressed, as the quality of our populations. If the world remains peaceful, then I daresay we can meet democracies' essential condition of ever-rising wellbeing thanks to the millions of still unexploited good brains in Asia, but the evident defects of nominal democracies in crumbling and unequal Europe and the US suggest that a lot of us may be reconciling ourselves to some form of government by an elite modelled on China or Japan, with some consoling flim-flam about Plato's Guardians.
    Comment?

    And if I may attempt to describe a future acceptable to those of us looking forward with some foreboding, but not personally threatened with disaster in health, wealth or peace and quiet, and invite criticism, comment and counter-suggestions: I would say it is a world where healthy longevity for our great-grandchildren has continued to improve, there is no danger of their not having enough healthy (and enjoyable) food to eat, the wonders of the natural environment are, give and take, not less for them than they are for First World backpackers and prosperous retired persons today, and leisure for enjoying things of the mind is not much less than Keynes over-optimistically imagined in 1930. I fear that half a billion early vasectomies in sub-Saharan Africa might be a necessary condition, especially for the environment, but otherwise I retain a glimmer of optimism.

    , @Matthew Sarraf
    Thank you for featuring my review on your blog.
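    The back-of-the-envelope calculation Wizard of Oz describes above can be redone with the breeder's equation. This sketch is mine, not his: the middle group's fertility (2 children) and the heritability values tried are illustrative assumptions absent from his comment, and generation lengths are ignored.

```python
from statistics import NormalDist

# Illustrative re-run of a breeder's-equation calculation like the one
# described above. Groups: IQ < 85 (3 children), 85-115 (assumed 2 children),
# IQ > 115 (2 children). Response per generation R = h^2 * S, where S is the
# fertility-weighted parental mean minus the population mean. The middle
# group's fertility and the heritabilities are assumptions, not his figures.
MEAN, SD = 100.0, 15.0
nd = NormalDist(MEAN, SD)

def truncated_mean(lo, hi):
    # Mean of the IQ distribution truncated to [lo, hi].
    std = NormalDist()
    a, b = (lo - MEAN) / SD, (hi - MEAN) / SD
    return MEAN + SD * (std.pdf(a) - std.pdf(b)) / (std.cdf(b) - std.cdf(a))

groups = [  # (lower bound, upper bound, children per woman)
    (-1e9, 85.0, 3.0),
    (85.0, 115.0, 2.0),   # assumed fertility for the middle group
    (115.0, 1e9, 2.0),
]
weights = [(nd.cdf(hi) - nd.cdf(lo)) * kids for lo, hi, kids in groups]
parent_mean = sum(w * truncated_mean(lo, hi)
                  for w, (lo, hi, _) in zip(weights, groups)) / sum(weights)
S = parent_mean - MEAN  # selection differential, IQ points (about -1.68)
for h2 in (0.3, 0.6, 0.8):
    print(h2, round(h2 * S, 2))  # response per generation under each h^2
```

    At a narrow-sense heritability of 0.6 this gives roughly a one-point loss per generation; whether that reproduces his "2 points in 100 years" depends on the generation lengths and middle-group fertility he actually used, which his comment leaves unstated.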
  21. @Anonymous

    Anonymous says:

    To all relevantly informed persons you’ve irredeemably humiliated yourself, time and again.

    (Italics mine)

    The above says it all. I’d thank you for your input but I don’t see the point.


  22. @Santoculto

    It’s because of diet. The industrial revolution, as well as agriculture, gave us sufficient kcal to grow. On the flip side, the obesity and height increases follow each other. Some researchers argue that a high rate of obesity is a foregone conclusion in first-world countries due to the availability of food.

    https://notpoliticallycorrect.me/2017/01/15/agriculture-and-diseases-of-civilization/

    Furthermore, n-6 is positively correlated with obesity while n-3 is positively correlated with intelligence. The American diet has 26 times more n-6 than n-3. I don’t even need to tell you the consequences for obesity and intelligence there.

    https://notpoliticallycorrect.me/2017/01/21/fatty-acids-and-pisa-math-performance/


  23. @RaceRealist88

    The American diet has 26 times more n-6 than n-3.

    Maybe the imbalance has a lead-type effect, or maybe there is some other pervasive substance that produces a lead-type effect and we can’t see it yet.

  24. @iffen

    Maybe. I’ve been reading into this the past few weeks to make a substantial post on it. Read this.

    https://www.hindawi.com/journals/jnme/2012/539426/

    There is a metabolic competition between n-3 and n-6 intake. Our hunter-gatherer ancestors had a far lower ratio. We do not. That’s a cause of some of the diseases of civilization. The article I linked should get you started.

  25. @iffen

    Wouldn’t it be hilarious if plastic turns out to be extraordinarily toxic? We have found our own lead.

  26. @Daniel Chieh

    Plastic that has a chemical called BPA has deleterious effects on the body.

    Urinary BPA levels are significantly correlated with obesity.

    http://jamanetwork.com/journals/jama/fullarticle/1360865#32768198

    Moreover, BPA consumption for pregnant women has extremely negative impacts on a developing fetus. Read this.

    https://notpoliticallycorrect.me/2016/05/19/science-daily-moms-exposure-to-bpa-during-pregnancy-can-put-her-baby-on-course-to-obesity/

    The more BPA a baby is exposed to in utero, the higher chance it has of becoming obese.

    This doesn’t even touch the feminizing effects of BPA on males. I’ll go in depth on that later.

  27. @RaceRealist88

    Our hunter-gatherer ancestors had a far lower ratio. We do not

    Even our farm-raised fish are grain-fed.

    • Replies: @RaceRealist88
    Yes. Cereal grains contribute to the extreme lopsidedness of the n-6/n-3 ratio. Diet is imperative to a high-functioning brain.

    Great article here:

    http://www.direct-ms.org/pdf/NutritionFats/Yehuda%20Omega%203%206%20ratio.pdf

    The ratio of n-6 to n-3 in the years 1935 to 1939 was 8.4 to 1, whereas by 1985 it had increased to about 10 to 1.

    http://cyber.sci-hub.bz/MTAuMTA4MC8wNzMxNTcyNC4xOTkyLjEwNzE4MjMx/10.1080%4007315724.1992.10718231.pdf

    Twenty percent of our kcal consumed per day comes from soybean oil, 9 percent from linoleic acid.

    https://www.cnpp.usda.gov/sites/default/files/nutrient_content_of_the_us_food_supply/FoodSupply1909-2004Report.pdf

    N-6 also contributes to obesity. Any wonder why we keep getting fatter as a country (though the rate has been decreasing since 2005, something they don't tell you)?

  28. Anonymous says:
    @JayMan

    “I don’t see the point.”

    A chronic problem of yours, sadly.

    Let’s try this again. Can you demonstrate that Woodley’s findings are compromised by sampling bias? Or can you only assert that they are without argument like a brainless poseur?

  29. UNITS DEFINITION PLEASE

    3.5 drop/decade, 4.8 drop/decade, 1.8 drop/decade, 0.16 drop/decade, 0.57-1.21 drop/decade, 0.16 drop/decade.

    Are these common units? Is RT, which is in [ms], converted to some other units? How are the conversions done? On what basis are the units conflated with g? And yes, I must ask: what are the units of g?

    • Replies: @James Thompson
    , @EH
    They are = standard deviations * 15, so basically = IQ points.

    "And, yes I must ask what are the units of g?"

    IQ points with an s.d. of 15 are the standard unit for measurements of g. IQ points are on an equal-interval scale (at least for small numbers, less than a standard deviation or two). You can add and subtract them, but not multiply or divide them by each other. Basically like working with Centigrade or Fahrenheit. They're also not equal-interval out past 30 points from the average, and become steadily less so the farther out you go, because the real distribution of g has fatter tails than the normal distribution. The biggest drawback of IQ is that it is not a measure of intelligence properly speaking, but of the rarity of intelligence relative to a given age, so an IQ-100 9-year-old is not as capable of answering questions as a 29-year-old with the same IQ. The size of the IQ-point unit is theoretically the same, that is, a 110-IQ 9 y.o. would be as much smarter than a 100-IQ 9 y.o. as in the same case with 29-year-olds. (In practice, 9-year-olds have a tighter distribution, but this sort of comparison between ages is seldom if ever used.)


    Given the right sort of test (one where measured item difficulties can be graphed as a straight line) there is a transformation of the raw scores that gives you a ratio scale like Kelvin, with an absolute zero, which allows all arithmetic operations, thus letting you say: "A is 10% smarter than B". This is called a Rasch measure. The only arbitrary choice is the size of the unit. Riverside Publishing's Stanford-Binet CSS (change-sensitive score) and the Woodcock-Johnson "W" scale set the size of their Rasch unit by reference to the average 10-year-old, who is assigned a CSS of 500. Adults are around 510. The form of the CSS vs. age graph is logarithmic, rising quickly at first, then leveling off. IIRC, the s.d. for the FSIQ (full-scale, whole-test) CSS scale for adults is about 8.5 CSS points, or roughly 6 CSS at age 9; earlier ages have wider distributions. (Each subtest also has its own CSS with the same 500-at-age-10 anchor. Actually, every question has a CSS score on the same scale, which denotes its difficulty: when difficulty = ability, the chance of getting the item correct is 50%.)

    So the percentage variation in human intelligence is low (~10% difference within the middle 99.9% of the adult population), but expressed as age differences, +2 s.d. people are smarter at age 9 than the average adult, while -2 s.d. adults are only as smart as the average 6 year old.

    Most research on human intelligence does not require a ratio scale so IQ is good enough for those purposes. Rasch / ratio scales are more rigorously defined, though, and allow doing some things that IQ can't do, or at least makes more difficult and error-prone.
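
    EH's description can be made concrete with a short sketch. The adult CSS mean of ~510 and s.d. of ~8.5 are the figures quoted in the comment above; the linear z-score matching is my illustration of the idea, not the published Woodcock-Johnson transformation (which is only locally linear):

    ```python
    # Illustrating the comment above: IQ (mean 100, s.d. 15) vs. a Rasch-type
    # ratio scale such as the W/CSS scale (adults: mean ~510, s.d. ~8.5).
    IQ_MEAN, IQ_SD = 100.0, 15.0
    CSS_MEAN, CSS_SD = 510.0, 8.5   # approximate adult values quoted above

    def iq_to_css(iq):
        """Map IQ to CSS by matching z-scores (a local linear approximation)."""
        z = (iq - IQ_MEAN) / IQ_SD
        return CSS_MEAN + z * CSS_SD

    # The middle 99.9% of adults spans roughly +/- 3.3 s.d.; on the ratio
    # scale that is only about an 11% spread, matching the comment's point
    # that percentage variation in human intelligence is low.
    low, high = iq_to_css(100 - 3.3 * IQ_SD), iq_to_css(100 + 3.3 * IQ_SD)
    spread_pct = (high - low) / CSS_MEAN * 100
    print(round(low, 1), round(high, 1), round(spread_pct, 1))
    ```

    On a ratio scale, unlike IQ, "A is 10% smarter than B" is a meaningful statement, which is what the spread figure illustrates.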

  30. Thanks again to Prof. Thompson and commenters.

    Are there any studies of how bright people and less bright people are regarded for their intelligence alone by the general public? Or, how bright people regard less bright people, or vice versa? For example, is Bob, 137 IQ, perceived as more trustworthy or more likeable than Sam, 98 IQ? I don’t have any preconceived notions here of any weight, either.

    • Replies: @anon
    , @James Thompson
  31. @iffen
    Our hunter gatherer ancestors had a to 1 ratio. We do not

    Even our farm raised fish are grain fed.

    Yes. Cereal grains contribute to the extreme lopsidedness of the n-3/n-6 ratio. Diet is imperative to a high-functioning brain.

    Great article here:

    http://www.direct-ms.org/pdf/NutritionFats/Yehuda%20Omega%203%206%20ratio.pdf

    The ratio of n-6 to n-3 in the years 1935 to 1939 was 8.4 to 1, whereas by 1985 the ratio had increased to about 10 to 1.

    http://cyber.sci-hub.bz/MTAuMTA4MC8wNzMxNTcyNC4xOTkyLjEwNzE4MjMx/10.1080%4007315724.1992.10718231.pdf

    Twenty percent of our kcal consumed per day comes from soybean oil, 9 percent from linoleic acid.

    https://www.cnpp.usda.gov/sites/default/files/nutrient_content_of_the_us_food_supply/FoodSupply1909-2004Report.pdf

    N-6 also contributes to obesity. Any wonder why we keep getting fatter as a country (though the rate has been decreasing since 05, something they don’t tell you)?

    • Replies: @Anonymous
  32. Anonymous says:
    @RaceRealist88
    Yes. Cereal grains contribute to the extreme lopsidedness of the n-3/n-6 ratio. Diet is imperative to a high-functioning brain.

    Great article here:

    http://www.direct-ms.org/pdf/NutritionFats/Yehuda%20Omega%203%206%20ratio.pdf

    The ratio of n-6 to n-3 in the years 1935 to 1939 was 8.4 to 1, whereas by 1985 the ratio had increased to about 10 to 1.

    http://cyber.sci-hub.bz/MTAuMTA4MC8wNzMxNTcyNC4xOTkyLjEwNzE4MjMx/10.1080%4007315724.1992.10718231.pdf

    Twenty percent of our kcal consumed per day comes from soybean oil, 9 percent from linoleic acid.

    https://www.cnpp.usda.gov/sites/default/files/nutrient_content_of_the_us_food_supply/FoodSupply1909-2004Report.pdf

    N-6 also contributes to obesity. Any wonder why we keep getting fatter as a country (though the rate has been decreasing since 05, something they don't tell you)?

    Interesting. Thanks for the info.

    Can you write a blog post on your blog about diet and nutrition strategies for balancing out the ratio?

    • Replies: @RaceRealist88

    , @Steel T Post
    On soy:

    The Ploy of Soy
    by Sally Fallon and Mary G. Enig, PhD
    westonaprice.org/health-topics/the-ploy-of-soy/
     
    On balancing fats:

    The Skinny on Fats
    by Sally Fallon and Mary G. Enig, PhD
    westonaprice.org/know-your-fats/the-skinny-on-fats/
     
  33. anon says:
    @JackOH
    Thanks again to Prof. Thompson and commenters.

    Are there any studies of how bright people and less bright people are regarded for their intelligence alone by the general public? Or, how bright people regard less bright people, or vice versa? For example, is Bob, 137 IQ, perceived as more trustworthy or more likeable than Sam, 98 IQ? I don't have any preconceived notions here of any weight, either.

    my guess would be people are prejudiced against outliers at either end of the curve

    (probably for unconscious reasons which make sense at a probability level)

  34. anon says:

    here is a prediction. I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries

    if the Flynn effect correlated with increased seafood consumption in the 3rd world, and the flat-lined populations were those that have long since reached saturation in the specific nutrients found in seafood

    then showing that correlation would strengthen the case for biological IQ

  35. @utu
    UNITS DEFINITION PLEASE

    3.5 drop/decade, 4.8 drop/decade, 1.8 drop/decade, 0.16 drop/decade, 0.57-1.21 drop/decade, 0.16 drop/decade.

    Are these common units? Is RT, which is in [ms], converted to some other units? How are the conversions done? On what basis are the units conflated with g? And yes, I must ask: what are the units of g?

    As in the paper linked.

    • Replies: @utu
    But the conversion factors, and how they were derived, are not explained in the review by Sarraf. Perhaps they can be found in Woodley's book.

    If you have two tests X1 and X2 applied to the same population, you can get a correlation R and a linear-regression slope S = dX1/dX2. So if X1 = IQ in IQ points and X2 = RT in milliseconds, one can mechanically convert changes in RT scores (∆RT) to changes in IQ scores (∆IQ) via the proportion ∆IQ/∆RT = S = dX1/dX2. However, when the correlation R is small this is pretty meaningless, and if used it amounts to mathematical charlatanry.

    Sarraf wrote "simple reaction times, a decent proxy for g." How can he write that with a straight face when reaction time has a very low correlation with IQ (not even 0.3)?

    The bottom line is that it all comes down to IQ tests, right? Everything is converted to the scale implicitly defined by IQ tests. This includes g. It is interesting that Sarraf (I am not sure about Woodley, as I did not read his book) managed to reach the Mount Everest of g reification. g is a mathematical construct that is not measured directly, and there is no agreed method for defining it, since it depends on the battery of tests used in the factor analysis from which g emerges. How does Sarraf use g in his text? Let me count the ways:

    "a decent proxy for g", "the loss of g in the West", "reductions in g", "with the diminution of g even", "the integrity of genetic factors that underlie g", "suggesting that diminishing g is pervasive", 'assert that “dysgenesis” on g may also explain “anti-Flynn effects,”'

    All those statements containing "g" are empty. It is not science. It is more like the occult, where you can spin circular-reasoning patterns until you end up confusing less sharp minds. There are the deceivers and there are the believers. But the best, most effective deceivers are the believers. Why do seemingly intelligent people (I listened to Woodley on YT and he is clearly very intelligent) let themselves be fooled? Is Woodley a charlatan or a fool?
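
    The slope conversion utu describes can be spelled out numerically. In this minimal sketch the correlation, the RT standard deviation, and the secular RT change are all invented numbers purely for illustration:

    ```python
    # Sketch of the unit conversion discussed above: regress IQ on reaction
    # time and use the slope S to translate a secular RT change into "IQ
    # points". All numbers here are made up for illustration.
    R = -0.3          # assumed IQ-RT correlation (low, as the comment notes)
    SD_IQ = 15.0      # IQ points
    SD_RT = 50.0      # ms, assumed population s.d. of simple RT

    S = R * SD_IQ / SD_RT          # regression slope dIQ/dRT, points per ms
    delta_rt = 20.0                # assumed secular slowing in ms
    delta_iq = S * delta_rt        # implied IQ change
    print(round(S, 3), round(delta_iq, 1))  # -0.09 points/ms, -1.8 points

    # The objection in one number: with R = -0.3, RT explains only
    # R**2 = 9% of IQ variance, so the extrapolation leans on the 91% of
    # variance the regression says nothing about.
    print(round(R**2, 2))
    ```

    The mechanics are trivial; the dispute is whether a slope estimated through that much unexplained variance supports cross-generational inference.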
  36. @JackOH
    Thanks again to Prof. Thompson and commenters.

    Are there any studies of how bright people and less bright people are regarded for their intelligence alone by the general public? Or, how bright people regard less bright people, or vice versa? For example, is Bob, 137 IQ, perceived as more trustworthy or more likeable than Sam, 98 IQ? I don't have any preconceived notions here of any weight, either.

    Prof Adrian Furnham has done some work on public perceptions of intelligence, but mostly using family members for the estimates.

    http://www.unz.com/jthompson/so-you-think-youre-intelligent/

    However, your question is a good one, because it would be interesting to see how well the general public estimate mental ability. My impression is that estimates would be done almost exclusively on the basis of verbal skills, because those are so obvious, and would be distorted by how much confidence the target person showed in self-presentation. People kindly assume that a person who is very confident about their opinions must have done the necessary homework. I will have a hunt sometime to see what else is available on public perceptions of ability.

    • Replies: @Wizard of Oz
    , @JackOH
  37. Reading Victorian literature, one certainly gets the feeling that compared to writers of today, the British of the 1800s were ‘brighter’ – that they had more fire inside. To an extent the same is true of writers of the late Roman Republic (in the original Latin, translation is too dependent on the skills of the translator), but the constricted sample size there is an issue – it could just be an artifact of only the best Roman writers surviving down to the present day. I can’t read ancient Greek, but judging by their other cultural achievements, 5th-4th century BC Greece may have seen a similar superabundance of brightness.

    There certainly seems no reason not to believe that group mean human intelligence can vary significantly over time. Is an underlying 15 point IQ drop in 150 years credible? 15 points is almost the US black-white population gap (typically measured at ca 17 points). I guess with Flynn Effect compensation it might just about be; but comparing literary, scientific and other achievements of the mid 1800s to those of today my guess would be that the rate of decline is actually about half that, or about 0.5/decade.

  38. @James Thompson
    Thank you for your detailed comment.

    At least 15 years ago I did a back of the envelope + Excel calculation which was based on 100 avg IQ, sd 15, all under 85 IQ women have three children at age 21 and all over IQ 115 women have 2 children at age 33, no allowance for older fathers’ mutations and no other relevant fertility figures. On standard heritability assumptions (which I leave unstated because my memory of the other figures may be very slightly astray and it doesn’t really affect my point) I calculated that there would be a loss of 2 points of IQ in 100 years which, absent some sort of caste system, would mean a significant loss of people with IQs over 140. So…. to me it is just common sense, as it was to J.M. Keynes and other members of the Malthusian Society, that modern peoples have been breeding dysgenically for g and any other heritable traits that are conducive to material success or achievement of social status. It is alarming to read here of plausible estimates which are much more damaging to what most would regard, however euphemistically expressed, as the quality of our populations. If the world remains peaceful then I daresay we can meet democracies’ essential condition of ever rising wellbeing thanks to the millions of still unexploited good brains in Asia but the evident defects of nominal democracies in crumbling and unequal Europe and the US suggest that a lot of us may be reconciling ourselves to some form of government by an elite modelled on China or Japan with some consoling flim-flam about Plato’s Guardians.
    Comment?

    And if I may attempt to describe a future acceptable to those of us looking forward with some foreboding but not personally threatened with disaster in health, wealth or peace and quiet and invite criticism, comment and counter suggestions I would say it is a world where healthy longevity for our great grandchildren has continued to improve, there is no danger of their not having enough healthy (and enjoyable) food to eat, the wonders of the natural environment are not less, give-and-take, for them than they are for First World backpackers and prosperous retired persons today and leisure for enjoying things of the mind is not much less than Keynes overoptimistically imagined in 1930. I fear that half a billion early vasectomies in sub-Saharan Africa might be a necessary condition, especially for the environment, but otherwise I retain a glimmer of optimism.
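
    A hedged sketch of the kind of back-of-the-envelope calculation described above, as a single breeder's-equation step. The middle group's fertility (2 children), the heritability of 0.6, and ignoring the 21-vs-33 age difference are my assumptions, not the commenter's, so the output need not reproduce his 2 points per 100 years:

    ```python
    # One generation of IQ selection under roughly the stated assumptions:
    # mean 100, s.d. 15; women below IQ 85 have 3 children, everyone else 2
    # (the middle-group fertility and h2 = 0.6 are assumed here).
    from math import erf, exp, pi, sqrt

    def pdf(z):  # standard normal density
        return exp(-z * z / 2) / sqrt(2 * pi)

    def cdf(z):  # standard normal CDF via erf
        return 0.5 * (1 + erf(z / sqrt(2)))

    MEAN, SD = 100.0, 15.0
    p_low, p_high = cdf(-1), 1 - cdf(1)         # shares below 85 / above 115
    m_low = MEAN - SD * pdf(1) / cdf(-1)        # truncated mean of <85 group, ~77
    m_high = MEAN + SD * pdf(1) / (1 - cdf(1))  # truncated mean of >115 group, ~123
    p_mid, m_mid = 1 - p_low - p_high, MEAN

    # (population share, group mean IQ, children per woman)
    groups = [(p_low, m_low, 3.0), (p_mid, m_mid, 2.0), (p_high, m_high, 2.0)]
    fertility = sum(p * f for p, m, f in groups)
    parent_mean = sum(p * f * m for p, m, f in groups) / fertility

    h2 = 0.6  # assumed narrow-sense heritability
    shift_per_generation = h2 * (parent_mean - MEAN)
    print(round(parent_mean, 1), round(shift_per_generation, 2))
    ```

    With these assumptions the fertility-weighted parental mean is about 98.3, i.e. a genotypic shift of roughly one point per generation; weighting by the shorter generation time of the high-fertility group would make the decline steeper, which is why the commenter's age figures matter.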

    • Replies: @dc.sunsets
  39. @James Thompson
    Prof Adrian Furnham has done some work on public perceptions of intelligence, but mostly using family members for the estimates.

    http://www.unz.com/jthompson/so-you-think-youre-intelligent/


    However, your question is a good one, because it would be interesting to see how well the general public estimate mental ability. My impression is that estimates would be done almost exclusively on the basis of verbal skills, because those are so obvious, and would be distorted by how much confidence the target person showed in self-presentation. People kindly assume that a person who is very confident about their opinions must have done the necessary homework. I will have a hunt sometime to see what else is available on public perceptions of ability.

    Well yes, my quibbles were sheathed when I went back and read “general public”. More segmented sections of the generality might be interesting to study separately. How smart do you have to be to notice, for example, that one smart colleague is usually first to mention a problem with some figures – or with the draft of a clause in a proposed contract or legislation?

  40. @Matthew Sarraf
    This is an interesting finding. It does nothing to bring into doubt Dr. Woodley of Menie's dysgenic theory, however.

    A paper published shortly after my review, "Selection against variants in the genome associated with educational attainment," finds a substantial decrease of an educational attainment polygenic score in Icelanders over time (the study examines genetic data from 129,808 Icelanders born between 1910 and 1990; this is without question a representative sample): http://www.pnas.org/content/114/5/E727.abstract. Dr. Woodley of Menie notified me of this research and pointed out, as I anticipated upon first hearing of the paper, that the equation that the authors use to convert the polygenic score decline to a per decade IQ point decline, 0.038 x (30/3.74) = 0.30 IQ points, assumes an unrealistically low additive heritability of IQ: 30%. The adult additive heritability of IQ is typically pegged at 80-85%, with the additive heritability of g likely at 85-87%. Thus the Icelandic data in fact indicate a genotypic g decline of 0.81-0.88 points per decade (on an IQ scale; I am using 80% as a conservative estimate of the additive heritability of g and 87% as a realistic estimate to arrive at the 0.81-0.88 range). While already quite close to Dr. Woodley of Menie's estimated g decline of 1-1.5 points per decade, this is only the decline in g from genetic selection. Once we include Dr. Woodley of Menie and Mr. Fernandes' estimated decline in g from mutation accumulation and other sources of damage to developmental stability (in the paper cited as Woodley of Menie & Fernandes, 2016b in my review), 0.16 points per decade, the overall per decade g decline rises to 0.97-1.04 points. Particular demographic changes may add another 0.25 points of g lost per decade, bringing the overall estimated decline in g to 1.22-1.29 points per decade, entirely consistent with what Dr. Woodley of Menie has been saying for years. 
In any case, the decrease in g due to genetic selection, the reality of which is confirmed in the Iceland paper about as directly as possible, is nearly a full point alone. So we find a diminution of g in the 1-1.5 points per decade range without availing ourselves of reaction time data, and a loss of g per decade nearly in that range even if we assume that only genetic selection is depressing g. Assume, arguendo, that the decadal reduction of g has been the mere 0.81 points per decade arrived at above with the 80% heritability estimate. Ignore all other possible contributory factors. 0.81 points of g lost a decade from 1850 to 2010 would amount to a total reduction of 12.96 points -- quite alarming for a very conservative estimate!

    I have not yet been able to read the study on myopia and visual reaction time (VRT) in detail. But even if myopia goes with longer VRT and myopia is becoming more prevalent (which it is), this would have no bearing on the secular trend toward greater auditory reaction time that Dr. Woodley of Menie and his colleagues have found. I doubt if the changing prevalence of myopia can explain more than a small fraction of the increase in VRT that Dr. Woodley of Menie and his colleagues have documented. Even when significant slowing is added to Galton's VRT samples, the remaining retardation of VRT indicates a g loss of ~10 points (on an IQ scale). As I argue in my review, attacking the dysgenic theory by picking at individual data sets and indicators is unlikely to bear fruit -- the nomological net of evidence for the theory is very robust, especially now that we have the aforementioned genetic selection data, and so is not likely to be undone without a parsimonious alternative explanation of declines in the various indicators that together seem to have nothing in common apart from a relation to g. If myopia decreases color acuity, increasing rates of myopia may explain why the estimate of dysgenesis on g from color acuity is much too high. But I am optimistic that some of the decline in color acuity is due to temporal reduction of g. Note that I do not suggest in my review that Dr. Woodley of Menie's research has made certain the precise magnitude of declines in g, only that it has shown that significant declines in g have been almost certainly occurring. With good genetic selection data now at hand, we are moving in on a more concrete estimate, which is probably in the 1-1.5 points of g lost per decade range that, as previously stated, Dr. Woodley of Menie predicted years ago.

    Thank you for that, and I invite your attention to #36.
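
    Sarraf's rescaling arithmetic checks out and can be replayed in a few lines; the 0.16 mutation-accumulation and 0.25 demographic terms below are the per-decade figures quoted in his comment:

    ```python
    # Replaying the arithmetic in the comment above. The Iceland paper's
    # conversion, 0.038 x (30/3.74), assumes 30% additive heritability (h2);
    # rescaling to a higher h2 multiplies the decline by h2/0.30.
    base_decline = 0.038 * (30 / 3.74)    # ~0.30 IQ points/decade at h2 = 30%
    for h2 in (0.80, 0.87):
        selection = base_decline * (h2 / 0.30)   # genetic-selection term
        total = selection + 0.16 + 0.25          # + mutation load + demography
        print(h2, round(selection, 2), round(total, 2))
    ```

    This reproduces the 0.81–0.88 selection-only range and the 1.22–1.29 overall range stated in the comment, and 16 decades x 0.81 gives the 12.96-point figure.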

  41. @James Thompson
    Prof Adrian Furnham has done some work on public perceptions of intelligence, but mostly using family members for the estimates.

    http://www.unz.com/jthompson/so-you-think-youre-intelligent/


    However, your question is a good one, because it would be interesting to see how well the general public estimate mental ability. My impression is that estimates would be done almost exclusively on the basis of verbal skills, because those are so obvious, and would be distorted by how much confidence the target person showed in self-presentation. People kindly assume that a person who is very confident about their opinions must have done the necessary homework. I will have a hunt sometime to see what else is available on public perceptions of ability.

    Thanks, Prof. Thompson. A week ago I’d remarked idly about my trying to imagine an “IQ-centric” political party, political faction, or what-have-you. Call it the Achiever Party, or something, with an Achievement Foundation appended to it. The idea would be to explore whether it’s even possible to explicitly politicize IQ in a measured, temperate way.

    My personal feeling is that Shockley, Rushton, and other intelligence researchers got bashed about, a bit unfairly in my opinion, because they didn’t recognize the political backdrop of an ever-expanding political franchise, Black upheaval, Marxist revolution, etc., which, I think, has put IQ in a bad spot. My idea of an Achievement Foundation would be to rescue IQ from the shadows.

    As a casual, non-expert reader I may be getting something wrong here. I am learning something from your articles and the comments below. Many thanks.

    • Replies: @Steel T Post
    , @Bill
    You'd do much better forming a secret society. For reasons which are obvious.

    Actually, once you realize that, you might wonder whether somebody else might have thought of that already.
  42. @Anatoly Karlin
    (1) 3 points Flynn / -1 points Woodley per decade seems high. My impression is that it was more 1.3 points Flynn / -0.3 points Woodley per decade.

    As per Armstrong (and Woodley!), the 3 points Flynn / decade figure is largely based on tests with high rules dependence (which are substantially independent of general intelligence, which has also risen, but at a slower pace).

    (2) I think that the Flynn effect will fade in wealthy countries, persist with fading effect in poor countries, and that the Woodley effect will continue, though I do not know the cause of it.

    I agree.

    And without a major tech breakthrough, it will logically lead to the age of Malthusian industrialism.

    Cochran disagrees.

  43. @JackOH
    Thanks, Prof. Thompson. A week ago I'd remarked idly about my trying to imagine an "IQ-centric" political party, political faction, or what-have-you. Call it the Achiever Party, or something, with an Achievement Foundation appended to it. The idea would be to explore whether it's even possible to explicitly politicize IQ in a measured, temperate way.

    My personal feeling is that Shockley, Rushton, and other intelligence researchers got bashed about, a bit unfairly in my opinion, because they didn't recognize the political backdrop of an ever-expanding political franchise, Black upheaval, Marxist revolution, etc., which, I think, has put IQ in a bad spot. My idea of an Achievement Foundation would be to rescue IQ from the shadows.

    As a casual, non-expert reader I may be getting something wrong here. I am learning something from your articles and the comments below. Many thanks.

    Intelligence is merely a scheming tool. The higher one’s IQ, the more scheming one is; e.g. the Jews. They’re better at stealing star energy from other organisms.

    We’re chemical machines, that have been built over 4 billion years, and we’ve been tested in what can be called quite accurately a ‘Gladiator War’; where the machines went into the battle and if you won your DNA replicated, and that’s all it was was a war. And as time evolved the war became more complex. As the organisms evolved they get more and more elaborate tools.
    […]
    Our intelligence only exists because it was a scheming tool, because it made us better at stealing star energy from other organisms. That’s the only reason why it exists. And it still remains its only function.

    Gladiator War (Graphic Content)
    youtube.com/watch?v=bK2a-1K0Sdg

    Jews are already a high-IQ party, typically called “Neo-Cons” in the US. Do you really want to advance more of that?

    • Replies: @JackOH
    Well, I'm not sure what "star energy" is, but I think it may be worth our while to understand explicitly what the political implications are of IQ, whether in abundance or in deficiency. By political implications, I suppose I also mean political possibilities. Are we missing out on ways to better America because high IQ is viewed by some folks as presumptively, oh, elitist, racist, and the like?

    BTW-I'm not at all sure what a careful, sober study of IQ's political implications would conclude.

    , @dc.sunsets
    Hence my preference for tribalism writ large.

    I'd prefer to live in a society populated largely by my distant cousins (people of primarily Germanic & English ancestry), as many of whom as possible came from the WASP populace of pre-1965 or even pre-1900 America...many of whom were Episcopalians (and if I recall correctly, in a study of the various Christian denominations' mean IQ's, Episcopalians roundly trounced the Ashkenazim with means of around 120 and 111, respectively.)

    I'd also prefer to segregate away from people genetically predisposed to clannishness (h/t to Jayman et al.) because me and mine are far too trusting to share a single polity with people more devoted to cunning and sub-group nepotism than are we.

    Yes, it's my furry little fantasy. High mean IQ, low clannishness, low time preference, high propensity toward the Commonwealth Civilization that defined the successes of the Anglosphere and Northern Europe these past several hundred years.

    I see no way to get there from here (the clock cannot be reversed), so my expectations are for a very long period of extraordinary difficulty, more like conditions extant during the 14th century, during which all the Natural Selective forces held in abeyance these 50 or 100 years burst their dam and the Four Horsemen ride with abandon until balance is restored.
  44. @Wizard of Oz
    At least 15 years ago I did a back of the envelope + Excel calculation which was based on 100 avg IQ, sd 15, all under 85 IQ women have three children at age 21 and all over IQ 115 women have 2 children at age 33, no allowance for older fathers' mutations and no other relevant fertility figures. On standard heritability assumptions (which I leave unstated because my memory of the other figures may be very slightly astray and it doesn't really affect my point) I calculated that there would be a loss of 2 points of IQ in 100 years which, absent some sort of caste system, would mean a significant loss of people with IQs over 140. So.... to me it is just common sense, as it was to J.M. Keynes and other members of the Malthusian Society, that modern peoples have been breeding dysgenically for g and any other heritable traits that are conducive to material success or achievement of social status. It is alarming to read here of plausible estimates which are much more damaging to what most would regard, however euphemistically expressed, as the quality of our populations. If the world remains peaceful then I daresay we can meet democracies' essential condition of ever rising wellbeing thanks to the millions of still unexploited good brains in Asia but the evident defects of nominal democracies in crumbling and unequal Europe and the US suggest that a lot of us may be reconciling ourselves to some form of government by an elite modelled on China or Japan with some consoling flim-flam about Plato's Guardians.
    Comment?

    And if I may attempt to describe a future acceptable to those of us looking forward with some foreboding but not personally threatened with disaster in health, wealth or peace and quiet, and invite criticism, comment and counter-suggestions, I would say it is a world where healthy longevity for our great-grandchildren has continued to improve, there is no danger of their not having enough healthy (and enjoyable) food to eat, the wonders of the natural environment are no less, give-and-take, for them than they are for First World backpackers and prosperous retired persons today, and leisure for enjoying things of the mind is not much less than Keynes over-optimistically imagined in 1930. I fear that half a billion early vasectomies in sub-Saharan Africa might be a necessary condition, especially for the environment, but otherwise I retain a glimmer of optimism.

    the tl;dr version of your comment to me: “Idiocracy is not a comedy.”
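The quoted back-of-the-envelope model is concrete enough to re-run. Here is a minimal Monte Carlo sketch; the heritability of 0.6, the two children for the 85–115 group, and the 27-year generation for that group are my assumptions, since the commenter left his figures unstated:

```python
import numpy as np

# Monte Carlo sketch of the quoted back-of-the-envelope model.
# Figures the commenter left unstated are marked ASSUMED below.
rng = np.random.default_rng(0)
h2 = 0.6                                   # ASSUMED narrow-sense heritability
moms = rng.normal(100, 15, 1_000_000)      # maternal IQ: mean 100, SD 15

# Fertility schedule from the comment: 3 children under 85 IQ, 2 over 115;
# 2 children for the middle group is ASSUMED.
kids = np.where(moms < 85, 3, 2)

# Selection differential: fertility-weighted mean maternal IQ minus population mean.
selection_diff = np.average(moms, weights=kids) - moms.mean()

# Breeder's equation: response per generation = h^2 * selection differential.
per_gen = h2 * selection_diff

# Generation length weighted the same way (21 for <85, 33 for >115, 27 ASSUMED mid).
ages = np.where(moms < 85, 21, np.where(moms > 115, 33, 27))
gen_len = np.average(ages, weights=kids)
per_century = per_gen * (100 / gen_len)
print(f"per generation: {per_gen:.2f}, per century: {per_century:.2f}")
```

Under these assumptions the decline comes out somewhat steeper than the quoted 2 points per century; the result is quite sensitive to the heritability and middle-group figures the original calculation left unstated.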

  45. @Anonymous
    Interesting. Thanks for the info.

    Can you write a blog post on your blog about diet and nutrition strategies for balancing out the ratio?

    No problem. Sure, I can do that by the weekend. It’s really just eating a lot of seafood and fewer processed carbs and grains. I’ll publish by the end of the week. A high-quality diet is imperative for proper brain functioning. I’ve been reading into nootropics, and I was thinking: a bodybuilder’s diet and lifestyle, with the supplementation, is a great model to follow.

    For instance, creatine is a nootropic; it’s a brain protectant. Going to write about that as well, since it’s right up my alley, along with other nutrition and strength-training topics.

  46. @Anonymous
    Interesting. Thanks for the info.

    Can you write a blog post on your blog about diet and nutrition strategies for balancing out the ratio?

    On soy:

    The Ploy of Soy
    by Sally Fallon and Mary G. Enig, PhD
    westonaprice.org/health-topics/the-ploy-of-soy/

    On balancing fats:

    The Skinny on Fats
    by Sally Fallon and Mary G. Enig, PhD
    westonaprice.org/know-your-fats/the-skinny-on-fats/

  47. Boring article, but interesting feedback. Especially JayMan throwing little pieces of shit onto the wall to see what sticks, and everyone else, pissed off, piling on JayMan in response. The most influential information provided in the article or the feedback, by far, was the nearsightedness study comparing children in Sydney and Singapore. Now that’s a scientific study. I have yet to see any study on human intelligence, or measurement of human intelligence, that is not full of human bias. Even IQ tests are full of human bias. You want to tell me that someone whose intelligence is best expressed when sitting on their butt in a classroom is not going to design a test that measures intelligence based on an environment where one is sitting on their butt in a classroom? The same kid who scores in the 99th percentile on standardized intelligence tests in school struggles with the concept of running the bases in baseball, while the kid who gets the concept of running the bases gets diagnosed with ADD and prescribed Ritalin in the classroom. Environment is king. And whenever you see studies or tests that show declining intelligence, chances are that this is the direct result of substituting a less effective environment for a more effective one. As for genetics, genetic traits are the accumulation of environmental adjustments. What truly scientific studies like the nearsightedness study show is how fragile genetic traits are. Take the individual who is considered genetically superior today and throw him into a toxic environment and see how long that supposed genetic superiority lasts. Like the Eddie Murphy movie “Trading Places.”

    • Replies: @dc.sunsets
    That you cite a piece of Hollywood fiction as an example for a real world argument speaks volumes about your position.

    I find two kinds of people who argue about the validity of IQ measures as an indicator of life outcomes: 1) people who aren't very bright, envy those whose success rests on higher intelligence, resent them for it and seek to attribute all success to luck/nepotism/privilege of some unfair sort, and 2) people who are very bright and lack much contact with the real world where most people are dull as a 2x4 and some of them are too stupid to survive four weeks without constant help (and the only activity at which they excel is sexual reproduction.)

    My 4th grade teacher wife can easily assess the underlying intelligence of her students and do so entirely independent of whether the kid is on ADHD meds or if the kid is simply lazy and lacking in self-discipline. She doesn't need an IQ test to do so. Life is an IQ test.
    , @Daniel Chieh
    Disagree.

    Without being as vain as to post my IQ or anything, I would agree with you only so far that environment matters on a temporary basis. I've been ultimately successful in almost any field I was in and I happened to be both in pretty violent, physical arenas as well as in intellectual, socially complex office environments.

    I didn't have my intelligence measured until much later in life, but it showed that I used the same rough methodology in tackling both. I would analytically focus and even hyper-focus on a few areas that were relevant to my field, basically trying to find the 20% in the 80/20 Pareto distribution. For example, when I was in the field, I tried to classify every risk and, as much as possible, mathematically determine its likelihood. This proved to be a useful force multiplier.

    In the office, the same efforts at pattern recognition and fondness for analysis, and the desire to use tools both physical and mental, also led to a consistent pattern of success. It's not at all that things I touch turn to gold, hardly! But I would say that so long as an environment has any consistent patterns at all, IQ has a role to play.

    I'm curious what Authenticmensajazzman has to say, as I suspect his IQ to be higher than mine.

    , @Anonymous Nephew
    "Take the individual that is considered genetically superior today and throw him into a toxic environment and see how long that supposed genetic superiority lasts."

    In the UK that argument is often rendered as "Put him in Somalia and see how he gets on when tribal militia raid his village". But surely the point is - is a society of English people or a society of Somalis more likely to be one in which you have to worry about raids by tribal militia?

    The great Harry Hutton's take:

    “It hurts when they don’t accept you, but I have many English friends,” said Chris McShane (26), who fled New Zealand when soldiers burned his village. “I’ll never forget the first time some English people invited me to their house. They served lamb from a 'supermarket'. In New Zealand if we want to eat lamb we have to strangle it ourselves.”

    “I came to Britain to seek a better life for my children.” He dreams of returning to his homeland one day, when the situation is more stable. “But Britain is my home now.”
     
    Getting back to topic, and with due trepidation/humility in such knowledgeable company, it would be interesting to try and tease out the effects (if any) on average UK IQ of

    1967 Abortion Act - something like 200,000 a year in England and Wales (as against about 600,000 live births) - are there any profiles of who has them? I'd imagine the profiles would have changed since 1967 as well (anecdotal, but I know one very high IQ girl who had three in the 1970s (and is childless)). More anecdote - don't NI schoolkids (no abortion there or none til recently) get better test scores than the rest of the UK?

    Number of women in higher education over time (and their lower TFR) - this again will have changed a lot in 50 years.

    Effects of benefit system and what Steve Sailer calls "affordable family formation" - it seems to me that for the last 30 years the only people who could afford large families (4+) were either the well off (usually with stay at home mum) or the benefit-aided. Probably more of the latter than the former, too.
  48. A. Dysgenic breeding patterns (and fads.)
    B. Lack of selection pressure, due to welfare-state mitigation of the normal consequences of utter stupidity.
    C. Dietary changes.
    D. Constant exposure to the dancing primary colors, sounds and such of TV, etc., such that parents use electronic baby-sitting devices to relieve themselves of the burden of child-rearing.
    E. Environmental contamination (e.g., BPA, Roundup, etc.)
    F. Yet unknown unknowns.

    Whatever the cause(s) of decline in population mean intelligence, if my wife’s experiences in teaching 4th graders since 1983 are any indication, we’re heading for a truly apocalyptic, dystopian future.

    Yes, the sample size is tiny and unrepresentative, but her classroom is increasingly saturated with 9/10 year olds who cannot process even the simplest thoughts. We’re talking about kids in whom even a “get off the tracks, there’s a locomotive coming” level of “intelligence” is entirely absent. We talk about g, but a level of mental processing far more fundamental seems to be evaporating.

    I used to be on the Julian Simon side of the Simon/Ehrlich debate, but now I believe that reality can be papered over for very long periods (e.g., the USSR’s persistence for nearly 70 years after Mises irrefutably proved the impossibility of resource allocation absent market prices.)

    I now see that eliminating Nature’s viciousness in culling those incapable of self-help looked wise and empathic, but in fact it simply built a dam, and the consequences are filling a reservoir behind it. Ehrlich (and Malthus) weren’t wrong. There’s a “conservation of mass” kind of condition involved here. Consequences delayed, but not denied forever.

    Man is not apart from nature. We are subject to all the same Natural Laws as every other animal on the planet. Whenever it appears we repealed one of those laws, all that’s operative is that we’re fooling ourselves. This is why the term “social engineering” is wrong. Engineering works with the laws of physics, materials and nature.

    “Social alchemy” might be more apt. The alchemists are part of our ruling theocracy, so it’s no surprise that IQ research, seeking to understand some of those natural laws, is blasphemy.

  49. @Steel T Post
    Intelligence is merely a scheming tool. The higher one's IQ, the more scheming one is; e.g. the Jews. They're better at stealing star energy from other organisms.

    We’re chemical machines, that have been built over 4 billion years, and we’ve been tested in what can be called quite accurately a ‘Gladiator War’; where the machines went into the battle and if you won your DNA replicated, and that’s all it was was a war. And as time evolved the war became more complex. As the organisms evolved they get more and more elaborate tools.
    […]
    Our intelligence only exists because it was a scheming tool, because it made us better at stealing star energy from other organisms. That’s the only reason why it exists. And it still remains its only function.

    Gladiator War (Graphic Content)
    youtube.com/watch?v=bK2a-1K0Sdg
     
    Jews are already a high-IQ party, typically called "Neo-Cons" in the US. Do you really want to advance more of that?

    Well, I’m not sure what “star energy” is, but I think it may be worth our while to understand explicitly what the political implications are of IQ, whether in abundance or in deficiency. By political implications, I suppose I also mean political possibilities. Are we missing out on ways to better America because high IQ is viewed by some folks as presumptively, oh, elitist, racist, and the like?

    BTW, I’m not at all sure what a careful, sober study of IQ’s political implications would conclude.

    • Replies: @Steel T Post
    By using the term "star energy," he acknowledges that all energy available on Earth comes from stars, the vast majority from our local star. (Even radioactive heavy elements that provide energy were forged in the bowels of exploding stars.)

    As for a better America, it matters who defines what "better" means, e.g.:

    "The life of an Indian is a continual holiday, compared with the poor of Europe; and, on the other hand it appears to be abject when compared to the rich." -Thomas Paine (Agrarian Justice, 1795)
     
    Would a continual holiday be "better," or worse? Would improved cardiovascular health be better than sitting in an office to make money to pay for gasoline and a car that goes no faster—when considering all the time spent buying and maintaining an enormously expensive and energy-intensive automobile—than we can walk?

    Now, if by better we mean "stealing star energy," then we've got ourselves beaucoup better. And that indeed is the purpose of life according to the “Maximum Power Principle” (Odum, 1995; Lotka, 1922) that states that living organisms will organize to increase power generation by degrading more energy.
  50. @Steel T Post
    Intelligence is merely a scheming tool. The higher one's IQ, the more scheming one is; e.g. the Jews. They're better at stealing star energy from other organisms.

    We’re chemical machines, that have been built over 4 billion years, and we’ve been tested in what can be called quite accurately a ‘Gladiator War’; where the machines went into the battle and if you won your DNA replicated, and that’s all it was was a war. And as time evolved the war became more complex. As the organisms evolved they get more and more elaborate tools.
    […]
    Our intelligence only exists because it was a scheming tool, because it made us better at stealing star energy from other organisms. That’s the only reason why it exists. And it still remains its only function.

    Gladiator War (Graphic Content)
    youtube.com/watch?v=bK2a-1K0Sdg
     
    Jews are already a high-IQ party, typically called "Neo-Cons" in the US. Do you really want to advance more of that?

    Hence my preference for tribalism writ large.

    I’d prefer to live in a society populated largely by my distant cousins (people of primarily Germanic & English ancestry), as many of whom as possible came from the WASP populace of pre-1965 or even pre-1900 America…many of whom were Episcopalians (and if I recall correctly, in a study of the various Christian denominations’ mean IQs, Episcopalians roundly trounced the Ashkenazim with means of around 120 and 111, respectively.)

    I’d also prefer to segregate away from people genetically predisposed to clannishness (h/t to JayMan et al.) because me and mine are far too trusting to share a single polity with people more devoted to cunning and sub-group nepotism than are we.

    Yes, it’s my furry little fantasy. High IQ mean, low clannishness, low time preference, high propensity to the Commonwealth Civilization that defined the successes of the Anglosphere and Northern Europe these past several hundred years.

    I see no way to get there from here (the clock cannot be reversed), so my expectations are for a very long period of extraordinary difficulty, more like conditions extant during the 14th century, during which all the Natural Selective forces held in abeyance these 50 or 100 years burst their dam and the Four Horsemen ride with abandon until balance is restored.

    • Agree: Steel T Post
    • Replies: @Steel T Post
    I concur with your "little fantasy," and am still living it, or what's left of it, in rural "red state" America settled by very neat and orderly and high-trust Anglo-Saxons. My acquaintances in big cities still can't believe the large amounts of commerce done here on nothing but a phone call or handshake.

    My little fantasy includes living more simply too, for many reasons, the largest reason being to stop attracting parasites. Parasites--especially the clannish sort--are much too attracted to ostentatious displays of plenty.

    When goods increase, they are increased that eat them: and what good is there to the owners thereof, saving the beholding of them with their eyes? -Ecclesiastes 5:11
     
  51. @Clearpoint
    Boring article, but interesting feedback. Especially Jay man throwing little pieces of shit onto the wall to see what sticks, and everyone else pissed off and piling on Jay man in response. The most influential information provided in the article or the feedback by far was the nearsightedness study comparing children in Sydney and Singapore. Now that's a scientific study. I have yet to see any study on human intelligence or measurement of human intelligence that is not full of human bias. Even IQ tests are full of human bias. You want to tell me that someone whose intelligence is best expressed when sitting on their butt in a classroom is not going to design a test that measures intelligence based on an environment where one is sitting on their butt in a classroom. The same kid who scores in the 99th percentile on standardized intelligence tests in school struggles with the concept of running the bases in baseball, while the kid who gets the concept of running the bases gets diagnosed with ADD and prescribed Ritalin in the classroom. Environment is king. And whenever you see studies or tests that show declining intelligence, chances are that this is the direct result of substituting a less effective environment for a more effective one. As for genetics, genetic traits are the accumulation of environmental adjustments. What truly scientific studies like the nearsightedness study show is how fragile genetic traits are. Take the individual that is considered genetically superior today and throw him into a toxic environment and see how long that supposed genetic superiority lasts. Like the Eddie Murphy movie "Trading Places."

    That you cite a piece of Hollywood fiction as an example for a real world argument speaks volumes about your position.

    I find two kinds of people who argue about the validity of IQ measures as an indicator of life outcomes: 1) people who aren’t very bright, envy those whose success rests on higher intelligence, resent them for it and seek to attribute all success to luck/nepotism/privilege of some unfair sort, and 2) people who are very bright and lack much contact with the real world where most people are dull as a 2×4 and some of them are too stupid to survive four weeks without constant help (and the only activity at which they excel is sexual reproduction.)

    My 4th grade teacher wife can easily assess the underlying intelligence of her students and do so entirely independent of whether the kid is on ADHD meds or if the kid is simply lazy and lacking in self-discipline. She doesn’t need an IQ test to do so. Life is an IQ test.

    • Replies: @Santoculto
    ''Life is an IQ test.''

    Especially the life of the WORKER. ;)


    The Flynn Effect is dependent on the IQ tests and it is assumed that

    IQ = intelligence

    We are not necessarily analyzing whether human intelligence has increased, but whether the rise in IQ test scores, which has not yet been adequately explained, is a real, genotypic increase or an artificial one caused by changes or technical improvements in cognitive tests, as well as by greater rigor.

    IQ-enthusiasts like to say that

    IQ causes [worldly] success

    Indeed, IQ is one of the factors that cause success. I think many of those who discuss the influence ''of IQ'' on life outcomes are criticizing this HBD tendency to turn correlations into causalities, especially with respect to IQ.

    If IQ is one cognitive aspect of intelligence, or rather of the human being, then IQ explains a fraction of this combination, not a pure causality of the form ''IQ causes success.''

    You cannot build further development on a study whose basis is not yet fully understood, and this is what happens with the Flynn Effect.

    , @Clearpoint
    So anyone who disagrees with your position on IQ is either 1) an angry, jealous idiot, or 2) a social recluse. Your response speaks volumes about your arrogance.
  52. @dc.sunsets
    That you cite a piece of Hollywood fiction as an example for a real world argument speaks volumes about your position.

    I find two kinds of people who argue about the validity of IQ measures as an indicator of life outcomes: 1) people who aren't very bright, envy those whose success rests on higher intelligence, resent them for it and seek to attribute all success to luck/nepotism/privilege of some unfair sort, and 2) people who are very bright and lack much contact with the real world where most people are dull as a 2x4 and some of them are too stupid to survive four weeks without constant help (and the only activity at which they excel is sexual reproduction.)

    My 4th grade teacher wife can easily assess the underlying intelligence of her students and do so entirely independent of whether the kid is on ADHD meds or if the kid is simply lazy and lacking in self-discipline. She doesn't need an IQ test to do so. Life is an IQ test.

    ”Life is an IQ test.”

    Especially the life of the WORKER. ;)

    The Flynn Effect is dependent on the IQ tests and it is assumed that

    IQ = intelligence

    We are not necessarily analyzing whether human intelligence has increased, but whether the rise in IQ test scores, which has not yet been adequately explained, is a real, genotypic increase or an artificial one caused by changes or technical improvements in cognitive tests, as well as by greater rigor.

    IQ-enthusiasts like to say that

    IQ causes [worldly] success

    Indeed, IQ is one of the factors that cause success. I think many of those who discuss the influence ”of IQ” on life outcomes are criticizing this HBD tendency to turn correlations into causalities, especially with respect to IQ.

    If IQ is one cognitive aspect of intelligence, or rather of the human being, then IQ explains a fraction of this combination, not a pure causality of the form ”IQ causes success.”

    You cannot build further development on a study whose basis is not yet fully understood, and this is what happens with the Flynn Effect.

    • Replies: @Daniel Chieh
    I would rebut and say that IQ measures a number of cognitive aptitudes that are correlated with the manipulation of information and the ability to observe patterns, and that together they are generally the skill we call "intelligence" in humans. While the ability to solve problems is no guarantee of future success (there's also effort and the like), it is correlated with success and probably carries at least some causation.
  53. Adding another possible cause to my list in comment #50 above:

    http://www.ecowatch.com/yale-vaccine-study-kennedy-2246059411.html

    G. Peak obsession with vaccines as an end-all, be-all, downside-free fix for any infectious disease for which the Pharmaceutical Firms of the World could whip up a product.

    Anything our Theocrats deemed a sacrament of their cult religion was pushed to absurdity while skeptics were tarred with the Theocracy’s personification of evil (Satan, in Christianity): HITLER.

    Climate change “denialists” = Vaccine “denialists” = Holocaust “denialists.”

    “Denialists,” i.e., people who directly embrace Hitlerian (Satanic) beliefs.

    It would be fascinating to assess the vaccine status of subjects in IQ testing, but of course despite many billions of dollars spent on research, obtaining funding to examine links between vaccines and any negative outcome is as likely as obtaining funds to study the relationship between race and violent crime. Some questions are not allowed to be asked, much less answered.

    [PS: You can always tell if a person is a Zealot of the Theocracy if they invoke their personification of evil in any diatribe. Our Theocrats all seem to be stuck in the early 1940s.]

  54. @Clearpoint
    Boring article, but interesting feedback. Especially Jay man throwing little pieces of shit onto the wall to see what sticks, and everyone else pissed off and piling on Jay man in response. The most influential information provided in the article or the feedback by far was the nearsightedness study comparing children in Sydney and Singapore. Now that's a scientific study. I have yet to see any study on human intelligence or measurement of human intelligence that is not full of human bias. Even IQ tests are full of human bias. You want to tell me that someone whose intelligence is best expressed when sitting on their butt in a classroom is not going to design a test that measures intelligence based on an environment where one is sitting on their butt in a classroom. The same kid who scores in the 99th percentile on standardized intelligence tests in school struggles with the concept of running the bases in baseball, while the kid who gets the concept of running the bases gets diagnosed with ADD and prescribed Ritalin in the classroom. Environment is king. And whenever you see studies or tests that show declining intelligence, chances are that this is the direct result of substituting a less effective environment for a more effective one. As for genetics, genetic traits are the accumulation of environmental adjustments. What truly scientific studies like the nearsightedness study show is how fragile genetic traits are. Take the individual that is considered genetically superior today and throw him into a toxic environment and see how long that supposed genetic superiority lasts. Like the Eddie Murphy movie "Trading Places."

    Disagree.

    Without being as vain as to post my IQ or anything, I would agree with you only so far that environment matters on a temporary basis. I’ve been ultimately successful in almost any field I was in and I happened to be both in pretty violent, physical arenas as well as in intellectual, socially complex office environments.

    I didn’t have my intelligence measured until much later in life, but it showed that I used the same rough methodology in tackling both. I would analytically focus and even hyper-focus on a few areas that were relevant to my field, basically trying to find the 20% in the 80/20 Pareto distribution. For example, when I was in the field, I tried to classify every risk and, as much as possible, mathematically determine its likelihood. This proved to be a useful force multiplier.

    In the office, the same efforts at pattern recognition and fondness for analysis, and the desire to use tools both physical and mental, also led to a consistent pattern of success. It’s not at all that things I touch turn to gold, hardly! But I would say that so long as an environment has any consistent patterns at all, IQ has a role to play.

    I’m curious what Authenticmensajazzman has to say, as I suspect his IQ to be higher than mine.

    • Replies: @dc.sunsets
    At the risk of contradicting myself, I think your success was both IQ and non-IQ based, with the latter being (in my opinion) a talent for recognizing the rules of a game and (perhaps just as importantly) enjoying the game itself.

    I enjoy metaphors, crude as they may be. I think IQ (as tested in traditional IQ tests and the many accepted surrogates) is horsepower, but other talents on the human spectrum like determination, grit, self-discipline, curiosity, competitiveness, etc., are the transmission. All the horsepower in the world can't get to the pavement without the transmission. And a great transmission with nothing to drive it doesn't move the vehicle far.

    PS: Don't make my mistake and overthink IQ. I qualified to join the Colloquy Society (four times more selective than Mensa) and discovered in that admittedly small-sample group that people varied markedly in the non-IQ parameters, in ways that produced an amazing spectrum of real-world accomplishment and good sense (or lack thereof.) I have little doubt that you could query the Triple Nine, Prometheus or Mega Societies' members and reach the same conclusions (although there are only 26 members in Mega---1 in 1 million qualification---so the sample size problems only grow.)
  55. The Woodley Effect seems much easier to perceive and can be explained by the stages of the demographic transition, because it is the upper social classes that tend to

    reduce their fertility first in order to add more wealth [Catholic priest's syndrome ;)]

    If we could visualize the age pyramids by social class, a proxy for mean intelligence, I do not doubt that we would see the higher social classes presenting the typical age pyramid of a first-world country, in which the demographic transition [a population-reduction spiral] is already in an advanced state. On the other hand, the lower social classes are the slowest to reduce their fertility. Aging and the shrinking of families occur earlier in the middle and especially the upper classes, while they take longer in the lower classes.

    We can see these differences in the global demographic scenario of the human macro-races, in which the most cognitively able, the East Asians, already exhibit strong aging and shrinkage of their families, while less intelligent macro-races are still in the early stages of the demographic transition.

    This fertility differential between social classes, especially after World War II, may be transferred, even if imperfectly and partially, to a fertility differential among the cognitive classes, if the two tend to overlap and remain stable over the long term.

    https://ecodebate.com.br/foto/150506.gif

    Age pyramids by social class in Brazil

  56. @JackOH
    Well, I'm not sure what "star energy" is, but I think it may be worth our while to understand explicitly what the political implications are of IQ, whether in abundance or in deficiency. By political implications, I suppose I also mean political possibilities. Are we missing out on ways to better America because high IQ is viewed by some folks as presumptively, oh, elitist, racist, and the like?

    BTW-I'm not at all sure what a careful, sober study of IQ's political implications would conclude.

    By using the term “star energy,” he acknowledges that all energy available on Earth comes from stars, the vast majority from our local star. (Even radioactive heavy elements that provide energy were forged in the bowels of exploding stars.)

    As for a better America, it matters who defines what “better” means, e.g.:

    “The life of an Indian is a continual holiday, compared with the poor of Europe; and, on the other hand it appears to be abject when compared to the rich.” -Thomas Paine (Agrarian Justice, 1795)

    Would a continual holiday be “better,” or worse? Would improved cardiovascular health be better than sitting in an office to make money to pay for gasoline and a car that goes no faster—when considering all the time spent buying and maintaining an enormously expensive and energy-intensive automobile—than we can walk?

    Now, if by better we mean “stealing star energy,” then we’ve got ourselves beaucoup better. And that indeed is the purpose of life according to the “Maximum Power Principle” (Odum, 1995; Lotka, 1922) that states that living organisms will organize to increase power generation by degrading more energy.

  57. @Daniel Chieh
    Disagree.

    Without being as vain as to post my IQ or anything, I would agree with you only so far that environment matters on a temporary basis. I've been ultimately successful in almost any field I was in and I happened to be both in pretty violent, physical arenas as well as in intellectual, socially complex office environments.

    I didn't have my intelligence measured until much later in life, but it showed that I used the same rough methodology in tackling both. I would analytically focus and even hyper-focus on a few areas that were relevant to my field, basically trying to find the 20% in the 80/20 Pareto distribution. For example, when I was in the field, I tried to classify every risk and, as much as possible, mathematically determine its likelihood. This proved to be a useful force multiplier.

    In the office, the same efforts at pattern recognition, fondness for analysis, and desire to use tools both physical and mental also led to a consistent pattern of success. It's not at all that everything I touch turns to gold, hardly! But I would say that as long as an environment has any consistent patterns at all, IQ has a role to play.

    I'm curious what Authenticmensajazzman has to say, as I suspect his IQ to be higher than mine.

    At the risk of contradicting myself, I think your success was both IQ and non-IQ based, with the latter being (in my opinion) a talent for recognizing the rules of a game and (perhaps just as importantly) enjoying the game itself.

    I enjoy metaphors, crude as they may be. I think IQ (as tested in traditional IQ tests and the many accepted surrogates) is horsepower, but other talents on the human spectrum like determination, grit, self-discipline, curiosity, competitiveness, etc., are the transmission. All the horsepower in the world can’t get to the pavement without the transmission. And a great transmission with nothing to drive it doesn’t move the vehicle far.

    PS: Don’t make my mistake and overthink IQ. I qualified to join the Colloquy Society (four times more selective than Mensa) and discovered in that admittedly small-sample group that people varied markedly in the non-IQ parameters, in ways that produced an amazing spectrum of real-world accomplishment and good sense (or lack thereof). I have little doubt that you could query the Triple Nine, Prometheus or Mega Societies’ members and reach the same conclusions (although there are only 26 members in Mega, with a 1-in-1-million qualification, so the sample-size problems only grow).

    Read More
  58. @Santoculto
    ''Life is an IQ test.''

    Specially the life of WORKER, ;)


    The Flynn Effect is dependent on the IQ tests and it is assumed that

    IQ = intelligence

    We are not necessarily analyzing whether human intelligence has actually increased, but whether the rise in IQ-test scores, which has not yet been adequately explained, is a real/genotypic increase or an artificial one caused by changes or technical improvements in the cognitive tests, as well as by greater rigor.

    IQ-enthusiasts like to say that

    IQ causes [worldly] success

    Indeed, IQ is one of the factors that cause success. I think many of those who discuss the influence of IQ on life outcomes are criticizing this HBD tendency to turn correlations into causalities, especially with respect to IQ.

    If IQ is a cognitive aspect of intelligence, or rather of the human being, then IQ explains a fraction of this combination and is not a pure causality: ''IQ causes success.''

    You cannot build the further development of a study on a basis that is not yet fully understood, and that is what happens with the Flynn effect.

    I would rebut and say that IQ measures a number of cognitive aptitudes that are correlated with the manipulation of information and the ability to observe patterns, and that these are broadly the skill we call “intelligence” in humans. While the ability to solve problems is no guarantee of future success (there is also effort, and so on), it is correlated with success and probably carries at least some causation.

    Read More
    • Replies: @dc.sunsets, @Santoculto

  59. 12.02.2017 America’s Civil War Has Begun with Balkanization to Follow

    USA in the World Soon after taking office, President Trump issued an executive order banning legal residents of the United States and unnamed “others” based on their place of birth in seven nations cited as “dangerous.” All named nations are predominately Muslim but make up only a small minority of Islamic nations.

    http://journal-neo.org/2017/02/12/america-s-civil-war-has-begun-with-balkanization-to-follow/

    Read More
    • Replies: @dc.sunsets, @Daniel Chieh
  60. @James Thompson
    As in the paper linked.

    But the conversion factors and how were they derived are not explained in the review by Sarraf. Perhaps they can be found in Woodley’s book.

    If you have two tests X1 and X2 applied to the same population, you can get a correlation R and the slope of the linear regression line, S = dX1/dX2. So if X1 = IQ in IQ points and X2 = RT in milliseconds, one can mechanically convert changes in RT scores (∆RT) to changes in IQ scores (∆IQ) via the proportion ∆IQ = S·∆RT. However, when the correlation R is small this is pretty meaningless, and if used it amounts to mathematical charlatanry.

    Sarraf wrote “simple reaction times, a decent proxy for g.” How can he write that with a straight face when reaction time has a very low correlation with IQ (not even 0.3)?

    The bottom line is that it all comes down to IQ tests, right? Everything is converted to the scale implicitly defined by IQ tests, including g. It is interesting that Sarraf (I am not sure about Woodley, as I did not read his book) has managed to reach the Mount Everest of g reification: g, a mathematical construct that is not measured directly and has no agreed method of definition, since it depends on the battery of tests used in the factor analysis from which g emerges. How does Sarraf use g in his text? Let me count the ways:

    “a decent proxy for g”, “the loss of g in the West”, “reductions in g”, “with the diminution of g even”, “the integrity of genetic factors that underlie g”, “suggesting that diminishing g is pervasive”, ‘assert that “dysgenesis” on g may also explain “anti-Flynn effects,”’

    All those statements containing “g” are empty. It is not science; it is more like the occult, where you can spin circular reasoning patterns until you end up confusing the less sharp minds. There are the deceivers and there are the believers, but the best, most effective deceivers are the believers. Why do seemingly intelligent people (I listened to Woodley on YouTube, and he clearly is very intelligent) let themselves be fooled? Is Woodley a charlatan or a fool?

    Read More
    • Replies: @James Thompson
    , @Matthew Sarraf
    I have read over all of your comments on this post. It is evident that correcting your misunderstandings would require the administration of introductory courses in statistics and psychometrics (at minimum). This is not the place to receive such education.
    , @Bill

    g which is a mathematical construct that is not measured directly and there is no agreed method for its definition as it depends on the battery of tests used in factor analysis from which g emerges . . . All those statements containing “g” are empty. It is not a science. It is more like occult when you can create circular reasoning patterns until you end up confusing the less sharp minds.
     
    I think it is you who have fooled yourself. Invisible entities are ubiquitous in science. Are electrons real? Are quarks real? What the hell is a magnetic field, what does it smell like, and is its sister pretty? To be clear, I believe in cloud chambers and circuits: it's electrons I question.

    Positivism (and its variants) deals with this problem by making the test of good science not "are the hypothesized entities real?" but "if we assume the hypothesized entities, does that help us predict and control the world?" IQ does this. So, it gets to be real. Like electrons.

    The problem with Astrology is not that it posits invisible entities. The problem with Astrology is that it sucks at prediction and control.

    Or, to come at it an entirely different way, do you believe in carbon dating? Suppose, like Young Earth Creationists, you really didn't want to believe in carbon dating. Would carbon dating survive the sort of shielding skepticism you are happily deploying against g? Not a chance. After all, scientists were not there 10,000 years ago measuring c12/c14 proportions or, if you want to get really crazy, measuring the decay rate of C14. Plus, there are all sorts of anomalies with carbon dating when it is actually used. Shielding skepticism only gets deployed against things we don't want to believe.
    , @Emil Kirkegaard
    The g loading of a test is dependent on the battery of other tests in which it is extracted, but not very much so in most cases. See:

    http://www.sciencedirect.com/science/article/pii/S0160289607000931
    (there are a number of other earlier studies with other methods that found similar results)

    g itself is usually just measured in standardized units. You of course know this. Sometimes it is useful to distinguish explicitly between the trait and the factor, since they can differ. Let's call the factor g and the trait GCA, general cognitive ability.

    Optimally, one would switch to using a ratio scale for GCA, but there seems to be little progress towards this goal, or at least little work explicitly stated as aiming at it. Presumably, one could build a ratio-scale measurement from appropriate brain measurements. This would not help with estimating historical GCA declines, since we lack detailed brain measurements from back then. One would have to rely on crude measures (reaction time, visual acuity, etc.) or genetic data. The latter is more plausible, but will not capture any environmental changes in GCA.

    Note that a simple measure may be a good proxy for the mean level of GCA when using aggregate data, while not being so at the individual level. This is what Woodley et al. argue for with reaction time etc. Demonstrating this is rather trivial, so I leave that task to the reader; but it remains an assumption that is hard to test.

    , @candid_observer
    God only knows how many people over the previous century have imagined that they have uncovered some elementary mistake behind the theory of g which undermines it entirely -- going back to Thomson and Thorndike, and including Stephen J Gould and Cosma Shalizi.

    And all of these claims have come to ruin -- exactly as one would expect, given that the good number of truly outstanding intellects who have contributed to the theory of g would not, in aggregate, be exactly likely to make and perpetuate elementary errors.

    Point is: if you think you've found an elementary error in the theory of g, then almost certainly it's you who have made one.

    g may have its problems -- but they are sophisticated and subtle, not trivial and obvious.

  61. @Daniel Chieh
    I would rebut and say that IQ measures a number of cognitive aptitudes that are correlated with the manipulation of information and the ability to observe patterns, and that these are broadly the skill we call "intelligence" in humans. While the ability to solve problems is no guarantee of future success (there is also effort, and so on), it is correlated with success and probably carries at least some causation.

    You might enjoy some of the g-related discussions found in Linda Gottfredson’s CV

    https://www1.udel.edu/educ/gottfredson/reprints/index.html

    This is one from New Scientist:

    https://www1.udel.edu/educ/gottfredson/reprints/2011InstantExpertIntelligence.pdf

    Read More
    • Replies: @Daniel Chieh
    Thank you, I will investigate and comment on them when I can.
  62. @dc.sunsets
    Hence my preference for tribalism writ large.

    I'd prefer to live in a society populated largely by my distant cousins (people of primarily Germanic & English ancestry), as many of whom as possible came from the WASP populace of pre-1965 or even pre-1900 America...many of whom were Episcopalians (and if I recall correctly, in a study of the various Christian denominations' mean IQ's, Episcopalians roundly trounced the Ashkenazim with means of around 120 and 111, respectively.)

    I'd also prefer to segregate away from people genetically predisposed to clannishness (h/t to Jayman et al.), because me and mine are far too trusting to share a single polity with people more devoted to cunning and sub-group nepotism than we are.

    Yes, it's my furry little fantasy. High IQ mean, low clannishness, low time preference, high propensity to the Commonwealth Civilization that defined the successes of the Anglosphere and Northern Europe these past several hundred years.

    I see no way to get there from here (the clock cannot be reversed), so my expectations are for a very long period of extraordinary difficulty, more like conditions extant during the 14th century, during which all the Natural Selective forces held in abeyance these 50 or 100 years burst their dam and the Four Horsemen ride with abandon until balance is restored.

    I concur with your “little fantasy,” and am still living it, or what’s left of it, in rural “red state” America settled by very neat and orderly and high-trust Anglo-Saxons. My acquaintances in big cities still can’t believe the large amounts of commerce done here on nothing but a phone call or handshake.

    My little fantasy includes living more simply too, for many reasons, the largest being to stop attracting parasites. Parasites, especially the clannish sort, are much too attracted to ostentatious displays of plenty.

    When goods increase, they are increased that eat them: and what good is there to the owners thereof, saving the beholding of them with their eyes? -Ecclesiastes 5:11

    Read More
  63. @Daniel Chieh
    I would rebut and say that IQ measures a number of cognitive aptitudes that are correlated with the manipulation of information and the ability to observe patterns, and that these are broadly the skill we call "intelligence" in humans. While the ability to solve problems is no guarantee of future success (there is also effort, and so on), it is correlated with success and probably carries at least some causation.

    Yes, IQ correlates with intelligence, but in practice it is not as simple as that. As I said before, IQ measures our ability to find patterns in decontextualized problems. When we actually use our intelligence to find patterns within real contexts, we tend to be much less precise; or rather, people who score high on cognitive tests are often not equally accurate there, because other factors lie along the path between perceiving patterns and perceiving the factually correct ones.

    Revulsky, if that is his surname, already told us about the ”higher-IQ idiots”, people with great intelligence but no wisdom.

    Between perception and understanding there is acceptance.

    All correlation is a form of incomplete causality: not universal, but contributing in some way to a certain end.

    IQ also causes success, but it is not the only cause; it correlates more than it causes, yet I think we can still say that IQ causes success. No problem there.

    When I say that IQ measures more the intelligence of the worker, it is because we have been selected through our abilities, or levels of ability, to work. On the other hand, no human society has predominantly selected the types most capable in their pure capacity for thought.

    So in finding patterns, especially in our areas of expertise, there is indeed a rather rigid hierarchy of IQ, where the higher levels will a priori be the most apt for the job. However, in analyzing what is most alien to our areas of expertise, we tend to be much less intelligent.

    Read More
  64. @dc.sunsets
    You might enjoy some of the g-related discussions found in Linda Gottfredson's CV
    https://www1.udel.edu/educ/gottfredson/reprints/index.html


    This is one from New Scientist:
    https://www1.udel.edu/educ/gottfredson/reprints/2011InstantExpertIntelligence.pdf

    Thank you, I will investigate and comment on them when I can.

    Read More
  65. @Agent76
    12.02.2017 America’s Civil War Has Begun with Balkanization to Follow

    USA in the World Soon after taking office, President Trump issued an executive order banning legal residents of the United States and unnamed “others” based on their place of birth in seven nations cited as “dangerous.” All named nations are predominately Muslim but make up only a small minority of Islamic nations.

    http://journal-neo.org/2017/02/12/america-s-civil-war-has-begun-with-balkanization-to-follow/

    “Can’t we all get along?” was, like “We are the World,” a symptom of high social mood.

    There is a reason the spectrum of human social behaviors assorts in parallel; the behaviors spring from the same cause.

    People are never more optimistic than at tops, when they expect rising conditions to continue forever. Americans were just this optimistic after World War 2, and the skyrocketing lifestyles of the 1950s and early 1960s came from social-mood optimism that embraced the Space Race, Star Trek, and various Denial-of-Nature Utopian social projects like the Civil Rights Act, the Hart-Celler Immigration Act, the War on Poverty, Medicare/Medicaid, et al.

    Starting in 1982, Americans took prosperity for granted, and put their purchases of it on the National Credit Card. It has been a moon-shot of optimistic assumptions ever since, and by many measures this optimism peaked 17 years ago or more. Only an even larger spasm of borrowing kept the game afloat.

    Trump’s election came as a signal that the doubling-down on Utopian political policies under Obama was the blow-off top of that much longer social-mood rally. Trump’s election is a sign of mood’s change in trend, a trend that has vastly further (and longer) to play out. Trump is but a transition figure. Leftism has enjoyed a monopoly since at least the Wilson Administration, and it embedded innumerable conditions that people will no longer tolerate going forward.

    Section 8 was the dealer shuffling all the dissimilar cards by geography, ensuring that people who could barely tolerate each other in the best, most optimistic of times would be living in close proximity when the urge to rip others to shreds inevitably developed.

    What’s coming is a resumption of the English Civil War, of the Thirty Years’ War, of the strife and difficulty of Europe’s 14th century, all rolled into one. On one hand will be the Parliamentarians, who want the central state to continue and increase wealth redistribution and the promotion of Holy Diversity. On the other, those who will no longer submit to these impositions.

    I side with the latter.

    Read More
  66. @Matthew Sarraf
    This is an interesting finding. It does nothing to bring into doubt Dr. Woodley of Menie's dysgenic theory, however.

    A paper published shortly after my review, "Selection against variants in the genome associated with educational attainment," finds a substantial decrease of an educational attainment polygenic score in Icelanders over time (the study examines genetic data from 129,808 Icelanders born between 1910 and 1990; this is without question a representative sample): http://www.pnas.org/content/114/5/E727.abstract. Dr. Woodley of Menie notified me of this research and pointed out, as I anticipated upon first hearing of the paper, that the equation that the authors use to convert the polygenic score decline to a per decade IQ point decline, 0.038 x (30/3.74) = 0.30 IQ points, assumes an unrealistically low additive heritability of IQ: 30%. The adult additive heritability of IQ is typically pegged at 80-85%, with the additive heritability of g likely at 85-87%. Thus the Icelandic data in fact indicate a genotypic g decline of 0.81-0.88 points per decade (on an IQ scale; I am using 80% as a conservative estimate of the additive heritability of g and 87% as a realistic estimate to arrive at the 0.81-0.88 range). While already quite close to Dr. Woodley of Menie's estimated g decline of 1-1.5 points per decade, this is only the decline in g from genetic selection. Once we include Dr. Woodley of Menie and Mr. Fernandes' estimated decline in g from mutation accumulation and other sources of damage to developmental stability (in the paper cited as Woodley of Menie & Fernandes, 2016b in my review), 0.16 points per decade, the overall per decade g decline rises to 0.97-1.04 points. Particular demographic changes may add another 0.25 points of g lost per decade, bringing the overall estimated decline in g to 1.22-1.29 points per decade, entirely consistent with what Dr. Woodley of Menie has been saying for years. 
In any case, the decrease in g due to genetic selection, the reality of which is confirmed in the Iceland paper about as directly as possible, is nearly a full point alone. So we find a diminution of g in the 1-1.5 points per decade range without availing ourselves of reaction time data, and a loss of g per decade nearly in that range even if we assume that only genetic selection is depressing g. Assume, arguendo, that the decadal reduction of g has been the mere 0.81 points per decade arrived at above with the 80% heritability estimate. Ignore all other possible contributory factors. 0.81 points of g lost a decade from 1850 to 2010 would amount to a total reduction of 12.96 points -- quite alarming for a very conservative estimate!

    I have not yet been able to read the study on myopia and visual reaction time (VRT) in detail. But even if myopia goes with longer VRT and myopia is becoming more prevalent (which it is), this would have no bearing on the secular trend toward greater auditory reaction time that Dr. Woodley of Menie and his colleagues have found. I doubt if the changing prevalence of myopia can explain more than a small fraction of the increase in VRT that Dr. Woodley of Menie and his colleagues have documented. Even when significant slowing is added to Galton's VRT samples, the remaining retardation of VRT indicates a g loss of ~10 points (on an IQ scale). As I argue in my review, attacking the dysgenic theory by picking at individual data sets and indicators is unlikely to bear fruit -- the nomological net of evidence for the theory is very robust, especially now that we have the aforementioned genetic selection data, and so is not likely to be undone without a parsimonious alternative explanation of declines in the various indicators that together seem to have nothing in common apart from a relation to g. If myopia decreases color acuity, increasing rates of myopia may explain why the estimate of dysgenesis on g from color acuity is much too high. But I am optimistic that some of the decline in color acuity is due to temporal reduction of g. Note that I do not suggest in my review that Dr. Woodley of Menie's research has made certain the precise magnitude of declines in g, only that it has shown that significant declines in g have been almost certainly occurring. With good genetic selection data now at hand, we are moving in on a more concrete estimate, which is probably in the 1-1.5 points of g lost per decade range that, as previously stated, Dr. Woodley of Menie predicted years ago.

    “as I anticipated upon first hearing of the paper, that the equation that the authors use to convert the polygenic score decline to a per decade IQ point decline,

    0.038 x (30/3.74) = 0.30

    IQ points, assumes an unrealistically low additive heritability of IQ: 30%.”

    And did it not occur to you that the equation cannot possibly be correct? Heritability, say 30%, is expressed as a fraction of variance, and variance is in different units than a mean. The trend, say ∆IQ, is measured as a change of means: you compare means at different times to get ∆IQ, then divide by the time interval. There is no way a linear proportion linking a mean and a variance can express the trend.
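    The rescaling in the quoted passage is simple arithmetic and can be checked directly. The sketch below uses only the numbers quoted above (taken on trust from the comment, not verified against the paper):

```python
# Iceland-paper conversion as quoted: 0.038 x (30/3.74) = 0.30 IQ points per
# decade at an assumed additive heritability of 30%.
decline_at_h30 = 0.038 * (30 / 3.74)
assert round(decline_at_h30, 2) == 0.30

def rescaled_decline(h2):
    # The comment's move: the implied IQ decline scales linearly with the
    # assumed additive heritability, relative to the paper's 30%.
    return decline_at_h30 * (h2 / 0.30)

low, high = rescaled_decline(0.80), rescaled_decline(0.87)
print(round(low, 2), round(high, 2))        # the quoted 0.81-0.88 range

# Cumulative decline 1850-2010 (16 decades) at the conservative estimate.
print(round(0.81 * 16, 2))                  # the quoted 12.96 points
```

    The arithmetic reproduces Sarraf's figures; utu's objection above concerns not the arithmetic but whether a variance fraction may enter a conversion of means linearly at all.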

    Read More
  67. @Agent76
    12.02.2017 America’s Civil War Has Begun with Balkanization to Follow

    USA in the World Soon after taking office, President Trump issued an executive order banning legal residents of the United States and unnamed “others” based on their place of birth in seven nations cited as “dangerous.” All named nations are predominately Muslim but make up only a small minority of Islamic nations.

    http://journal-neo.org/2017/02/12/america-s-civil-war-has-begun-with-balkanization-to-follow/

    Can our gentle hosts please look into reviewing the posting rights of this gentleman? He basically seems intent on spamming. A short review of his posting history reveals that almost every single post of his links outside of Unz.

    Read More
  68. Steve Sailer has an idea that the Flynn effect is partly due to the fact that our day-to-day life emphasizes pattern recognition, because it is that part of the IQ test that appears to be improving; the other parts are flat.

    By extension, all three of the major drops in the above list — 3D ability, colo(u)r acuity, and reaction times — are abilities keyed to real-world, hands-on contexts. I think that if more people, starting in youth, were more accustomed to making things, repairing things, and moving things, they would have sharper abilities in these contexts. In short, I would suggest that while IQ has a large genetic component, how you develop skills (even just observational ones) at an early age can matter somewhat for an IQ score.

    On the other hand, the idea that dysgenic breeding will lower the genetic quality of humans also has some merit. The idea, however, is 200 years old; it was even sometimes argued that developments such as the smallpox vaccine would lead to a weakening of the race. To be sure, we are no longer as hard-nosed about culling human frailty as the Greeks, Romans, and Carthaginians were. The problem — if it is a problem — is that Christian ethics is devoted to the survival of the weakest among us.

    Read More
  69. @utu
    But the conversion factors and how were they derived are not explained in the review by Sarraf. Perhaps they can be found in Woodley's book.

    If you have two tests X1 and X2 applied to the same population, you can get a correlation R and the slope of the linear regression line, S = dX1/dX2. So if X1 = IQ in IQ points and X2 = RT in milliseconds, one can mechanically convert changes in RT scores (∆RT) to changes in IQ scores (∆IQ) via the proportion ∆IQ = S·∆RT. However, when the correlation R is small this is pretty meaningless, and if used it amounts to mathematical charlatanry.

    Sarraf wrote "simple reaction times, a decent proxy for g." How can he write it with a straight face when reaction time has very low correlation (it is not even 0.3) with IQ?

    The bottom line is that it all comes down to IQ tests, right? Everything is converted to the scale implicitly defined by IQ tests. This includes g. It is interesting that Sarraf (I am not sure about Woodley as I did not read his book) managed to reach the Mount Everest of g reification. g which is a mathematical construct that is not measured directly and there is no agreed method for its definition as it depends on the battery of tests used in factor analysis from which g emerges. How does Sarraf use g in his text? Let me count the ways:

    "a decent proxy for g", "the loss of g in the West", "reductions in g", "with the diminution of g even", "the integrity of genetic factors that underlie g", "suggesting that diminishing g is pervasive", 'assert that “dysgenesis” on g may also explain “anti-Flynn effects,”'

    All those statements containing "g" are empty. It is not science. It is more like the occult, where you can create circular reasoning patterns until you end up confusing less sharp minds. There are the deceivers and there are the believers. But the best, most effective deceivers are the believers. Why do seemingly intelligent people (I listened to Woodley on YT and he clearly is very intelligent) let themselves be fooled? Is Woodley a charlatan or a fool?

    “g is a general factor, extracted from the correlation matrix of a battery of mental ability tests by a number of different methods of factor analysis and according to different models of the factor structure of abilities.”

    So begins Chapter 4 “Models and characteristics of g” in Jensen’s “The g factor” Praeger, 1998. I think that is a good starting point for discussions about conceptualizing intelligence. The book is still valuable for understanding many psychometric concepts. Jensen’s “Bias in Mental Testing” 1980 is also valuable in explaining test validity and predictive power.

    The best introduction to what g means in real life is Linda Gottfredson’s “Why g matters: the complexity of everyday life.”

    https://www1.udel.edu/educ/gottfredson/reprints/1997whygmatters.pdf

  70. @pyrrhus
    The Woodley effect is almost certainly caused by increasing mutational load, with probably some dysgenic breeding effects tossed in. The Flynn effect is caused, as Dr. Thompson suggests, by better growing conditions, and also, as Flynn speculated, by increasing familiarity with the Raven IQ test and the like. Strangely enough, standardized testing in the US indicates that the Flynn effect died 50 years ago, and has not been resuscitated.... All broad-based standardized testing has shown significant declines, especially the college entrance tests, the SAT and ACT.

    “Strangely enough, standardized testing in the US indicates that the Flynn effect died 50 years ago, and has not been resuscitated”

    Cite?

    Read More
    • Replies: @RaceRealist88
    I believe he's referring to the reversal of the Flynn Effect since the 90s. It's been occurring in first world countries, most notably France, for well over a decade. See Woodley of Menie and Dutton 2015 and Lynn and Dutton 2015 for more information.
  71. On the other hand, the idea that dysgenic breeding will lower the genetic quality of humans also has some merit. However, the idea is 200 years old.

    Every collective human belief ends up just being a fad.

    Today’s “blank slate” and “magic dirt” beliefs are fads. Most macroeconomic theory looks, to me, like a fad. “All kids can be rocket scientists” is today’s fad, the same as the fad doubting that each of us is born with a relatively fixed set-point for ability.

    I used to think that when these fads ran their course, we’d see a return to more sane, rational beliefs. I now realize that it’s actually fads, all the way down.

    We are leaving the fad of believing people aren’t responsible for their condition in life, which underlies vast income redistribution from the haves to the have-nots. We’re leaving a time of great empathy for those less fortunate, from those whose great fortune was believed unlimited.

    We are leaving a fad belief in unlimited resources.

    Interest in eugenics, interest in seeing people fully accountable for their condition, interest in protecting the scarce resources in our personal hands against those who are perceived to be parasites, all of these look highly likely to be the NEXT fad, for a very long time, until their time too shall pass and a new underlying fad takes over.

    Humans as a collective look exactly like hive insects and our collective behavior obeys rules just as inviolable.

  72. @utu
    But the conversion factors, and how they were derived, are not explained in the review by Sarraf. Perhaps they can be found in Woodley's book.

    If you have two tests X1 and X2 applied to the same population, you get a correlation R and the slope of the linear regression line, S = dX1/dX2. So if X1 = IQ in IQ points and X2 = RT in milliseconds, one can mechanically convert changes in RT scores (∆RT) to changes in IQ scores (∆IQ) via the proportion ∆IQ/∆RT = S = dX1/dX2. However, when the correlation R is small this is pretty meaningless, and if used it amounts to mathematical charlatanry.

    Sarraf wrote "simple reaction times, a decent proxy for g." How can he write that with a straight face when reaction time has a very low correlation with IQ (not even 0.3)?

    The bottom line is that it all comes down to IQ tests, right? Everything is converted to the scale implicitly defined by IQ tests. This includes g. It is interesting that Sarraf (I am not sure about Woodley, as I did not read his book) managed to reach the Mount Everest of g reification. g is a mathematical construct that is not measured directly, and there is no agreed method for defining it, since it depends on the battery of tests used in the factor analysis from which g emerges. How does Sarraf use g in his text? Let me count the ways:

    "a decent proxy for g", "the loss of g in the West", "reductions in g", "with the diminution of g even", "the integrity of genetic factors that underlie g", "suggesting that diminishing g is pervasive", 'assert that “dysgenesis” on g may also explain “anti-Flynn effects,”'

    All those statements containing "g" are empty. It is not science. It is more like the occult, where you can create circular reasoning patterns until you end up confusing less sharp minds. There are the deceivers and there are the believers. But the best, most effective deceivers are the believers. Why do seemingly intelligent people (I listened to Woodley on YT and he clearly is very intelligent) let themselves be fooled? Is Woodley a charlatan or a fool?

    I have read over all of your comments on this post. It is evident that correcting your misunderstandings would require the administration of introductory courses in statistics and psychometrics (at minimum). This is not the place to receive such education.

    • Replies: @utu
    I knew I was onto something (see my comment #68). Apparently neither you nor Kong et al.

    http://www.pnas.org/content/114/5/E727.abstract
     
    developed the habit of using dimensional analysis to check the plausibility of equations. They used to teach it in high school physics or chemistry class.

    The equation 0.038 × (30/3.74) = 0.30 you cited from Kong is incorrect. Kong made a mistake which neither you nor Woodley caught. The mean cannot be scaled with the variance; it can only be scaled with the square root of the variance. Got it? Therefore the correct equation is:

    0.038 × (30/3.74)^(1/2) = 0.11
     
    and subsequently your recalculations for variance 80% and 87% yield

    0.038*(80/3.74)^(1/2)=0.17
    0.038*(87/3.74)^(1/2)=0.18
     
    I am surprised that Kong made this silly mistake (perhaps too many coauthors), particularly in light of the fact that just one paragraph earlier (page 4) he wrote a correct equation of the same type and even justified it:

    Thus, if POLYfull is assumed to account for 30% of the variance of EDU, then its estimated rate of change, by extrapolation, is −0.010 × (30/3.74)^(1/2)= −0.028 SUs per decade.
     
    Basically, when you want to partition a variable (in this case a mean) according to an existing variance partition, you must use standard deviations, not variances, in your proportion: X1:X2 = SD1:SD2, and not X1:X2 = V1:V2.
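    As a quick numerical check of the scaling rule, here is a small sketch using the figures quoted from Kong above (the variable names are my own, for illustration only):

```python
import math

# Rescaling a rate of change from the variance a polygenic score explains
# (3.74%) to an assumed full heritable variance: a mean-level change scales
# with the ratio of standard deviations, i.e. the square root of the
# variance ratio, not with the variance ratio itself.
rate = 0.038      # observed decline per decade, in score units
v_small = 3.74    # % of variance explained by the polygenic score
for v_full in (30, 80, 87):
    variance_scaled = rate * (v_full / v_small)        # the mistaken version
    sd_scaled = rate * math.sqrt(v_full / v_small)     # the correct version
    print(f"{v_full}%: variance-scaled {variance_scaled:.3f}, "
          f"SD-scaled {sd_scaled:.3f}")
```

    Running this reproduces both sets of figures discussed above: the variance-ratio version inflates the 30% case to about 0.30, while the square-root version gives about 0.11.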

    Listen Matthew, it's never too late to learn. And I agree that this is not the best place to receive an education. But I could not resist. Your job, your livelihood could depend on it. Don't forget to share your newly acquired knowledge with Kong, his N coauthors, and with Woodley.
  73. @RW
    "Strangely enough, standardized testing in the US indicates that the Flynn effect died 50 years ago, and has not been resuscitated"

    Cite?

    I believe he’s referring to the reversal of the Flynn Effect since the 90s. It’s been occurring in first world countries, most notably France, for well over a decade. See Woodley of Menie and Dutton 2015 and Lynn and Dutton 2015 for more information.

    • Replies: @James Thompson
    Here are some results for France and Germany, though data collection for the latter was mostly in Austria.

    http://www.unz.com/jthompson/is-france-sinking-even-further
    http://www.unz.com/jthompson/deutschland-uber-alles-dann-unter-allen
  74. @Clearpoint
    Boring article, but interesting feedback. Especially Jay man throwing little pieces of shit onto the wall to see what sticks, and everyone else pissed off and piling on Jay man in response. The most influential information provided in the article or the feedback by far was the nearsightedness study comparing children in Sydney and Singapore. Now that's a scientific study. I have yet to see any study on human intelligence or measurement of human intelligence that is not full of human bias. Even IQ tests are full of human bias. You want to tell me that someone whose intelligence is best expressed when sitting on their butt in a classroom is not going to design a test that measures intelligence based on an environment where one is sitting on their butt in a classroom? The same kid who scores in the 99th percentile on standardized intelligence tests in school struggles with the concept of running the bases in baseball, while the kid who gets the concept of running the bases gets diagnosed with ADD and prescribed Ritalin in the classroom. Environment is king. And whenever you see studies or tests that show declining intelligence, chances are that this is the direct result of substituting a less effective environment for a more effective one. As for genetics, genetic traits are the accumulation of environmental adjustments. What truly scientific studies like the nearsightedness study show is how fragile genetic traits are. Take the individual that is considered genetically superior today and throw him into a toxic environment and see how long that supposed genetic superiority lasts. Like the Eddie Murphy movie "Trading Places."

    “Take the individual that is considered genetically superior today and throw him into a toxic environment and see how long that supposed genetic superiority lasts.”

    In the UK that argument is often rendered as “Put him in Somalia and see how he gets on when tribal militia raid his village”. But surely the point is – is a society of English people or a society of Somalis more likely to be one in which you have to worry about raids by tribal militia?

    The great Harry Hutton’s take:

    “It hurts when they don’t accept you, but I have many English friends,” said Chris McShane (26), who fled New Zealand when soldiers burned his village. “I’ll never forget the first time some English people invited me to their house. They served lamb from a ‘supermarket’. In New Zealand if we want to eat lamb we have to strangle it ourselves.”

    “I came to Britain to seek a better life for my children.” He dreams of returning to his homeland one day, when the situation is more stable. “But Britain is my home now.”

    Getting back to topic, and with due trepidation/humility in such knowledgeable company, it would be interesting to try and tease out the effects (if any) on average UK IQ of

    1967 Abortion Act – something like 200,000 a year in England and Wales (as against about 600,000 live births) – are there any profiles of who has them? I’d imagine the profiles would have changed since 1967 as well (anecdotal, but I know one very high IQ girl who had three in the 1970s (and is childless)). More anecdote – don’t NI schoolkids (no abortion there or none til recently) get better test scores than the rest of the UK?

    Number of women in higher education over time (and their lower TFR) – this again will have changed a lot in 50 years.

    Effects of benefit system and what Steve Sailer calls “affordable family formation” – it seems to me that for the last 30 years the only people who could afford large families (4+) were either the well off (usually with stay at home mum) or the benefit-aided. Probably more of the latter than the former, too.

  75. @Michael A. Woodley of Menie
    Not an argument

    Off topic:

    Mr. Woodley are you in any way connected to the Menie Estate in Scotland, now owned by Donald Trump?

    Menie House is a grand 14th-century country property surrounded by over 200 acres (0.81 km2) of private land, collectively known as the Menie Estate. The house was designed by the Aberdeen architect John Smith for George Turner around 1835. It is listed as category B by Historic Scotland.[5]
    [...]
    American billionaire Donald Trump purchased a large part of the estate in 2006.

    https://en.wikipedia.org/wiki/Balmedie#Menie_Estate

  76. @utu
    UNITS DEFINITION PLEASE

    3.5 drop/decade, 4.8 drop/decade, 1.8 drop/decade, 0.16 drop/decade, 0.57-1.21 drop/decade, 0.16 drop/decade.

    Are these common units? Like RT, which is in [ms], being converted to some other units? How are the conversions done? On what basis are the units conflated with g? And, yes, I must ask: what are the units of g?

    They are = standard deviations * 15, so basically = IQ points.

    “And, yes I must ask what are the units of g?”

    IQ points with a s.d. of 15 are the standard unit for measurements of g. IQ points are on an equal-interval scale (at least for small numbers, less than a standard deviation or two). You can add and subtract them, but not multiply or divide them by each other. Basically like working with Centigrade or Fahrenheit. They’re also not equal-interval out past 30 points from the average, and become steadily less so the farther out you go, because the real distribution of g has fatter tails than the normal distribution. The biggest drawback of IQ is that it is not a measure of intelligence properly speaking, but of the rarity of intelligence relative to a given age, so an IQ 100 9-year-old is not as capable of answering questions as a 29-year-old with the same IQ. The size of the IQ-point unit is theoretically the same, that is, a 110 IQ 9 y.o. would be as much smarter than a 100 IQ 9 y.o. as in the same case with 29-year-olds. (In practice, 9-year-olds have a tighter distribution, but this sort of comparison between ages is seldom if ever used.)

    Given the right sort of test (~measured item difficulties can be graphed as a straight line) there is a transformation of the raw scores that gives you a ratio scale like Kelvin, with an absolute zero, which allows all arithmetic operations, thus letting you say: “A is 10% smarter than B”. This is called a Rasch measure. The only arbitrary choice is the size of the unit. Riverside Publishing’s Stanford-Binet CSS (Change-Sensitive Score) and Woodcock-Johnson “W” scales set the size of their Rasch unit by reference to the average 10-year-old, who is assigned a CSS of 500. Adults are around 510. The form of the CSS vs. age graph is logarithmic, rising quickly at first, then leveling off. IIRC, the s.d. for the FSIQ (full-scale, whole-test) CSS scale for adults is about 8.5 CSS points, or roughly 6 CSS at age 9; earlier ages have wider distributions. (Each subtest also has its own CSS with the same 500 @ age 10 anchor. Actually every question has a CSS score on the same scale which denotes its difficulty – when difficulty = ability, the chance of getting the item correct is 50%.)

    So the percentage variation in human intelligence is low (~10% difference within the middle 99.9% of the adult population), but expressed as age differences, +2 s.d. people are smarter at age 9 than the average adult, while -2 s.d. adults are only as smart as the average 6 year old.

    Most research on human intelligence does not require a ratio scale so IQ is good enough for those purposes. Rasch / ratio scales are more rigorously defined, though, and allow doing some things that IQ can’t do, or at least makes more difficult and error-prone.
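    The Rasch idea described above can be written down in a few lines. This is my own illustrative sketch in generic logit units, not the publishers’ CSS/W scaling (whose unit size I have not reproduced):

```python
import math

def p_correct(ability: float, difficulty: float) -> float:
    """Rasch model: the probability of answering an item correctly depends
    only on the gap (ability - difficulty) on a shared interval scale."""
    return 1.0 / (1.0 + math.exp(-(ability - difficulty)))

# When ability equals difficulty, the chance of a correct answer is 50%.
print(p_correct(1.0, 1.0))  # 0.5
# A higher ability (or an easier item) raises the probability.
print(round(p_correct(2.0, 1.0), 3))
```

    The key property is that only the difference matters, which is what lets ratio-scale statements be made once the raw scores have been transformed onto this scale.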

    • Replies: @James Thompson
    Thank you very much for your comments which I enjoyed reading.
    , @utu
    "They are = standard deviations * 15, so basically = IQ points."

    Yes, but it is a rather quick and dirty method. Sometimes it cannot be justified, particularly for g, because g expressed in IQ points by this method will have the same SD as IQ. This would mean that the variance of g also accounts for the variance in IQ that is attributed to environment. I do not think the proponents of the g construct would like that.

    If you do a linear regression between IQ and some other test X you get the equation IQ = A*X (+B). If the correlation R between IQ and X is large, then you can use the coefficient A to express X in IQ points. The coefficient A is not equal to the ratio of the standard deviations of IQ and X (in fact A = R × SD(IQ)/SD(X)). How do you justify this when the correlation R is small, like R = 0.3 for IQ and RT?

    From factor analysis you get the equation IQ = A*g + B*gg + C*ggg..., where gg and ggg are the 2nd and 3rd, less meaningful, factors. Then the coefficient A can be used to express g in IQ points. The problem is that for different batteries of tests that include an IQ test the coefficients A are different, because g is battery-dependent.

    I understand that in practice the standard deviation might be scaled so that it is 15. This, however, as I said, is quick and dirty and requires a lot of hand-waving to justify.
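    The point about the regression slope can be illustrated with simulated, entirely hypothetical data: for a simple regression the coefficient A equals R × SD(IQ)/SD(X), so with R = 0.3 it falls far short of the bare ratio of standard deviations:

```python
import math
import random
import statistics

random.seed(42)
n = 20_000
r_true = 0.3  # target correlation between test X and IQ

x = [random.gauss(0, 1) for _ in range(n)]
# Build IQ to correlate ~0.3 with x, with mean 100 and SD 15
iq = [100 + 15 * (r_true * xi + math.sqrt(1 - r_true**2) * random.gauss(0, 1))
      for xi in x]

mean_x, mean_iq = statistics.mean(x), statistics.mean(iq)
sd_x, sd_iq = statistics.stdev(x), statistics.stdev(iq)
cov = sum((a - mean_x) * (b - mean_iq) for a, b in zip(x, iq)) / (n - 1)
r = cov / (sd_x * sd_iq)
slope = cov / sd_x**2  # regression coefficient A in IQ = A*X + B

print(f"R = {r:.2f}, slope A = {slope:.2f}, SD ratio = {sd_iq / sd_x:.2f}")
# slope comes out near R * SD(IQ)/SD(X) = 0.3 * 15 = 4.5,
# while the SD ratio alone is near 15
```

    In other words, converting units via the slope silently multiplies by the correlation, which is exactly why a low-R conversion is so shaky.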
  77. @utu
    But the conversion factors, and how they were derived, are not explained in the review by Sarraf. Perhaps they can be found in Woodley's book.

    If you have two tests X1 and X2 applied to the same population, you get a correlation R and the slope of the linear regression line, S = dX1/dX2. So if X1 = IQ in IQ points and X2 = RT in milliseconds, one can mechanically convert changes in RT scores (∆RT) to changes in IQ scores (∆IQ) via the proportion ∆IQ/∆RT = S = dX1/dX2. However, when the correlation R is small this is pretty meaningless, and if used it amounts to mathematical charlatanry.

    Sarraf wrote "simple reaction times, a decent proxy for g." How can he write that with a straight face when reaction time has a very low correlation with IQ (not even 0.3)?

    The bottom line is that it all comes down to IQ tests, right? Everything is converted to the scale implicitly defined by IQ tests. This includes g. It is interesting that Sarraf (I am not sure about Woodley, as I did not read his book) managed to reach the Mount Everest of g reification. g is a mathematical construct that is not measured directly, and there is no agreed method for defining it, since it depends on the battery of tests used in the factor analysis from which g emerges. How does Sarraf use g in his text? Let me count the ways:

    "a decent proxy for g", "the loss of g in the West", "reductions in g", "with the diminution of g even", "the integrity of genetic factors that underlie g", "suggesting that diminishing g is pervasive", 'assert that “dysgenesis” on g may also explain “anti-Flynn effects,”'

    All those statements containing "g" are empty. It is not science. It is more like the occult, where you can create circular reasoning patterns until you end up confusing less sharp minds. There are the deceivers and there are the believers. But the best, most effective deceivers are the believers. Why do seemingly intelligent people (I listened to Woodley on YT and he clearly is very intelligent) let themselves be fooled? Is Woodley a charlatan or a fool?

    g, which is a mathematical construct that is not measured directly, and there is no agreed method for its definition as it depends on the battery of tests used in the factor analysis from which g emerges . . . All those statements containing “g” are empty. It is not science. It is more like the occult, where you can create circular reasoning patterns until you end up confusing less sharp minds.

    I think it is you who have fooled yourself. Invisible entities are ubiquitous in science. Are electrons real? Are quarks real? What the hell is a magnetic field, what does it smell like, and is its sister pretty? To be clear, I believe in cloud chambers and circuits: it’s electrons I question.

    Positivism (and its variants) deals with this problem by making the test of good science not “are the hypothesized entities real?” but “if we assume the hypothesized entities, does that help us predict and control the world?” IQ does this. So, it gets to be real. Like electrons.

    The problem with Astrology is not that it posits invisible entities. The problem with Astrology is that it sucks at prediction and control.

    Or, to come at it an entirely different way, do you believe in carbon dating? Suppose, like Young Earth Creationists, you really didn’t want to believe in carbon dating. Would carbon dating survive the sort of shielding skepticism you are happily deploying against g? Not a chance. After all, scientists were not there 10,000 years ago measuring C12/C14 proportions or, if you want to get really crazy, measuring the decay rate of C14. Plus, there are all sorts of anomalies with carbon dating when it is actually used. Shielding skepticism only gets deployed against things we don’t want to believe.

    • Replies: @utu
    I am not sure how to respond to your comment. Perhaps I should say I do not like arguments that hinge on analogy. g is not like the electron, and I do not see the connection to astrology or carbon dating.

    Anyway, I do not think that the concept of g and its mathematical construct (which is non-unique, by the way) has helped anybody prove or explain anything. IQ research is a purely empirical enterprise, sometimes also called science. g was postulated to give a firmer theoretical foundation to this empirical enterprise. Perhaps a little bit of physics envy. In fact g serves only this purpose, because it is not really used for anything else. g scores are rarely calculated for individuals because g is not unique. Different batteries of tests yield different g's.
  78. @JackOH
    Thanks, Prof. Thompson. A week ago I'd remarked idly about my trying to imagine an "IQ-centric" political party, political faction, or what-have-you. Call it the Achiever Party, or something, with an Achievement Foundation appended to it. The idea would be to explore whether it's even possible to explicitly politicize IQ in a measured, temperate way.

    My personal feeling is that Shockley, Rushton, and other intelligence researchers got bashed about, a bit unfairly in my opinion, because they didn't recognize the political backdrop of an ever-expanding political franchise, Black upheaval, Marxist revolution, etc., which, I think, has put IQ in a bad spot. My idea of an Achievement Foundation would be to rescue IQ from the shadows.

    As a casual, non-expert reader I may be getting something wrong here. I am learning something from your articles and the comments below. Many thanks.

    You’d do much better forming a secret society. For reasons which are obvious.

    Actually, once you realize that, you might wonder whether somebody else might have thought of that already.

    • Replies: @JackOH
    Bill, I was thinking of open, transparent debate. Imagine a headline of 2018: "Study Says America Fails, Ignores Its Best and Brightest". Subhead #1: "Costs Said to Be in Trillions". Subhead #2: "Proposes Education, Immigration, Other Reforms".

    You'd need money, experts with their hearts in the right place, time to formulate well thought-out ideas and responses to the critics, T & E cash for a media blitz, and a bit of moral conviction. But, yes, I think it's possible to openly propose "IQ-centric" policies without being marginalized.
  79. The HBD types leave out the most important aspect of evolution – the aspect that the environment plays in who propagates and who prospers. In biology, environment is king. When the environment is helpful to the most intelligent individuals, then there is a push-push effect for more intelligence.

    Genetic selection is determined by environment. The more helpful the fertilizer, the more prosperous the existing seed. That is the Flynn Effect. Nutrition and education have improved the overall level of intellectual progress.

    The Woodley Effect of diminished genetic excellence in the overall group is the result of diminished culling of the species. These days, most everyone lives. One in four gets the best genetics of both parents. Two in four get an average, and one in four gets the least. Today the ones with the least are not culled – thus bringing down the average. The push-push effect reverts to the mean.

    Peace — Art

    p.s. The HBD types are totally wrong. Today humanity is driven by knowledge selection, not by biological selection. Those who use the best knowledge prosper – regardless of tribe or personal intellectual prowess. Today knowledge is king – not intellectual acuity.

    p.s. Look at the beautiful modern city of Dubai – did the people of Dubai achieve that with personal genetic intellectual prowess, or was it knowledge created by others over the ages?

    p.s. The future of genetic determination is now in the hands of knowledge. Knowledge will determine the future.

    p.s. U.S. experts soften on DNA editing of human eggs, sperm, embryos

    https://www.yahoo.com/news/scientists-soften-dna-editing-human-eggs-sperm-embryos-160345374.html

    • Replies: @Kn83
    1. There is no such thing as knowledge selection. The ability to even collect and use knowledge effectively in the first place requires higher intelligence (which is 80% genetic).

    2. People with lower iq but more education (and thus knowledge) still achieve much less on average than those with higher iq but less education. Knowledge alone is not power.

    3. Environment is not king, not even close. Thousands of scientific studies consistently demonstrate that "environment" on the whole has little to no impact on how people turn out. Environments don't make people; people make environments based on their own nature. The blank slate is dead, @art. The only influence environment has is how it indirectly influences which type of people reproduce more than others.

    4. There is not a single example of a low iq population (without a high iq elite)
    creating and maintaining an advanced civilization. The people of Dubai were only able to afford a city like that thanks to oil money. They don't have the average iq to create it on their own, so they hired and paid western architects to do it for them.
    , @utu

    "The Woodley Effect of diminished genetic excellence in the overall group – is the result of diminished culling of the species."
     
    No Woodley effect was proven to exist. No connection to genetics was ever made.
  80. @RaceRealist88
    I believe he's referring to the reversal of the FLynn Effect since the 90s. It's been occurring in first world countries, most notably France, for well over a decade. See Woodley of Menie and Dutton 2015 and Lynn and Dutton 2015 for more information.

    Here are some results for France and Germany, though data collection for the latter was mostly in Austria.

    http://www.unz.com/jthompson/is-france-sinking-even-further

    http://www.unz.com/jthompson/deutschland-uber-alles-dann-unter-allen

  81. @EH
    They are = standard deviations * 15, so basically = IQ points.

    "And, yes I must ask what are the units of g?"

    IQ points with a s.d. of 15 are the standard unit for measurements of g. IQ points are on an equal-interval scale (at least for small numbers, less than a standard deviation or two). You can add and subtract them, but not multiply or divide them by each other. Basically like working with Centigrade or Fahrenheit. They're also not equal-interval out past 30 points from the average, and become steadily less so the farther out you go, because the real distribution of g has fatter tails than the normal distribution. The biggest drawback of IQ is that it is not a measure of intelligence properly speaking, but of the rarity of intelligence relative to a given age, so an IQ 100 9-year-old is not as capable of answering questions as a 29-year-old with the same IQ. The size of the IQ-point unit is theoretically the same, that is, a 110 IQ 9 y.o. would be as much smarter than a 100 IQ 9 y.o. as in the same case with 29-year-olds. (In practice, 9-year-olds have a tighter distribution, but this sort of comparison between ages is seldom if ever used.)


    Given the right sort of test (~measured item difficulties can be graphed as a straight line) there is a transformation of the raw scores that gives you a ratio scale like Kelvin, with an absolute zero, which allows all arithmetic operations, thus letting you say: "A is 10% smarter than B". This is called a Rasch measure. The only arbitrary choice is the size of the unit. Riverside Publishing's Stanford-Binet CSS (Change-Sensitive Score) and Woodcock-Johnson "W" scales set the size of their Rasch unit by reference to the average 10-year-old, who is assigned a CSS of 500. Adults are around 510. The form of the CSS vs. age graph is logarithmic, rising quickly at first, then leveling off. IIRC, the s.d. for the FSIQ (full-scale, whole-test) CSS scale for adults is about 8.5 CSS points, or roughly 6 CSS at age 9; earlier ages have wider distributions. (Each subtest also has its own CSS with the same 500 @ age 10 anchor. Actually every question has a CSS score on the same scale which denotes its difficulty - when difficulty = ability, the chance of getting the item correct is 50%.)

    So the percentage variation in human intelligence is low (~10% difference within the middle 99.9% of the adult population), but expressed as age differences, +2 s.d. people are smarter at age 9 than the average adult, while -2 s.d. adults are only as smart as the average 6 year old.

    Most research on human intelligence does not require a ratio scale so IQ is good enough for those purposes. Rasch / ratio scales are more rigorously defined, though, and allow doing some things that IQ can't do, or at least makes more difficult and error-prone.

    Thank you very much for your comments which I enjoyed reading.

  82. @Art
    The HBD types leave out the most important aspect of evolution – the aspect that the environment plays in who propagates and who prospers. In biology, environment is king. When the environment is helpful to the most intelligent individuals, then there is a push-push effect for more intelligence.

    Genetic selection is determined by environment. The more helpful the fertilizer, the more prosperous the existing seed. That is the Flynn Effect. Nutrition and education have improved the overall level of intellectual progress.

    The Woodley Effect of diminished genetic excellence in the overall group is the result of diminished culling of the species. These days, most everyone lives. One in four gets the best genetics of both parents. Two in four get an average, and one in four gets the least. Today the ones with the least are not culled – thus bringing down the average. The push-push effect reverts to the mean.

    Peace --- Art

    p.s. The HBD types are totally wrong. Today humanity is driven by knowledge selection, not by biological selection. Those who use the best knowledge prosper – regardless of tribe or personal intellectual prowess. Today knowledge is king – not intellectual acuity.

    p.s. Look at the beautiful modern city of Dubai - did the people of Dubai achieve that with personal genetic intellectual prowess, or was it knowledge created by others over the ages?

    p.s. The future of genetic determination is now in the hands of knowledge. Knowledge will determine the future.

    p.s. U.S. experts soften on DNA editing of human eggs, sperm, embryos

    https://www.yahoo.com/news/scientists-soften-dna-editing-human-eggs-sperm-embryos-160345374.html

    1. There is no such thing as knowledge selection. The ability to even collect and use knowledge effectively in the first place requires higher intelligence (which is 80% genetic).

    2. People with lower iq but more education (and thus knowledge) still achieve much less on average than those with higher iq but less education. Knowledge alone is not power.

    3. Environment is not king, not even close. Thousands of scientific studies consistently demonstrate that “environment” on the whole has little to no impact on how people turn out. Environments don’t make people; people make environments based on their own nature. The blank-slate is dead, @art. The only influence environment has is how it indirectly determines which types of people reproduce more than others.

    4. There is not a single example of a low iq population (without a high iq elite) creating and maintaining an advanced civilization. The people of Dubai were only able to afford a city like that thanks to oil money. They don’t have the average iq to create it on their own, so they hired and paid western architects to do it for them.

    • Replies: @Santoculto
    How have urbanization rates and the Flynn effect interacted with each other?

    The Flynn effect was greater in Germany than in Great Britain from the second half of the 19th century, and (it seems) the increase in urbanization was also greater in Germany (a bigger rural exodus), because Germany started its industrialization and urbanization roughly 100 years after GB. We know people from the countryside are on average different from urban people. Is it only self-selection, or can environment also have an impact on personality (and cognition) if people are nurtured in a certain environment from the time they are babies?

    The difference between being born blind and becoming blind?

    , @Art
    There is no such thing as knowledge selection.

    Old people do things the old way - younger people select newer knowledge. Every day all the time, people select what knowledge to use.

    People with lower iq but more education (and thus knowledge) still achieve much less on average than those with higher iq but less education.

    Very smart Chinese people with an abacus do brilliant things – not so smart Westerners with a computer do far more.

    Environment is not king, not even close.

    People in northern climes are more cooperative than people in hot climes. For example: Northern Italy and Southern Italy. A cold environment requires cooperation.

    The people of Dubai were only able to afford a city like that thanks to oil money. They don’t have the average iq to create it on their own, so they hired and paid western architects to do it for them.

    The truth is that high-IQ people in Dubai selected high-IQ people from other countries who had KNOWLEDGE. They did not select high-IQ people with no knowledge.

    It is obvious: collective evolving knowledge, from the caveman on, is driving humankind - PERIOD.

    Peace --- Art
  83. @dc.sunsets
    That you cite a piece of Hollywood fiction as an example for a real world argument speaks volumes about your position.

    I find two kinds of people who argue about the validity of IQ measures as an indicator of life outcomes: 1) people who aren't very bright, envy those whose success rests on higher intelligence, resent them for it and seek to attribute all success to luck/nepotism/privilege of some unfair sort, and 2) people who are very bright and lack much contact with the real world where most people are dull as a 2x4 and some of them are too stupid to survive four weeks without constant help (and the only activity at which they excel is sexual reproduction.)

    My wife, a 4th-grade teacher, can easily assess the underlying intelligence of her students, entirely independent of whether a kid is on ADHD meds or is simply lazy and lacking in self-discipline. She doesn't need an IQ test to do so. Life is an IQ test.

    So anyone who disagrees with your position on IQ is either 1) an angry, jealous idiot, or 2) a social recluse. Your response speaks volumes about your arrogance.

  84. @Kn83
    1. There is no such thing as knowledge selection. The ability to even collect and use knowledge effectively in the first place requires higher intelligence (which is 80% genetic).

    2. People with lower iq but more education (and thus knowledge) still achieve much less on average than those with higher iq but less education. Knowledge alone is not power.

    3. Environment is not king, not even close. Thousands of scientific studies consistently demonstrate that "environment" on the whole has little to no impact on how people turn out. Environments don't make people; people make environments based on their own nature. The blank-slate is dead, @art. The only influence environment has is how it indirectly determines which types of people reproduce more than others.

    4. There is not a single example of a low iq population (without a high iq elite) creating and maintaining an advanced civilization. The people of Dubai were only able to afford a city like that thanks to oil money. They don't have the average iq to create it on their own, so they hired and paid western architects to do it for them.

    How have urbanization rates and the Flynn effect interacted with each other?

    The Flynn effect was greater in Germany than in Great Britain from the second half of the 19th century, and (it seems) the increase in urbanization was also greater in Germany (a bigger rural exodus), because Germany started its industrialization and urbanization roughly 100 years after GB. We know people from the countryside are on average different from urban people. Is it only self-selection, or can environment also have an impact on personality (and cognition) if people are nurtured in a certain environment from the time they are babies?

    The difference between being born blind and becoming blind?

  85. @Matthew Sarraf
    I have read over all of your comments on this post. It is evident that correcting your misunderstandings would require the administration of introductory courses in statistics and psychometrics (at minimum). This is not the place to receive such education.

    I knew I was onto something (see my comment #68). Apparently neither you nor Kong et al.

    http://www.pnas.org/content/114/5/E727.abstract

    developed the habit of using dimensional analysis to check the plausibility of equations. They used to teach it in high school physics or chemistry class.

    The equation 0.038 × (30/3.74) = 0.30 you cited from Kong is incorrect. Kong made a mistake which neither you nor Woodley caught. The mean cannot be scaled with variance; the mean can be scaled with the square root of variance. Got it? Therefore the correct equation is:

    0.038 × (30/3.74)^(1/2) = 0.11

    and subsequently your recalculations for variance 80% and 87% yield

    0.038*(80/3.74)^(1/2)=0.17
    0.038*(87/3.74)^(1/2)=0.18

    I am surprised that Kong made this silly mistake (perhaps too many coauthors), particularly in light of the fact that just one paragraph earlier (page 4) he wrote a correct equation of the same type and even justified it:

    Thus, if POLYfull is assumed to account for 30% of the variance of EDU, then its estimated rate of change, by extrapolation, is −0.010 × (30/3.74)^(1/2)= −0.028 SUs per decade.

    Basically, when you want to partition a variable (in this case a mean) according to an existing variance partition, you must use standard deviations, not variances, in your proportion: X1:X2 = SD1:SD2, and not X1:X2 = V1:V2.

    Listen, Matthew, it’s never too late to learn. And I agree that this is not the best place to receive an education. But I could not resist. Your job, your livelihood, could depend on it. Don’t forget to share your newly acquired knowledge with Kong, his N coauthors, and with Woodley.

    • Replies: @Michael A. Woodley of Menie
    There is no error in Kong et al. (2017). Let me explain how they arrived at the 0.3 IQ points per decade decline estimate.

    The Breeder’s equation (developed by R.A. Fisher in the 1920s) is written as follows:

    [1] R = S*h^2

    Where R = the observed generational change in the trait, S = the strength of selection operating on that trait and h^2 = the trait additive heritability.
    Therefore for any given strength of selection:

    [2] R ∝ h^2

    How do we know that this proportionality holds true? Because a century’s worth of quantitative evolutionary research involving plants and animals has shown it to be thus (see: Crow, 2010).

    Kong et al. are simply employing a variant of this same equation. In their case they have observed the decadal decline in the polygenic score directly. They also know that their polygenic score accounts for 3.74% of the variance in their target phenotype (IQ), which is functionally equivalent to saying that their phenotype has an additive heritability of 0.0374. On this basis, the phenotype should decline by 0.038 IQ points per decade. However, we all know that the additive heritability of IQ is not 0.0374; it is much higher. How much higher is debatable, but Kong et al. chose an extremely conservative value of 30%, or 0.3. So, given that the observed decadal loss in IQ is clearly a substantial underestimate of the true loss, to get to the expected loss, the ratio of the two heritabilities is computed (0.3/0.0374), which yields an approximately eight-fold upwards correction factor. The product of the observed value of R (0.038 IQ points per decade) and this correction factor therefore becomes the expected value of R for IQ that would result if h^2 were 0.3 rather than 0.0374. This result is valid because of [2]. If you don’t think that [2] is valid, then the burden of proof is on you to explain away a century of successful quantitative evolutionary research that has proceeded on the basis that this proportionality holds. Rather you than me.

    Of course, we also know that the true additive heritability of General Mental Ability is not 0.3, but closer to 0.8 (Bouchard Jr, 2004). Therefore Mat Sarraf’s statement that, when adjusted on this basis, the IQ loss reported in Kong et al. increases to nearly a whole point per decade is also valid.

    I actually spoke yesterday with Augustine Kong about this equation, among other things. The aforementioned is correct and there is no error in the work.

    Refs.

    Bouchard Jr, T.J. (2004). Genetic influence on human psychological traits – a survey. Current Directions in Psychological Science, 13, 148-151.

    Crow, J.F. (2010). On epistasis: Why it is unimportant in polygenic directional selection. Philosophical Transactions of the Royal Society B: Biological Sciences, 365, 1241-1244.
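The arithmetic of the correction described above can be sketched in a few lines. This is a minimal illustration, not code from Kong et al.; the values 0.038, 0.0374, 0.3 and 0.8 are the ones quoted in this thread:

```python
# Sketch of the extrapolation described above: scale the observed
# polygenic-score decline by the ratio of assumed to measured
# heritability, per the proportionality R ∝ h^2 from the
# Breeder's equation.

def expected_decline(observed, h2_measured, h2_assumed):
    """Upward-correct an observed decadal IQ decline."""
    return observed * (h2_assumed / h2_measured)

# Kong et al.'s conservative h^2 = 0.3:
print(round(expected_decline(0.038, 0.0374, 0.30), 2))  # 0.3 IQ points/decade

# Bouchard's h^2 = 0.8, per Sarraf's adjustment:
print(round(expected_decline(0.038, 0.0374, 0.80), 2))  # 0.81 IQ points/decade
```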

  86. @EH
    They are = standard deviations * 15, so basically = IQ points.

    "And, yes I must ask what are the units of g?"

    IQ points with a s.d. of 15 are the standard unit for measurements of g. IQ points are on an equal-interval scale (at least for small numbers, less than a standard deviation or two). You can add and subtract them, but not multiply or divide them by each other. Basically like working with Centigrade or Fahrenheit. They're also not equal-interval out past 30 points from the average, and become steadily less so the farther out you go, because the real distribution of g has fatter tails than the normal distribution. The biggest drawback of IQ is that it is not a measure of intelligence properly speaking, but of the rarity of intelligence relative to a given age, so an IQ-100 9-year-old is not as capable of answering questions as a 29-year-old with the same IQ. The size of the IQ-point unit is theoretically the same, that is, a 110-IQ 9 y.o. would be as much smarter than a 100-IQ 9 y.o. as in the same case with 29-year-olds. (In practice, 9-year-olds have a tighter distribution, but this sort of comparison between ages is seldom if ever used.)


    Given the right sort of test (one whose measured item difficulties can be graphed as a straight line), there is a transformation of the raw scores that gives you a ratio scale like Kelvin, with an absolute zero, which allows all arithmetic operations, thus letting you say: "A is 10% smarter than B". This is called a Rasch measure. The only arbitrary choice is the size of the unit. Riverside Publishing's Stanford-Binet CSS (change-sensitive score) and Woodcock-Johnson "W" scales set the size of their Rasch unit by reference to the average 10-year-old, who is assigned a CSS of 500. Adults are around 510. The form of the CSS vs. age graph is logarithmic, rising quickly at first, then leveling off. IIRC, the s.d. for the FSIQ (full-scale, whole-test) CSS scale for adults is about 8.5 CSS points, or roughly 6 CSS at age 9; earlier ages have wider distributions. (Each subtest also has its own CSS with the same 500-at-age-10 anchor. Actually, every question has a CSS score on the same scale, which denotes its difficulty: when difficulty = ability, the chance of getting the item correct is 50%.)
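The "difficulty = ability gives a 50% chance" property in the parenthesis above is the defining feature of the Rasch model. A minimal sketch, with the caveats that the logit unit size is arbitrary and the 500-point anchor is used here only as an illustrative number:

```python
import math

def p_correct(ability, difficulty):
    """Rasch model: probability of a correct answer is a logistic
    function of the (ability - difficulty) gap."""
    return 1.0 / (1.0 + math.exp(difficulty - ability))

print(p_correct(500, 500))        # 0.5: difficulty equals ability
print(p_correct(510, 500) > 0.5)  # True: ability above difficulty
print(p_correct(490, 500) < 0.5)  # True: ability below difficulty
```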

    So the percentage variation in human intelligence is low (~10% difference within the middle 99.9% of the adult population), but expressed as age differences, +2 s.d. people are smarter at age 9 than the average adult, while -2 s.d. adults are only as smart as the average 6 year old.

    Most research on human intelligence does not require a ratio scale, so IQ is good enough for those purposes. Rasch/ratio scales are more rigorously defined, though, and allow doing some things that IQ can't do, or can do only with more difficulty and error.

    “They are = standard deviations * 15, so basically = IQ points.”

    Yes, but it is a rather quick-and-dirty method. Sometimes it cannot be justified, particularly for g, because g expressed in IQ points by this method will have the same SD as IQ. This would mean that the variance of g also accounts for the variance in IQ that is attributed to environment. I do not think that the proponents of the g construct would like that.

    If you do a linear regression between IQ and some other test X, you get the equation IQ = A*X (+ B). If the correlation R between IQ and X is large, then you can use the coefficient A to express X in IQ points. The coefficient A is not equal to the ratio of the standard deviations of IQ and X, respectively. How do you justify the conversion if the correlation R is small, like R = 0.3 for IQ and RT?

    From factor analysis you get the equation IQ = A*g + B*gg + C*ggg…, where gg and ggg are the 2nd and 3rd, less meaningful, factors. Then the coefficient A can be used to express g in IQ points. The problem is that for different batteries of tests that include the IQ test, the coefficients A are different, because g is battery-dependent.

    I understand that in practice the standard deviation might be scaled so that it is 15. This, however, as I said, is quick and dirty and requires lots of hand-waving to justify.
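The regression point above can be checked numerically: the slope A in IQ = A*X + B equals r * (SD_IQ / SD_X), so equating A with the bare ratio of standard deviations is only justified when r is near 1. A simulated sketch; the data and the r = 0.3 value are illustrative, echoing the IQ/RT example:

```python
import math
import random
import statistics

random.seed(0)
n = 100_000
r = 0.3  # assumed IQ-X correlation, as in the RT example above

# Simulate a test X (mean 0, SD 1) and an IQ score correlated with it at r.
x = [random.gauss(0, 1) for _ in range(n)]
iq = [100 + 15 * (r * xi + math.sqrt(1 - r * r) * random.gauss(0, 1))
      for xi in x]

mx, miq = statistics.fmean(x), statistics.fmean(iq)
cov = sum((xi - mx) * (qi - miq) for xi, qi in zip(x, iq)) / (n - 1)
slope = cov / statistics.variance(x)                # regression coefficient A
sd_ratio = statistics.stdev(iq) / statistics.stdev(x)

print(round(slope, 1))     # ~4.5, i.e. r * SD ratio, not the SD ratio
print(round(sd_ratio, 1))  # ~15.0
```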

  87. @Bill

    g, which is a mathematical construct, is not measured directly, and there is no agreed method for its definition, as it depends on the battery of tests used in the factor analysis from which g emerges . . . All those statements containing “g” are empty. It is not a science. It is more like the occult, where you can create circular reasoning patterns until you end up confusing the less sharp minds.
     
    I think it is you who have fooled yourself. Invisible entities are ubiquitous in science. Are electrons real? Are quarks real? What the hell is a magnetic field, what does it smell like, and is its sister pretty? To be clear, I believe in cloud chambers and circuits: it's electrons I question.

    Positivism (and its variants) deals with this problem by making the test of good science not "are the hypothesized entities real?" but "if we assume the hypothesized entities, does that help us predict and control the world?" IQ does this. So, it gets to be real. Like electrons.

    The problem with Astrology is not that it posits invisible entities. The problem with Astrology is that it sucks at prediction and control.

    Or, to come at it an entirely different way, do you believe in carbon dating? Suppose, like Young Earth Creationists, you really didn't want to believe in carbon dating. Would carbon dating survive the sort of shielding skepticism you are happily deploying against g? Not a chance. After all, scientists were not there 10,000 years ago measuring c12/c14 proportions or, if you want to get really crazy, measuring the decay rate of C14. Plus, there are all sorts of anomalies with carbon dating when it is actually used. Shielding skepticism only gets deployed against things we don't want to believe.

    I am not sure how to respond to your comment. Perhaps I should say I do not like arguments that hinge on analogy. g is not like the electron, and I do not see the connection to astrology or carbon dating.

    Anyway, I do not think that the concept of g and its mathematical construct (which is non-unique, by the way) has helped anybody prove or explain anything. IQ research is a purely empirical enterprise, sometimes also called science. g was postulated to give a firmer theoretical foundation to this empirical enterprise. Perhaps a little bit of physics envy. In fact g serves only this purpose, because it is not really used for anything else. g scores are rarely calculated for individuals because g is not unique: different batteries of tests yield different g’s.

    • Replies: @Wizard of Oz
    You remind me of an Austrian-school economist of my acquaintance who wrote a treatise on Say's Law and oft repeated global denunciations of JM Keynes because he allegedly misrepresented Say's Law, and who, having this "one big thing" (the hedgehog v. the fox), can't climb out of the well-polished rut he is in. As you seem to understand the predictive power and practical utility of IQ tests, and I see g as just another manifestation of the same measuring process and outcome, I can't understand your problem. However.... let me take up your objection to the way g is produced - or extracted - by calculation. I shall try an analogy.

    Instead of intelligence we are seeking to rate ball-game ability. So, having a large grant, as befits a subject that politicians think important, we devise a ginormous battery of tests: from soccer balls bounced from foot to knee to head to foot and so on, to volleying tennis balls hit at various speeds at different distances from the centre of the racquet with varying degrees of accuracy, to serving balls at tennis to hit the corners, and so on and on, from croquet to cricket to billiards. And you calculate g.

    You will certainly find that the test subject who scores 2 sds above average g will nearly always beat average-g ball hitters at every ball game. Except.... here you bring in John B. Carroll's three-level approach, as described in relation to IQ in the New Scientist article linked by someone a short time ago.

    A player whose ball-play g is +3 sds probably gets on the professional circuit or equivalent in whichever ball sport he takes up, but then he is advised to choose tennis rather than football because he is 6ft 6in tall and can serve at 250kph. Then at the third level he has mastered the heavily sliced and the kicking serve that his coach has worked on, as well as the double-handed backhand with its vicious spin. Do you have a problem with accepting that a pretty useful g factor will be extracted in the 12-year-old's first week to determine whether he should be afforded specialist ball-sport training or shuffled off to wrestling or swimming?

    I accept that different batteries of tests are going to produce slightly different g scores, and it may well be that you drop billiards, snooker, rugby football and even golf when you are choosing potential cricketers, tennis, rackets and squash players. Some bright sports psychologist will of course devise a simple piece of apparatus, the equivalent of the Raven's Progressive Matrices test, that finds a future Wimbledon champion amongst 12-year-old Masai boys herding cattle.

  88. @Art
    The HBD types leave out the most important aspect of evolution – the role that the environment plays in who propagates and who prospers. In biology, environment is king. When the environment is helpful to the most intelligent individuals, then there is a push-push effect for more intelligence.

    Genetic selection is determined by environment. The more helpful the fertilizer, the more prosperous the existing seed is. That is the Flynn Effect. Nutrition and education have improved the overall level of intellectual progress.

    The Woodley Effect of diminished genetic excellence in the overall group – is the result of diminished culling of the species. These days, almost everyone lives. One in four gets the best genetics of both parents, two in four get an average, and one in four get the least. Today the ones with the least are not culled – thus bringing down the average. The push-push effect reverts to a mean.

    Peace --- Art

    p.s. The HBD types are totally wrong. Today humanity is driven by knowledge selection, not by biological selection. Those who use the best knowledge prosper – regardless of tribe or personal intellectual prowess. Today knowledge is king – not intellectual acuity.

    p.s. Look at the beautiful modern city of Dubai - did the people of Dubai achieve that with personal genetic intellectual prowess, or was it knowledge created by others over the ages?

    p.s. The future of genetic determination is now in the hands of knowledge. Knowledge will determine the future.

    p.s. U.S. experts soften on DNA editing of human eggs, sperm, embryos

    https://www.yahoo.com/news/scientists-soften-dna-editing-human-eggs-sperm-embryos-160345374.html

    “The Woodley Effect of diminished genetic excellence in the overall group – is the result of diminished culling of the species.”

    No Woodley effect has been proven to exist. No connection to genetics has ever been made.

    • Replies: @Santoculto
    The fertility differential across social classes and education levels, both proxies for higher (mean) intelligence, is a good start to proving it.
  89. @Kn83
    1. There is no such thing as knowledge selection. The ability to even collect and use knowledge effectively in the first place requires higher intelligence (which is 80% genetic).

    2. People with lower iq but more education (and thus knowledge) still achieve much less on average than those with higher iq but less education. Knowledge alone is not power.

    3. Environment is not king, not even close. Thousands of scientific studies consistently demonstrate that "environment" on the whole has little to no impact on how people turn out. Environments don't make people; people make environments based on their own nature. The blank-slate is dead, @art. The only influence environment has is how it indirectly determines which types of people reproduce more than others.

    4. There is not a single example of a low iq population (without a high iq elite) creating and maintaining an advanced civilization. The people of Dubai were only able to afford a city like that thanks to oil money. They don't have the average iq to create it on their own, so they hired and paid western architects to do it for them.

    There is no such thing as knowledge selection.

    Old people do things the old way – younger people select newer knowledge. Every day all the time, people select what knowledge to use.

    People with lower iq but more education (and thus knowledge) still achieve much less on average than those with higher iq but less education.

    Very smart Chinese people with an abacus do brilliant things – not so smart Westerners with a computer do far more.

    Environment is not king, not even close.

    People in northern climes are more cooperative than people in hot climes. For example: Northern Italy and Southern Italy. A cold environment requires cooperation.

    The people of Dubai were only able to afford a city like that thanks to oil money. They don’t have the average iq to create it on their own, so they hired and paid western architects to do it for them.

    The truth is that high-IQ people in Dubai selected high-IQ people from other countries who had KNOWLEDGE. They did not select high-IQ people with no knowledge.

    It is obvious: collective evolving knowledge, from the caveman on, is driving humankind – PERIOD.

    Peace — Art

    • Replies: @Santoculto
    People in northern climes are more cooperative than people in hot climes. For example: Northern Italy and Southern Italy. A cold environment requires cooperation.

    Canadian blacks are more cooperative than Caribbean blacks*

    Barbadian blacks are less cooperative than Detroitian blacks*

    Northern Italy is not so much colder than Southern Italy as to cause these differences via direct/causal pathways, and the summer is very hot in most of the Italian territory.

    And within those populations we have exceptions [or not-so-exceptional cases] of cooperative and uncooperative people. Central Asian minorities in the Russian Federation seem less cooperative than Southern Italians.

    Yes, very harsh climates tend to deselect people who are not cooperative, but only if the climate is very harsh and the selective process is very intense [and isolated from other groups] will it produce these [genetic] changes.
  90. @Bill
    You'd do much better forming a secret society. For reasons which are obvious.

    Actually, once you realize that, you might wonder whether somebody else might have thought of that already.

    Bill, I was thinking of open, transparent debate. Imagine a headline of 2018: “Study Says America Fails, Ignores Its Best and Brightest”. Subhead #1: “Costs Said to Be in Trillions”. Subhead #2: “Proposes Education, Immigration, Other Reforms”.

    You’d need money, experts with their hearts in the right place, time to formulate well thought-out ideas and responses to the critics, T & E cash for a media blitz, and a bit of moral conviction. But, yes, I think it’s possible to openly propose “IQ-centric” policies without being marginalized.

  91. @utu

    "The Woodley Effect of diminished genetic excellence in the overall group – is the result of diminished culling of the species."
     
    No Woodley effect has been proven to exist. No connection to genetics has ever been made.

    The fertility differential across social classes and education levels, both proxies for higher (mean) intelligence, is a good start to proving it.

    Genetics establishes our limits; environments test them.

    What is behind the debate “genes versus environment”?

    Those who say that genes predominantly determine behavior tend to be more narcissistic and proud of their characteristics.

    Those who say that it is the environment that predominantly determines behavior tend to use this argument to justify themselves, that is, to blame it for their own behaviors.

  93. @Art
    There is no such thing as knowledge selection.

    Old people do things the old way - younger people select newer knowledge. Every day all the time, people select what knowledge to use.

    People with lower iq but more education (and thus knowledge) still achieve much less on average than those with higher iq but less education.

    Very smart Chinese people with an abacus do brilliant things – not so smart Westerners with a computer do far more.

    Environment is not king, not even close.

    People in northern climes are more cooperative than people in hot climes. For example: Northern Italy and Southern Italy. A cold environment requires cooperation.

    The people of Dubai were only able to afford a city like that thanks to oil money. They don’t have the average iq to create it on their own, so they hired and paid western architects to do it for them.

    The truth is that high-IQ people in Dubai selected high-IQ people from other countries who had KNOWLEDGE. They did not select high-IQ people with no knowledge.

    It is obvious: collective evolving knowledge, from the caveman on, is driving humankind - PERIOD.

    Peace --- Art

    People in northern climes are more cooperative than people in hot climes. For example: Northern Italy and Southern Italy. A cold environment requires cooperation.

    Canadian blacks are more cooperative than Caribbean blacks*

    Barbadian blacks are less cooperative than Detroitian blacks*

    Northern Italy is not so much colder than Southern Italy as to cause these differences via direct/causal pathways, and the summer is very hot in most of the Italian territory.

    And within those populations we have exceptions [or not-so-exceptional cases] of cooperative and uncooperative people. Central Asian minorities in the Russian Federation seem less cooperative than Southern Italians.

    Yes, very harsh climates tend to deselect people who are not cooperative, but only if the climate is very harsh and the selective process is very intense [and isolated from other groups] will it produce these [genetic] changes.

    • Replies: @Art

    Northern Italy is not so cold than Southern Italy to cause this differences, via direct/causal ways. and the summer is very hot in most part of italian territory.
     
    Please - everyone with an ounce of honesty acknowledges that cold Milan has more productive industry than warm Palermo.

    People in cold climates must be cooperative and think on a year’s time frame – end of story.

    Genetics rises to meet the challenge of the environment.

    Peace --- Art
  94. @utu
    I am not sure how to respond to your comment. Perhaps I should say I do not like arguments that hinge on analogy. g is not like the electron, and I do not see the connection to astrology or carbon dating.

    Anyway, I do not think that the concept of g and its mathematical construct (which is non-unique, by the way) has helped anybody prove or explain anything. IQ research is a purely empirical enterprise, sometimes also called science. g was postulated to give a firmer theoretical foundation to this empirical enterprise. Perhaps a little bit of physics envy. In fact g serves only this purpose, because it is not really used for anything else. g scores are rarely calculated for individuals because g is not unique: different batteries of tests yield different g's.

    You remind me of an Austrian-school economist of my acquaintance who wrote a treatise on Say’s Law and oft repeated global denunciations of JM Keynes because he allegedly misrepresented Say’s Law, and who, having this “one big thing” (the hedgehog v. the fox), can’t climb out of the well-polished rut he is in. As you seem to understand the predictive power and practical utility of IQ tests, and I see g as just another manifestation of the same measuring process and outcome, I can’t understand your problem. However…. let me take up your objection to the way g is produced – or extracted – by calculation. I shall try an analogy.

    Instead of intelligence we are seeking to rate ball-game ability. So, having a large grant, as befits a subject that politicians think important, we devise a ginormous battery of tests: from soccer balls bounced from foot to knee to head to foot and so on, to volleying tennis balls hit at various speeds at different distances from the centre of the racquet with varying degrees of accuracy, to serving balls at tennis to hit the corners, and so on and on, from croquet to cricket to billiards. And you calculate g.

    You will certainly find that the test subject who scores 2 sds above average g will nearly always beat average-g ball hitters at every ball game. Except…. here you bring in John B. Carroll’s three-level approach, as described in relation to IQ in the New Scientist article linked by someone a short time ago.

    A player whose ball-play g is +3 sds probably gets on the professional circuit, or equivalent, in whichever ball sport he takes up, but he is then advised to choose tennis rather than football because he is 6ft 6in tall and can serve at 250 kph. Then, at the third level, he has mastered the heavily sliced and the kicking serve that his coach has worked on, as well as the double-handed backhand with its vicious spin. Do you have a problem with accepting that a pretty useful g factor will be extracted in the 12-year-old’s first week to determine whether he should be afforded specialist ball-sport training or shuffled off to wrestling or swimming?

    I accept that different batteries of tests are going to produce slightly different g scores, and it may well be that you drop billiards, snooker, rugby football and even golf when you are choosing potential cricketers, tennis, rackets and squash players. Some bright sports psychologist will of course devise a simple piece of apparatus, the ball-game equivalent of the Raven’s Progressive Matrices test, that finds a future Wimbledon champion amongst 12-year-old Masai boys herding cattle.

    • Replies: @dc.sunsets
    Just for clarification:

    You thought Say's Law was invalid, you thought your friend was wrong about Keynes' (mis)interpretation of it, or you thought Keynes' misinterpretation of it was immaterial?

    I'm always curious about this.

    FTR, my own view is that modern monetary theory rests entirely on ignoring Say's Law, and just like the USSR's central planners ignored for almost 70 years Mises' irrefutable critique of economic calculation in the absence of factor prices, it doesn't matter how long it takes for reality to prevail, prevail it will. You don't have to be an Austrian economist to question the logic of how people can enter a marketplace without first having produced something to trade with.
  95. @FKA Max

    though I do not know the cause of it.
     
    Might spending more time indoors, especially as children, be the cause and explanation?

    The Sun Is the Best Optometrist

    http://www.nytimes.com/2011/06/21/opinion/21wang.html

    Researchers suspect that bright outdoor light helps children’s developing eyes maintain the correct distance between the lens and the retina — which keeps vision in focus. Dim indoor lighting doesn’t seem to provide the same kind of feedback. As a result, when children spend too many hours inside, their eyes fail to grow correctly and the distance between the lens and retina becomes too long, causing far-away objects to look blurry.

    One study published in 2008 in the Archives of Ophthalmology compared 6- and 7-year-old children of Chinese ethnicity living in Sydney, Australia, with those living in Singapore. The rate of nearsightedness in Singapore (29 percent) was nearly nine times higher than in Sydney. The rates of nearsightedness among the parents of the two groups of children were similar, but the children in Sydney spent on average nearly 14 hours per week outside, compared with just three hours per week in Singapore.
     

    This could surely be a plausible explanation for the declines in reaction-time performance, colour acuity, etc.

    Similarly, a 2007 study by scholars at Ohio State University found that, among American children with two myopic parents, those who spent at least two hours per day outdoors were four times less likely to be nearsighted than those who spent less than one hour per day outside.
     


    Parents concerned about their children’s spending time playing instead of studying may be relieved to know that the common belief that “near work” — reading or computer use — leads to nearsightedness is incorrect. Among children who spend the same amount of time outside, the amount of near work has no correlation with nearsightedness. Hours spent indoors looking at a screen or book simply means less time spent outside, which is what really matters.

    This leads us to a recommendation that may satisfy tiger and soccer moms alike: if your child is going to stick his nose in a book this summer, get him to do it outdoors.
     

    Are children of myopic parents randomly selected to play inside or outside? No, of course not. This is mostly a function of these children’s dispositions, including genetic dispositions. So, you have a confound.

    This is the problem with all of these supposedly environmental findings. They might just reflect heritable choices of the environments people seek out or end up in.

    https://www.cambridge.org/core/journals/psychological-medicine/article/div-classtitlegenetic-influences-on-measures-of-the-environment-a-systematic-reviewdiv/76ECA7D8F0F92906DBB2AAFBED720F0C

    • Replies: @FKA Max
    The myopia boom

    Short-sightedness is reaching epidemic proportions. Some scientists think they have found a reason why.

    http://www.nature.com/news/the-myopia-boom-1.17120


    For many years, the scientific consensus held that myopia was largely down to genes. Studies in the 1960s showed that the condition was more common among genetically identical twins than non-identical ones, suggesting that susceptibility is strongly influenced by DNA1. Gene-finding efforts have now linked more than 100 regions of the genome to short-sightedness.

    But it was obvious that genes could not be the whole story. One of the clearest signs came from a 1969 study of Inuit people on the northern tip of Alaska whose lifestyle was changing2. Of adults who had grown up in isolated communities, only 2 of 131 had myopic eyes. But more than half of their children and grandchildren had the condition. Genetic changes happen too slowly to explain this rapid change — or the soaring rates in myopia that have since been documented all over the world (see 'The march of myopia'). “There must be an environmental effect that has caused the generational difference,” says Seang Mei Saw, who studies the epidemiology and genetics of myopia at the National University of Singapore.

    [...]

    But what scientists really needed was a mechanism: something to explain how bright light could prevent myopia. The leading hypothesis is that light stimulates the release of dopamine in the retina, and this neurotransmitter in turn blocks the elongation of the eye during development. The best evidence for the 'light–dopamine' hypothesis comes — again — from chicks. In 2010, Ashby and Schaeffel showed that injecting a dopamine-inhibiting drug called spiperone into chicks' eyes could abolish the protective effect of bright light11.
     
    , @res
    What you describe is definitely a major issue for within cohort studies. But I think the differences between cohorts over time make clear that there is some important environmental factor(s?) changing. For myopia the only good candidate I have seen is relative indoor/outdoor time. Less apparent is whether this is due to short sightlines, natural light, or?

    Obesity is the obvious analogy IMHO. There the obvious factors would be diet and lack of exercise. Which makes me wonder if either of those matters for myopia as well.

    Has anyone tried doing generational within family comparisons of things like obesity and myopia as a possible way to focus on the environmental factors? It seems like adding generational questions (say a generation or two on either side) to a more typical study would be a relatively low cost/effort way to accomplish this.

    Would it be possible to use data from the Framingham study to investigate this? That combines family information with a broad population base.

    , @RaceRealist88
    Emil, What are your thoughts on the myopia/IQ correlation?
    , @Anonymous
    Finding the actual cause of a particular phenomenon - the cause in this case being a chronic environmental stimulus - is not a "problem." People can't "choose" to induce a particular phenomenon - in this case a certain phenotype in their bodies - if they have no idea which environmental stimuli produce it.
  96. @utu
    But the conversion factors, and how they were derived, are not explained in the review by Sarraf. Perhaps they can be found in Woodley's book.

    If you apply two tests X1 and X2 to the same population you get a correlation R and the slope of the linear-regression line S=dX1/dX2. So if X1=IQ in IQ points and X2=RT in milliseconds, one can mechanically convert changes in RT scores (∆RT) into changes in IQ scores (∆IQ) via ∆IQ=S∆RT. However, when the correlation R is small this is pretty meaningless, and if used it amounts to

    mathematical charlatanry
     
    .

    Sarraf wrote "simple reaction times, a decent proxy for g." How can he write it with a straight face when reaction time has very low correlation (it is not even 0.3) with IQ?
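    This mechanical conversion can be sketched with made-up numbers (an illustration of the arithmetic only, not Woodley's actual data): simulate IQ and RT with a weak correlation, estimate the regression slope, and convert a hypothetical secular RT slowing into IQ points.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000
iq = rng.normal(100, 15, n)  # IQ scale: mean 100, sd 15
# Toy RT model: mean 300 ms, sd 20 ms, correlation with IQ about -0.3
rt = 300 - 0.4 * (iq - 100) + rng.normal(0, 20 * np.sqrt(1 - 0.3**2), n)

r = np.corrcoef(iq, rt)[0, 1]   # roughly -0.3
S = r * iq.std() / rt.std()     # regression slope dIQ/dRT (points per ms)

delta_rt = 10                   # hypothetical secular slowing of +10 ms
delta_iq = S * delta_rt         # mechanical conversion: dIQ = S * dRT
print(round(r, 2), round(S, 3), round(delta_iq, 2))
```

    Because the slope S is proportional to R, the "converted" IQ change is driven almost entirely by the assumed correlation; when |R| is as low as 0.3, small errors in R swing the implied decline substantially, which is the weakness being pointed at here.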

    The bottom line is that it all comes down to IQ tests, right? Everything is converted to the scale implicitly defined by IQ tests. This includes g. It is interesting that Sarraf (I am not sure about Woodley, as I did not read his book) managed to reach the Mount Everest of g reification: g is a mathematical construct that is not measured directly, and there is no agreed method for defining it, since it depends on the battery of tests used in the factor analysis from which it emerges. How does Sarraf use g in his text? Let me count the ways:

    "a decent proxy for g", "the loss of g in the West", "reductions in g", "with the diminution of g even", "the integrity of genetic factors that underlie g", "suggesting that diminishing g is pervasive", 'assert that “dysgenesis” on g may also explain “anti-Flynn effects,”'

    All those statements containing "g" are empty. It is not science. It is more like the occult, where you can create circular reasoning patterns until you end up confusing the less sharp minds. There are the deceivers and there are the believers. But the best, most effective deceivers are the believers. Why do seemingly intelligent people (I listened to Woodley on YT and he clearly is very intelligent) let themselves be fooled? Is Woodley a charlatan or a fool?

    The g loading of a test depends on the battery of other tests along with which it is extracted, but not very much so in most cases. See:

    http://www.sciencedirect.com/science/article/pii/S0160289607000931

    (there are a number of other earlier studies with other methods that found similar results)

    g itself is usually just measured in standardized units. You of course know this. Sometimes it is useful to distinguish explicitly between the trait and the factor, since they can differ. Let’s call the factor g and the trait GCA, general cognitive ability.

    Optimally, one would switch to using a ratio scale for GCA, but there seems to be little progress towards this goal, at least as an explicitly stated aim. Presumably, one could build a ratio-scale measurement using appropriate brain measurements. This would not help with estimating historical GCA declines, since we lack detailed brain measurements from back then. One will have to rely on crude measures (reaction time, visual acuity, etc.) or genetic data. The latter is more plausible, but will not capture any environmental changes in GCA.

    Note that a simple measure may be a good proxy for the mean level of GCA when using aggregate data, while not being so at the individual level. This is what Woodley et al. argue for with reaction time etc. To demonstrate this is rather trivial, so I leave that task to the reader. But it remains an assumption that is hard to test.
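    As a sketch of that demonstration (all numbers invented): give each birth cohort a slightly lower mean ability, make reaction time only weakly coupled to ability within a cohort, and compare the within-cohort correlation with the correlation of cohort means.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 5000                                  # people per cohort
within_r, mean_gca, mean_rt = [], [], []

for d in range(10):                       # 10 hypothetical birth cohorts
    gca = rng.normal(100 - 1.0 * d, 15, n)       # mean drops 1 point per cohort
    # RT is only weakly related to GCA at the individual level (r ~ -0.25):
    rt = 300 - 0.33 * (gca - 100) + rng.normal(0, 19, n)
    within_r.append(np.corrcoef(gca, rt)[0, 1])
    mean_gca.append(gca.mean())
    mean_rt.append(rt.mean())

r_individual = float(np.mean(within_r))              # weak proxy per person
r_aggregate = np.corrcoef(mean_gca, mean_rt)[0, 1]   # strong proxy for means
print(round(r_individual, 2), round(r_aggregate, 2))
```

    The same noisy measure that is nearly useless for ranking individuals tracks cohort means tightly, because averaging over thousands of people cancels the individual-level noise.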

    • Replies: @utu
    I have looked at the paper you linked (Johnson et al., "Still just 1 g: Consistent results from five test batteries") and had to give up when I got to Table 1, where correlations greater than 1 are reported. What some people do with factor analysis (FA) never ceases to amaze me. We know FA was invented by psychologists, not mathematicians. We know that FA does not produce unique solutions. The "uniqueness" is enforced by often arbitrary constraints. There is a list of procedures that can be used to accomplish it, like various types of rotations. Each procedure is well defined, but the parameters that constrain it must be selected, and different parameters, like different procedures, produce different results. At each step between procedures decisions are made, and those decisions are subjective and often arbitrary. Often it is more like art than science.

    But I did not know that in the end product correlations can be larger than 1. That is really new to me. By definition a correlation cannot be larger than 1 or smaller than -1; the formula for correlation cannot produce numbers outside the (-1,+1) interval. However, when you do not have raw data (scores from tests, which was the case in this paper: "We did not have access to individual participant data of any kind") and you use only the metadata of covariance matrices, the correlations are derived from other formulas. These formulas will still give exactly correct correlations if they are carried out correctly, with all conditions fulfilled. But when you start doing oblique rotations, which destroy the mutual orthogonality of the factors (the point of which also never ceases to amaze me), you may create conditions among the intermediate variables under which the derivation of the correlation fails. This is my diagnosis! What should you do when it happens? Discard the results and start over again.
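    Even before any rotation, the arbitrariness shows up: on the same correlation matrix, two standard extraction methods already give different first-factor loadings. A minimal sketch with a made-up four-test matrix, comparing principal components with iterated principal-axis factoring:

```python
import numpy as np

# A made-up 4-test correlation matrix (illustrative only)
R = np.array([[1.0, 0.6, 0.5, 0.4],
              [0.6, 1.0, 0.5, 0.4],
              [0.5, 0.5, 1.0, 0.3],
              [0.4, 0.4, 0.3, 1.0]])

def first_factor_pca(R):
    """First-factor loadings via principal components."""
    vals, vecs = np.linalg.eigh(R)
    return vecs[:, -1] * np.sqrt(vals[-1])

def first_factor_paf(R, iters=100):
    """Principal-axis factoring: iterate communalities on the diagonal."""
    h2 = 1 - 1 / np.diag(np.linalg.inv(R))   # SMC starting communalities
    Rh = R.copy()
    for _ in range(iters):
        np.fill_diagonal(Rh, h2)
        vals, vecs = np.linalg.eigh(Rh)
        load = vecs[:, -1] * np.sqrt(max(vals[-1], 0.0))
        h2 = load ** 2                       # updated communalities
    return load

pca = np.abs(first_factor_pca(R))
paf = np.abs(first_factor_paf(R))
print(np.round(pca, 2), np.round(paf, 2))
```

    Same matrix, two textbook extraction choices, two different sets of loadings; rotation choices then multiply the options further.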

    How can I accept their result, "still just 1 g" — that correlations between g's from different batteries of tests are very high — when their results are mathematically inadmissible? They talk about it; to them it is just a feature, not a failure of the procedure. Anyway, I have seen several papers comparing g's from various batteries, sometimes just via congruence (correlation between loadings) and sometimes via correlation between test scores. Conclusions differ. For example here:


    Are cognitive g and academic achievement g one and the same g? An exploration on the Woodcock–Johnson and Kaufman tests
    http://scottbarrykaufman.com/wp-content/uploads/2012/02/Kaufman-et-al.-2012.pdf
     
    they concluded: "In the title of this paper, however, we posed the question: Is COG-g and ACH-g one and the same g? The answer to this question is no. They are highly related, yet distinct constructs"

    It seems that the results depend on what researchers bring in their heads into the research. If they want g's to be alike, they are alike. When they want g's not to be alike, they are not alike. It seems that nature is very pliable and merciful to scientists, at least in this field: it gives them what they want. So one should ask what kind of science this is? FA is part of the problem, or rather the psychometricians who do not understand what FA really is from the mathematical standpoint.

    I could rant about FA some more. Instead I add a link to a book:

    Factor Analysis: Healing an Ailing Model
    http://www.univerlag.uni-goettingen.de/bitstream/handle/3/isbn-978-3-86395-133-7/Ertel_factor.pdf?sequence=1
     

    “Exploratory factor analysis has never been developed to anything approaching its full promise and potential, despite the eighty-year history of its efforts …”
     

    “How curious … that we are so little further forward in our understanding of the psychology of individual differences as a result of these advances … Can anyone identify a single publication in the last 50 years in which the use of factor analysis has led to counter-intuitive, or surprising, or genuinely enlightening outcomes?”
     

    "the situation of factor analytical research helps understand where the calamity comes from. An “unease in factor analysis” is generally ascribed to an arbitrariness of procedural decision taking. Arbitrariness occurs when variables for correlations are selected, when samples of individuals are formed, when the number of factors to be extracted is determined, when the choice between orthogonal or oblique rotation is made, and when one rotation procedure is selected from among a large number of options"
     
    and this article:

    OBJECTIVITY IN FACTOR ANALYSIS http://journals.sagepub.com/doi/pdf/10.1177/001316445801800303
     

    One peculiarity of contemporary factor analysis is its subjectivity. In most statistical work two persons who start with the same data and calculate correctly will reach the same answer. This is not necessarily the case for factor analysis. This remains a method which depends upon arbitrary judgments by the investigator, so that skill is acquired only after long experience in estimating communalities, deciding upon the number of factors to be extracted, selecting pairs of factors for rotation, and so on. This emphasis upon human judgment seems to have resulted because the psychologist has played a larger part in its development than the mathematician.
     
    Also here is an interesting and funny example of when FA produces nonsense:

    Derivation of Theory by Means of Factor Analysis or Tom Swift and his Electric Factor Analysis Machine
    https://dspace.mit.edu/bitstream/handle/1721.1/47256/derivationoftheo00arms.pdf?sequence=1
     
    ___________
    "Optimally, one switch to using a ratio scale for GCA, but there seems to be little progress towards this goal."

    I would like to look into it. Thanks for bringing it up.
  97. @Emil Kirkegaard
    Are children of myopic parents randomly selected to play inside or outside? No, of course not. This is mostly a function of these children's dispositions, including genetic dispositions. So, you have a confound.

    This is the problem with all of these supposedly environmental findings. They might just reflect heritable choices of the environments people seek out or end up in.

    https://www.cambridge.org/core/journals/psychological-medicine/article/div-classtitlegenetic-influences-on-measures-of-the-environment-a-systematic-reviewdiv/76ECA7D8F0F92906DBB2AAFBED720F0C

    The myopia boom

    Short-sightedness is reaching epidemic proportions. Some scientists think they have found a reason why.

    http://www.nature.com/news/the-myopia-boom-1.17120

    For many years, the scientific consensus held that myopia was largely down to genes. Studies in the 1960s showed that the condition was more common among genetically identical twins than non-identical ones, suggesting that susceptibility is strongly influenced by DNA1. Gene-finding efforts have now linked more than 100 regions of the genome to short-sightedness.

    But it was obvious that genes could not be the whole story. One of the clearest signs came from a 1969 study of Inuit people on the northern tip of Alaska whose lifestyle was changing2. Of adults who had grown up in isolated communities, only 2 of 131 had myopic eyes. But more than half of their children and grandchildren had the condition. Genetic changes happen too slowly to explain this rapid change — or the soaring rates in myopia that have since been documented all over the world (see ‘The march of myopia’). “There must be an environmental effect that has caused the generational difference,” says Seang Mei Saw, who studies the epidemiology and genetics of myopia at the National University of Singapore.

    [...]

    But what scientists really needed was a mechanism: something to explain how bright light could prevent myopia. The leading hypothesis is that light stimulates the release of dopamine in the retina, and this neurotransmitter in turn blocks the elongation of the eye during development. The best evidence for the ‘light–dopamine’ hypothesis comes — again — from chicks. In 2010, Ashby and Schaeffel showed that injecting a dopamine-inhibiting drug called spiperone into chicks’ eyes could abolish the protective effect of bright light11.

    • Replies: @OutWest
    Or could it be that people just started doing more close and fine work, in shops rather than in fields?
    , @Emil Kirkegaard
    I did not say there was no environmental effect. My point was methodological. The type of findings you cited before cannot show environmental effects, because they do not rule out plausible genetic confounds. In this second study you cite, if the effect size is really that large (not just due to differential diagnosing, cherry-picking, etc.), then it is too large to be a genetic confound.
  98. Raven’s Progressive Matrices are fun! I get a little hit of dopamine each time I recognize the pattern.

  99. @FKA Max
    The myopia boom

    Short-sightedness is reaching epidemic proportions. Some scientists think they have found a reason why.

    http://www.nature.com/news/the-myopia-boom-1.17120


    For many years, the scientific consensus held that myopia was largely down to genes. Studies in the 1960s showed that the condition was more common among genetically identical twins than non-identical ones, suggesting that susceptibility is strongly influenced by DNA1. Gene-finding efforts have now linked more than 100 regions of the genome to short-sightedness.

    But it was obvious that genes could not be the whole story. One of the clearest signs came from a 1969 study of Inuit people on the northern tip of Alaska whose lifestyle was changing2. Of adults who had grown up in isolated communities, only 2 of 131 had myopic eyes. But more than half of their children and grandchildren had the condition. Genetic changes happen too slowly to explain this rapid change — or the soaring rates in myopia that have since been documented all over the world (see 'The march of myopia'). “There must be an environmental effect that has caused the generational difference,” says Seang Mei Saw, who studies the epidemiology and genetics of myopia at the National University of Singapore.

    [...]

    But what scientists really needed was a mechanism: something to explain how bright light could prevent myopia. The leading hypothesis is that light stimulates the release of dopamine in the retina, and this neurotransmitter in turn blocks the elongation of the eye during development. The best evidence for the 'light–dopamine' hypothesis comes — again — from chicks. In 2010, Ashby and Schaeffel showed that injecting a dopamine-inhibiting drug called spiperone into chicks' eyes could abolish the protective effect of bright light11.
     

    Or could it be that people just started doing more close and fine work, in shops rather than in fields?

  100. @Wizard of Oz
    You remind me of an Austrian-school economist of my acquaintance who wrote a treatise on Say's Law and oft-repeated global denunciations of J. M. Keynes because he allegedly misrepresented Say's Law, and who, having this "one big thing" (the hedgehog v. the fox), can't climb out of the well-polished rut he is in. As you seem to accept the predictive power and practical utility of IQ tests, and I see g as just another manifestation of the same measuring process and its outcome, I can't understand your problem. However... let me take up your objection to the way g is produced - or extracted - by calculation. I shall try an analogy.

    Instead of intelligence, we are seeking to rate ball-game ability. So, having a large grant, as befits a subject that politicians think important, we devise a ginormous battery of tests: from soccer balls bounced from foot to knee to head to foot and so on, to volleying tennis balls hit at various speeds and struck with varying degrees of accuracy at different distances from the centre of the racquet, to serving balls at tennis to hit the corners, and so on and on from croquet to cricket to billiards. And you calculate g.

    You will certainly find that the test subject who scores 2 sds above average g will nearly always beat average-g ball hitters at every ball game. Except... here you bring in John B. Carroll's three-level approach, as described in relation to IQ in the New Scientist article linked by someone a short time ago.

    A player whose ball-play g is +3 sds probably gets on the professional circuit, or equivalent, in whichever ball sport he takes up, but he is then advised to choose tennis rather than football because he is 6ft 6in tall and can serve at 250 kph. Then, at the third level, he has mastered the heavily sliced and the kicking serve that his coach has worked on, as well as the double-handed backhand with its vicious spin. Do you have a problem with accepting that a pretty useful g factor will be extracted in the 12-year-old's first week to determine whether he should be afforded specialist ball-sport training or shuffled off to wrestling or swimming?

    I accept that different batteries of tests are going to produce slightly different g scores, and it may well be that you drop billiards, snooker, rugby football and even golf when you are choosing potential cricketers, tennis, rackets and squash players. Some bright sports psychologist will of course devise a simple piece of apparatus, the ball-game equivalent of the Raven's Progressive Matrices test, that finds a future Wimbledon champion amongst 12-year-old Masai boys herding cattle.

    Just for clarification:

    You thought Say’s Law was invalid, you thought your friend was wrong about Keynes’ (mis)interpretation of it, or you thought Keynes’ misinterpretation of it was immaterial?

    I’m always curious about this.

    FTR, my own view is that modern monetary theory rests entirely on ignoring Say’s Law, and just like the USSR’s central planners ignored for almost 70 years Mises’ irrefutable critique of economic calculation in the absence of factor prices, it doesn’t matter how long it takes for reality to prevail, prevail it will. You don’t have to be an Austrian economist to question the logic of how people can enter a marketplace without first having produced something to trade with.

    • Replies: @Wizard of Oz
    I haven't time to brush up my recollections and do a decent job of replying to your questions, but I remember what made me decide to leave myself in the camp of Keynes, the genius who had the confidence to change with the facts and his understanding of the facts, and to give up on my rather limited "Austrian" acquaintance. It was when he blamed Roosevelt's errors on Keynes, when Keynes was in fact very critical of FDR's failures both of understanding and implementation, not least for the "Roosevelt recession" of 1937-38, and then tried to wriggle out of Keynes's vindication by the debt-financed WW2 recovery by suggesting it was all because of the male workforce being overseas, that I lost respect for him as an economist (though his heart was in the right place wrt Australian Labor government waste).
  101. @utu
    But the conversion factors, and how they were derived, are not explained in the review by Sarraf. Perhaps they can be found in Woodley's book.

    If you apply two tests X1 and X2 to the same population you get a correlation R and the slope of the linear-regression line S=dX1/dX2. So if X1=IQ in IQ points and X2=RT in milliseconds, one can mechanically convert changes in RT scores (∆RT) into changes in IQ scores (∆IQ) via ∆IQ=S∆RT. However, when the correlation R is small this is pretty meaningless, and if used it amounts to

    mathematical charlatanry
     
    .

    Sarraf wrote "simple reaction times, a decent proxy for g." How can he write it with a straight face when reaction time has very low correlation (it is not even 0.3) with IQ?

    The bottom line is that it all comes down to IQ tests, right? Everything is converted to the scale implicitly defined by IQ tests. This includes g. It is interesting that Sarraf (I am not sure about Woodley, as I did not read his book) managed to reach the Mount Everest of g reification: g is a mathematical construct that is not measured directly, and there is no agreed method for defining it, since it depends on the battery of tests used in the factor analysis from which it emerges. How does Sarraf use g in his text? Let me count the ways:

    "a decent proxy for g", "the loss of g in the West", "reductions in g", "with the diminution of g even", "the integrity of genetic factors that underlie g", "suggesting that diminishing g is pervasive", 'assert that “dysgenesis” on g may also explain “anti-Flynn effects,”'

    All those statements containing "g" are empty. It is not science. It is more like the occult, where you can create circular reasoning patterns until you end up confusing the less sharp minds. There are the deceivers and there are the believers. But the best, most effective deceivers are the believers. Why do seemingly intelligent people (I listened to Woodley on YT and he clearly is very intelligent) let themselves be fooled? Is Woodley a charlatan or a fool?

    God only knows how many people over the previous century have imagined that they have uncovered some elementary mistake behind the theory of g which undermines it entirely — going back to Thomson and Thorndike, and including Stephen J Gould and Cosma Shalizi.

    And all of these claims have come to ruin — exactly as one would expect, given that the many truly outstanding intellects who have contributed to the theory of g would be unlikely, in aggregate, to make and perpetuate elementary errors.

    Point is: if you think you’ve found an elementary error in the theory of g, then almost certainly it’s you who have made one.

    g may have its problems — but they are sophisticated and subtle, not trivial and obvious.

    • Replies: @Matthew Sarraf
    Yes. utu's unprincipled application of Stats 101 precepts wherever they appear, at first glance, to be relevant has also led him astray in his attempt at critiquing Kong et al., 2017. More on that later.
    , @utu
    "going back to Thomson and Thorndike, and including Stephen J Gould and Cosma Shalizi."
    "And all of these claims have come to ruin" - they all made very legitimate points, and their ideas are not dead. There is research done on intelligence where nobody invokes the silly construct g; g is totally unnecessary. But if you impose on your world view the constraint that there is one factor, you will end up missing some aspects of reality and will have to start sweeping inconvenient facts under the carpet. A single-factor g cannot explain, for example, why the correlations between verbal and spatial intelligence differ, and even go in opposite directions, for various ethnic groups. I can understand why Spearman liked the concept of a single g. It was physics envy in those days, so he came up with mental energy or mental power. Cool. He could divide g by temperature in Kelvins and call it mental entropy. Why not? However, I can't understand why Jensen decided to dust it off and bring it back from an attic that really belongs to the 19th century. Perhaps we should dust off Otto Weininger's theories as well. They would knock your socks off.


    "exactly as one would expect, given that the good number of truly outstanding intellects who have contributed to the theory of g would not, in aggregate, be exactly likely to make and perpetuate elementary errors." - you do not seem to understand the nature of this enterprise. Are you that naive? The errors are technically not errors; they are features. They are constructs, imposed on reality. They constrain reality, and often suffocate it. You have no clue about the social dynamics of a group of intelligent individuals who succumb to groupthink. Look at climate science, and how much it was corrupted.
  102. @candid_observer
    God only knows how many people over the previous century have imagined that they have uncovered some elementary mistake behind the theory of g which undermines it entirely -- going back to Thomson and Thorndike, and including Stephen J Gould and Cosma Shalizi.

    And all of these claims have come to ruin -- exactly as one would expect, given that the good number of truly outstanding intellects who have contributed to the theory of g would not, in aggregate, be exactly likely to make and perpetuate elementary errors.

    Point is: if you think you've found an elementary error in the theory of g, then almost certainly it's you who have made one.

    g may have its problems -- but they are sophisticated and subtle, not trivial and obvious.

    Yes. utu’s unprincipled application of Stats 101 precepts wherever they appear, at first glance, to be relevant has also led him astray in his attempt at critiquing Kong et al., 2017. More on that later.

  103. @Emil Kirkegaard
    Are children of myopic parents randomly selected to play inside or outside? No, of course not. This is mostly a function of these children's dispositions, including genetic dispositions. So, you have a confound.

    This is the problem with all of these supposedly environmental findings. They might just reflect heritable choices in the environments people seek out or end up in.

    https://www.cambridge.org/core/journals/psychological-medicine/article/div-classtitlegenetic-influences-on-measures-of-the-environment-a-systematic-reviewdiv/76ECA7D8F0F92906DBB2AAFBED720F0C

    What you describe is definitely a major issue for within-cohort studies. But I think the differences between cohorts over time make clear that some important environmental factor(s) are changing. For myopia the only good candidate I have seen is relative indoor/outdoor time. Less apparent is whether this is due to short sightlines, natural light, or something else.

    Obesity is the obvious analogy IMHO. There the obvious factors would be diet and lack of exercise. Which makes me wonder if either of those matters for myopia as well.

    Has anyone tried doing generational within family comparisons of things like obesity and myopia as a possible way to focus on the environmental factors? It seems like adding generational questions (say a generation or two on either side) to a more typical study would be a relatively low cost/effort way to accomplish this.

    Would it be possible to use data from the Framingham study to investigate this? That combines family information with a broad population base.

  104. @res

    Myopia can be caused by intrinsic vulnerability plus complexity of environment. Compare a simple environment, with fewer people, fewer interactions, fewer things (buildings, streets, etc.) to see, with a complex one.

    Maybe the same visual acuity that suffices for living in sparsely populated areas becomes a disadvantage in densely populated areas.

    Read More
  105. @Emil Kirkegaard

    Emil, What are your thoughts on the myopia/IQ correlation?

  106. @FKA Max
    The myopia boom

    Short-sightedness is reaching epidemic proportions. Some scientists think they have found a reason why.

    http://www.nature.com/news/the-myopia-boom-1.17120


    For many years, the scientific consensus held that myopia was largely down to genes. Studies in the 1960s showed that the condition was more common among genetically identical twins than non-identical ones, suggesting that susceptibility is strongly influenced by DNA1. Gene-finding efforts have now linked more than 100 regions of the genome to short-sightedness.

    But it was obvious that genes could not be the whole story. One of the clearest signs came from a 1969 study of Inuit people on the northern tip of Alaska whose lifestyle was changing2. Of adults who had grown up in isolated communities, only 2 of 131 had myopic eyes. But more than half of their children and grandchildren had the condition. Genetic changes happen too slowly to explain this rapid change — or the soaring rates in myopia that have since been documented all over the world (see 'The march of myopia'). “There must be an environmental effect that has caused the generational difference,” says Seang Mei Saw, who studies the epidemiology and genetics of myopia at the National University of Singapore.

    [...]

    But what scientists really needed was a mechanism: something to explain how bright light could prevent myopia. The leading hypothesis is that light stimulates the release of dopamine in the retina, and this neurotransmitter in turn blocks the elongation of the eye during development. The best evidence for the 'light–dopamine' hypothesis comes — again — from chicks. In 2010, Ashby and Schaeffel showed that injecting a dopamine-inhibiting drug called spiperone into chicks' eyes could abolish the protective effect of bright light11.
     

    I did not say there was no environmental effect. My point was methodological. The type of findings you cited before cannot show environmental effects because they do not rule out plausible genetic confounds. In this second study you cite, if the effect size is really that large (not just due to differential diagnosing, cherry-picking, etc.), then it is too large to be a genetic confound.

  107. @res

    Yes, these historical trends are usually too large to be purely genetic in origin: obesity, height, IQ scores, myopia, visual acuity, etc. They might be partially genetic in origin, as the height increase is, or the genetic effect may actually be the reverse of the phenotypic effect, as is the case for GCA (Woodley’s co-occurrence model). I don’t know whether genetics has anything to do with the other trends.

    For height, see:

    http://rspb.royalsocietypublishing.org/content/282/1806/20150211

    Generational design within family? I assume you mean comparing parents and children at the same ages. That’s a good idea. It would control for any genetic changes while leaving the transgenerational non-genetic effects intact. However, it also requires hard-to-get data: the phenotype measured at the same age for parents and children. The Nordic countries may be able to provide this by using draft measures of fathers and sons. These are done at age 18 and can be linked using the register. I know they have IQ, height, and weight. They might have visual ability.

    If one compared the within family estimates with the cross-sectional estimates, one should be able to spot the genetic effect if any.

    I don’t know about Framingham dataset in particular.

  108. @RaceRealist88

    Myopia and GCA are phenotypically (r = -.14 in this dataset) and genetically related. Not sure whether the relationship is intrinsic or due to relevant behavior, e.g. reading or working indoors.

    The historical trend is mostly not due to genetics. I have no particular opinion about the cause of the historical trend as I did not study it in detail. It does not seem a pressing issue since these problems can be solved or mitigated by various methods (glasses, contacts, LASEK).

    For genetics & GCA, see:

    http://iovs.arvojournals.org/article.aspx?articleid=2336642

  109. Anonymous says:
    @Emil Kirkegaard

    Finding the actual cause of a particular phenomenon – the cause in this case being a chronic environmental stimulus – is not a “problem.” People can’t “choose” to induce a particular phenomenon – in this case a certain phenotype in their bodies – if they have no idea which environmental stimuli produce it.

  110. Anonymous says:
    @RaceRealist88

    The correlation is simply due to the fact that higher IQ people enjoy reading, and besides reading for pleasure, are often engaged in occupations involving reading and other forms of near work such as computer use. The eyes of higher IQ people in developed countries are subjected to almost constant near stress during their waking lives.

    This is increasingly affecting lower IQ people as well due to the proliferation of smartphones and electronic media.

  111. @Anonymous

    Reading seems to increase non-verbal and verbal intelligence.

    https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4354297/

    You’re right that the more intelligent do like to read more. However, I seem to recall reading that the explanation of myopia as caused by reading a lot/straining the eyes was bunk?

  112. @dc.sunsets
    Just for clarification:

    You thought Say's Law was invalid, you thought your friend was wrong about Keynes' (mis)interpretation of it, or you thought Keynes' misinterpretation of it was immaterial?

    I'm always curious about this.

    FTR, my own view is that modern monetary theory rests entirely on ignoring Say's Law, and just like the USSR's central planners ignored for almost 70 years Mises' irrefutable critique of economic calculation in the absence of factor prices, it doesn't matter how long it takes for reality to prevail, prevail it will. You don't have to be an Austrian economist to question the logic of how people can enter a marketplace without first having produced something to trade with.

    I haven’t time to brush up my recollections and do a decent job of replying to your questions, but I remember what made me decide to stay in the camp of Keynes, the genius who had the confidence to change with the facts and with his understanding of the facts, and to give up on my rather limited “Austrian” acquaintance. It was when he blamed Roosevelt’s errors on Keynes, when Keynes was in fact very critical of FDR’s failures both of understanding and implementation, not least for the “Roosevelt recession” of 1937-38, and then tried to wriggle out of Keynes’s vindication by the debt-financed WW2 recovery by suggesting it was all because of the male workforce being overseas, that I lost respect for him as an economist (though his heart was in the right place wrt Australian Labor government waste).

  114. Anonymous says:
    @RaceRealist88

    It’s not bunk. Nearwork-induced transient myopia is a well-established phenomenon in the literature. There are roughly 11,000 scholarly articles on it on Google Scholar:

    https://scholar.google.com/scholar?q=nearwork+induced+transient+myopia&btnG=&hl=en&as_sdt=0%2C47&as_vis=1

    When experimenters take test subjects and subject them to some sustained near visual task, like reading or staring at some near object, they induce a small myopic shift, that is, a small shift toward nearsightedness. This effect is called “transient” myopia because the experiment is limited in duration and the eye seems to shift back to normal over time after relief from the near visual task. Myopia develops over a period of years, and for obvious practical reasons there haven’t been experiments in which test subjects are subjected to some sustained near visual task over a period of years; thus the experimental evidence is confined to transient myopia. However, people with myopia tend to be subjected to chronic, sustained nearwork, and this nearwork-induced transient myopia seems to have an additive effect.

    Studies that look at other environmental factors like time spent outdoors tend to have the environmental confound of nearwork. Generally, when you’re outside, you’re not doing nearwork; you’re looking at things far away. And when you’re indoors, generally you’re doing things like reading or other nearwork.

  115. @utu
    I knew I was onto something (see my comment #68). Apparently neither you nor Kong et al.

    http://www.pnas.org/content/114/5/E727.abstract
     
    developed the habit of using dimensional analysis to check the plausibility of equations. They used to teach it in high school physics or chemistry class.

    The equation 0.038 × (30/3.74) = 0.30 you cited from Kong is incorrect. Kong made a mistake which neither you nor Woodley caught. The mean cannot be scaled with variance. A mean can be scaled with the square root of variance. Got it? Therefore the correct equation is:

    0.038 × (30/3.74)^(1/2) = 0.11
     
    and subsequently your recalculations for variances of 80% and 87% yield

    0.038*(80/3.74)^(1/2)=0.17
    0.038*(87/3.74)^(1/2)=0.18
     
    I am surprised that Kong made this silly mistake (perhaps too many coauthors), particularly given that just one paragraph earlier (page 4) he wrote a correct equation of the same type and even justified it:

    Thus, if POLYfull is assumed to account for 30% of the variance of EDU, then its estimated rate of change, by extrapolation, is −0.010 × (30/3.74)^(1/2)= −0.028 SUs per decade.
     
    Basically, when you want to partition a variable (in this case a mean) according to an existing variance partition, you must use standard deviations, not variances, in your proportion: X1:X2 = SD1:SD2 and not X1:X2 = V1:V2.

    Listen Matthew, it's never too late to learn. And I agree that this is not the best place to receive an education. But I could not resist. Your job, your livelihood could depend on it. Don't forget to share your newly acquired knowledge with Kong, his N coauthors, and with Woodley.

    There is no error in Kong et al. (2017). Let me explain how they arrived at the 0.3 IQ points per decade decline estimate.

    The breeder’s equation (developed by R.A. Fisher in the 1920s) is written as follows:

    [1] R = S*h^2

    Where R = the observed generational change in the trait, S = the strength of selection operating on that trait, and h^2 = the trait’s additive heritability.
    Therefore, for any given strength of selection:

    [2] R ∝ h^2

    How do we know that this proportionality holds true? Because a century’s worth of quantitative evolutionary research involving plants and animals has shown it to be thus (see: Crow, 2010).
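    The proportionality in [2] can be sketched numerically. The numbers below are purely illustrative toy values, not drawn from any dataset: the point is only that, with the selection differential S held fixed, the response R is linear in h^2.

```python
# Breeder's equation: R = S * h^2.
def response(selection_differential, heritability):
    """Expected per-generation change in a trait under selection."""
    return selection_differential * heritability

S = 1.0  # one phenotypic SD of selection differential (arbitrary toy value)
print(response(S, 0.25))  # 0.25
print(response(S, 0.50))  # 0.5 -- doubling h^2 doubles R, so R is proportional to h^2
```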

    Kong et al. are simply employing a variant of this same equation. In their case they have observed the decadal decline in the polygenic score directly. They also know that their polygenic score accounts for 3.74% of the variance in their target phenotype (IQ), which is functionally equivalent to saying that the phenotype has an additive heritability of 0.0374. On this basis, the phenotype should decline by 0.038 IQ points per decade. However, we all know that the additive heritability of IQ is not 0.0374; it is much higher. How much higher is debatable; however, Kong et al. chose an extremely conservative value of 30%, or 0.3. So, given that the observed decadal loss in IQ is clearly a substantial underestimate of the true loss, to get to the expected loss the ratio of the two heritabilities is computed (0.3/0.0374), which yields an approximately eight-fold upward correction factor. The product of the observed value of R (0.038 IQ points per decade) and this correction factor therefore becomes the expected value of R for IQ that would result if h^2 were 0.3 rather than 0.0374. This result is valid because of [2]. If you don’t think that [2] is valid, then the burden of proof is on you to explain away a century of successful quantitative evolutionary research that has proceeded on the basis that this proportionality holds. Rather you than me.

    Of course, we also know that the true additive heritability of General Mental Ability is not 0.3 but closer to 0.8 (Bouchard Jr, 2004). Therefore Mat Sarraf’s statement that, when adjusted on this basis, the IQ loss reported in Kong et al. increases to nearly a whole point per decade is also valid.

    I actually spoke yesterday with Augustine Kong about this equation, among other things. The aforementioned is correct and there is no error in the work.

    Refs.

    Bouchard Jr, T.J. (2004). Genetic influence on human psychological traits – a survey. Current Directions in Psychological Science, 13, 148-151.

    Crow, J.F. (2010). On epistasis: Why it is unimportant in polygenic directional selection. Philosophical Transactions of the Royal Society B: Biological Sciences, 365, 1241-1244.
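    The correction arithmetic described above is simple enough to check directly. A minimal sketch, using only the figures quoted in this thread (0.038 points per decade observed, 3.74% variance explained, and h^2 of 30% and 80%):

```python
# Kong et al.'s correction: scale the observed polygenic-score decline
# by the ratio of the assumed additive heritability to the variance
# the score actually explains (valid because R is proportional to h^2).
observed_decline = 0.038  # IQ points per decade, observed via the score
score_variance = 3.74     # % of IQ variance the polygenic score explains

for h2 in (30, 80):  # conservative (Kong et al.) and Bouchard-style h^2, in %
    expected = observed_decline * (h2 / score_variance)
    print(f"h^2 = {h2}%: {expected:.2f} IQ points lost per decade")
# h^2 = 30% gives ~0.30; h^2 = 80% gives ~0.81,
# i.e. "nearly a whole point per decade" as stated above.
```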

    • Replies: @utu
    Thank you very much for your very helpful explanation. I have just learned about the breeder's equation! From it, it follows that the phenotype change is proportional to heritability, which is a variance; that is what I objected to, thinking that the scaling should go with the square root of variance rather than with variance. Basically I should shut up at this point, right? However, the breeder's equation is used for extrapolation from the 3.74% of variance accounted for by POLY_edu to the 30% accounted for by POLY_full. As we all should know, extrapolations are often iffy. Imagine that POLY_edu and POLY_full (or POLY_full minus POLY_edu) obey different breeder's equations with different proportionality constants S. Then Kong's extrapolation will not be valid. Actually, the authors were aware of this possibility:


    Under an assumption that the part of POLY_full that is not captured by POLY_edu behaves in a similar fashion in its impact on reproduction, the rate of change is proportional to the square root of the variance explained
     
    Does or does not the part of POLY_full that is not captured by POLY_edu behave in the same fashion? Kong tries to answer this question by looking at POLY_ukb, a subset of POLY_full that accounts for 2.52% of variance. And if I understand it correctly, he gets good confirmation by showing that the rates of decline (in SUs) measured directly for the two sets (edu and ukb) scale according to the ratio of the square roots of the variances: sqrt(3.74/2.52). So the formula is verified, but only in a narrow range from 2.52% to 3.74%. That range is not comparable to the extrapolation from 3.74% to 30% or, as you would want, to 80%. Does this confirmation using a smaller subset apply to IQ scores as well? I am not sure, but Kong says this:

    However, under the assumptions that POLY_full accounts for 30% of the variance of EDU, and the part of POLY_full that is not captured by POLY_edu behaves in a similar fashion in its impact on both reproduction and IQ, by extrapolation, the decline of POLY_full would lead to a decline of 0.038 × (30/3.74) = 0.30 IQ points per decade.

     

    Anyway I would not be going gaga about the extrapolation results.

    "Therefore Mat Sarraf’s statement that when adjusted on this basis, the IQ loss reported in Kong et al. increases to nearly a whole point per decade is valid also."
     
    Yes, Mat Sarraf demonstrated that he can replace one number in a simple formula.
  116. @Wizard of Oz

    This is why I left the Austrian school behind. It doesn’t explain long periods when up is down. A theory without explanatory power is kind of useless.

    I attribute absolutely zero of the postwar recovery to debt-financing or pump-priming. I strongly prefer the mass psychological explanation provided by the Socionomic Hypothesis.

    Under this view, stocks had been rising since 1937, signaling that the entire time the war was running, people were becoming more optimistic. This theory posits that outside events (like wars, tsunamis, assassinations, etc.) have no effect on social mood and thus no noticeable, lasting effect on stock indices (which serve as social-mood barometers).

    As soon as wartime price controls and rationing were removed, the pent-up demand of rising social mood produced a very broad economic expansion that ran its course by the mid-1960s, when a bear market in social mood was reflected by stocks going dead sideways until the lows of 1974 (DJIA) and 1982 (the SPX and broader market). Since then it’s been off to the races on the grandest debt bubble ever, with the whole party going on the National Credit Card.

    This also explains why there’s never a time when surpluses are run and borrowing reduced. Keynes, in my view, was very close to nailing it when he spoke of animal spirits. To me, that’s all there is. Everything else is window dressing to rationalize what is otherwise occurring naturally.

    Read More
  117. @Tom Shuford
    Compelling viewing: Woodley with Molyneux

    Why Civilizations Rise and Fall | Michael Woodley of Menie and Stefan Molyneux
    December 12, 2016
    VIDEO (1h33m)
    https://www.youtube.com/watch?v=7XAzSfqrzPg&t=1372s

    Another great interview with Mr. Woodley [45min]:

    Are we getting smarter or dumber, or both? Frank Salter interviews Michael A. Woodley

    Published on Jul 14, 2016

    HNN001 – According to the “Flynn Effect” humans are getting smarter and smarter. We know more than we ever did and score higher on IQ tests than our parents. But the number of geniuses is falling, as is mental speed, as measured by response-tests. What gives? Dr. Michael Woodley, interviewed here by Frank Salter, finds evidence that the English were smarter 100 years ago than they are today, based on response-test data collected from 1904. Dr Woodley concludes that our genetic potential is falling, perhaps due to the relaxation of Darwinian selection over the last century.

    The evolutionary reason for this may lie with the theory that geniuses have insights that advance the general population. “It’s paradoxical because you think the idea of evolution is procreation, and that might be true in a lot of cases,” he explains. “But what if the way you increase your genes is by benefitting the entire group, by giving them an innovation that allows them to grow and expand and colonise new countries?”

    The lack of common sense is in keeping with the idea that a genius exists as an asset to other people, and so: “They need to be looked after,” he says. “They are vulnerable and fragile.”

    http://www.telegraph.co.uk/news/science/11232300/Why-do-geniuses-lack-common-sense.html

    Read More
  118. @Michael A. Woodley of Menie
    There is no error in Kong et al. (2017). Let me explain how they arrived at the 0.3 IQ points per decade decline estimate.

    The Breeder’s equation (developed by R.A. Fisher in the 20’s) is written as follows:

    [1] R = S*h^2

    Where R = the observed generational change in the trait, S = the strength of selection operating on that trait and h^2 = the trait additive heritability.
    Therefore for any given strength of selection:

    [2] R ∝ h^2

    How do we know that this proportionality holds true? Because a century’s worth of quantitative evolutionary research involving plants and animals has shown it to be thus (see: Crow, 2010).

    Kong et al. are simply employing a variant of this same equation. In their case they have observed the decadal decline in the polygenic score directly. They also know that their polygenic score accounts for 3.74% of the variance in their target phenotype (IQ), which is functionally equivalent to saying that their phenotype has an additive heritability of 0.0374. On this basis, the phenotype should decline by 0.038 IQ points per decade. However, we all know that the additive heritability of IQ is not 0.0374; it is much higher. How much higher is debatable, but Kong et al. chose an extremely conservative value of 30%, or 0.3. So, given that the observed decadal loss in IQ is clearly a substantial underestimate of the true loss, to get to the expected loss the ratio of the two heritabilities is computed (0.3/0.0374), which yields an approximately eight-fold upward correction factor. The product of the observed value of R (0.038 IQ points per decade) and this correction factor therefore becomes the expected value of R for IQ that would result if h^2 were 0.3 rather than 0.0374. This result is valid because of [2]. If you don’t think that [2] is valid, then the burden of proof is on you to explain away a century of successful quantitative evolutionary research that has proceeded on the basis that this proportionality holds. Rather you than me.

    Of course, we also know that the true additive heritability of General Mental Ability is not 0.3, but closer to 0.8 (Bouchard Jr, 2004). Therefore Mat Sarraf’s statement that when adjusted on this basis, the IQ loss reported in Kong et al. increases to nearly a whole point per decade is valid also.

    I actually spoke yesterday with Augustine Kong about this equation, among other things. The aforementioned is correct and there is no error in the work.

    Refs.

    Bouchard Jr, T.J. (2004). Genetic influence on human psychological traits – a survey. Current Directions in Psychological Science, 13, 148-151.

    Crow, J.F. (2010). On epistasis: Why it is unimportant in polygenic directional selection. Philosophical Transactions of the Royal Society B: Biological Sciences, 36, 1241-1244.
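The arithmetic in the quoted explanation is easy to check. A minimal sketch (in Python; the function name is mine, while the figures 0.038, 0.0374, 0.3 and 0.8 come from the comments above):

```python
# Breeder's-equation scaling: for a fixed strength of selection S, the response
# R is proportional to the additive heritability h^2 (R = S * h^2). An observed
# polygenic-score decline can therefore be rescaled by a ratio of heritabilities.
def rescale_decline(observed_decline, h2_observed, h2_assumed):
    """Rescale a per-decade decline from one additive heritability to another."""
    return observed_decline * (h2_assumed / h2_observed)

# Kong et al.'s figures: 0.038 IQ points/decade observed at h^2 = 0.0374 (3.74%).
kong = rescale_decline(0.038, 0.0374, 0.30)    # conservative h^2 = 0.3
sarraf = rescale_decline(0.038, 0.0374, 0.80)  # h^2 = 0.8 (Bouchard)
print(round(kong, 2), round(sarraf, 2))  # 0.3 0.81
```

This reproduces both numbers in the thread: roughly 0.3 IQ points per decade under Kong et al.'s conservative heritability, and nearly a whole point per decade under the 0.8 figure.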

    Thank you very much for your very helpful explanation. I have just learned about the breeder’s equation! From it, it follows that phenotype change is proportional to heritability, which is a variance; I had objected because I thought the scaling should go with the square root of the variance rather than the variance itself. Basically I should shut up at this point, right? However, the breeder’s equation is used for extrapolation from the 3.74% of variance accounted for by POLY_edu to the 30% accounted for by POLY_full. As we all should know, extrapolations are often iffy. Imagine that POLY_edu and POLY_full (or POLY_full minus POLY_edu) obey different breeder’s equations with different proportionality constants S. Then Kong’s extrapolation would not be valid. Actually, the authors were aware of this possibility:

    Under an assumption that the part of POLY_full that is not captured by POLY_edu behaves in a similar fashion in its impact on reproduction, the rate of change is proportional to the square root of the variance explained

    Does or does not the part of POLY_full that is not captured by POLY_edu behave in the same fashion? Kong tries to answer this question by looking at POLY_ukb, a subset of POLY_full that accounts for 2.52% of variance. If I understand it correctly, he gets good confirmation by showing that the rates of decline (of SUs) measured directly for the two sets (edu and ukb) scale according to the ratio of the square roots of the variances: sqrt(3.74/2.52). So the formula is verified, but only over the narrow range from 2.52% to 3.74%, which is not comparable to the extrapolation from 3.74% to 30% or, as you would want, to 80%. Does this confirmation using a smaller subset apply as well to IQ scores? I am not sure, but Kong says this:

    However, under the assumptions that POLY_full accounts for 30% of the variance of EDU, and the part of POLY_full that is not captured by POLY_edu behaves in a similar fashion in its impact on both reproduction and IQ, by extrapolation, the decline of POLY_full would lead to a decline of 0.038 × (30/3.74) = 0.30 IQ points per decade.

    Anyway I would not be going gaga about the extrapolation results.
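The two scalings at issue in this exchange are easy to conflate, so here is a minimal numerical sketch (my own, using the 3.74%, 2.52% and 30% figures quoted above): declines of the polygenic scores themselves scale with the square root of the variance ratio, while the implied IQ decline scales linearly with the heritability ratio.

```python
import math

# 1) Kong's internal check: the directly measured score declines (in
#    standardized units, SUs) for POLY_edu (3.74% of variance) vs POLY_ukb
#    (2.52%) should scale with the SQUARE ROOT of the variance ratio.
su_scaling = math.sqrt(3.74 / 2.52)

# 2) Kong's extrapolation to the trait: the implied IQ decline scales
#    LINEARLY with the ratio of heritabilities (breeder's equation).
iq_scaling = 30 / 3.74

print(round(su_scaling, 2), round(iq_scaling, 2))  # 1.22 8.02
```

The narrow-range check (a factor of about 1.22 in SUs) is thus a much weaker test than the roughly eight-fold extrapolation it is being used to justify, which is utu's point.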

    “Therefore Mat Sarraf’s statement that when adjusted on this basis, the IQ loss reported in Kong et al. increases to nearly a whole point per decade is valid also.”

    Yes, Mat Sarraf demonstrated that he can replace one number in a simple formula.

    Read More
    • Replies: @Matthew Sarraf
  119. Yes the post WW2 boom didn’t need any more than pent up demand and savings from the war period. That’s not to say that public debt finance of much infrastructure wasn’t justified.

    Read More
  120. @Santoculto
    People in northern climes are more cooperative than people in hot climes. Example: Northern Italy and Southern Italy. A cold environment requires cooperation.

    Canadian blacks are more cooperative than Caribbean blacks*

    Barbadian blacks are less cooperative than Detroit blacks*

    Northern Italy is not so much colder than Southern Italy as to cause these differences in a direct, causal way, and the summer is very hot in most of the Italian territory.

    And within those populations we have exceptions [or not-so-exceptional cases of] cooperative and uncooperative people. Central Asian minorities in the Russian Federation seem less cooperative than Southern Italians.

    Yes, very harsh climates tend to deselect people who are not cooperative, but only if the climate is very harsh and the selective process is intense [and isolated from other groups] enough to produce these [genetic] changes.

    Northern Italy is not so much colder than Southern Italy as to cause these differences in a direct, causal way, and the summer is very hot in most of the Italian territory.

    Please – everyone with an ounce of honesty acknowledges that cold Milan has more productive industry than warm Palermo.

    People in cold climates must be cooperative and think out a year’s time frame – end of story.

    Genetics rises to meet the challenge of the environment.

    Peace — Art

    Read More
    • Replies: @Santoculto
  121. @utu
    Thank you very much for your very helpful explanation. I have just learned about the breeder's equation! From it, it follows that phenotype change is proportional to heritability, which is a variance; I had objected because I thought the scaling should go with the square root of the variance rather than the variance itself. Basically I should shut up at this point, right? However, the breeder's equation is used for extrapolation from the 3.74% of variance accounted for by POLY_edu to the 30% accounted for by POLY_full. As we all should know, extrapolations are often iffy. Imagine that POLY_edu and POLY_full (or POLY_full minus POLY_edu) obey different breeder's equations with different proportionality constants S. Then Kong's extrapolation would not be valid. Actually, the authors were aware of this possibility:


    Under an assumption that the part of POLY_full that is not captured by POLY_edu behaves in a similar fashion in its impact on reproduction, the rate of change is proportional to the square root of the variance explained
     
    Does or does not the part of POLY_full that is not captured by POLY_edu behave in the same fashion? Kong tries to answer this question by looking at POLY_ukb, a subset of POLY_full that accounts for 2.52% of variance. If I understand it correctly, he gets good confirmation by showing that the rates of decline (of SUs) measured directly for the two sets (edu and ukb) scale according to the ratio of the square roots of the variances: sqrt(3.74/2.52). So the formula is verified, but only over the narrow range from 2.52% to 3.74%, which is not comparable to the extrapolation from 3.74% to 30% or, as you would want, to 80%. Does this confirmation using a smaller subset apply as well to IQ scores? I am not sure, but Kong says this:

    However, under the assumptions that POLY_full accounts for 30% of the variance of EDU, and the part of POLY_full that is not captured by POLY_edu behaves in a similar fashion in its impact on both reproduction and IQ, by extrapolation, the decline of POLY_full would lead to a decline of 0.038 × (30/3.74) = 0.30 IQ points per decade.

     

    Anyway I would not be going gaga about the extrapolation results.

    "Therefore Mat Sarraf’s statement that when adjusted on this basis, the IQ loss reported in Kong et al. increases to nearly a whole point per decade is valid also."
     
    Yes, Mat Sarraf demonstrated that he can replace one number in a simple formula.

    “Yes, Mat Sarraf demonstrated that he can replace one number in a simple formula.”

    Whereas you failed to even understand that formula, despite its being, according to your own claim, “simple.” Quite amusing.

    It has been shown that your critical response to my comment was flatly wrong, which you even concede. Further, you got it wrong for precisely the reason that I noted earlier in this thread (see comment #107 and #90 [your critical comment]). And yet your haughtiness remains unscathed. I recommend the following should you wish to enhance your self-knowledge: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect.

    Read More
    • LOL: dc.sunsets
    • Replies: @utu
  122. @Art

    Northern Italy is not so much colder than Southern Italy as to cause these differences in a direct, causal way, and the summer is very hot in most of the Italian territory.
     
    Please - everyone with an ounce of honesty acknowledges that cold Milan has more productive industry than warm Palermo.

    People in cold climates must be cooperative and think out a year’s time frame – end of story.

    Genetics rises to meet the challenge of the environment.

    Peace --- Art

    And I never said anything different from that. The question is that it seems you are saying ”climate makes people act like that, in a causal way”. Maybe I misunderstood you, but you need to explain more, because at least in my view your statements are ambiguous.

    At the same time that we have a common ancestor for all blue-eyed people, we also have common ancestors among the first human populations, who were strongly selected in harsh environments [founder effect]. I also think that very harsh environments tend to inhibit non-territorial nomadism, depending of course on the climate type.

    People in cold climates must be cooperative and think out a year’s time frame

    Yes, because a ”self-selection” tends to happen. Cold climates require delayed gratification, hard work and preventive thinking more than hot climates, where food is abundant and the climate is stable throughout the year.

    Read More
  123. @Matthew Sarraf
    "Yes, Mat Sarraf demonstrated that he can replace one number in a simple formula."

    Whereas you failed to even understand that formula, despite its being, according to your own claim, "simple." Quite amusing.

    It has been shown that your critical response to my comment was flatly wrong, which you even concede. Further, you got it wrong for precisely the reason that I noted earlier in this thread (see comment #107 and #90 [your critical comment]). And yet your haughtiness remains unscathed. I recommend the following should you wish to enhance your self-knowledge: https://en.wikipedia.org/wiki/Dunning%E2%80%93Kruger_effect.

    You do not agree that Kong’s formula R1*(h2^2/h1^2)=R2 is algebraically simple? The issue is whether it is valid. My objection stemmed from thinking that the scaling should be done according to standard deviations rather than variances. Heritability is a variance, so I thought a square root was missing from the formula. Thanks to Michael Woodley I now know of the breeder’s equation, which ties trait change to heritability, i.e., to variance, in a linear fashion. That changes everything, though I can’t say I am 100% comfortable with it yet.

    Anyway, from the breeder’s equation (see Woodley’s comment #120), Kong’s formula can be derived as

    R1*(h2^2/h1^2)*(S2/S1)=R2

    where S1 and S2 are the strengths of selection for (1) and (2), respectively. If S1=S2 we get Kong’s formula. Do we know that S1=S2? Not really. Kong was cognizant of this, so he stipulated it as an assumption, at least as I understand the following quote:

    under the assumptions that POLY_full accounts for 30% of the variance of EDU, and the part of POLY_full that is not captured by POLY_edu behaves in a similar fashion in its impact on both reproduction and IQ, by extrapolation, the decline of POLY_full would lead to a decline of 0.038 × (30/3.74) = 0.30 IQ points per decade.

    So we do not know whether POLY_full has the same impact on reproduction and IQ as POLY_edu. Can this ignorance be quantified somehow, to put error bars on the 0.30-IQ-point result? Kong did not do it. Can you do it? Are you not curious how good this result is, or how valid the formula is that you were so eager to use?

    Did you change your name recently, or are you in a witness protection program? I could not find any publications with your name on them. This review of Woodley’s book seems to be the first. The only Matthew Sarraf associated with Cornell I could find was one receiving a Master’s degree at the School of Industrial and Labor Relations in 2015 (thesis: “Modern Work: Personal and Social Harms”). Is that you? Did you get the degree? And then what? Straight to writing book reviews?

    Read More
  124. @Emil Kirkegaard
    The g loading of a test depends on the battery of other tests from which it is extracted, but not very much so in most cases. See:

    http://www.sciencedirect.com/science/article/pii/S0160289607000931
    (there are a number of other earlier studies with other methods that found similar results)

    g itself is usually just measured in standardized units. You of course know this. Sometimes it is useful to distinguish explicitly between the trait and the factor, since they can differ. Let's call the factor g and the trait GCA, general cognitive ability.

    Optimally, one would switch to using a ratio scale for GCA, but there seems to be little progress toward this goal, at least little that is explicitly stated as working toward it. Presumably, one could build a ratio-scale measurement using appropriate brain measurements. This would not help with estimating historical GCA declines, since we lack detailed brain measurements from back then. One would have to rely on crude measures (reaction time, visual acuity, etc.) or genetic data. The latter is more plausible, but will not capture any environmental changes in GCA.

    Note that a simple measure may be a good proxy for the mean level of GCA when using aggregate data while not being so at the individual level. This is what Woodley et al. argue for with reaction time, etc. To demonstrate this is rather trivial, so I leave that task to the reader. But it remains an assumption that is hard to test.

    I have looked at the paper you linked (Johnson et al., “Still just 1 g: Consistent results from five test batteries”) and had to give up when I got to Table 1, where correlations greater than 1 are reported. What some people do with factor analysis (FA) never ceases to amaze me. FA was invented by psychologists, not mathematicians, and it does not produce unique solutions; the “uniqueness” is enforced by often arbitrary constraints. There is a list of procedures that can be used to accomplish this, such as various types of rotation. Each procedure is well defined, but the parameters that constrain it must be selected, and different parameters, like different procedures, produce different results. At each step between procedures, decisions are made that are subjective and often arbitrary. It is often more art than science.

    But I did not know that in the end product correlations could be larger than 1. That is really new to me. By definition, a correlation cannot be larger than 1 or smaller than -1; the formula for correlation cannot produce numbers outside the (-1, +1) interval. However, when you do not have raw data (test scores, which was the case for this paper: “We did not have access to individual participant data of any kind”) and use only the metadata of covariance matrices, the correlations are derived from other formulas. These formulas will still give exactly the correct correlation if they are carried out correctly and all the required conditions are fulfilled. But when you start doing oblique rotations, which destroy the mutual orthogonality of the factors (another thing that never ceases to amaze me: what is the point?), you may create conditions among the intermediate variables under which the derivation of the correlation fails. This is my diagnosis! What should you do when it happens? Discard the results and start over.

    How can I accept their result, “still just 1 g” (that correlations between the g’s from different batteries of tests are very high), when their results are mathematically inadmissible? They do talk about it; to them it is a feature, not a failure of the procedure. Anyway, I have seen several papers comparing g’s from various batteries, sometimes just via congruence (correlation between loadings) and sometimes via correlation between test scores. Conclusions differ. For example, here:

    Are cognitive g and academic achievement g one and the same g? An exploration on the Woodcock–Johnson and Kaufman tests

    http://scottbarrykaufman.com/wp-content/uploads/2012/02/Kaufman-et-al.-2012.pdf

    they concluded: “In the title of this paper, however, we posed the question: Is COG-g and ACH-g one and the same g? The answer to this question is no. They are highly related, yet distinct constructs.”

    It seems that the results depend on what researchers bring in their heads into the research. If they want the g’s to be alike, they are alike; when they want the g’s not to be alike, they are not alike. Nature, it seems, is very pliable and merciful to scientists, at least in this field: it gives them what they want. So one should ask what kind of science this is. FA is part of the problem, or rather psychometricians who do not understand what FA really is from the mathematical standpoint.

    I could rant about FA some more. Instead, I add a link to a book:

    Factor Analysis: Healing an Ailing Model

    http://www.univerlag.uni-goettingen.de/bitstream/handle/3/isbn-978-3-86395-133-7/Ertel_factor.pdf?sequence=1

    “Exploratory factor analysis has never been developed to anything approaching its full promise and potential, despite the eighty-year history of its efforts …”

    “How curious … that we are so little further forward in our understanding of the psychology of individual differences as a result of these advances … Can anyone identify a single publication in the last 50 years in which the use of factor analysis has led to counter-intuitive, or surprising, or genuinely enlightening outcomes?”

    “the situation of factor analytical research helps understand where the calamity comes from. An “unease in factor analysis” is generally ascribed to an arbitrariness of procedural decision taking. Arbitrariness occurs when variables for correlations are selected, when samples of individuals are formed, when the number of factors to be extracted is determined, when the choice between orthogonal or oblique rotation is made, and when one rotation procedure is selected from among a large number of options”

    and this article:

    OBJECTIVITY IN FACTOR ANALYSIS http://journals.sagepub.com/doi/pdf/10.1177/001316445801800303

    One peculiarity of contemporary factor analysis is its subjectivity. In most statistical work two persons who start with the same data and calculate correctly will reach the same answer. This is not necessarily the case for factor analysis. This remains a method which depends upon arbitrary judgments by the investigator, so that skill is acquired only after long experience in estimating communalities, deciding upon the number of factors to be extracted, selecting pairs of factors for rotation, and so on. This emphasis upon human judgment seems to have resulted because the psychologist has played a larger part in its development than the mathematician.

    Also here is an interesting and funny example of when FA produces nonsense:

    Derivation of Theory by Means of Factor Analysis or Tom Swift and his Electric Factor Analysis Machine

    https://dspace.mit.edu/bitstream/handle/1721.1/47256/derivationoftheo00arms.pdf?sequence=1
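One of utu's complaints, rotational indeterminacy, can be made concrete: any orthogonal rotation of a factor loading matrix reproduces exactly the same implied common-variance matrix, so the data alone cannot choose between the rotated solutions. A toy sketch (mine, not from any of the linked papers):

```python
import numpy as np

# Toy loading matrix: 3 observed variables on 2 factors.
L = np.array([[0.8, 0.2],
              [0.7, 0.3],
              [0.3, 0.8]])

# Any orthogonal rotation R leaves the implied common variance unchanged:
# (L @ R) @ (L @ R).T == L @ (R @ R.T) @ L.T == L @ L.T
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L_rot = L @ R

print(np.allclose(L @ L.T, L_rot @ L_rot.T))  # True: both solutions fit identically
print(np.allclose(L, L_rot))                  # False: yet the loadings differ
```

This is the formal core of the "arbitrariness" the quoted authors describe: the choice among infinitely many equally well-fitting rotations must come from outside the data.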

    ___________
    “Optimally, one would switch to using a ratio scale for GCA, but there seems to be little progress toward this goal.”

    I would like to look into it. Thanks for bringing it up.

    Read More
    • Replies: @res

    when I got to the Table 1 where correlations greater than 1 are reported
     
    I think you mean Table 2 (on page 88, page 8 of the PDF I saw). That said, what is going on there? Each of the >1 results is footnoted to the effect of: "The g-factor correlation could be reduced to 1.00 by allowing correlations between the residuals for the [... factors/tests]; see Fig. 1." But I don't know how to interpret that. Can someone who understands the methodology used please comment and explain what a correlation >1 is supposed to mean mathematically here? (I think they are adding up correlations computed from the variances between multiple pairs of correlated variables, is this considered reasonable? it seems guaranteed to give an overestimate, possibly substantial if the variables are as intercorrelated as cognitive tests tend to be)

    The text talks about "excess correlations" and states: "Failure to acknowledge this common variance was indicated by correlations between second-order g factors in excess of 1.00, "

    The text also states: "We thus did not directly measure or test the correlations among the batteries as we could always recognize further such covariances and likely would eventually reduce the correlations among the g factors substantially. "
    But why do this rather than just measuring correlations directly?

    We also have: "We summarize them in the bottom part of Table 2 for the models allowing no residual or cross-battery correlations, which explains the presence in the table of factor correlations in excess of 1.00. We also note the residual and cross-battery correlations necessary to reduce any correlations in excess of 1.00 to 1.00. In no case did we add residual or cross-battery correlations in any situation in which a g correlation was not in excess of 1.00."
    Which seems questionable to me. If your technique is overestimating the correlations >1 and needs to be corrected, why assume it is not overestimating any of the other correlations?! Not to mention, why choose to reduce them to exactly 1? It seems highly unlikely that two distinct variables actually have a correlation of 1.

    P.S. I'm confused by this. I would normally take things said by Thomas Bouchard at face value, but this methodology troubles me.

    P.P.S. I read utu's comment (and the paper) more closely and the lack of individual data and required analysis from the covariance matrices seems like a reasonable explanation for why this methodology was used. That leaves only the question of whether it is generally considered valid. Perhaps a pointer to the relevant portion of a Stats 101 textbook would be helpful (I don't recall this technique being covered in any of my statistics classes though, so perhaps a more advanced textbook would be appropriate).
    , @Emil Kirkegaard
    SEM fitting problems
    SEMs can produce odd results like that. It's not unique to the analysis of cognitive data, as a quick googling will show you. Usually one just needs to loosen some minor constraint. This was also true for this paper, where there were correlated errors between group factors from different tests. Indeed, these correlated errors are not very surprising.

    It is not wise to attack this paper on such grounds when there is a large body of congruent evidence on this question, some of which is cited in the paper.

    The numbers are not correlations, they are standardized paths. Similar but not the same. The authors were speaking loosely which is fine because these have the same scale. Ordinary correlations of course cannot exceed 1 but paths from SEM can when assumptions are violated or there is some other problem.
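A toy illustration of the point above: standardized regression/path coefficients, unlike correlations, can exceed 1 when the predictors are highly collinear. This sketch is mine and is unrelated to the actual SEM fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200_000

# Two highly collinear predictors (r ~ 0.95) and an outcome built from both.
x1 = rng.standard_normal(n)
x2 = 0.95 * x1 + np.sqrt(1 - 0.95**2) * rng.standard_normal(n)
y = 2.0 * x1 - 1.5 * x2 + rng.standard_normal(n)

z = lambda v: (v - v.mean()) / v.std()           # standardize to unit variance
X = np.column_stack([z(x1), z(x2)])
beta, *_ = np.linalg.lstsq(X, z(y), rcond=None)  # standardized coefficients

print(beta)  # the first coefficient exceeds 1, though every pairwise r is in [-1, 1]
```

Here the population standardized coefficient on x1 is about 1.6, even though no correlation in the data exceeds 1 in absolute value; SEM path estimates behave the same way under collinearity.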

    Factor uniqueness
    Citing a 1958 paper is more than a little sophistic. Perhaps you have gotten inspiration from the master of this dark art: Peter H. Schönemann. He is typical of the purist type critic of which Shalizi is another example.

    The lack of unique solutions or scores is not generally a practical problem because the estimates are usually very similar. Typically the correlations are >.999 for the scores. Principal components analysis produces unique results, but they are also about the same as the factor analysis ones. For practical purposes: ¯\_(ツ)_/¯

    For those historically inclined, Art Jensen replied to this sort of sophistry nearly 35 years ago:

    Jensen, A. R. (1983). The Definition of Intelligence and Factor Score Indeterminacy. Behavioral and Brain Sciences, 6, 313—315.

    Jensen, A. R. (1987). The g beyond factor analysis. In R. R. Ronning, J. A. Glover, J. C. Conoley, and J. C. Witt (Eds.), The influence of cognitive psychology on testing. Hillsdale, NJ: Erlbaum. Pp. 87—142.

    Jensen, A. R. (1987). Differential psychology: Towards consensus. In M. Modgil and C. Modgil (Eds.), Arthur Jensen: Consensus and controversy. London: Falmer Press.

    GCA vs. achievement test
    Your paper is about an achievement test, not a GCA test. The point of the paper is that these are not the same construct, and indeed, who said they were? An earlier paper found correlations of .73 and .86 for scholastic tests and GCA tests. There are more of these type papers.

    https://www.ncbi.nlm.nih.gov/pubmed/15147489
  125. @candid_observer
    God only knows how many people over the previous century have imagined that they have uncovered some elementary mistake behind the theory of g which undermines it entirely -- going back to Thomson and Thorndike, and including Stephen J Gould and Cosma Shalizi.

    And all of these claims have come to ruin -- exactly as one would expect, given that the good number of truly outstanding intellects who have contributed to the theory of g would not, in aggregate, be exactly likely to make and perpetuate elementary errors.

    Point is: if you think you've found an elementary error in the theory of g, then almost certainly it's you who have made one.

    g may have its problems -- but they are sophisticated and subtle, not trivial and obvious.

    “going back to Thomson and Thorndike, and including Stephen J Gould and Cosma Shalizi.”
    “And all of these claims have come to ruin ” – they all made very legitimate points, and their ideas are not dead. There is research on intelligence in which nobody invokes the construct g; g is totally unnecessary. But if you impose on your worldview the constraint that there is one factor, you will end up missing some aspects of reality and will have to start sweeping inconvenient facts under the carpet. A single-factor g cannot explain, for example, why the correlations between verbal and spatial intelligence differ, and go in opposite directions, across various ethnic groups. I can understand why Spearman liked the concept of a single g. It was physics envy in those days, so he came up with mental energy or mental power. Cool. He could have divided g by temperature in kelvins and called it mental entropy. Why not? What I cannot understand is why Jensen decided to dust it off and bring it back from the attic, where it belonged with the 19th century. Perhaps we should dust off Otto Weininger’s theories as well. They would knock your socks off.

    “exactly as one would expect, given that the good number of truly outstanding intellects who have contributed to the theory of g would not, in aggregate, be exactly likely to make and perpetuate elementary errors.” – you do not seem to understand the nature of this enterprise. Are you that naive? The errors are technically not errors; they are features. They are constructs, imposed on reality. They constrain reality, and often suffocate it. You have no clue about the social dynamics of a group of intelligent individuals who succumb to groupthink. Look at climate science, and how much it was corrupted.

    • Replies: @res
  126. @utu
    I have looked at the paper you linked (Johnson et al., "Still just 1 g: Consistent results from five test batteries") and had to give up when I got to the Table 1 where correlations greater than 1 are reported. What some people do with factor analysis (FA) never ceases to amaze me. We know FA was invented by psychologists, not mathematicians. We know that FA does not produce unique solutions; "uniqueness" is enforced by often arbitrary constraints. There is a list of procedures that can be used to accomplish this, such as various types of rotation. Each procedure is well defined, but the parameters that constrain it must be selected, and different parameters, like different procedures, produce different results. At each step between procedures decisions are made, and those decisions are subjective and often arbitrary; often it is more art than science. But I did not know that in the end product correlations can be larger than 1. That is really new to me. By definition a correlation cannot be larger than 1 or smaller than -1; the formula for correlation cannot produce numbers outside the [-1, +1] interval. However, when you do not have raw data (scores from tests, which was the case for this paper: "We did not have access to individual participant data of any kind") and you use only the metadata of covariance matrices, the correlations are derived from other formulas. Still, these formulas will give exactly the correct correlation if they are carried out correctly, with all conditions fulfilled. When you start doing oblique rotations that destroy the mutual orthogonality of the factors (what the point of that is also never ceases to amaze me), you may create conditions among the intermediate variables under which the derivation of the correlation fails. That is my diagnosis! What should you do when it happens? Discard the results and start over.

    How can I accept their result "Still just 1 g", that the correlations between g's from different test batteries are very high, when their results are mathematically inadmissible? They talk about it; to them it is just a feature, not a failure of the procedure. Anyway, I have seen several papers comparing g's from various batteries, sometimes just via congruence (the correlation between loadings) and sometimes via the correlation between test scores. The conclusions differ. For example, here:

    Are cognitive g and academic achievement g one and the same g? An exploration on the Woodcock–Johnson and Kaufman tests
    http://scottbarrykaufman.com/wp-content/uploads/2012/02/Kaufman-et-al.-2012.pdf
     
    they concluded: "In the title of this paper, however, we posed the question: Is COG-g and ACH-g one and the same g? The answer to this question is no. They are highly related, yet distinct constructs."

    It seems that the results depend on what researchers bring into the research in their heads. If they want the g's to be alike, they are alike; when they want the g's not to be alike, they are not alike. Nature seems very pliable and merciful to scientists, at least in this field: it gives them what they want. So one should ask what kind of science this is. FA is part of the problem, or rather the psychometricians who do not understand what FA really is from the mathematical standpoint.

    I could rant about FA some more. Instead I add a link to a book:

    Factor Analysis: Healing an Ailing Model
    http://www.univerlag.uni-goettingen.de/bitstream/handle/3/isbn-978-3-86395-133-7/Ertel_factor.pdf?sequence=1
     

    “Exploratory factor analysis has never been developed to anything approaching its full promise and potential, despite the eighty-year history of its efforts …”
     

    “How curious … that we are so little further forward in our understanding of the psychology of individual differences as a result of these advances … Can anyone identify a single publication in the last 50 years in which the use of factor analysis has led to counter-intuitive, or surprising, or genuinely enlightening outcomes?”
     

    "the situation of factor analytical research helps understand where the calamity comes from. An “unease in factor analysis” is generally ascribed to an arbitrariness of procedural decision taking. Arbitrariness occurs when variables for correlations are selected, when samples of individuals are formed, when the number factors to be extracted are determined, when the choice between orthogonal or oblique rotation is made, and when one rotation procedure is selected from among a large number of options"
     
    and this article:

    OBJECTIVITY IN FACTOR ANALYSIS http://journals.sagepub.com/doi/pdf/10.1177/001316445801800303
     

    One peculiarity of contemporary factor analysis is its subjectivity. In most statistical work two persons who start with the same data and calculate correctly will reach the same answer. This is not necessarily the case for factor analysis. This remains a method which depends upon arbitrary judgments by the investigator, so that skill is acquired only after long experience in estimating communalities, deciding upon the number of factors to be extracted, selecting pairs of factors for rotation, and so on. This emphasis upon human judgment seems to have resulted because the psychologist has played a larger part in its development than the mathematician.
     
    Also here is an interesting and funny example of when FA produces nonsense:

    Derivation of Theory by Means of Factor Analysis or Tom Swift and his Electric Factor Analysis Machine
    https://dspace.mit.edu/bitstream/handle/1721.1/47256/derivationoftheo00arms.pdf?sequence=1
     
    ___________
    "Optimally, one switch to using a ratio scale for GCA, but there seems to be little progress towards this goal."

    I would like to look into it. Thanks for bringing it up.

    when I got to the Table 1 where correlations greater than 1 are reported

    I think you mean Table 2 (on page 88, page 8 of the PDF I saw). That said, what is going on there? Each of the >1 results is footnoted to the effect of: “The g-factor correlation could be reduced to 1.00 by allowing correlations between the residuals for the [... factors/tests]; see Fig. 1.” But I don’t know how to interpret that. Can someone who understands the methodology please comment and explain what a correlation >1 is supposed to mean mathematically here? (I think they are adding up correlations computed from the variances between multiple pairs of correlated variables. Is this considered reasonable? It seems guaranteed to give an overestimate, possibly a substantial one if the variables are as intercorrelated as cognitive tests tend to be.)

    The text talks about “excess correlations” and states: “Failure to acknowledge this common variance was indicated by correlations between second-order g factors in excess of 1.00, ”

    The text also states: “We thus did not directly measure or test the correlations among the batteries as we could always recognize further such covariances and likely would eventually reduce the correlations among the g factors substantially. ”
    But why do this rather than just measuring correlations directly?

    We also have: “We summarize them in the bottom part of Table 2 for the models allowing no residual or cross-battery correlations, which explains the presence in the table of factor correlations in excess of 1.00. We also note the residual and cross-battery correlations necessary to reduce any correlations in excess of 1.00 to 1.00. In no case did we add residual or cross-battery correlations in any situation in which a g correlation was not in excess of 1.00.”
    This seems questionable to me. If your technique is overestimating the correlations that come out >1 and they need to be corrected, why assume it is not overestimating any of the other correlations? Not to mention, why choose to reduce them to exactly 1? It seems highly unlikely that two distinct variables actually have a correlation of exactly 1.

    P.S. I’m confused by this. I would normally take things said by Thomas Bouchard at face value, but this methodology troubles me.

    P.P.S. I read utu’s comment (and the paper) more closely and the lack of individual data and required analysis from the covariance matrices seems like a reasonable explanation for why this methodology was used. That leaves only the question of whether it is generally considered valid. Perhaps a pointer to the relevant portion of a Stats 101 textbook would be helpful (I don’t recall this technique being covered in any of my statistics classes though, so perhaps a more advanced textbook would be appropriate).

    • Replies: @Emil Kirkegaard
    , @utu
    You do not need individual data if you want the correlation between two g's from two different test batteries. However, you do need the covariance matrix between the tests of the two batteries, as well as the means and variances of all the tests. You get g1 from battery A and g2 from battery B. Each is a linear combination of tests: g1 = a1*A1 + a2*A2 + ... and g2 = b1*B1 + b2*B2 + ..., where Ai and Bj are the tests that produced the covariance matrices and ai and bj are the coefficients produced by the FAs. From these linear equations you can get the SDs of g1 and g2, and you can calculate the covariance of g1 and g2, provided that you have the individual covariances cov(Ai,Bj) between the tests Ai and Bj:

    cov(g1,g2) = sum over all i,j of ai*bj*cov(Ai,Bj), and then cor(g1,g2) = cov(g1,g2)/SD(g1)/SD(g2).

    This will always produce a correct value for the correlation. I think, or I hope, they did it this way and that those results are in the upper part of Table 2. What is in the lower part beats me. Why did they think it was relevant to list values that are greater than 1? "Oh, look, the correlation is 1.07, it means we are doing really well. And, by the way, we can make it 1.00 if you want." It is pretty silly. I think it indicates some degree of mindlessness, as well as impunity, among the practitioners of the art called factor analysis in the service of Spearman's single-factor dogma.
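The recipe above is easy to check numerically. The sketch below uses simulated scores and made-up weights (nothing here is the paper's data); it shows that a composite correlation computed only from the covariance blocks matches the correlation of the individual-level composites exactly, and so can never leave [-1, 1]:

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated raw scores for two batteries, A (3 tests) and B (4 tests),
# sharing a common factor. Weights are made up, standing in for FA weights.
n = 10_000
f = rng.normal(size=n)
A = 0.7 * f[:, None] + rng.normal(size=(n, 3))
B = 0.6 * f[:, None] + rng.normal(size=(n, 4))

a = np.array([0.5, 0.3, 0.2])                 # stand-ins for a_i
b = np.array([0.4, 0.3, 0.2, 0.1])            # stand-ins for b_j

g1 = A @ a
g2 = B @ b

# Composite correlation from covariances alone:
#   cov(g1,g2) = sum_ij a_i * b_j * cov(A_i, B_j)
C = np.cov(A.T, B.T)                          # joint 7x7 covariance matrix
cov_g1g2 = a @ C[:3, 3:] @ b                  # cross-battery block
sd_g1 = np.sqrt(a @ C[:3, :3] @ a)
sd_g2 = np.sqrt(b @ C[3:, 3:] @ b)
r_from_cov = cov_g1g2 / (sd_g1 * sd_g2)

# It matches the correlation of the individual-level composites to
# floating-point precision, and is necessarily within [-1, 1].
r_direct = np.corrcoef(g1, g2)[0, 1]
```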
  127. @utu

    “going back to Thomson and Thorndike, and including Stephen J Gould and Cosma Shalizi.”
    “And all of these claims have come to ruin ” – they all made very legitimate points.

    Perhaps you could point me to a legitimate point made by Stephen Jay Gould? I read The Mismeasure of Man and found it a classic example of a rhetorical polemic lacking in substance. The best part was where he misrepresented Morton’s skull work and then, in a later edition, added a mea culpa footnote. But even that wasn’t enough. Here is an article discussing some follow-up work checking Gould’s analysis of Morton’s work: http://discovermagazine.com/2012/jan-feb/59

    • Agree: Wizard of Oz
    • Replies: @utu
    I agree with you about the Gould arguments; he should not be on the list. I copied and pasted it.
  128. @utu

    SEM fitting problems
    SEMs can produce odd results like that. It is not unique to the analysis of cognitive data, as a quick googling will show you. Usually one just needs to loosen some minor constraint. That was also true for this paper, where there were correlated errors between group factors from different tests. Indeed, these correlated errors are not very surprising.

    It is not wise to attack this paper on such grounds when there is a large body of congruent evidence on this question, some of which is cited in the paper.

    The numbers are not correlations; they are standardized paths. Similar, but not the same. The authors were speaking loosely, which is fine because the two have the same scale. Ordinary correlations of course cannot exceed 1, but paths from an SEM can when assumptions are violated or there is some other problem.
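For readers wondering how a standardized coefficient can escape [-1, 1] at all, ordinary regression shows the same arithmetic under severe collinearity with a suppressor structure, which is one of the problem cases alluded to here. A minimal sketch with simulated data (plain OLS, not SEM, purely as an analogue):

```python
import numpy as np

rng = np.random.default_rng(1)

# Two highly collinear predictors and an outcome built from their difference.
n = 5_000
x1 = rng.normal(size=n)
x2 = 0.98 * x1 + np.sqrt(1 - 0.98**2) * rng.normal(size=n)  # corr(x1, x2) ~ .98
y = x1 - x2 + 0.1 * rng.normal(size=n)

def standardize(v):
    return (v - v.mean()) / v.std()

# Standardized (beta) coefficients from an ordinary least-squares fit.
X = np.column_stack([standardize(x1), standardize(x2)])
beta, *_ = np.linalg.lstsq(X, standardize(y), rcond=None)

# The simple correlations with y are bounded by 1, as always ...
r1 = np.corrcoef(x1, y)[0, 1]
r2 = np.corrcoef(x2, y)[0, 1]
# ... but the standardized partial coefficients in beta blow well past 1
# in magnitude, even though nothing is "miscalculated".
```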

    Factor uniqueness
    Citing a 1958 paper is more than a little sophistic. Perhaps you have taken inspiration from the master of this dark art, Peter H. Schönemann. He is typical of the purist type of critic, of which Shalizi is another example.

    The lack of unique solutions or scores is not generally a practical problem, because the estimates are usually very similar; typically the correlations between the scores are >.999. Principal components analysis produces unique results, but they, too, are about the same as the factor-analytic ones. For practical purposes: ¯\_(ツ)_/¯
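The claim that factor scores and principal component scores are nearly interchangeable can be checked on simulated one-factor data. The sketch below uses principal-axis factoring (one classic FA variant) with regression-method scores; the exact correlation varies with the simulation, but it typically lands very close to 1:

```python
import numpy as np

rng = np.random.default_rng(2)

# One-factor simulated battery: 8 "tests", each loading ~0.7 on a common factor.
n = 2_000
f = rng.normal(size=n)
X = 0.7 * f[:, None] + 0.7 * rng.normal(size=(n, 8))

Z = (X - X.mean(axis=0)) / X.std(axis=0)      # standardized test scores
R = np.corrcoef(X, rowvar=False)              # correlation matrix

# Principal-axis factoring, one factor: iterate the communalities on the
# diagonal of the reduced correlation matrix.
h = 1 - 1 / np.diag(np.linalg.inv(R))         # start from squared multiple correlations
for _ in range(200):
    Rr = R.copy()
    np.fill_diagonal(Rr, h)
    w, v = np.linalg.eigh(Rr)
    loadings = v[:, -1] * np.sqrt(w[-1])      # first-factor loadings
    h = loadings ** 2

# Regression-method factor scores vs. first principal component scores.
fa_scores = Z @ np.linalg.solve(R, loadings)
pc_scores = Z @ np.linalg.eigh(R)[1][:, -1]

# Up to an arbitrary sign, the two score vectors are nearly identical.
r = abs(np.corrcoef(fa_scores, pc_scores)[0, 1])
```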

    For those historically inclined, Art Jensen replied to this sort of sophistry nearly 35 years ago:

    Jensen, A. R. (1983). The definition of intelligence and factor score indeterminacy. Behavioral and Brain Sciences, 6, 313–315.

    Jensen, A. R. (1987). The g beyond factor analysis. In R. R. Ronning, J. A. Glover, J. C. Conoley, and J. C. Witt (Eds.), The influence of cognitive psychology on testing (pp. 87–142). Hillsdale, NJ: Erlbaum.

    Jensen, A. R. (1987). Differential psychology: Towards consensus. In M. Modgil and C. Modgil (Eds.), Arthur Jensen: Consensus and controversy. London: Falmer Press.

    GCA vs. achievement test
    Your paper is about an achievement test, not a GCA test. The point of the paper is that these are not the same construct, and indeed, who said they were? An earlier paper found correlations of .73 and .86 between scholastic tests and GCA tests. There are more papers of this type.

    https://www.ncbi.nlm.nih.gov/pubmed/15147489

    • Replies: @utu
    "The lack of unique solutions or scores is not generally a practical problem because the estimates are usually very similar."

    Very vague. Non-uniqueness means that you can get pretty much whatever you want from within the infinite multitude of solutions.

    If Spearman had postulated that there are two intelligence factors, say mental energy and mental momentum, then after a proper rotation of the eigenvectors one could get two significant factors every time, from every set of data. Spearman's postulate would be confirmed every time: there are mental energy and mental momentum, and they do not correlate (after an orthogonal rotation) or they do correlate (after an oblique rotation). If you do not want a correlation, you can get that; if you do want a correlation, you can get that as well. This "science" is very flexible and accommodating. His followers would be trained to select the set of procedures (rules of factor selection and various rotations) that leads to a two-factor solution with the expected properties every time. This would establish a new practice, and Spearman's epigones, like yourself, when challenged on the issue of non-uniqueness, would write the same statement you have just written: "The lack of unique solutions or scores is not generally a practical problem because the estimates are usually very similar." The results would be "very similar" because different "researchers" would follow the same established practice of forcing the solution into the same preselected region of the infinite space of solutions. Non-uniqueness means that all your arguments could be applied by opponents of your dogma, if they were as dogmatic as you are and claimed that there are exactly two factors, not one. Your true opponents, however, are not dogmatic. They just think that you are fooling yourself, which is why you are mostly ignored by mainstream science.

    Sir, one cannot escape the fundamental problem of non-uniqueness. You are defending the indefensible.
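Whatever one makes of the rhetoric, the non-uniqueness being argued about here is the classical rotation indeterminacy, and it can be stated in a few lines: any orthogonal rotation of a loading matrix implies exactly the same common-variance structure, so the data alone cannot prefer one rotation over another. A sketch with a purely hypothetical loading matrix:

```python
import numpy as np

# A made-up 6-test, 2-factor loading matrix (purely illustrative).
L = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.6, 0.3],
              [0.1, 0.8],
              [0.2, 0.7],
              [0.3, 0.6]])

# A 30-degree orthogonal rotation of the factor axes.
theta = np.pi / 6
T = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L_rot = L @ T

# The rotated loadings look quite different ...
assert not np.allclose(L, L_rot)
# ... yet imply exactly the same common-variance matrix L @ L.T, so no
# amount of data can distinguish one rotation from another.
assert np.allclose(L @ L.T, L_rot @ L_rot.T)
```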
  129. @res


    Maybe try. This is a somewhat common problem as googling will show you. SEM is picky about the exact modeling choice.

    http://www.ssicentral.com/lisrel/techdocs/HowLargeCanaStandardizedCoefficientbe.pdf

    • Replies: @res

  130. @Emil Kirkegaard

    Thanks for the explanation and link, Emil. Using your info I also found
    Interpreting the Results from Multiple Regression and Structural Equation Models

    http://www.structuralequations.com/resources/GraceandBollen2005ESAB.pdf

    which goes into some more detail. One problem with Googling is that it can be hard when you don’t know the correct terms and your source is using sloppy terminology.

    Not sure what you meant by “Maybe try”, but I think it is reasonable to object to their presentation of the SEM results on multiple grounds:
    1. The sloppy terminology as you observed (or as you said, speaking loosely). Is using the term “correlation” the norm in presenting SEM results? I think the authors of the link I gave would object to this.
    2. You noted above: “Ordinary correlations of course cannot exceed 1 but paths from SEM can when assumptions are violated or there is some other problem.” Does this violation/problem not require additional investigation or elaboration to confirm validity of the model?
    3. The correction of >1 results to 1. Is this common practice?
    4. The complete lack of correction of <1 results given points 2 and 3.

    Am I right to say that as someone more familiar than I am with SEMs you have no problem with their Table 2 presentation including the terminology used?

    Agreed that using this issue to attack the entire paper is questionable and to attack the entire body of IQ research is unreasonable.

    P.S. Thanks for addressing the rest of utu’s objections.

  131. @res

    “going back to Thomson and Thorndike, and including Stephen J Gould and Cosma Shalizi.”
    “And all of these claims have come to ruin ” – they all made very legitimate points.
     
    Perhaps you could point me to a legitimate point made by Stephen Jay Gould? I read The Mismeasure of Man and found it a classic example of a rhetorical polemic lacking in substance. The best part was where he misrepresented Morton's skull work and in a later edition added a mea culpa footnote. But even that wasn't enough. Here is an article discussing some follow-up work that checked Gould's analysis of Morton's data: http://discovermagazine.com/2012/jan-feb/59

    I agree with you about Gould's arguments; he should not be on the list. I copied and pasted it.

  132. @Emil Kirkegaard
    SEM fitting problems
    SEMs can produce odd results like that. It's not unique to analysis of cognitive data as a quick googling will show you. Usually one will just need to loosen some minor constraint. This was also true for this paper where there were correlated errors between group factors from different tests. Indeed, these correlated errors are not very surprising.

    It is not wise to attack this paper on such grounds when there is a large body of congruent evidence on this question, some of which is cited in the paper.

    The numbers are not correlations, they are standardized paths. Similar but not the same. The authors were speaking loosely which is fine because these have the same scale. Ordinary correlations of course cannot exceed 1 but paths from SEM can when assumptions are violated or there is some other problem.

    Factor uniqueness
    Citing a 1958 paper is more than a little sophistic. Perhaps you have gotten inspiration from the master of this dark art: Peter H. Schönemann. He is typical of the purist type critic of which Shalizi is another example.

    The lack of unique solutions or scores is not generally a practical problem because the estimates are usually very similar. Typically the correlations are >.999 for the scores. Principal components analysis produces unique results, but they are also about the same as the factor analysis ones. For practical purposes: ¯\_(ツ)_/¯

    For those historically inclined, Art Jensen replied to this sort of sophistry nearly 35 years ago:

    Jensen, A. R. (1983). The Definition of Intelligence and Factor Score Indeterminacy. Behavioral and Brain Sciences, 6, 313—315.

    Jensen, A. R. (1987). The g beyond factor analysis. In R. R. Ronning, J. A. Glover, J. C. Conoley, and J. C. Witt (Eds.), The influence of cognitive psychology on testing. Hillsdale, NJ: Erlbaum. Pp. 87—142.

    Jensen, A. R. (1987). Differential psychology: Towards consensus. In M. Modgil and C. Modgil (Eds.), Arthur Jensen: Consensus and controversy. London: Falmer Press.

    GCA vs. achievement test
    Your paper is about an achievement test, not a GCA test. The point of the paper is that these are not the same construct, and indeed, who said they were? An earlier paper found correlations of .73 and .86 between scholastic tests and GCA tests. There are more papers of this type.

    https://www.ncbi.nlm.nih.gov/pubmed/15147489

    “The lack of unique solutions or scores is not generally a practical problem because the estimates are usually very similar.”

    Very vague. Non-uniqueness means that you can get pretty much whatever you want from within the infinite multitude of solutions.

    If Spearman had postulated two intelligence factors, say mental energy and mental momentum, then after a proper rotation of eigenvectors one could get two significant factors every time, from any set of data. Spearman’s postulate would be confirmed every time: there is mental energy and there is mental momentum, and they do not correlate (after an orthogonal rotation) or they do correlate (after an oblique rotation). If you do not want a correlation, you can get that; if you do want a correlation, you can get that as well. This “science” is very flexible and accommodating. His followers would be trained to select the set of procedures (rules of factor selection and various rotations) that leads to a two-factor solution, with the expected properties, every time. This would establish a new practice, and Spearman’s epigones, like yourself, when challenged on the issue of non-uniqueness, would write exactly the statement you have just written: “The lack of unique solutions or scores is not generally a practical problem because the estimates are usually very similar.” Results would be “very similar” because different “researchers” would follow the same established practice of forcing the solution into the same preselected region of the infinite space of solutions.

    Non-uniqueness means that all your arguments could be turned against your dogma by opponents who were as dogmatic as you are and who claimed that there are exactly two factors, not one. Your true opponents, however, are not dogmatic. They just think you are fooling yourself, which is why you are mostly ignored by mainstream science.

    Sir, one cannot escape from the fundamental problem of the non-uniqueness issue. You defend the indefensible.
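    The rotation point being argued here can be illustrated concretely. In the sketch below (invented loadings; Python with NumPy, not any package used in the thread), a two-factor loading matrix is rotated by an arbitrary orthogonal matrix, and the implied common-variance structure, and hence the fit to the observed correlations, is exactly unchanged:

```python
import numpy as np

# Hypothetical two-factor loading matrix for six tests.
L = np.array([[0.8, 0.1],
              [0.7, 0.2],
              [0.6, 0.3],
              [0.2, 0.7],
              [0.3, 0.6],
              [0.1, 0.8]])

# Any rotation angle yields an equally valid pair of factors.
theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
L_rot = L @ Q  # rotated loadings

# Both solutions reproduce the identical common-variance matrix L @ L.T,
# so the observed correlations cannot distinguish between them.
indistinguishable = np.allclose(L @ L.T, L_rot @ L_rot.T)
```

    Since Q is orthogonal, Q @ Q.T is the identity, and L @ Q @ Q.T @ L.T collapses back to L @ L.T; this algebraic fact is the non-uniqueness being debated.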

  133. @res

    when I got to the Table 1 where correlations greater than 1 are reported
     
    I think you mean Table 2 (on page 88, page 8 of the PDF I saw). That said, what is going on there? Each of the >1 results is footnoted to the effect of: "The g-factor correlation could be reduced to 1.00 by allowing correlations between the residuals for the [... factors/tests]; see Fig. 1." But I don't know how to interpret that. Can someone who understands the methodology please explain what a correlation >1 is supposed to mean mathematically here? (I think they are adding up correlations computed from the variances between multiple pairs of correlated variables. Is this considered reasonable? It seems guaranteed to give an overestimate, possibly a substantial one if the variables are as intercorrelated as cognitive tests tend to be.)

    The text talks about "excess correlations" and states: "Failure to acknowledge this common variance was indicated by correlations between second-order g factors in excess of 1.00, "

    The text also states: "We thus did not directly measure or test the correlations among the batteries as we could always recognize further such covariances and likely would eventually reduce the correlations among the g factors substantially. "
    But why do this rather than just measuring correlations directly?

    We also have: "We summarize them in the bottom part of Table 2 for the models allowing no residual or cross-battery correlations, which explains the presence in the table of factor correlations in excess of 1.00. We also note the residual and cross-battery correlations necessary to reduce any correlations in excess of 1.00 to 1.00. In no case did we add residual or cross-battery correlations in any situation in which a g correlation was not in excess of 1.00."
    This seems questionable to me. If the technique overestimates the correlations that exceed 1 and they need to be corrected, why assume it is not overestimating any of the other correlations? And why reduce them to exactly 1? It seems highly unlikely that two distinct variables actually have a correlation of 1.

    P.S. I'm confused by this. I would normally take things said by Thomas Bouchard at face value, but this methodology troubles me.

    P.P.S. I read utu's comment (and the paper) more closely and the lack of individual data and required analysis from the covariance matrices seems like a reasonable explanation for why this methodology was used. That leaves only the question of whether it is generally considered valid. Perhaps a pointer to the relevant portion of a Stats 101 textbook would be helpful (I don't recall this technique being covered in any of my statistics classes though, so perhaps a more advanced textbook would be appropriate).

    You do not need individual data to get the correlation between two g’s from two different test batteries. You do, however, need the covariance matrix between the tests of the two batteries, plus the means and variances of all tests. You get g1 from battery A and g2 from battery B. Each is a linear combination of tests: g1 = a1*A1 + a2*A2 + … and g2 = b1*B1 + b2*B2 + …, where Ai and Bj are the tests that produced the covariance matrices A and B, and ai and bi are the coefficients produced by the factor analyses. From these linear equations you can get the SDs of g1 and g2, and you can calculate their covariance provided you have the individual covariances cov(Ai,Bj) between tests Ai and Bj:

    cov(g1,g2) = the sum over all i, j of ai*bj*cov(Ai,Bj), and then cor(g1,g2) = cov(g1,g2) / (SD(g1)*SD(g2)).

    This will always produce the correct value of the correlation. I think, or I hope, they did it this way and that the results are in the upper part of Table 2. What is in the lower part beats me. Why did they think it was relevant to list values greater than 1? “Oh look, the correlation is 1.07, which means we are doing really well. And, by the way, we can make it 1 if you want.” It is pretty silly. I think it indicates some degree of mindlessness, as well as impunity, among practitioners of the art called factor analysis in the service of Spearman’s single-factor dogma.
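    The recipe above can be written out in a few lines. This is a sketch with invented numbers (the loadings a, b and the covariance matrices A, B, C are all hypothetical), just to show the mechanics of cov(g1,g2) = Σ ai*bj*cov(Ai,Bj):

```python
import numpy as np

# Hypothetical within-battery covariance matrices (two tests each).
A = np.array([[1.0, 0.6],
              [0.6, 1.0]])
B = np.array([[1.0, 0.5],
              [0.5, 1.0]])
# Cross-battery covariances: C[i, j] = cov(Ai, Bj).
C = np.array([[0.55, 0.45],
              [0.50, 0.40]])

# Loadings from the two factor analyses (also invented).
a = np.array([0.7, 0.7])  # g1 = a1*A1 + a2*A2
b = np.array([0.6, 0.8])  # g2 = b1*B1 + b2*B2

cov_g = a @ C @ b                # sum over i, j of ai*bj*cov(Ai, Bj)
sd_g1 = np.sqrt(a @ A @ a)       # SD of the linear combination g1
sd_g2 = np.sqrt(b @ B @ b)       # SD of the linear combination g2
cor_g = cov_g / (sd_g1 * sd_g2)  # about 0.60 for these numbers
```

    Computed this way from covariances that come from actual joint data, the correlation cannot exceed 1, which is exactly the objection to the values in the lower part of Table 2.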

  134. @res

    1.
    Textbooks on SEM are often written by purist, mathematician-type people, but research papers are usually written by people who don’t care too much. The standardized paths will be bounded by -1 and 1 unless there is a problem. As such, they are on the same scale as correlations and have the same interpretation, but they are calculated differently and, in problem cases, are not bounded by -1 and 1. Should we call them correlations or not?

    Some people also object to calling principal components analysis a kind of factor analysis, yet the empirical results of the two methods are usually indistinguishable. These people will say that the math behind them is very different, and it is, but the results are not.

    In general, I tend to favor speaking loosely when the distinction is one without practical significance.

    2.
    Sure, which they did.

    3.
    Somewhat common. In psychometric meta-analysis, adjusted correlations sometimes exceed 1 and are then reduced to 1. Confidence intervals are sometimes calculated in lazy ways that let them run past the natural boundaries of a bounded quantity, e.g. a fraction; people often just truncate them at the end of the range.

    4.
    I have no particular problems with that study. After all, it just confirmed things we knew from a bunch of other studies with a fancier method in a fancier dataset.
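    The point that standardized coefficients can leave the [-1, 1] range has a simple analogue in ordinary regression. With highly collinear predictors, a standardized regression weight can exceed 1 even though every pairwise correlation is perfectly legal. The correlations below are invented, but they form a valid (positive-definite) correlation matrix:

```python
import numpy as np

# Predictors x1, x2 correlate .9 with each other;
# they correlate .8 and .5 with the outcome y.
Rxx = np.array([[1.0, 0.9],
                [0.9, 1.0]])
rxy = np.array([0.8, 0.5])

# Standardized regression weights: beta = Rxx^{-1} @ rxy.
beta = np.linalg.solve(Rxx, rxy)
# beta[0] is about 1.84: on the same scale as a correlation,
# but not bounded by 1 once the predictors are this collinear.
```

    In SEM the mechanism is analogous: when near-redundant indicators or model misspecification enter, a standardized path can pass 1 without any single correlation doing so.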

  135. @utu

    Lazy self-righteous critics are the worst.

    I used all combinations of factor-analytic methods in the psych package, 30 in total, to extract g from the VES dataset, which has 19 tests. I then correlated the resulting scores; the mean intercorrelation was .997.
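    The .997 figure above came from R's psych package; the same kind of robustness check can be sketched in Python with simulated test data and two deliberately different extraction methods (the dataset and loadings below are simulated, not the VES data):

```python
import numpy as np

rng = np.random.default_rng(0)
n_people, n_tests = 2000, 19
g = rng.normal(size=n_people)                   # latent general factor
loadings = rng.uniform(0.4, 0.8, size=n_tests)  # positive manifold
tests = np.outer(g, loadings) + rng.normal(size=(n_people, n_tests)) * 0.7

# Method 1: scores on the first principal component of the correlations.
corr = np.corrcoef(tests, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
pc1_scores = tests @ eigvecs[:, -1]             # largest eigenvalue is last

# Method 2: crude unit-weighted composite (every test weighted equally).
sum_scores = tests.sum(axis=1)

r = abs(np.corrcoef(pc1_scores, sum_scores)[0, 1])
# r comes out around .99: very different methods, nearly identical scores.
```

    Even a method as crude as an unweighted sum tracks the principal-component scores almost perfectly when the tests form a positive manifold, which is the practical sense in which the choice of extraction method rarely matters.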

  136. Perhaps those radically sceptical about IQ testing, g and anything but multiple intelligences with little correlation could offer a view as to where you go when you and your parent and teacher friends observe the following and ask the related follow up questions

    1. There are bright and even brighter kids who started showing they were smart from age 2 or earlier and went on standing out not only in everything to do with words and numbers but in learning the rules of games, remembering the shopping list and finding their way home etc.

    So what are the simplest, quickest tests that will help you decide which kids should get a scholarship, to make sure that lack of money doesn’t stop them getting the preparation needed for university entry? Or to decide that they should be given a push toward a good trade education, or accounting, say?

    2. There are kids who, like their parents and siblings very often, have very poor memories, can’t follow directions well, can’t understand the rules of games etc.

    What are the simplest, quickest tests for separating those who are worth giving extra help, so they can get a regular job one day, from those who are never going to be independent net taxpayers in a literate, modestly numerate society?

    Perhaps these days one should look to neuroscience and memory tests, and thus rely on the important factors of speed of processing and how good the working memory is and how far the individual can find adequate workarounds.

    Any chance your most useful single score would look like g?

  137. @Emil Kirkegaard

    “psych package” – this may explain your results, if you indeed carried out the 30 different extractions of g. Non-uniqueness is a fundamental issue. The original sin of factor analysis was committed at its conception by Spearman. He wanted to get one factor, so he formulated the problem mathematically with only one factor. No other factors were even considered, because they could not exist under the definition of the problem as Spearman posed it. So, in a sense, he had a unique solution. Only 20 or so years later was the mathematics of the problem expanded, and the issue of non-uniqueness popped up. Since then, it seems, some of the psychologists behind the factor-analysis project have spent a lot of time minimizing the issue of non-uniqueness (as you do) or hiding it completely. Here is an interesting paper from 1995:

    Spearman and the origin and development of factor analysis (D. J. Bartholomew)

    http://onlinelibrary.wiley.com/doi/10.1111/j.2044-8317.1995.tb01060.x/abstract

    “Psychologists, who were still the main users of factor analysis, continued on their separate way developing the language and mystique of the subject which made the gap between themselves and the statisticians hard to close. What seemed to the psychologists like genuine science as distinct from mathematical manipulation, appeared to others as subjective, arbitrary and essentially unscientific.”

    This may describe you:

    “Spearman had become stuck in a methodological cul-de-sac but it has to be added that he was not concerned with developing a new multivariate technique. He was a psychologist interested in human ability and his methodological concerns were subservient to that end.”

    Non-uniqueness implies that the question of whether there is one factor or two cannot be resolved mathematically with FA:

    “Without a basis of empirical evidence pointing to a general factor the case for following Spearman, rather than Thurstone, was less than compelling. What is not entirely clear is whether both protagonists clearly understood that a ‘general factor’ interpretation and a ‘multiple factor’ interpretation were not inconsistent and, usually, would be equivalent ways of describing the same factor structure. This means that there is no statistical way of distinguishing between them”

  138. @Matthew Sarraf
    This is an interesting finding. It does nothing to bring into doubt Dr. Woodley of Menie's dysgenic theory, however.

    A paper published shortly after my review, "Selection against variants in the genome associated with educational attainment," finds a substantial decrease of an educational attainment polygenic score in Icelanders over time (the study examines genetic data from 129,808 Icelanders born between 1910 and 1990; this is without question a representative sample): http://www.pnas.org/content/114/5/E727.abstract. Dr. Woodley of Menie notified me of this research and pointed out, as I anticipated upon first hearing of the paper, that the equation that the authors use to convert the polygenic score decline to a per decade IQ point decline, 0.038 x (30/3.74) = 0.30 IQ points, assumes an unrealistically low additive heritability of IQ: 30%. The adult additive heritability of IQ is typically pegged at 80-85%, with the additive heritability of g likely at 85-87%. Thus the Icelandic data in fact indicate a genotypic g decline of 0.81-0.88 points per decade (on an IQ scale; I am using 80% as a conservative estimate of the additive heritability of g and 87% as a realistic estimate to arrive at the 0.81-0.88 range). While already quite close to Dr. Woodley of Menie's estimated g decline of 1-1.5 points per decade, this is only the decline in g from genetic selection. Once we include Dr. Woodley of Menie and Mr. Fernandes' estimated decline in g from mutation accumulation and other sources of damage to developmental stability (in the paper cited as Woodley of Menie & Fernandes, 2016b in my review), 0.16 points per decade, the overall per decade g decline rises to 0.97-1.04 points. Particular demographic changes may add another 0.25 points of g lost per decade, bringing the overall estimated decline in g to 1.22-1.29 points per decade, entirely consistent with what Dr. Woodley of Menie has been saying for years. 
In any case, the decrease in g due to genetic selection, the reality of which is confirmed in the Iceland paper about as directly as possible, is nearly a full point alone. So we find a diminution of g in the 1-1.5 points per decade range without availing ourselves of reaction time data, and a loss of g per decade nearly in that range even if we assume that only genetic selection is depressing g. Assume, arguendo, that the decadal reduction of g has been the mere 0.81 points per decade arrived at above with the 80% heritability estimate. Ignore all other possible contributory factors. 0.81 points of g lost a decade from 1850 to 2010 would amount to a total reduction of 12.96 points -- quite alarming for a very conservative estimate!

    I have not yet been able to read the study on myopia and visual reaction time (VRT) in detail. But even if myopia goes with longer VRT and myopia is becoming more prevalent (which it is), this would have no bearing on the secular trend toward greater auditory reaction time that Dr. Woodley of Menie and his colleagues have found. I doubt if the changing prevalence of myopia can explain more than a small fraction of the increase in VRT that Dr. Woodley of Menie and his colleagues have documented. Even when significant slowing is added to Galton's VRT samples, the remaining retardation of VRT indicates a g loss of ~10 points (on an IQ scale). As I argue in my review, attacking the dysgenic theory by picking at individual data sets and indicators is unlikely to bear fruit -- the nomological net of evidence for the theory is very robust, especially now that we have the aforementioned genetic selection data, and so is not likely to be undone without a parsimonious alternative explanation of declines in the various indicators that together seem to have nothing in common apart from a relation to g. If myopia decreases color acuity, increasing rates of myopia may explain why the estimate of dysgenesis on g from color acuity is much too high. But I am optimistic that some of the decline in color acuity is due to temporal reduction of g. Note that I do not suggest in my review that Dr. Woodley of Menie's research has made certain the precise magnitude of declines in g, only that it has shown that significant declines in g have been almost certainly occurring. With good genetic selection data now at hand, we are moving in on a more concrete estimate, which is probably in the 1-1.5 points of g lost per decade range that, as previously stated, Dr. Woodley of Menie predicted years ago.

    My understanding is that additive (i.e. narrow-sense) heritability is about .6 and broad-sense heritability is about .8-.85.

    Your “conservative estimate” of a decline of 12.96 IQ points would, putting aside the Flynn effect, produce a roughly 92% decline in the share of people with IQs over 145, and a decline of about 96% in those over 160. This strains credibility, for several reasons. First, the Flynn effect mainly impacted the lower echelons of IQ. Second, both math and science continue to progress. Third, this rate of decline does not accord with demographic records of differential fertility among different IQ strata.

    On the other hand, if this sort of hyperbole functions to shake the Western establishment out of its nihilistic slumbers, it may be that the end justifies the means…
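    Tail percentages of this kind can be checked under an explicitly simple model. Assume a pure 12.96-point downward shift of a normal distribution with the SD held fixed at 15 (both assumptions are mine, for illustration only); the thinning of the right tail is then:

```python
from math import erf, sqrt

def upper_tail(z: float) -> float:
    """P(Z > z) for a standard normal Z."""
    return 0.5 * (1.0 - erf(z / sqrt(2.0)))

mean_before, mean_after, sd = 100.0, 100.0 - 12.96, 15.0

share_before = upper_tail((145 - mean_before) / sd)  # fraction above 145 now
share_after = upper_tail((145 - mean_after) / sd)    # fraction after the shift
decline_145 = 1.0 - share_after / share_before       # about 96%
```

    This simple version gives roughly 96% at IQ 145; the exact percentage depends on the assumed shift, SD, and distributional shape, which is why quoted figures in the low-to-mid 90s vary.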

    • Replies: @Matthew Sarraf
    As the Kong et al. 2017 data make clear, "credibility" intuitions radically underestimate the degree to which genetic selection reduces general intelligence.

    Heiner Rindermann has done simulations, which I believe are soon to be published in a research monograph, showing that quite mild inverse correlations between IQ and fertility predict a 1 point per decade decline in general intelligence. The common underestimate of a decline in IQ of 0.3 points per decade seems to result from failure to account for the fact that IQ relates not only to the number of children one has but also to the timing of procreation.

    The heritability of general intelligence is typically thought to be exclusively due to additive genetic effects, as the first sentence of this paper indicates (though this paper seeks to challenge that consensus): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3276760/.

    Your claim that progress in math and science has "continued" is at minimum misleading. I recommend that you read Charles Murray's Human Accomplishment (2003) to appreciate the substantial slowing of intellectual progress since the mid-19th century. Also see the following post, especially the note from the email correspondent at the end: http://charltonteaching.blogspot.com/2014/03/comment-to-greg-cochrane-on-decline-of.html?m=1.
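    The mechanism behind simulations of this sort can be sketched in a few lines. Everything below is a toy model with invented parameters (an IQ-fertility correlation of roughly -0.1, no generation-timing effect, purely additive transmission), so it illustrates the direction of the effect rather than any published estimate:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000
iq = rng.normal(100.0, 15.0, n)   # parent generation
z = (iq - 100.0) / 15.0

# Expected family size falls mildly with IQ (slope chosen so the
# IQ-fertility correlation comes out near -0.1).
expected_children = np.clip(2.0 - 0.15 * z, 0.0, None)
children = rng.poisson(expected_children)

# Offspring inherit the fertility-weighted parental mean (additive model,
# ignoring regression to the mean and assortative mating for simplicity).
next_gen_mean = np.average(iq, weights=children)
decline_per_generation = 100.0 - next_gen_mean  # about 1.1 points
```

    Per generation of roughly 30 years this toy setup loses about 1.1 points, i.e. under 0.4 per decade; accounting for the timing of procreation as well as the number of children is what pushes estimates toward the larger per-decade figures discussed above.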
  139. @Wizard of Oz
    Perhaps those radically sceptical about IQ testing, g and anything but multiple intelligences with little correlation could offer a view as to where you go when you and your parent and teacher friends observe the following and ask the related follow up questions

    1. There are bright and even brighter kids who started showing they were smart from age 2 or earlier and went on standing out not only in everything to do with words and numbers but in learning the rules of games, remembering the shopping list and finding their way home etc.

    So what are the simplest quickest tests which will help you decide which kids should get a scholarship to make sure that lack of money doesn't stop them getting the preparation needed for to university entry? Or to decide that they should be given a push to a good trade education, or accounting say?

    2. There are kids who, very often like their parents and siblings, have very poor memories, can't follow directions well, can't understand the rules of games, and so on.

    What are the simplest, quickest tests for separating those who are worth giving extra help, so that they can hold a regular job one day, from those who are never going to be independent net taxpayers in a literate, modestly numerate society?

    Perhaps these days one should look to neuroscience and memory tests, and thus rely on the important factors of speed of processing and how good the working memory is and how far the individual can find adequate workarounds.

    Any chance your most useful single score would look like g?

    The number one test of intellectual prowess is the ability to visualize movement in one’s mind. Being able to visualize what is going to happen next is the ultimate intelligence. Being able to slice movement into definable steps requires great concentration. That is what Newton, Darwin, Einstein did so well.

    The second test is for the quickness of memory. Being able to accurately access stored subconscious memories is a great mental asset. The education system of today has it all wrong – remembering facts is the number one path to intelligence.

  140. @Craken
    My understanding is that additive (i.e., narrow-sense) heritability is about .6 and broad-sense heritability is about .8-.85.
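
The reason the narrow/broad distinction matters here is that only the additive portion of the variance responds to selection, via the breeder's equation R = h² · S. A minimal sketch, in which the selection differential and generation length are illustrative assumptions of mine, not figures from this thread:

```python
# Breeder's equation: response to selection R = h2 * S, where h2 is
# narrow-sense heritability and S is the selection differential
# (mean of the reproducing parents minus the population mean).
h2 = 0.6          # narrow-sense heritability of IQ, as cited in the comment
S = -1.0          # illustrative selection differential, in IQ points
gen_years = 28.0  # assumed mean generation length, in years

R = h2 * S                        # genotypic change per generation
per_decade = R * 10 / gen_years   # change per decade of calendar time

print(f"per generation: {R:+.2f} IQ points")
print(f"per decade:     {per_decade:+.2f} IQ points")
```

Under these made-up inputs the decline per decade is a fraction of the per-generation response; the point of the sketch is only that projections must use narrow-sense (.6), not broad-sense (.8-.85), heritability.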

    Your "conservative estimate" of a decline of 12.96 IQ points would, putting the Flynn effect aside, produce roughly a 92% decline in the proportion of people with IQs over 145, and about a 96% decline in those with IQs over 160. This strains credulity on its face, for several reasons. First, the Flynn effect mainly affected the lower end of the IQ distribution. Second, both math and science continue to progress. Third, this rate of decline does not accord with demographic records of differential fertility across IQ strata.
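
The tail arithmetic behind percentages like these can be checked directly. A minimal sketch, assuming a normal distribution with SD 15 and applying the full 12.96-point shift to the phenotypic mean (both assumptions of mine, not stated in the comment):

```python
from math import erfc, sqrt

def tail(z):
    """P(Z > z) for a standard normal variable."""
    return 0.5 * erfc(z / sqrt(2))

def decline_above(threshold, mean_before=100.0, shift=12.96, sd=15.0):
    """Fractional decline in the share of people above `threshold`
    when the population mean falls by `shift` points."""
    before = tail((threshold - mean_before) / sd)
    after = tail((threshold - (mean_before - shift)) / sd)
    return 1 - after / before

for t in (145, 160):
    print(f"IQ > {t}: {decline_above(t):.1%} decline")
```

The exact percentages are sensitive to the SD assumed and to whether the full shift is taken as phenotypic or only genotypic, so different choices of those inputs will move the figures by a few points either way.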

    On the other hand, if this sort of hyperbole serves to shake the Western establishment out of its nihilistic slumbers, it may be that the end justifies the means...

    As the Kong et al. 2017 data make clear, “credibility” intuitions radically underestimate the degree to which genetic selection reduces general intelligence.

    Heiner Rindermann has done simulations, which I believe are soon to be published in a research monograph, showing that quite mild inverse correlations between IQ and fertility predict a 1 point per decade decline in general intelligence. The common underestimate of a decline in IQ of 0.3 points per decade seems to result from failure to account for the fact that IQ relates not only to the number of children one has but also to the timing of procreation.
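
The point about timing, and not just number, of children can be illustrated with a toy two-group model. All parameters below are illustrative assumptions of mine, not Rindermann's actual simulation: if the lower-scoring group both has more children and has them younger, its generations turn over faster, which compounds the decline per unit of calendar time.

```python
# Toy model: two subpopulations differing in mean IQ, net reproduction
# rate per generation, and generation length. Population shares grow or
# shrink by `rate` once per `gen_years` of elapsed time.

def mean_iq_after(years, groups):
    """Population mean IQ after `years`, given (iq, share, rate, gen_years)
    tuples for each subpopulation."""
    weighted, total = 0.0, 0.0
    for iq, share, rate, gen_years in groups:
        w = share * rate ** (years / gen_years)  # share after `years`
        weighted += iq * w
        total += w
    return weighted / total

groups = [
    # (mean IQ, initial share, growth per generation, generation length)
    (95.0, 0.5, 1.10, 25.0),   # higher fertility, earlier childbearing
    (105.0, 0.5, 0.90, 30.0),  # lower fertility, later childbearing
]

for decades in (1, 5, 10):
    years = decades * 10
    print(f"after {years:3d} years: mean IQ {mean_iq_after(years, groups):.2f}")
```

Re-running the model with both generation lengths set equal slows the per-decade decline, which is the effect that estimates based only on completed family size would miss.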

    The heritability of general intelligence is typically thought to be exclusively due to additive genetic effects, as the first sentence of this paper indicates (though this paper seeks to challenge that consensus): https://www.ncbi.nlm.nih.gov/pmc/articles/PMC3276760/.

    Your claim that progress in math and science has “continued” is at minimum misleading. I recommend that you read Charles Murray’s Human Accomplishment (2003) to appreciate the substantial slowing of intellectual progress since the mid-19th century. Also see the following post, especially the note from the email correspondent at the end: http://charltonteaching.blogspot.com/2014/03/comment-to-greg-cochrane-on-decline-of.html?m=1.

  141. @Matthew Sarraf
    I took a look at the comment made by the “email correspondent” on Charlton’s blog. I have to say I find it pretty unconvincing. It suggests that today’s mathematicians just don’t measure up to those of, say, the nineteenth century because, in the quite subjective opinion of the correspondent, the achievements of today’s mathematicians seem far less impressive, or took longer to come by.

    But of course this is mostly comparing apples with oranges. Math is vastly more complex today than in previous centuries. A very able math student in, say, the early nineteenth century might have been able to master most of the basic results in mathematics by age 20. That would be simply impossible today: there are too many subspecialties, and each is far too developed in its own right. There is no low-hanging fruit (apples or oranges). This makes it very hard indeed to make fair comparisons between the difficulty of new results today and that of new results in the nineteenth century. If one had to go by the length and complexity of proofs, today's big results would certainly appear much longer on average than those of the nineteenth century. Wiles' proof of Fermat's Last Theorem is one such: so complex was it that it required considerable effort by other mathematicians even to verify it; indeed, his first version of the proof contained a subtle error that he himself didn't catch.

  142. Make life easy, and having a brain becomes less of an advantage, possibly a disadvantage.
