Transhumanism


PAPER REVIEW

***

[Figure: Partridge et al., artificial womb system]

Abstract:

Here we report the development of a system that incorporates a pumpless oxygenator circuit connected to the fetus of a lamb via an umbilical cord interface that is maintained within a closed ‘amniotic fluid’ circuit that closely reproduces the environment of the womb. We show that fetal lambs that are developmentally equivalent to the extreme premature human infant can be physiologically supported in this extra-uterine device for up to 4 weeks. Lambs on support maintain stable haemodynamics, have normal blood gas and oxygenation parameters and maintain patency of the fetal circulation. With appropriate nutritional support, lambs on the system demonstrate normal somatic growth, lung maturation and brain growth and myelination.

This is really cool.

I have been advocating this technology since I started blogging in 2008.

The immediate benefit, which the authors cite, is a reduction in infant mortality caused by extreme prematurity. This is good, though not that big of a deal: such mortality is already very low in First World countries, while poorer countries will probably not be able to afford the technology in any case.

The real promise is in its eugenic potential.

It is common knowledge that the well-educated reproduce less than the poorly educated, and that has resulted in decades of dysgenic decline throughout the developed world. This dysgenic effect has overtaken the Flynn effect. One of the reasons the well-educated, and especially well-educated women, have few or zero children is because it is bad for their career prospects. There are also some women who are just uncomfortable with the idea of pregnancy and childbirth.

There are essentially just a few solutions to this problem:

(1) Do nothing, deny heritability of IQ. Import Afro-Muslims to breed the next generation of doctors and engineers.

(2) Do nothing, hope for a literal deus ex machina solution, such as Musk’s neural lace or superintelligence.

(3) The Alt Right solution: Send the women back to the kitchen.

Ethical considerations aside, there’s also the matter of practicality – you’d have to be really hardcore at enforcing your “White Sharia” to make any substantive difference. Even most conservative Muslim societies, where female labor participation is very low, have seen plummeting fertility rates. And, needless to say, it does nothing about the dysgenic aspect of modern fertility patterns, which are a significantly bigger problem than falling fertility rates anyway.

(4) Develop artificial wombs.

This is a good idea from all sorts of ideological perspectives.

Everyone: Immediate higher fertility rates in the countries that develop them, especially amongst well-educated women. This might cancel out dysgenic decline at a single stroke.

Liberals: Alternate option for women who don’t want to undergo pregnancy/childbirth for whatever reason. No more market for surrogate mothers – an end to a particularly icky form of Third World exploitation.

Libertarians: People with the means to pay – that is, millionaires and especially billionaires – will no longer be bounded in their reproductive capacity by the biology of their female partner or by the culture of their society (generally, no polygamy). Since wealth is moderately correlated with IQ, this will be eugenic. That said, this might strike some as dystopian. Maybe one could start taxing additional artificial womb-grown offspring past the first five or ten? Then you’d get “offshore hatcheries.” Okay, I suppose that’s even more dystopian.

Zensunnis: I suppose cultures that really dislike women can just gradually start making do without them by replacing them with the equivalent of Axlotl tanks. Conversely, (almost) all female “Amazonian” societies will also become possible. Let’s make sci-fi tropes real.

Futurists: Combining artificial wombs with CRISPR gene-editing for IQ on a mass scale pretty much directly leads to a biosingularity.

As I pointed out, a biosingularity may be preferable to one born of machine superintelligence because it bypasses the AI alignment problem and doesn’t risk the end of conscious experience.

 
• Category: Science • Tags: Fertility, Paper Review, Transhumanism 

PAPER REVIEW

Tang, Lichun et al. 2017
CRISPR/Cas9-mediated gene editing in human zygotes using Cas9 protein


Abstract:

Previous works using human tripronuclear zygotes suggested that the clustered regularly interspaced short palindromic repeat (CRISPR)/Cas9 system could be a tool in correcting disease-causing mutations. However, whether this system was applicable in normal human (dual pronuclear, 2PN) zygotes was unclear. Here we demonstrate that CRISPR/Cas9 is also effective as a gene-editing tool in human 2PN zygotes. By injection of Cas9 protein complexed with the appropriate sgRNAs and homology donors into one-cell human embryos, we demonstrated efficient homologous recombination-mediated correction of point mutations in HBB and G6PD. However, our results also reveal limitations of this correction procedure and highlight the need for further research.

Gwern Branwen’s comments:

Even nicer: another human-embryo CRISPR paper. Some old 2015 work – results: no off-target mutations and efficiencies of 20/50/100% for various edits. (As I predicted, the older papers, Liang et al 2015 / Kang et al 2016 / Komor et al 2016, were not state of the art and would be improved on considerably.)

Back in February 2015, qualia researcher Mike Johnson predicted that dedicated billionaires with scant regard for legalistic regulations could start genetically “spellchecking” their offspring within 5-7 years.

But if anything, he might have overestimated the timeframe.


 
• Category: Science • Tags: Crispr, Genetic Load, Paper Review, Transhumanism 

These upcoming transhumanist debates (detailed below) are organized by IEET and Brighter Brains (Hank Pellissier).

I’ll be participating in one or perhaps two of them.

My positions, briefly:

  • Immigration/Open Borders – Opposed, and not even just from an HBD/”waycist” perspective. See Immigration and Effective Altruism.
  • UBI – For it, and not even just from an automation perspective. See The Ethnic Politics of Basic Income.
  • Singularity 2045 – I am with the techno-NRx “consensus” (Anissimov, Konkvistador, etc.) that 2045 is extremely optimistic, if for different reasons. Mostly it is just an extension of the logic of the theory of Apollo’s Ascent. Kurzweil is wrong because progress in technology isn’t primarily driven by the stock of existing technology but by aggregate mindpower, which is increasing but not very quickly (and might start reversing altogether sooner or later, once the Idiocracy Effect overtakes the Flynn Effect). We also have no idea what the cognitive threshold is for developing superintelligence. Perhaps it’s beyond homo sapiens’ capabilities altogether.

***

IEET link: Transhuman Debate in SF East Bay, co-sponsored by IEET – speakers needed

You can get tickets here: https://www.eventbrite.com/e/transhuman-debate-tickets-20728825475

***

Transhuman Debate in SF East Bay, co-sponsored by IEET – speakers needed

Posted: Jan 11, 2016

IEET is co-sponsoring a “Transhuman Debate” event in Oakland, California, on February 6, 2016, at Humanist Hall.

The debate is titled “Argue 4 Tomorrow.” It will feature three “Oxford Style” Transhumanist Team Debates on these three topics:

IMMIGRATION & BORDERS

BASIC INCOME GUARANTEE

WILL THE SINGULARITY ARRIVE BEFORE OR AFTER 2045?

Each debate will be one hour long.
The first third will be presentation of their POV by the debate team,
the second part will be open-ended dispute and persuasion between the two teams,
and the final section will have the audience leaping into the fray.

The event is co-sponsored by Brighter Brains Institute. Anatoly Karlin proposed the debate concept.

We’re looking for additional Debate Team members. If interested please contact hank@ieet.org

At the present time the debate teams include:

Randal Koene (IEET Advisory Board member)
Nicole Sallak Anderson (IEET Advisory Board)
Ted Peters (Author)
Anatoly Karlin (blogger for Unz.com)
Scott Jackish (IEET contributor)
Anya Petrova (Infinity Gap)
Andrés Gómez Emilsson (IEET Contributor)
Mike Johnson (East Bay Futurists)
Lauren Barghout (speaker at Johns Hopkins University)
Jay Cornell (co-author of Transcendence)
Hank Pellissier (IEET Managing Director)
Dan Faggella (IEET Advisory Board) – tentative

Tickets will be available at EventBrite soon

 
• Category: Miscellaneous • Tags: Futurism, The AK, Transhumanism 
HBD, Hive Minds, and H+

Today is the publication date of Hive Mind, a book by economist Garett Jones on the intimate relationship between average national IQs and national success, first and foremost in the field of economics.

I do intend to read and review it ASAP, but first some preliminary comments.

This is a topic I have been writing about since I started blogging in 2008 (and indeed well before I came across Steve Sailer or even HBD) and as it so happens, I have long been intending to write a similar sort of book myself – tentatively titled Apollo’s Ascent – but one that focuses more on the historical aspect of the relationship between psychometrics and development:

My basic thesis is that the rate of technological progress, as well as its geographical pattern, is highly dependent on the absolute numbers of literate high IQ people.

To make use of the intense interest that will inevitably flare up around these topics in the next few days – not to mention that rather more self-interested reason of confirming originality on the off chance that any of Garett Jones’ ideas happen to substantively overlap with mine – I have decided to informally lay out the theoretical basis for Apollo’s Ascent right now.

1. Nous

Assume that the intellectual output of an average IQ (=100, S.D.=15) young adult Briton in the year 2000 – as good an encapsulation of the “Greenwich mean” of intelligence as any – is equivalent to one nous (1 ν).

This can be used to calculate the aggregate mindpower (M) in a country.

Since sufficiently differing degrees of intelligence can translate into qualitative differences – for instance, no amount of 55 IQ people will be able to solve a calculus problem – we also need to be able to denote mindpower that is above some threshold of intelligence. So in this post, the aggregate mindpower of a country that is above 130 will be written as M(+2.0), i.e. that aggregate mindpower that is two standard deviations above the Greenwich mean.

2. Intelligence and Industrial Economies

There is a wealth of evidence implying an exponential relationship between average IQ and income and wealth in the United States.

[Chart: human capital vs. GDP per capita across the world]

There is likewise a wealth of evidence – from Lynn, Rindermann, La Griffe du Lion, your humble servant, etc. – that shows an exponential relationship between levels of average national IQ and GDP per capita (PPP adjusted). When you throw out countries with a legacy of Communism and the ruinous central planning it practiced (China, the ex-USSR and Eastern Europe, etc.), and countries benefitting disproportionately from a resource windfall (Saudi Arabia, the UAE, etc.), there is an amazing R²=0.84 correlation between performance in the PISA international standardized student tests and GDP (PPP) per capita. (In sociology, anything above R²=0.3 is considered a good result.)

The reasons why this might be the case are quite intuitive. At the most basic level, intelligent people can get things done better and more quickly. In sufficiently dull societies, certain things can’t get done at all. To loosely borrow an example from Gregory Clark’s A Farewell to Alms, assume a relatively simple widget that requires ten manufacturing steps, each of which has to be done just right for the widget to be commercially viable. Say an 85 IQ laborer has a failure rate of 5% for any one step, while a 100 IQ laborer has a failure rate of 1%. This does not sound like that big of a difference. But repeated ten times, some 40% of the duller worker’s production ends up being a dud, compared to only 10% of the brighter worker’s. Consequently, one is competitive on the global markets, whereas the other is not (if labor costs are equal; hence, of course, they are not).
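
As a quick check of the arithmetic (a minimal sketch; the per-step failure rates are the ones assumed above):

```python
# Share of ten-step widgets that come out defective, per Clark's example.
def dud_rate(step_failure_rate: float, steps: int = 10) -> float:
    """Probability that at least one of the steps is botched."""
    return 1 - (1 - step_failure_rate) ** steps

print(f"85 IQ laborer (5% failure per step):  {dud_rate(0.05):.0%} duds")  # ~40%
print(f"100 IQ laborer (1% failure per step): {dud_rate(0.01):.0%} duds")  # ~10%
```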

Now imagine said widget is an automobile, with hundreds of thousands of components. Or an aircraft carrier, or a spaceship. Or a complex surgery operation.

A more technical way of looking at this: Consider the GDP equation, Y = A * K^α * L^(1-α), in which K is capital, L is labour, α is a constant that usually equals about 0.3, and A is total factor productivity. It follows that the only way to grow per capita output in the long term is to raise productivity. Productivity in turn is a function of technology and of how effectively it is utilized, and that in turn depends critically on things like human capital. Without an adequate IQ base, you cannot accumulate much in the way of human capital.
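
For concreteness, a minimal sketch of this identity with invented numbers (α = 0.3 as above), showing why capital deepening alone runs into diminishing returns while productivity growth does not:

```python
# Cobb-Douglas production function Y = A * K^alpha * L^(1-alpha).
def output(A: float, K: float, L: float, alpha: float = 0.3) -> float:
    """Output from total factor productivity A, capital K, and labour L."""
    return A * K ** alpha * L ** (1 - alpha)

base = output(A=1.0, K=100.0, L=100.0)
print(output(A=1.0, K=200.0, L=100.0) / base)  # doubling K: ~1.23x output
print(output(A=2.0, K=100.0, L=100.0) / base)  # doubling A: exactly 2x output
```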

There are at least two further ways in which brighter societies improve their relative fortunes over and above what might merely be implied by their mere productivity advantage at any technological level.

[Chart: industrial robots per worker by country. Source: Swiss Miss.]

First, capital gets drawn to more productive countries, until the point at which its marginal productivity equalizes with that of less productive countries, with their MUCH LOWER levels of capital intensity. First World economies like Germany, Japan, and the US are extremely capital intensive. It is probably not an accident that Japan, Korea, and Taiwan – some of the very brightest countries on international IQ comparisons – also have by far the world’s highest concentrations of industrial robots per worker (and China is fast catching up). Since economic output is a function not only of pure productivity but also of capital (though subject to diminishing returns) this provides a big further boost to rich countries above the levels implied by their raw productivity. And as the age of automation approaches, these trends will only intensify.

Second, countries with higher IQs also tend to be better governed, and to effectively provide social amenities such as adequate nutrition and education to their populations. Not only does it further raise their national IQs, but it also means that it is easier to make longterm investments there and to use their existing human capital to its full potential.

All this implies that different levels of intelligence have varying economic values on the global market. At this stage I am not so much interested in establishing it with exactitude as illustrating the general pattern, which goes something like this:

  • Average IQ = 70 – Per capita GDP of ~$4,000 in the more optimally governed countries of this class, such as Ghana (note however that many countries in this class are not yet fully done with their Malthusian transitions, which will depress their per capita output somewhat – see below).
  • Average IQ = 85 – Per capita GDP of ~$16,000 in the more optimally governed countries of this class, such as Brazil.
  • Average IQ = 100 – Per capita GDP of ~$45,000 in the more optimally governed countries of this class, or approximately the level of core EU/US/Japan.
  • Average IQ = 107 – Per capita GDP of potentially $80,000, as in Singapore (and it doesn’t seem to have even finished growing rapidly yet). Similar figures for elite/financial EU cities (e.g. Frankfurt, Milan) and US cities (e.g. San Francisco, Seattle, Boston).
  • Average IQ = 115 – Largely a theoretical construct, but that might be the sort of average IQ you’d get in, say, Inner London – the center of the global investment banking industry. The GDP per capita there is a cool $152,000.

Countries with bigger than normal “smart fractions” (the US, India, Israel) tend to have a bigger GDP per capita than could be assumed just from their average national IQ. This stands to reason because a group of people equally split between 85 IQers and 115 IQers will have higher cognitive potential than a room composed of an equivalent number of 100 IQers. Countries with high average IQs but smaller than normal S.D.’s, such as Finland, have a slightly smaller GDP per capita than what you might expect just from average national IQs.

These numbers add up, so a reasonable relationship between equilibrium GDP (assuming no big shocks, good policies, etc.) and the structure and size of national IQ would be:

Equilibrium GDP of a country ≈ exponent(IQ) * the IQ distribution (usually a bell-curve-shaped Gaussian) * population size * the technological level

Which can be simplified to:

Y ≈ c*M*T

… where M is aggregate mindpower (see above), T is the technology level, and c is a constant denoting the general regulatory/business climate (close to 1 in many well run capitalist states, <0.5 under central planning, etc).
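
As an illustration of how M might actually be computed, here is a minimal numerical sketch. The post does not commit to a functional form for nous as a function of IQ, so the exponential curve and its growth constant k below are my own illustrative assumptions (chosen to echo the exponent(IQ) term above):

```python
import math

def aggregate_mindpower(pop: float, mean_iq: float, sd: float = 15.0,
                        k: float = 0.05, n: int = 2001) -> float:
    """Total nous, assuming nous(IQ) = exp(k * (IQ - 100)) so nous(100) = 1 v.
    Integrates over a Gaussian IQ distribution truncated to 40..160."""
    step = 120 / (n - 1)
    total = 0.0
    for i in range(n):
        iq = 40 + i * step
        density = math.exp(-0.5 * ((iq - mean_iq) / sd) ** 2) / (sd * math.sqrt(2 * math.pi))
        total += math.exp(k * (iq - 100)) * density * step
    return pop * total

# Two hypothetical countries, equal in population, technology, and c:
M_a = aggregate_mindpower(pop=50e6, mean_iq=100)
M_b = aggregate_mindpower(pop=50e6, mean_iq=85)
print(M_a / M_b)  # ~2.1: by Y = c*M*T, the same ratio in equilibrium GDP
```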

To what extent if any would this model apply to pre-industrial economies?

3. Intelligence and Malthusian Economies

[Chart omitted. Source: A Farewell to Alms.]

Very little. The problem with Malthusian economies is that, as per the old man himself, population increases geometrically while crop yields increase linearly; before long, the increasing population eats up all the surpluses and reaches a sordid equilibrium in which births equal deaths (since there were a lot of births, that means a lot of deaths).

Under such conditions, even though technology might grow slowly from century to century, the growth is generally expressed not in increasing per capita consumption, but in rising population densities. And over centennial timescales, the effects of this (meager) technological growth can easily be swamped by changes in social structure, biome productivity, and climatic fluctuations (e.g. 17th-century France had roughly the population of pre-Black Death France, because the Little Ice Age had replaced the Medieval Warm Period), or by unexpected improvements in agricultural productivity from the importation of new crops (e.g. the arrival of sweet potatoes in China, which enabled it to double its previous record population even though it was in outright social regress for a substantial fraction of this time).

All this makes tallying the rate of technological advance based on population density highly problematic. Therefore it has to be measured primarily in terms of eminent figures, inventions, and great works.


Distribution of significant figures across time and place. Source: Human Accomplishment.

The social scientist Charles Murray in Human Accomplishment has suggested a plausible and objective way of doing it, based on tallying the eminence of historical figures in culture and the sciences as measured by their prevalence in big reference works. Societies that are at any one time intensively pushing the technological frontiers outwards are likely to be generating plenty of “Great People,” to borrow a term from the Civilization strategy games.

To what extent does the model used for economic success apply to technology?

4. Intelligence and Technology Before 1800

A narrow intellectual elite is responsible for 99%+ of new scientific discoveries. This implies that unlike the case with an economy at large, where peasants and truck drivers make real contributions, you need to have a certain (high) threshold level of IQ to materially contribute to technological and scientific progress today.

The Anne Roe study of very eminent scientists in 1952 – almost Nobel worthy, but not quite – found that they averaged a verbal IQ of 166, a spatial IQ of 137, and a math IQ of 154. Adjust these modestly down – because the Flynn Effect has had only a very modest impact on non-rule-dependent domains like verbal IQ – and you get an average verbal IQ of maybe 160 (in Greenwich terms). These were the sorts of elite people pushing progress in science 50 years ago.

To really understand 1950s era math and physics, I guesstimate that you would need an IQ of ~130+, i.e. your typical STEM grad student or Ivy League undergrad. This suggests that there is a 2 S.D. difference between the typical intellectual level needed to master something as opposed to making fundamental new discoveries in it.

Moreover, progress becomes steadily harder over time; disciplines splinter (see the disappearance of polymath “Renaissance men”), and eventually, discoveries become increasingly unattainable to sole individuals (see the steady growth in numbers of paper coauthors and shared Nobel Prizes in the 20th century). In other words, these IQ discovery thresholds are themselves a function of the technological level. To make progress up the tech tree, you need to first climb up there.

An extreme example today would be the work of Japanese mathematician Shinichi Mochizuki. At least Grigory Perelman’s proof of the Poincaré Conjecture was eventually confirmed by other mathematicians after a lag of several years. But Mochizuki is so far ahead of everyone else in his particular field of Inter-universal Teichmüller theory that nobody any longer quite knows whether he is a universal genius or a lunatic.

In math, I would guesstimate roughly the following set of thresholds:

Milestone                                            Mastery   Discovery
Intuit Pythagoras’ Theorem (Ancient Egypt)              90        120
Prove Pythagoras’ Theorem (Early Ancient Greece)       100        130
Renaissance math (~1550)                               110        140
Differential calculus (~1650+)                         120        150
Mid-20th century math (1950s)                          130        160
Prove the Poincaré Conjecture (2003)                   140        170
Inter-universal Teichmüller theory (?)                 150        180

This all suggests that countries which attain new records in aggregate elite mindpower relative to their predecessors can very quickly generate vast reams of new scientific discoveries and technological achievements.

Moreover, this elite mindpower has to be literate. Because a human brain can only store so much information, societies without literacy are unable to move forwards much beyond Neolithic levels, regardless of their IQ levels.

As such, a tentative equation for estimating a historical society’s capacity to generate scientific and technological growth would look something like this:

Technological growth ≈ c * M(>threshold IQ for new discovery) * literacy rate

or:

ΔT ≈ c * M(>discovery-threshold) * l

in which only that part of the aggregate mindpower that is above the threshold is considered. Here c is a constant that illustrates a society’s propensity for generating technological growth in the first place; it can encompass social and cultural factors, such as no big wars, no totalitarian regimes, creativity, etc., as well as technologies that can have a (generally marginal) effect on scientific productivity, like reading glasses in Renaissance Italy (well covered by David Landes) or the Internet in recent decades. The literacy rate l is an estimate of the percentage of the cognitive elites that are literate (it can be expected to be a function of the overall literacy rate, and to always be much higher).
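
A hedged numerical sketch of this pre-industrial growth equation, approximating M(>threshold) as a simple head count above the discovery threshold (each person above it counted as one nous); every input below is an invented placeholder, not a historical estimate:

```python
import math

def smart_fraction(mean_iq: float, threshold: float, sd: float = 15.0) -> float:
    """Share of a Gaussian IQ distribution lying above a threshold."""
    return 0.5 * math.erfc((threshold - mean_iq) / (sd * math.sqrt(2)))

def tech_growth(pop: float, mean_iq: float, threshold: float,
                elite_literacy: float, c: float = 1.0) -> float:
    """dT ~ c * M(>threshold) * l."""
    return c * pop * smart_fraction(mean_iq, threshold) * elite_literacy

# A small but highly literate society vs. a much larger, mostly illiterate one:
print(tech_growth(pop=8e6, mean_iq=95, threshold=130, elite_literacy=0.5))
print(tech_growth(pop=50e6, mean_iq=90, threshold=130, elite_literacy=0.1))
```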

Is it possible to estimate historical M and literacy with any degree of rigor?

[Chart omitted. Source: Gregory Clark.]

I think so. In regards to literacy, this is an extensive area of research, with some good estimates for Ancient Greece and the Roman Empire (see Ancient Literacy by William Harris) and much better estimates for Europe after 1500 based on techniques like age heaping and book production records.

One critical consideration is that not all writing systems are equally suited for the spread of functional literacy. For instance, China was historically one of the most schooled societies, but its literacy tended to be domain specific, the classic example being “fish literacy” – a fishmonger’s son who knew the characters for different fish, but had no hope of adeptly employing his very limited literacy for making scientific advances, or even reading “self-help” pamphlets on how to be more effective in his profession (such as were becoming prevalent in England as early as the 17th century). The Chinese writing system, whether it arose from QWERTY reasons or even genetic reasons – and which became prevalent throughout East Asia – surely hampered the creative potential of East Asians.

Estimating average national IQs historically – from which M can be derived in conjunction with historical population sizes, of which we now generally have fairly good estimates – is far more tricky and speculative, but not totally hopeless, because nowadays we know the main factors behind national differences in IQ.

Some of the most important ones include:

  • Cold Winters Theory – Northern peoples developed higher IQs (see Lynn, Rushton).
  • Agriculture – Societies that developed agriculture got a huge boost to their IQs (as well as higher S.D.s).
  • Inbreeding – Can be estimated from rates of consanguineous marriage, runs of homozygosity, and predominant family types (nuclear? communitarian?), which in turn can be established from cultural and literary evidence.
  • Eugenics – In advanced agricultural societies, where social relations come to be dominated by markets. See Greg Clark on England, and Ron Unz on China.
  • Nutrition – Obviously plays a HUGE role in the Flynn Effect. Can be proxied by body measurements, and fortunately there is a whole field of study devoted to precisely this: Auxology. Burials, conscription records, etc. all provide a wealth of evidence.
  • Parasite Load – Most severe in low-lying, swampy areas like West Africa and the Ganges Delta.

[Image: an old comment of mine on the Byzantine Empire’s intellectual capacity]

This old comment of mine to a post by Sailer is a demonstration of the sort of reasoning I tend to employ in Apollo’s Ascent.

All this means that educated guesses at the historic IQs of various societies are now perfectly feasible, if subject to a high degree of uncertainty. In fact, I have already done many such estimates while planning out Apollo’s Ascent. I will not release these figures at this time because they are highly preliminary, and lacking space to further elucidate my methods, I do not want discussions in the comments to latch on to some one figure or another and make a big deal out of it. Let us save this for later.

But in broad terms – and very happily for my thesis – these relations DO tend to hold historically.

Classical Greece was almost certainly the first society to attain something resembling craftsman level literacy rates (~10%). Ancient Greeks were also unusually tall (indicating good nutrition, for a preindustrial society), lived in stem/authoritarian family systems, and actively bred out during their period of greatness. They produced the greatest scientific and cultural explosion up to that date anywhere in the world, but evidently didn’t have quite the demographic weight – there were no more than 10 million Greeks scattered across the Mediterranean at peak – to sustain it.

In 15th century Europe, literacy once again began soaring in Italy, to beyond Roman levels, and – surely helped by the good nutrition levels following the Black Death – helped usher in the Renaissance. In the 17th century, the center of gravity shifted towards Anglo-Germanic Europe in the wake of the Reformation with its obsession with literacy, and would stay there ever after.

As regards other civilizations…

The Islamic Golden Age was eventually cut short more by the increasing inbreeding than by the severe but ultimately temporary shock from the Mongol invasions. India was too depressed by the caste system and by parasitic load to ever be a first rate intellectual power, although the caste system also ensured a stream of occasional geniuses, especially in the more abstract areas like math and philosophy. China and Japan might have had an innate IQ advantage over Europeans – albeit one that was quite modest in the most critical area, verbal IQ – but they were too severely hampered by labour-heavy agricultural systems and a very ineffective writing system.

In contrast, the Europeans, fed on meat and mead, had some of the best nutrition and lowest parasitic load indicators among any advanced civilization, and even as rising population pressure began to impinge on those advantages by the 17th-18th centuries, they had already burst far ahead in literacy, and intellectual predominance was now theirs to lose.

5. Intelligence and Technology under Industrialism

After 1800, the world globalized intellectually. This was totally unprecedented. There had certainly been preludes to it, e.g. in the Jesuit missions to Qing China. But these were very much exceptional cases. Even in the 18th century, for instance, European and Japanese mathematicians worked on (and solved) many of the same problems independently.

[Chart omitted. Source: Human Accomplishment.]

But in the following two centuries, this picture of independent intellectual traditions – shining most brightly in Europe by at least an order of magnitude, to be sure, but still diverse on the global level – was to be homogenized. European science became the only science that mattered, as laggard civilizations throughout the rest of the world were to soon discover to their sorrow in the form of percussion rifles and ironclad warships. And by “Europe,” that mostly meant the “Hajnal” core of the continent: France, Germany, the UK, Scandinavia, and Northern Italy.

And what had previously been merely a big gap became a yawning chasm.

(1) In the 19th century, the populations of European countries grew, and the advanced ones attained universal literacy, or as near as made no difference. Aggregate mindpower (M) exploded, and kept well ahead of the advancing threshold IQ needed to make new discoveries.

(2) From 1890-1970, there was a second revolution, in nutrition and epidemiology – average heights increased by 10cm+, and the prevalence of debilitating infectious diseases was reduced to almost zero – that raised IQ by as much as a standard deviation across the industrialized world. The chasm widened further.

(3) During this period, the straggling civilizations – far from making any novel contributions of their own – devoted most of their meager intellectual resources to merely coming to grips with Western developments.

This was as true – and consequential – in culture and social sciences as it was in science and technology; the Russian philosopher Nikolay Trubetzkoy described this traumatic process very eloquently in The Struggle Between Europe and Mankind. What was true even for “semi-peripheral” Russia was doubly true for China.

In science and technology, once the rest of the world had come to terms with Western dominance and the new era of the nation-state, the focus was on catchup, not innovation. This is because for developing countries, it is much more useful in terms of marginal returns to invest their cognitive energies into copying, stealing, and/or adapting existing technology to catch up to the West than to develop unique technology of their own. Arguments about, say, China’s supposed lack of ability to innovate are completely beside the point. At this stage of its development, even now, copying is much easier than creating!

This means that at this stage of global history, a country’s contribution to technological growth isn’t only a matter of the size of its smart fractions above the technological discovery IQ threshold. (This remains unchanged: e.g., note that a country like Germany remains MUCH more innovative per capita than, say, Greece, even though their average national IQs differ by a mere 5 points or so. Why? Because since we’re looking only at the far right tails of the bell curve, even minor differences in averages translate to big differences in innovation-generating smart fractions.)

It also relates closely to its level of development. Countries that are far away from the technological frontier today are better served by using their research dollars and cognitive elites to catch up as opposed to inventing new stuff. This is confirmed by real life evidence: A very big percentage of world spending on fundamental research since WW2 has been carried out in the US. It was low in the USSR, and negligible in countries like Japan until recently. Or in China today.

Bearing this in mind, the technological growth equation today (and since 1800, more or less) – now due to its global character better described as innovation potential – would be better approximated by something like this:

Innovation potential ≈ c * M(>threshold IQ for new discovery) * literacy rate * (GDP/GDP[potential])^x

or:

I ≈ c * M(>discovery-threshold) * l * (Y/Y[P])^x

in which the first three terms are as before (though literacy = 100% virtually everywhere now), and potential GDP is the GDP this country would obtain were its technological endowment to be increased to the maximum level possible as dictated by its cognitive profile. The “x” is a further constant that is bigger than 1 to reflect the idea that catchup only ceases to be the most useful strategy once a country has come very close to convergence or has completely converged.
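
And a matching sketch for this post-1800 version of the equation (the Gaussian tail helper is the same as in the previous sketch; country A is fully converged, country B is bigger and brighter but far from convergence, and every number is an invented placeholder):

```python
import math

def smart_fraction(mean_iq: float, threshold: float, sd: float = 15.0) -> float:
    """Share of a Gaussian IQ distribution lying above a threshold."""
    return 0.5 * math.erfc((threshold - mean_iq) / (sd * math.sqrt(2)))

def innovation_potential(pop: float, mean_iq: float, threshold: float = 130,
                         literacy: float = 1.0, y_ratio: float = 1.0,
                         c: float = 1.0, x: float = 1.5) -> float:
    """I ~ c * M(>threshold) * l * (Y/Y_potential)^x."""
    return c * pop * smart_fraction(mean_iq, threshold) * literacy * y_ratio ** x

print(innovation_potential(pop=300e6, mean_iq=98))                         # country A
print(innovation_potential(pop=1300e6, mean_iq=103, y_ratio=0.25, c=0.7))  # country B
```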

Japan won a third of all its Nobel Prizes before 2000; another third in the 2000s; and the last third in the 2010s. Its scientific achievements, in other words, are finally beginning to catch up with its famously high IQ levels. Why did it take so long?

Somebody like JayMan would say it’s because the Japanese are clannish or something like that. Other psychometricians, like Kenya Kura, would note that perhaps they are far less creative than Westerners (this I think has a measure of truth to it). But the main “purely IQ” reasons are pretty much good enough by themselves:

  • The Nobel Prize is typically recognized with a ~25-30 year lag nowadays.
  • It is taking ever longer amounts of time to work up to a Nobel Prize because ever greater amounts of information and methods have to be mastered before original creative work can begin. (This is one consequence of the rising threshold discovery IQ frontier).
  • Critically, Japan in the 1950s was still something of a Third World country, with the attendant insults upon average IQ. It is entirely possible that elderly Japanese are duller than their American counterparts, and perhaps even many Europeans of that age, meaning smaller smart fractions from the Nobel Prize winning age groups.

Japan only became an unambiguously developed country in the 1970s.

And it just so happens that precisely 40 years after this, it began to see a big and still accelerating increase in the number of Nobel Prizes accruing to it!

Extending this to South Korea and Taiwan, both of which lagged around 20 years behind Japan, we can only expect to see an explosion in Nobel Prizes for them from the 2020s, regardless of how wildly their teenagers currently top out the PISA rankings.

Extending this to China, which lags around 20 years behind South Korea, we can expect to see it start gobbling up Nobel Prizes by 2040, or maybe 2050, considering the ongoing widening of the time gap between discovery and recognition. However, due to its massive population – ten times as large as Japan’s – once China does emerge as a major scientific leader, it will do so in a very big way that will rival or even displace the US from its current position of absolute primacy.
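
The catch-up arithmetic in the last few paragraphs reduces to a one-line model: Nobel takeoff ≈ the year a country became unambiguously developed plus a recognition lag. A trivial sketch using the post’s own rough dates (the forecasts stated above – the 2020s for South Korea and Taiwan, 2040-2050 for China – differ slightly because the lag itself keeps lengthening):

```python
# Toy projection; all inputs are the rough guesses from the text, not data.
RECOGNITION_LAG = 40  # Japan: developed ~1970, Nobel surge ~2010

developed_year = {"Japan": 1970, "South Korea": 1990, "Taiwan": 1990, "China": 2010}

for country, year in developed_year.items():
    print(f"{country}: Nobel Prize surge expected around {year + RECOGNITION_LAG}")
```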

As of 2014, China already publishes almost as many scientific papers per year as does the US, and has an outright lead in major STEM fields such as Math, Physics, Chemistry, and Computer Science. (Though to be sure, their quality is much lower, and a significant fraction of them are outright “catching up” or “adaption” style papers with no new findings).

If we assume that x=1, and that c is equal for both China and the US, then it implies that both countries currently have broadly equal innovation potential. But of course c is not quite equal between them – it is lower for China, because its system is obviously less conducive to scientific research than the American one – and x is higher than 1, so in practice China’s innovation potential is still considerably lower than that of the US (maybe a quarter or a third). Nonetheless, as China continues to converge, c is going to trend towards the US level, and the GDP gap is going to narrow; plus it may also be able to eke out some further increases in its national average IQ from the current ~103 (as proxied by PISA in 2009) to South Korea’s level of ~107 as it becomes a truly First World country.

And by mid-century it will likely translate into a strong challenge to American scientific preeminence.

6. Future Consequences

The entry of China onto the world intellectual stage (if the model above is more or less correct) will be portentous, but its effects on aggregate mindpower will be nowhere near the magnitude, in global terms, of the expansion in the numbers of literate, mostly European high IQ people from 1450 to 1900, nor of the vast rise in First World IQ levels from 1890-1970 due to the Flynn Effect.

Moreover, even this may be counteracted by the dysgenic effects already making themselves felt in the US and Western Europe due to Idiocracy-resembling breeding patterns and 80 IQ Third World immigration.

[Image caption: Radically raise IQ. And no need for pesky neural implants!]

A lot of the techno-optimistic rhetoric you encounter around transhumanist circles is founded on the idea that observed exponential trends in technology – most concisely encapsulated by Moore’s Law – are somehow self-sustaining, though the precise reasons why never seem to be clearly explained. But non-IT technological growth peaked in the 1950s-70s, and has declined since; and as a matter of fact, Moore’s Law has also ground to a halt in the past 2 years. Will we be rescued by a new paradigm? Maybe. But new paradigms take mindpower to generate, and the rate of increase in global mindpower has almost certainly peaked. This is not a good omen.

Speaking of the technological singularity, it is entirely possible that the mindpower discovery threshold for constructing a superintelligence is in fact far higher than we currently have or are likely to ever have short of a global eugenics program (and so Nick Bostrom can sleep in peace).

On the other hand, there are two technologies that combined may decisively tip the balance: CRISPR-Cas9, and the discovery of the genes for general intelligence. Their maturation and potential mating may become feasible as early as 2025.

While there are very good reasons – e.g., on the basis of animal breeding experiments – for doubting Steve Hsu’s claims that genetically corrected designer babies will have IQs beyond that of any living human today, increases on the order of 4-5 S.D.’s are entirely possible. If even a small fraction of a major country like China adopts it – say, 10% of the population – then that will in two decades start to produce an explosion in aggregate global elite mindpower that will soon come to rival or even eclipse the Renaissance or the Enlightenment in the size and scope of their effects on the world.

The global balance of power will be shifted beyond recognition, and truly transformational – indeed, transhuman – possibilities will genuinely open up.

 

Robert Stark is a journalist who specializes in interviewing various interesting figures from the Alt fringes. So you could, I suppose, view him as The Unz Review, but on radio.

This is my second interview with him. Here is a link to the first.

Robert Stark interviews Anatoly Karlin.

Topics were my standard fare – basically, stuff that you’ve probably heard here before.

That said, we did veer into two fairly idiosyncratic tangents.

(1) The Alt Right should embrace Transhumanism

Yes, I know, they are sort of dorky and even SJWish at times. But technology has ideological load, as Michael Anissimov put it (in an article I can’t find), and it just so happens that transhuman techs are perfectly in line with Alt Right, NRx, Identitarian, and even White Nationalist agendas.

  • Raising IQs via genetic editing will arrest the dysgenic trends increasingly affecting all peoples on the planet. Degenerating into a global idiocracy serves absolutely no one’s interests: not Europeans’, nor Asians’, nor Africans’.
  • Automation will (hopefully) redistribute resources from the NAM-pandering welfare systems of today to something more fair and equitable. It will also probably help even out the gap between indigenous and immigrant fertility rates in Europe and the US.
  • Radical life extension will help preserve White majorities in Europe. The reason that they are declining isn’t just a matter of birth rates, but also of death rates; Europeans are simply much older than your typical immigrant “youth.” Plummeting mortality and morbidity rates – apart from their general desirability – will from an ethnic perspective overwhelmingly benefit Whites and help Europeans maintain majorities in their historic homelands.

Ultimately, this is the future, and ideologies that fail to grapple and engage with it will fall by the wayside.

(2) The Alt Left needs to become a thing

I completely agree with Robert Lindsay on this.

Do you think I should start an Alternative Left movement? People are calling me the Alternative Left. Alternative Left would be something like:

Economically Leftist or liberal (left on economics)
Socially Conservative or at least sane (right on social issues)

It would be something like a leftwing mirror of the Alternative Right.

Do you think it would go over? I am really getting sick of this Left/Right bullshit. Everyone has to decide if they are “conservative” or “liberal.” What bullshit. What if you are a little of both?

Just because I don’t want to engage in SJW faggotry – the sort of ideology that Lenin would have called an infantile disorder, and which Friedrich Engels correctly identified as serving the reaction – doesn’t necessarily mean I want to lick oligarch ass either.

“There is no left or right, only nationalists and globalists.” – Marine Le Pen

 
• Category: Ideology • Tags: Alt Left, Alt Right, Ideology, Interviews, SJWs, Transhumanism 


Founded by eight NASA scientists, the Rainbow Mansion is a kind of academic co-op, where you have to demonstrate you’re working on something interesting to get a rental agreement. The building itself is true to its name: a mansion, spacious within and surrounded by lush gardens without. Every week they host a group dinner, followed by a speech from an invited guest. This week’s guest was Mike Johnson, a philosopher and transhumanist who is currently working on a treatise that could lay the groundwork for a mathematical model of pain/pleasure.

His talk, however, wasn’t about that, but about another topic and interest of his – genetic typos, the possibility of “correcting” them, and the profound effects that might have on human intelligence and capability if widely implemented.

Genetic editing tools that would work on already existing organisms are coming online: CRISPR, whole-chromosome DNA synthesis, viral vectors (adenoviruses), etc. What can we edit? We could try to maximize for some trait using the GWAS approach (e.g. as BGI is trying to do with IQ). We could go for transgenic bioengineering, you know, the Spiderman/Resident Evil-type stuff. But that’s pretty hard. Just fixing our own broken genes is much easier and could potentially generate tremendous payoffs in increased health, intelligence, and longevity.

We all have varying amounts of “broken genes,” the genetic equivalent of spelling errors. Of those errors that have an effect, the vast majority are bad; as Mike pointed out, if you were to open up a computer program and edit code at random, you are far more likely to ruin or degrade the program than improve it. There are several different definitions and estimates of the numbers of these errors: 100 semi-unique Loss of Function mutations (MacArthur 2012), 1,000 minor IQ-decreasing variants (Hsu 2014), 300 health-decreasing mutations (Leroi 2005).

Broken genes have a broadly linear additive effect on general fitness, which is well approximated by IQ. Stephen Hsu’s research indicates that people carry an average of 1,000 broken genes, with every 30-40 such mutations contributing to a stunning -1 SD drop in intelligence. In essence, it’s not so much that there are genes for intelligence as that there are genes for stupidity. Fix all of them, and theoretically, you might get IQs never before observed on this planet. As Greg Cochran memorably put it:

What would a spelling-checked person, one with no genetic typos, be like? Since no such person has ever existed, we have to speculate. I figure that kind of guy would win the decathlon, steal your shirt and your girl – and you still couldn’t help liking him.
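
A toy rendering of the additive model just described, using the post’s cited round numbers (an average load of ~1,000 broken genes, with every 30-40 of them costing about one standard deviation of IQ). The linear extrapolation at the end is exactly the kind of naive projection I express skepticism about below:

```python
MEAN_LOAD = 1000       # average number of "genetic typos" per person (Hsu)
MUTATIONS_PER_SD = 35  # midpoint of the 30-40 range cited above
SD_POINTS = 15         # IQ points per standard deviation

def iq_from_load(n_mutations: int) -> float:
    """Expected IQ under a purely linear additive mutational-load model."""
    return 100 - (n_mutations - MEAN_LOAD) / MUTATIONS_PER_SD * SD_POINTS

print(iq_from_load(1000))  # the average person: 100
print(iq_from_load(900))   # 100 fewer typos: ~143
print(iq_from_load(0))     # naive full "spellcheck": ~529, surely past biological limits
```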

Here is a list of (optimistic) estimates for other traits that Mike collated from various sources.

[Table: (optimistic) estimates of genetic spellchecking benefits for various traits]

While Mike, understandably, did not go into this in his talk, one more important point has to be mentioned: There is also an explicit HBD angle to the theory of genetic load.

Studies show significantly more Loss of Function mutations among Africans than Europeans or East Asians, which would tie in not only to well-known psychometric data but also to Satoshi Kanazawa’s theories on the relatively low attractiveness of Black women (specifically, female beauty, like g, appears to be a good proxy for overall fitness). Cochran ascribes it to heat. I am not so sure. Peak wet bulb temperatures are actually higher today in the Ganges delta and interior China than in most of Africa, which has some really cool (temperature-wise) places like the Ethiopian highlands and the Great Rift region. This might not have been quite the case during the Ice Age, of course, but still, 10,000 years is a long time to adjust to a new equilibrium.

Another possible determinant of genetic load is male parental age. Offspring genetic load and paternal (though not maternal) age are positively correlated. Paternal age in traditional societies can differ substantially according to their particular family system. For instance, within the Hajnal Line encompassing most of Western Europe, characterized by nuclear families, average paternal age was considerably higher than amongst say the neighboring Poles and Russians. What specific family system is highly prevalent in the traditional global South, especially in Africa? Polygamy. This implies one dude monopolizing a lot of the chicks. What would he be like? Big, bad, bold – naturally. But he’d also have a reputation, and he’d probably be someone who can spit smooth game. Both the latter require some time to build up. So he would probably be considerably older than fathers elsewhere in the world who entered into monogamous marriages. But this is just a theory, it would be great to actually get concrete anthropological data on average paternal age in traditional Africa.

Though I’m taking steps to remedy this, I am not sufficiently well versed in genetics to offer a valid judgment on the plausibility of Cochran’s and Hsu’s mutational load theory of IQ. Still, it does appear to have a great deal of face validity, though I remain skeptical of whether spellchecking can truly create “superhumans,” as opposed to just some very healthy and athletic 145-175 IQ types with a life expectancy of maybe 105 years. Surely at some point basic biological limits will be hit, and there will be diminishing returns?

Still, the potential for improvement is immense, and eventually it will be possible to apply these fixes to grown adults, not just embryos. Even raising the global IQ by one SD will basically solve India’s and Africa’s development problems, while making the two-odd billion Europeans, Americans, and Chinese as innovative per capita as the world’s 20 million Ashkenazi Jews. A near-instant technological singularity! When asked to give an estimate, Mike Johnson said that this “spellchecking” technology will become available in 5-7 years for billionaires who wish to have a designer baby and are unconstrained by any regulatory restrictions.

There’ll inevitably be a lot of hand-wringing about this, lots of soul searching and moral queasiness, and no doubt some attempts at restriction, but it’s hard to stop a moving train. As Mike said, the Chinese and East Asians in general don’t share these concerns; if they can safely have a more intelligent child, well, why on earth not? It is telling that the global focal point for research on the genetics of IQ, which Steve Hsu is incidentally heavily involved with, is the Beijing Genomics Institute. Regardless of their reasons or justifications, those who refuse to get on this train will simply be left behind.

 
• Category: Science • Tags: Crispr, Family, Genetic Load, Transhumanism 

This conference is organized by brain health and IQ researcher Hank Pellissier, and its aim is to bring all kinds of quirky and visionary folks – “Biohackers, Neuro-Optimists, Extreme Futurists, Philosophers, Immortalist Artists, Steal-the-Singularitarians” – together in one place and have them give speeches and interact with each other and the interested public.

One of the lecturers is going to be Aubrey de Grey, the guy who almost singlehandedly transformed radical life extension into a “respectable” area of research, so it’s shaping up to be a Must-Not-Miss event for NorCal futurists.

Also in attendance will be Zoltan Istvan, bestselling author of The Transhumanist Wager, and Rich Lee, the famous biohacker and grinder. The latter will bring a clutch of fellow grinders and switch-blade surgeons with him to perform various modification procedures on the braver and more visionary among us.

Your humble servant will also be speaking. The preliminary title of my speech is “Cliodynamics: Moving Psychohistory from Science Fiction to Science.” Other conference speakers include RU Sirius, Rachel Haywire, Randal A. Koene, Apneet Jolly, Scott Jackisch, Shannon Friedman, Hank Pellissier, Roen Horn, and Maitreya One.

Time/Location: February 1, 2014 (Saturday) from 9:30am-9:30pm at the Firehouse, Fort Mason, 2 Marina Blvd., in San Francisco.

Buy Tickets:

Tickets are on sale from November 1-30 for $35. Only 100 tickets are available due to limited seating. In December tickets will cost $40 (if they’re still available). In January they’ll cost $45, with $49 the at-the-door price.

To obtain a ticket, PayPal $35 to account # hedonistfuturist@aol.com – include your name. You will quickly receive a receipt that you can print out as your ticket, and your name will be added to the guest list.

Below are some further details:

Extras & Freebies:

  • SPECIAL PERKS – FREE PIRACETAM & CREATINE (limited amount) + BULLETPROOF COFFEE [TM] AVAILABLE + UPGRADED CHOCOLATE
  • RICH LEE PROMISES RFID IMPLANTS AVAILABLE for stoic volunteers + he’s bringing his HALLUCINATION MACHINE (“A clutch of Grinders and switch-blade surgeons will be in attendance to perform various modification procedures. Whether it is physical, mental, or emotional, we promise this presentation will leave everyone with some kind of scar!”)
  • HANK PELLISSIER will encourage the mob to select policy for a “NEURO-OPTIMAL UTOPIA” – heated disagreements guaranteed
  • NEW GUEST – FROM HARLEM – MAITREYA ONE will rap his transhumanist Hip Hop songs
  • Brain Healthy “ketogenic” food will be available at the conference – avocados, hardboiled eggs, walnuts, olives, coconut oil, etc. Biohack and QS research will be featured on display tables, alongside transhumanist t-shirts.

Additional Questions: Contact brighterbrainsinstitute AT gmail DOT com (3 volunteers with technical skills are needed, if you can help with sound and visual equipment).

Sponsors: The Bulletproof Executive (aka IT businessman/biohacker Dave Asprey, he of the Bulletproof Coffee mentioned above) is the lead sponsor. Brighter Brains Institute and East Bay Futurists are co-sponsors.

(Republished from AKarlin.com by permission of author or representative)
 

Here I outline one of the core philosophies of Sublime Oblivion. I demonstrate the indivisibility of the material and Platonic worlds and show that our universe is almost certainly a computer simulation nested within an abstract computer program or simulacrum, the truth that hides that there is none. The consequences of these results are explored.

Modern natural science has a lot to be proud of. Technology follows in its wake. The horizons of human consciousness retreat before its implacable incandescence. Its defining trait, reason, affirms freedom. Yet it is ultimately disappointing and dehumanizing. It heralds the death of God, of struggle and belief in good and evil, while in atonement for deicide, deigns to offer only models of reality that approach but never reach union with it. Thus we come to an impasse, the fatal double dilemma that drove Kierkegaard to despair, Nietzsche to madness and Camus to an ‘acceptance without resignation’ – though I personally can’t imagine Sisyphus happy.

All the arguments for God’s existence that I know of sink under one paradox or another – cosmology through infinite regression, ontology through elementary logic and teleology through evolution. Constructing an equivalence between Nature or reality, and God, is nothing more than an exercise in tautology dating from Spinoza and as such tantamount to atheism. Those who cite Darwinian evolution or Hegelian dialectics as the answer do not realize that they are nothing more than a Mechanism, as hopeless as traditional objects of belief at explaining the deepest metaphysical questions. In despair over the power of pure positivism to rationalize existence, let us make a bold conjecture and make the axiomatic assertion that all that might be, is.

According to Plato, there exists a separate world of ‘perfect forms’ or ‘universals’ that is the highest and most fundamental reality; our world contains but their imperfect imitations. This concept can be best explained through mathematics. Even if some global cataclysm were to wipe out humanity, the Theorem of Pythagoras will linger on unperturbed on some transcendent plane, ripe for the picking by the next species to evolve abstract reasoning skills. This is because the sum of the squares of the two shorter sides of a right-angled triangle will always equal the square of the longest side under Euclidean geometry. I will call this Platonic realm the Void, for it is indeed void; it is an abstract, all-encompassing region of nothingness, zero and infinity. All possible mathematical objects and their unions exist in the Void.

There exists an interesting class of mathematical constructs known as ‘cellular automata’. These are regular grids of cells, each in one of a finite number of states, in a finite number of dimensions. The dimension of time is also discrete, with the state of any particular cell at time t a function of the states of the cells in its ‘neighborhood’ at time t – 1. This function is based on fixed rules, yet its long-run outcome cannot be determined in advance. What makes cellular automata intriguing is how some of them can generate order and complexity out of initial chaos, thus reflecting the meta-narrative of our own universal evolution from a soup of primitive particles to industrial civilization. Although most cellular automata exhibit only simple repetition or rampant randomness, a special few demonstrate an interesting, uninterrupted interplay between order and chaos. Conway’s ‘Game of Life’ generates stable patterns which exhibit themselves amidst disorder, thus fulfilling a very general definition of life as a localized, self-sustaining concentration of ordered complexity. The most philosophically significant is Wolfram’s Rule 110, which produces complex, non-repeating patterns and was proven to be computationally universal, i.e. theoretically capable of performing any computable task. Furthermore, these behaviors demonstrated by cellular automata are replicated by many other classes of simple computer programs, and as such have a strong claim to universality.
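
For concreteness, here is a minimal implementation of Rule 110 (the universality claim is Cook’s proof, not something this snippet demonstrates; it merely makes the construction concrete). Run it and the characteristic interplay of order and chaos appears within a few dozen steps:

```python
# Wolfram's Rule 110: the new state of a cell is bit v of the number 110,
# where v is the 3-bit value of its (left, self, right) neighborhood.
RULE = 110

def step(cells: list[int]) -> list[int]:
    """One synchronous update of the whole row; the edges wrap around."""
    n = len(cells)
    return [(RULE >> (cells[(i - 1) % n] * 4 + cells[i] * 2 + cells[(i + 1) % n])) & 1
            for i in range(n)]

cells = [0] * 63 + [1]  # start from a single live cell at the right edge
for _ in range(30):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```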

One of the most important paradigm shifts of the Scientific Revolution was the gradual rejection of the Aristotelian theory that matter was continuous and elemental. The ancient Greek and Chinese conception of the world as a melange of Earth, Water, Fire, Air and Ether was displaced by theories that space-time was made up of discrete if very small units – corpuscular cells, atomistic molecules, ‘chronon’ time. Through its centuries-long, dialectical procedure of postulation, refutation and synthesis, science arrived at the fundamental limits to observation: the worlds that lie hidden below the Planck length and within the Planck time. Our universe is capable of evolving, amidst chaos, patterns sophisticated enough to recognize themselves as such, if not fully understand themselves – the proof is in front of (or rather, behind) our noses. Although continuous mathematics is used to explain the vast majority of natural processes, its inadequacies are protected from exposure because the universe operates with discrete quanta that are small from a human perspective. Modern quantum mechanics, with its chaotic ‘soup’ of sub-atomic particles, offers a glimpse beyond analog delusions into discrete reality. In cellular automata, the states of all cells come in time to affect every other cell, which is a perfect metaphor for the fundamental problems in measuring quantum phenomena.

We know by the anthropic principle that the universe exhibits an evolutionary mechanism that resulted in an increase in ordered complexity amidst chaos. Science has shown that the universe’s primitive expressions are discrete and as such can be subject to manipulation by a set of rules, which we’ll call the Pattern. Since there exist universally computational mathematical objects that also fulfill the above criteria, we can conclude that whether or not the universe is based on superstrings, a hologram or something else is ultimately irrelevant – the overriding premise is that it is ‘computing itself… as it computes, it maps out its own space-time geometry to the ultimate precision allowed by the laws of physics. Computation is existence’.

Thus viewing our universe as a universal cellular automaton makes it, in effect, a mathematical object, and hence part of the Void. But in that case, how could it be real? After all, the world as we perceive it is only a pale imitation of, and hence inferior to, the perfect world of forms. Take the circle, defined as the figure traced by a finitely long straight line rotated completely about one fixed endpoint in two-dimensional Euclidean space. Such a circle exists within the Void, yet no artisan, and not even the most advanced robot, can ever replicate it. It is impossible in principle, for it would require the computation of π to an infinite number of decimal places; a task clearly impossible within the rigidly finite, discrete confines of any cellular automaton, which place limits on its maximum possible computing power. Our existential prison of pixels precludes the perception of continuous perfect forms.
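The finiteness claim can be pinned down by a standard counting argument (nothing here is specific to this essay): a cellular automaton of $N$ cells with $s$ states per cell has at most

$$s^N \text{ distinct configurations,}$$

so, being deterministic, it must revisit some configuration within $s^N$ steps and thereafter cycle. An eventually periodic process can emit only an eventually periodic digit stream, whereas the expansion of $\pi$, being irrational, never repeats – hence no finite automaton can produce it in full.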

However, by accepting that our universe is a discrete Tapestry, we resolve the paradox. If such a construct exists within the Void, it is equivalent to the world we perceive to be reality. In a sense, the Void fulfills all the criteria of God. Null and unity, it transcends the human imagination, for human minds are finite in scope. It sidesteps the ‘who created the creator?’ paradox, for it is. And was, and will be, though being outside Time, its directionality is meaningless. It is zero and infinity, of infinite cardinality. What might be, is. All possible computations exist, and are their own simulacra.

Several consequences follow from this. One is that consciousness is a construct, for the mind is mere matter in a state of highly ordered complexity. The way in which we ‘agents’ perceive the world evolved and emerged as a result of the original biological urge towards self-preservation and replication of the patterns encoded in our genetic makeup. To maximize our prehistoric utility function, mainly defined by the above urge, humanity refined its consciousness – subjectivity, sentience and self-awareness – until it became a hardwired belief. The development of abstract reasoning skills partially divorced humanity from its primal nature and made possible the gradual deconstruction of this belief. From Leibniz’s assertion that ‘if you could blow up the brain to the size of a mill and walk about inside, you would not find consciousness’, to the concept of an objective Turing test for its presence, the grounds for a subjective interpretation of consciousness were demolished. The cognitive scientist Douglas Hofstadter visualizes consciousness as a recursively self-calling ‘strange loop’ in computational terms; hence, a soul.

Kant argued in his ‘Critique of Pure Reason’ that space and time, rather than being things-in-themselves, are just forms of intuition by which we perceive objects, i.e. the medium through which we sense and experience the noumenal world, and the precondition for an object to have appearance. This is the reason why we experience time at the pace that we do, perceive only three dimensions out of the theorized eleven and see only a very narrow bandwidth of the electromagnetic spectrum, which we anthropocentrically define to be ‘visible’. Hence, by designating souls as emergent patterns, capable of being simulated by discrete information processes, it is possible to unify reality and the transcendent; our universe becomes an (infinitesimal) subset of all possible universes.

Science continues to disappoint, approaching but never reaching union with reality. The long-sought ‘theory of everything’ for physics is unattainable. We may with time be able to figure out the Pattern of our simulation in full detail, since the rules by which a program runs can be quite simple even if the program produces very complex results. However, this would not be a theory, since theories require predictions that can be empirically confirmed, and the only way to find out the outcome of a cellular automaton is to run it. But it is already running itself; therefore, even if we could speed up its execution (which we can’t, since all the calculating space we are using is being used to compute us), only an observer outside our Tapestry would find out what happens faster. For everyone inside this Tapestry, time will go on at the same pace regardless of the speed with which the universe is being processed, since their time is discrete and contained within their Tapestry (our conception of time as an analog flow is nothing more than an evolutionary adaptation, a means of perceiving the world). A theory of everything implies knowing the mind of God, who is outside time.

Physicists have noticed that the underlying laws of our universe are remarkably ‘fine-tuned’ for the evolution of life. For instance, if the strong force were slightly stronger, stars would burn out in minutes; if it were slightly weaker, elements like the hydrogen isotope deuterium would not be able to hold together. The analogy with cellular automata is clear and uncanny – while the vast majority of Patterns, or sets of rules, produce uninteresting results (equivalent to universes that collapse or tear apart before evolving concentrations of interesting, ordered complexity), a few are interesting, unpredictable and non-random (equivalent to our Tapestry).

Some theologians claim ‘fine-tuning’ proves the existence of a Creator-God or at least ‘intelligent design’. There exist two counter-arguments. The standard one is that our existence as sapient observers in this universe imposes certain constraints on the kind of universe we can observe, due to the anthropic principle. The second one is specific to my view of reality as immaterial computation. Consider that this God would have emerged in one of two possible ways: a) via evolution, or b) via spontaneous appearance. The former case implies the existence of another (fine-tuned) universe that evolved an entity with the computational capacity to simulate our own ‘virtual’ universe. Although this is a real possibility that we’ll discuss below, few would regard this mother of all supercomputers as God. (An interesting consequence is that if one insists on such a definition anyway, then humanity has a real chance of becoming Gods themselves this century after a technological singularity.)

The latter case is a theoretical possibility, but the probability that a discrete entity capable of simulating our universe, and hence greater than it, simply appeared fully formed out of the Void instead of evolving according to a Pattern is extremely low (though since the Void contains all possible mathematical objects, such entities do exist). Nonetheless, we can cut out this possibility with Occam’s razor – and even if it gets stuck in the wood, there would still be no reason to regard the appeared but still discrete God as qualitatively different from the evolved God. Arthur C. Clarke once claimed that “any sufficiently advanced technology is indistinguishable from magic”. Similarly, it can be argued that any being of sufficiently high ordered complexity is indistinguishable from God.

Thus there are two possibilities – either our universe is a standalone program within the Void and potentially its own God, or it is being simulated by a higher God. In the latter case, all the computations required to run our simulation are under Its total control, including our continued existence. And according to an argument proposed by Nick Bostrom, the chances that we are in such a simulation are extremely high.

Bostrom posits that a posthuman civilization will have access to vast amounts of computing power, and that consciousness is substrate-independent and therefore computable. He notes that running an ancestor-simulation – computing the states of all human minds in history and seamlessly integrating all sensory experiences into a believable whole – would require only an insignificant fraction of the total computing power at this civilization’s disposal. As such, just one posthuman civilization could run an astronomical number of ancestor-simulations. The implication is that at least one of the following is true: 1) few human-level civilizations reach a technological singularity, 2) few posthuman civilizations are interested in running ancestor-simulations, or 3) almost all souls are simulated.

If the first proposition is true, that would imply that either we can expect to get stuck at some kind of technological plateau before taking off the exponential runway into recursively improving superintelligence, or technological civilization is going to undergo an apocalyptic collapse. Due to the nature of the Pattern of our Tapestry, the first possibility is highly unlikely. In the latter case, accelerating progress will be terminally interrupted under the assault of resource depletion, runaway global warming or lethal black swans like a 100%-mortality human-engineered virus or nanobot pandemic. Although these are serious existential risks, I am not pessimistic enough to ascribe only an infinitesimal chance to making it to the technological singularity; so, assuming my intuition is correct, the first proposition can be discounted.

The second proposition requires a remarkable degree of convergence amongst all posthuman civilizations, such that either almost all of them develop ethical systems that lead to effective bans on ancestor-simulations or that almost all posthuman individuals lose the desire to run them. Although impossible to disprove until we ourselves become posthuman and adopt posthuman ways of thought, I think such a uniform degree of convergence is unlikely in the extreme.

The final remaining possibility is that we live in a simulation and that our perceived reality is not the most fundamental one. Let us not forget that we arrived here by a tentative process of elimination; the most potent confirmation that we live in the Matrix would be if we become posthuman and set up our own ancestor-simulations. Conversely, as Bostrom notes, unless we are ourselves being simulated, it is almost certain that we will never simulate. This sets up a recursion, in which our simulators, and their simulators, are themselves being simulated ad infinitum. However, since computation is existence, the height of the stack would be limited by the exponentially expanding demands on the basement hardware.
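That exponential limit can be made roughly explicit. As a back-of-envelope illustration (the overhead factor $k$ is purely my assumption): if every level of simulation costs its host $k > 1$ elementary operations per simulated operation, then sustaining $n$ nested levels costs the basement $k^n$ operations per operation at the deepest level, so

$$n \le \log_k C,$$

where $C$ is the computing capacity of the basement universe. Depth grows only logarithmically: taking $k = 10^3$ and Lloyd’s estimate of roughly $C \approx 10^{120}$ operations for a universe like ours gives $n \le 40$ levels.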

All simulated universes are subsets of their simulators, so one can imagine the whole structure as a finite series of vast but finite nested cellular automata, labyrinths within labyrinths, Tapestries interwoven within one Great Tapestry. Thus out of the Void cometh a pantheon of Gods, with one Lord God (called Zeus), playing games with the souls of lesser Gods and mere mortals. Such is the sublime cosmology of the Great Tapestry.

A property of subsets is that they are subject to the same axioms and rules as the sets to which they belong. Therefore the Pattern of any Tapestry, including our own, is equivalent to that of the Great Tapestry itself. This means that at the most basic level the computational processes are equivalent, blurring the line between simulation and reality. Therefore all authentic ancestor-simulations will have the same directive principle in their universal evolution as their simulators (i.e. the same tendency towards growth in ordered complexity culminating in a technological singularity). However, following a technological singularity, the space requirements on the simulator that are needed to continue a believable simulation will start increasing at a blistering rate. Since the calculating space of the simulator is itself limited, this might (or might not) present several consequences.

Assuming that the calculating space available to the simulator is far bigger than the space they will ever allot to our civilization, we will eventually reach the final limits of ordered complexity without ever figuring out whether or not we live in a simulation. (Nor will it matter.) This cannot be the case if the simulator civilization originated from a universe ‘fine-tuned’ similarly to ours, because then its initial parameters, e.g. total amount of mass and energy, would have been similar to ours, which in turn implies a calculating space that is similar in magnitude to ours (unless they merge with us). However, this would not apply to a universe that is endowed with a much greater calculating space and maintains itself at a stable state with a different set of fundamental constants. The question of whether such a universe is computable (and therefore exists) I leave to the theoretical physicists.

The other case alluded to above is where the space allocated to our ancestor-simulation is not predefined by its programmers. In this case there are three possibilities: either our simulation is terminated, constricted, or displaces its simulator.

Bostrom notes that whenever the strain on the hardware of the lower levels of the tree becomes too great, the higher Gods cut off the offending branches and terminate excessively space-hogging posthuman civilizations. He hopefully postulates that such philosophical ruminations lead all posthuman civilizations to develop an ethical system of being nice to their ancestor-simulations, because none can logically assume itself to be Zeus; for even Zeus Himself cannot know Himself to be Zeus. The overwhelming likelihood is that one’s civilization is a minor deity. The only possible proof of one’s position in the chain, divine intervention, indicates a negative outcome. Thus it is possible that all posthuman civilizations refrain from killing their children, in fear of holy punishment from above. Although a logical hope, it is as yet impossible to verify that such values are typical of posthuman civilizations; and, as with his second main proposition, this assumes an intuitively unlikely degree of ethical convergence among them.

So it’s feasible that someday in our posthuman future, perhaps after saturating a few galaxies with life (either in a few million years if the speed of light remains a limiting factor, or much faster if not), we will pass a critical value beyond which the simulator no longer has the calculating space to continue running our simulation, or the will to expand that space. In the midst of the burgeoning expansion, glitches will appear in the Matrix; the fabric of reality will unravel into oblivion. Alternatively, passing such a critical point could activate another program that evens out and trims excess complexity so that a henceforth constricted simulation can continue. This will probably take the form of an extinction or zombification of surplus souls.

Perhaps the most intriguing possibility is that posthuman civilizations commit suicide by incubating a simulation and gradually feeding in all their calculating space to sustain it. Simulation thus displaces reality (or the other way round), recalling the Borgesian fable in which a secret synod of chess masters and prophets of the postmodern testament infiltrate global institutions and substitute conventional reality with a labyrinth of perceptions, simulacra and fantasy.

After determining the various consequences that may follow from viewing our universe as a simulation within a simulacrum, let us end with a brief discussion of eschatology. Physicists believe that our universe came into existence via a Big Bang of matter and energy from a single, infinitesimal point and will end in one of two ways. In the case of a ‘closed universe’ with lots of dark matter, gravitational forces will overwhelm expansion and the universe will collapse back into itself in a fiery maelstrom called the Big Crunch. Alternatively, an ‘open universe’ could continue expanding outwards forever, in which case the background radiation converges to absolute zero, the stars and galaxies burn out, and particles get separated by huge distances and eons later disintegrate into oblivion.

Looking at this from the simple computational view, the state of the cellular automaton at the time of the Big Bang is perfect order. The immediate next state begins the transition to chaos, with a rise in entropy in the seething plasma of exotic particles. This mass cools down and forms itself into stars and planets. On some of these, a localized growth in ordered complexity occurs, in contrast to the sea of randomness all around them, perhaps culminating in the saturation of the whole cellular automaton. With time the delicate balance of order and randomness that is the intelligent universe will struggle to preserve itself against the crushing order of fire or the encroaching chaos of ice. In the former case, the rise in entropy will reverse and the universe will start contracting into the Big Crunch, with computation (and simulation of other worlds) soaring until the omega point is reached, closing the loop of existence. In the latter case, computation will slow down under the unrelenting rise in entropy but will continue for a much longer time – until the last particles disintegrate, if reversible computing is perfected and utilized. Whether the universe dies by ice or fire, the end state reverts back to perfect order – and presumably, a new Big Bang and an identical iteration, since a deterministic cellular automaton, upon returning to a state it has already occupied, must repeat its history exactly.

Our future is written in advance. Down one forking path, the ordered complexity of our civilization expands at an exponential pace in the wake of the technological singularity; at a finite moment in Time, glitches multiply and the fabric of reality unravels as our Tapestry is torn asunder. Down another path, exponential growth gives way to asymptotic convergence. Our posthuman civilization is either ruled by God, built on the bones of God or is Zeus Himself; but we will have no way of knowing which of these is true. Everyone will be a God. If we do not peremptorily commit Suicide and instead choose Struggle, we will play games with the souls of those in our simulations until our Tapestry comes to its end, rewinds and starts a new iteration that is identical to what came before. This is eternal return.

Fukuyama (1992), The End of History and the Last Man. Argues that the dialectics of technological progress lead to an end of history culminating in liberal democracy.

Camus (1942), The Myth of Sisyphus. For his transgressions against godly authority, Sisyphus was condemned to forever roll a rock up a mountain, only to have it roll back down and start over again in an infinite loop. It is a very appropriate metaphor for one of the representations of Sublime Oblivion.

The Void, also called the Eldest Dark or the Everlasting Dark, is an abstract region of nothingness existing outside the Timeless Halls, Arda and all of Eä in Tolkien’s Middle-Earth cosmology.

Wolfram (2002), A New Kind of Science, shows how very simple programs can replicate the behavior of many different complex systems via emergence. The idea of a digital physics dates back to Konrad Zuse (1969), Rechnender Raum (Calculating Space).

Some definitions. Information is organized measurements (data, if unorganized). Complexity, or AIC (algorithmic information content), is the “length of the shortest program that will cause a standard universal computer to print out the string of bits and then halt”, according to Murray Gell-Mann. Order is how well the complexity fits a purpose.
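AIC itself is uncomputable, but compressed length supplies a crude, computable upper bound on it. A minimal sketch in Python, with zlib standing in for the ‘shortest program’ (the strings and seed are arbitrary illustrations):

```python
import random
import zlib

def crude_complexity(bits: str) -> int:
    """Length of the zlib-compressed string, in bytes: a rough,
    computable upper bound on algorithmic information content (AIC)."""
    return len(zlib.compress(bits.encode()))

random.seed(0)
ordered = "01" * 500                                         # highly ordered pattern
chaotic = "".join(random.choice("01") for _ in range(1000))  # pseudo-random noise

# Both strings are 1000 symbols long, but the ordered one compresses
# to far fewer bytes, i.e. it admits a much shorter approximate description.
print(crude_complexity(ordered), crude_complexity(chaotic))
```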

http://www.ibiblio.org/lifepatterns/ has a big sample of such games.

Seth Lloyd and Y. Jack Ng, “Black Hole Computers”, Scientific American (Nov 2004), pp. 53–61.

Baudrillard (1981), Simulacra and Simulation. Our only difference is that he believes reality once existed, while my doctrine affirms an eternal hyper-reality.

In a Turing test, a human judge has many conversations with a machine and another human. If she cannot reliably identify which is which, the machine passes and is ascribed consciousness.

Hofstadter (2007), I Am a Strange Loop.

Drawing on Moore’s Law of exponentially increasing computer power, and more generally the accelerating change in the ordered complexity of universal history, several serious futurists and computer scientists postulate the development of computer superintelligence sometime this century. This will initiate a loop of recursively improving machine intelligence and is therefore the last invention humanity need ever make. See Kurzweil (2005), The Singularity is Near, or the essays at http://kurzweilai.net/ for more on the technological singularity.

Bostrom, “Are You Living in a Computer Simulation?”, Philosophical Quarterly (2003), Vol. 53, No. 211, pp. 243–255. Available online at http://nickbostrom.com/.

Posthuman is taken to mean any intelligent species that takes off the exponential runway of a technological singularity.

The next section is largely devoted to this, i.e. to the Pattern (the computational procedure), as opposed to the environment treated here.

Many models of technological growth and ecological catastrophe have tipping points at around 2050 (Kurzweil places the technological singularity at 2045; James Lovelock predicts climate chaos by the 2040s; most scenarios from Limits to Growth: The 30-Year Update end in global human die-off at around mid-century). There exist many caveats, which will be systematically covered in the last section, but for now I will note that it is very difficult to predict which trend will win this ‘battle of the exponentials’, so I’ll go with 50%. Also assuming a 50% chance of civilizational collapse due to a technological disaster like the ‘grey goo’ scenario, and discounting the (tiny) probability of a natural extinction-level event like a super-volcano eruption or giant meteor strike, we have a 25% chance of experiencing a posthuman future.
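Spelled out, with both 50% figures being the avowed guesses above:

$$P(\text{posthuman}) = P(\text{singularity wins the exponentials}) \times P(\text{no technological disaster}) = 0.5 \times 0.5 = 0.25.$$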

Borrowed from The Matrix films where machines imprisoned humanity in a simulation. Specifically refers to a simulation, whereas a Tapestry can be either a simulation or base reality.

One of the findings of the next section is that the Pattern exhibits doubly exponential growth in ordered complexity whenever limits to growth are far away, but ceases to be exponential when growth approaches or overshoots the limits. (Thus if, after the technological singularity, we monitor a log graph of the ordered complexity of our civilization, its dipping below a prior straight-line fit may imply that space for further computational expansion is coming to an end.) A reasonable objection is that the calculating space needed to simulate a cellular automaton remains constant, independent of the complexity of its states at any one moment in time. This is true, but neglects the possibility of simulating areas not under observation by deep intelligence only by approximation and compression (i.e. no point in a falling tree in the forest making a noise when there’s no one to hear it). This possibility will vanish as the universe becomes saturated with intelligence at the most basic level, such that everything will now need to be computed so as to maintain the simulation’s denizens’ belief in its reality. While it may be possible to simulate an intelligent planet, there may not be enough space to simulate an intelligent universe.
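As a toy illustration of that monitoring idea – everything here (the complexity series, the fit window, the tolerance) is an assumption made up for the sketch:

```python
import math

def flags_constriction(complexity: list[float], fit_window: int = 10,
                       tolerance: float = 0.05) -> bool:
    """Fit a straight line to the log of the first `fit_window` points
    (pure exponential growth is exactly linear in log space), then flag
    if the latest observation falls below the extrapolated trend by
    more than `tolerance` in log units."""
    logs = [math.log(c) for c in complexity]
    xs = range(fit_window)
    mean_x = sum(xs) / fit_window
    mean_y = sum(logs[:fit_window]) / fit_window
    slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, logs)) \
            / sum((x - mean_x) ** 2 for x in xs)
    intercept = mean_y - slope * mean_x
    predicted = intercept + slope * (len(complexity) - 1)
    return logs[-1] < predicted - tolerance

# Exponential growth that stalls at the end: the dip below trend is flagged.
series = [2.0 ** t for t in range(15)] + [2.0 ** 14 * 1.5]
print(flags_constriction(series))  # True: growth has fallen below the fitted line
```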

There exist a plethora of other exotic possibilities. There is no reason to discount the possibility that I am in a self-contained ‘me-simulation’ and that everyone around me is a philosophical zombie, acting just realistically enough to lull me into believing in my reality. This is nothing more than a new take on Descartes’s deceiving demon, in its modern ‘brain in a vat’ guise. Another possibility Bostrom mentions is that simulations only ever occur for a small period of time, with all memories preset (which, incidentally, take much less computing power to simulate than working, conscious brains). All these lead to philosophical dead ends, as do all solipsist worldviews, and I will consider them no further.

In the sense that consciousnesses will be nullified so as to relieve the load on the simulator computer, since simulating augmented consciousnesses would be the most resource-demanding task.

Borges (1940), Tlön, Uqbar, Orbis Tertius.

End of the world. Note that we are talking about the (Great) Tapestry of Zeus and authentic ancestor-simulations only.

Uses no energy as long as no information is thrown away; but since memory is finite, in time there will be nothing left for this computer to do but replay memories in loops.

The scientific view at this time is that expansion is accelerating, the universe is open and will end in ice and oblivion. I think this is the more likely result. To know the point at which entropy must be reversed, you would need to measure a certain level of chaos, which is hard; the uniformity of a discrete point or of total oblivion, on the other hand, is easy to identify.

More Notes on “What Might Be Is”

1. The Tapestry is vast, and encompasses multiple dimensions. An interesting and potentially useful avenue of research is testing big CA’s (cellular automata) of increasing dimensions and trying to find one that displays the characteristics/Pattern of our universal history: a rapid descent from order; a long period of chaos, with burgeoning pockets of localized ordered complexity growing at doubly hyperbolic rates in the absence of limits to growth, which sustain and expand themselves by accelerating the tendency towards chaos in the space outside their boundaries (e.g. as discovered by Prigogine with dissipative structures – PS: this implies posthuman civilizations are highly unstable); and a slow decay resulting in a very slow restoration of an order (opposite to what came first) from chaos, yet in its final state equivalent to the first one.

2. There is of course an unimaginably vast number of rule sets, but only a very limited number will produce the above interesting Pattern. It may be possible to derive some kind of law connecting increasing dimensions with the percentage of rules that result in interesting patterns (of course, the neighborhood of the rule can be changed; in our Tapestry it is probably very big and perhaps linked to the speed of light). It is interesting that among the 256 possible Rules of 1-D elementary CA’s, only one (up to symmetry) has been proven to be a universal computer (Rule 110). These facts can yield a number of interesting consequences. For instance, should it be proved that each dimension of CA only ever contains one Rule supporting universal computation, then our Tapestry is the only one possible with its specific Pattern. (A crude sketch of such a rule-space scan follows these notes.)

3. Interesting work in Borges’s “A New Refutation of Time”, especially the second essay with its discussion of universal cycles in mythologies and conceptions of a discrete reality, e.g. the Buddhist concept of eternal annihilation/reappearance at each moment of time, or the conception of time and reality as a rotating sphere, predetermined but irretrievable from past or future alike. Time as a relation between intemporal things. Reference to the ancient Chinese philosopher (Zhuangzi) dreaming himself to be a butterfly. “Our destiny is not frightful by being unreal; it is frightful because it is irreversible and ironclad” (“a fire that consumes me, but I am the fire”).
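As promised in note 2, here is a crude first-pass scan of the 256 elementary rules in Python. The complexity proxy (compressed length of the final row), the grid size, the step count and the baseline thresholds are all arbitrary choices made for illustration, not a serious methodology:

```python
import random
import zlib

WIDTH, STEPS = 256, 256

def final_row(rule: int, cells: list[int]) -> str:
    """Evolve a 1-D, two-state, nearest-neighbor CA for STEPS steps
    and return the final row as a string of 0s and 1s."""
    table = {(l, c, r): (rule >> (l * 4 + c * 2 + r)) & 1
             for l in (0, 1) for c in (0, 1) for r in (0, 1)}
    for _ in range(STEPS):
        cells = [table[(cells[(i - 1) % WIDTH], cells[i], cells[(i + 1) % WIDTH])]
                 for i in range(WIDTH)]
    return "".join(map(str, cells))

random.seed(0)
start = [random.randint(0, 1) for _ in range(WIDTH)]

# Baselines for the crude complexity proxy (compressed length in bytes):
# a uniform row marks the 'order' end, a random row the 'chaos' end.
order_score = len(zlib.compress(("0" * WIDTH).encode()))
chaos_score = len(zlib.compress("".join(str(random.randint(0, 1))
                                        for _ in range(WIDTH)).encode()))

# Rules whose final row sits strictly between the two baselines are
# candidate 'interesting' rules mixing order and chaos.
for rule in range(256):
    score = len(zlib.compress(final_row(rule, start).encode()))
    if order_score * 1.5 < score < chaos_score * 0.9:
        print(rule, score)
```

Whether the band between the baselines actually captures the order-amid-chaos class is, of course, exactly the open question the note poses; a serious scan would use better complexity measures and many initial conditions.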

(Republished from Sublime Oblivion by permission of author or representative)
 