Smil, Vaclav – Global Catastrophes and Trends (2008)
Category: futurism, climate change, geopolitics, catastrophes; Rating: 5/5
Summary: Google Books

Vaclav Smil, an energy theorist and language connoisseur, brings his talents to bear on this idiosyncratic, incisive and balanced book about the global future. From the outset, he lays out his skepticism of universal theories of history and of attempts to quantify current trends into point forecasts (e.g. the predictions, made in the halcyon days of the industry, that nuclear power would make energy too cheap to meter). Instead, he emphasizes the role played by the sheer complexity of human systems and their discontinuities – for instance, who could have imagined that a generation after the death of Mao, China would be the workshop of the world, helping underwrite US military dominance?

Having established “How (Not) to Look Ahead”, Smil introduces his method – analyzing key variables categorized into a) unpredictable events – “catastrophes”, b) powerful trends (the effects of globalization, global demography, the energy transition), and c) the shifting balance of power between the Great Powers (the marginalization of Japan, an unstable Islam, Russia’s partial resurgence, the uncertain rise of China and an increasingly faltering United States). It is a method I highly favor, and I agree with most of the arguments he makes in this book, albeit with a few major exceptions.

Fatal Discontinuities

First, he classifies the catastrophes or “fatal discontinuities” into: 1) known catastrophic risks (asteroid strikes, earthquakes, super-eruptions), 2) plausible catastrophic risks (nuclear war, pandemic) and 3) speculative risks (“grey goo” or takeover by machines). [There is another classification of existential risks by Nick Bostrom].

The likelihood of world-changing natural disasters occurring is vanishingly small. Though floods and earthquakes killing up to hundreds of thousands of people happen about once or twice per decade, their global effects are very limited. An asteroid capable of terminating industrial civilization would need a diameter of about 2 km or more (by darkening the sky with micro-particles and destroying the ozone layer), but the chances of such asteroids striking the Earth decrease exponentially with greater size. In any case, the majority of large Near-Earth Objects have already been identified and found to pose no threat. Predicting super-eruptions is much harder, though again, based on the geological record, the chances of an unprecedented catastrophe are minimal – it would have to be on the scale of the Toba, Sumatra event 72,000 years ago, which spewed out 2,000 km³ of ejecta and reduced the world human population to 10,000. An example of a modern threat is a super-eruption of Yellowstone, which is about due, though we’d have to be extremely unlucky for it to blow up during our lifetimes. Another possibility is a submarine landslide generating a tsunami, such as at La Palma in the Canary Islands, where a 500 km³ slide would create a mega-tsunami with repeated walls of water up to 25 m high striking Florida.

The second category includes pandemics and mega-wars. During the last generation, the onslaught against disease stalled and went into partial reverse, with a growing list of contagious diseases (the most significant of which is HIV/AIDS), failures in eradication (e.g. polio) and antibiotic resistance (multi-drug-resistant TB, which now finishes off many AIDS sufferers). There also remains the specter of an influenza pandemic, which would be deeply disruptive and potentially highly virulent. Though a repeat of 1957 or 1968, or the current swine flu for that matter, wouldn’t have much effect, the consequences of the return of a Spanish Flu-like pandemic (1918) would be devastating. Arising out of the natural disease reservoir of South China, such a flu could spread more rapidly than before (air transport, globalization, greater urban populations), and a mortality profile hard on the younger cohorts (15-30 years) would have devastating effects on aging European societies. Globalization would shut down as countries close borders, with highly disruptive effects on national economies. However, we are much better prepared for handling a pandemic today than in 1918, thanks to better nutrition and technological advances such as mechanical respirators, antibiotics for treating secondary infections, antivirals, and mathematical models for optimizing quarantines and vaccinations.

Just as another pandemic is almost certain to happen, so there will continue to be violent conflict, terrorism and genocides, perhaps even another large-scale democide or mega-war with tens to hundreds of millions of casualties – despite the fact that the incidence of violent conflict has fallen by 40% since the early 1990s, and despite the agreed reductions in the US and Russian nuclear arsenals. Some wars may be transformational and fundamentally change the course of world history (Smil identifies the Taiping Rebellion, the American Civil War, WW1 and WW2 as transformational). The risk remains of an accidental nuclear war between the US and Russia killing hundreds of millions, or of the rise of a revisionist, expansionist power unleashing WW3. The potential deaths accruing to war are several OMs (orders of magnitude) higher than for all natural catastrophes.

Smil points out that terrorism is 1) nothing new, having gone through four “waves” – a) the assassinations of Russia’s Narodnaya Volya, b) decolonization, c) the PLO, IRA, Basque ETA and Western left-wing groups favoring bombings and aircraft hijackings, and d) modern Islamic terrorism, beginning with the Iranian Revolution / Hezbollah and later extending to the Palestinian intifada and al-Qaeda, at the symbolic start of a new century (1400) by the Islamic calendar – and 2) rarely effective, with a few exceptions like 9/11 (and even there its value lay mostly in symbolism – [the spirit of terrorism], disproportionate public fear and official overreaction), for the chances of dying from terrorism are extremely low. Since producing mass casualties is extremely difficult, terrorists have to settle for “mass disruption” instead of “mass destruction”.

His final category of fatal discontinuity is “imaginable surprises”, such as the annihilation of the Earth by exotic particle experiments, unforeseen climatic shifts (e.g. a drastic cooling), grey goo eating the biosphere within a few days, etc. He correctly doesn’t put much stock in these sci-fi scenarios.

Unfolding Trends

Smil makes some general observations about trend analysis. First, trends tend to follow a pattern of incremental engineering progress (cheaper, more efficient) and gradual diffusion, yet are sometimes marked by profound discontinuities, e.g. fertility transitions or the continuing failure to control nuclear fusion. Surprises can occur because a) long-term trends aren’t recognized in time, such as the Soviet Union’s post-1965 stagnation, b) we can’t predict which trends will become embedded in society and which will veer off course, and c) their effects on human society are unknowable (e.g. will the oil peak be moderated by a smooth transition to gas or renewables, or does it herald the end of industrial civilization?). With that said, Smil focuses on three things: 1) the coming energy transition, 2) Great Power dynamics and 3) the future of globalization.

Smil then moves into his forte – global energy systems. The first point he makes is that the basis of today’s industrial system was laid a long time ago, and that improvements since then have paled in significance. “The most important concatenation of these fundamental advances took place between 1867 and 1914”, when engineers developed electricity generation, steam and water turbines, internal combustion engines, inexpensive steel, aluminium, explosives, synthetic fertilizers and electronic components, thus laying the “technical foundations of the twentieth century” [much as men like Marx, Bismarck and Garibaldi laid its ideological foundations]. A second Golden Age occurred in the 1930s and 1940s, which saw “the introduction of gas turbines, nuclear fission, electronic computing, semiconductors, key plastics, insecticides and herbicides”.

This technological base requires huge, uninterrupted supplies of energy for its existence. The sources of that energy remain constant for long periods because of the difficulty of substitution, which involves discarding old infrastructures and building anew. As a share of world total primary energy supply (TPES), coal went from 95% in 1900 (excluding phytomass) to just 28% in 2005, while crude oil rose from 4% in 1900 to 27% in 1950 and 46% in 1975, but dropped back to 36% by 2005. Natural gas has expanded significantly since mid-century, reaching 24% of global TPES by 2005. Altogether, fossil fuels supplied 88% of global TPES in 2005, compared to 93% in 1975. Despite all the talk about environmentalism and energy security, there has been no walk; ours is still a predominantly fossil-fuel-based civilization.

Looking ahead, Smil foresees that a) there will be no oil peak, b) coal supplies are effectively unlimited, constrained only by concerns over climate change, and c) gas will rise in importance because of its relatively low carbon content per unit of energy and advances in LNG technology.

Though I am in qualified agreement with b) and c), Smil’s ridiculing of the oil peakists in a) is singularly unconvincing. He claims the Hubbert model is “simplistic” in that it is “based on rigidly predetermined reserves” and ignores “innovative advances or price shifts”. The first point is flat-out wrong. It applies to Hubbert’s first model, but in his later work he devised a method that did away with the need for guesstimates of URR (ultimately recoverable reserves) – and which gives pretty much the same results, indicating that the effects of technology and higher prices are limited. Take the case of the US: despite the discovery of oil off Alaska and in the Gulf, despite there having been more exploration in the country than in the rest of the world combined, despite the periods of high prices during 1973-1986 and 2002-2008, and despite its light regulatory environment and access to cheap credit, American oil production has declined relentlessly since the early 1970s. Quite simply, the evidence indicates that the power of depletion will eventually defeat ever greater and smarter extraction attempts. Read one of these overviews from 2007 or 2009 for a more in-depth explanation of peak oil.
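To make the model under dispute concrete, here is a minimal sketch of a Hubbert-style logistic depletion curve. The URR, peak year and steepness figures are purely illustrative assumptions of mine, not estimates from Smil, Hubbert or anyone else.

```python
# Minimal sketch of a Hubbert (logistic) production curve.
# All parameters are illustrative assumptions, not real-world estimates.
import numpy as np

def hubbert_production(t, urr, peak_year, steepness):
    """Annual production under a logistic depletion profile.

    Cumulative production Q(t) = URR / (1 + exp(-k*(t - t_peak))),
    so annual production is its derivative: P(t) = k * Q(t) * (1 - Q(t)/URR).
    """
    q = urr / (1.0 + np.exp(-steepness * (t - peak_year)))
    return steepness * q * (1.0 - q / urr)

years = np.arange(1900, 2101)
# Hypothetical numbers: 2,000 Gb ultimately recoverable, peak around 2010.
prod = hubbert_production(years, urr=2000.0, peak_year=2010.0, steepness=0.05)
print(f"Model peak year: {years[prod.argmax()]}, peak output: {prod.max():.1f} Gb/yr")
```

The point at issue is only the shape: in this model production rises, peaks when roughly half the URR has been extracted, and then declines regardless of effort – which is the pattern US output has in fact traced since the early 1970s.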

However, I agree with Smil that the transition to non-fossil-fuel sources will be a drawn-out process, considering that most of the “prime movers” in our society are built around fossil fuels (the steam turbines that generate 70% of global electricity output, the gasoline-fueled internal combustion engine, the diesel engine, the gas turbine, and the induction electric motor). [I would note that these difficulties are going to be aggravated by peak oil].

Additional difficulties include a) the sheer scale of the shift, b) the lower energy density of replacement fuels, c) the substantially lower power density of renewable energy extraction, d) the intermittence of renewable flows and e) the uneven distribution of renewable resource extraction.

1) Global civilization uses fossil energy at a rate of 12 TW, a twenty-fold increase from the late 1890s (total world TPES is around 13 TW). The only flow significantly larger than current TPES is the solar flux, at 122 PW – 4 OM greater; the others are wind (<10 TW), ocean waves (<5 TW), and tidal energy / geothermal (<1 TW) (see the order-of-magnitude sketch after this list). Though the Earth’s net primary productivity (NPP) / terrestrial photosynthesis yields solid fuels (biomass) in the range of 55-60 TW, exploiting it would further degrade vital ecosystemic services, and besides, humanity already appropriates 30-40% of global NPP as food, feed, fiber and fuel (with wood and crop residues accounting for 10% of current TPES).

2) Coal and oil are far more energy-dense than wood and in general biomass cultivation will take up 4-5 OM more space than conventional oil / gas infrastructure. “In order to energize the existing residential, industrial and transportation infrastructures inherited from the fossil-fueled era, a solar-based society would have to concentrate diffuse flows to bridge power density gaps of 2-3 OM”. As an example, even using Brazilian ethanol from sugar cane to replace all current gasoline, diesel and kerosene used in transport would require the subjugation of 1/3 of the world’s cultivated lands – or all agricultural land in the tropics. Corn ethanol has half the power density of sugar cane ethanol. Large-scale adoption will have catastrophic impacts on food self-sufficiency.


3) Renewables don’t satisfy the base-load power requirements of an industrial society. Load factors are 75%+ for coal-fired power stations and 90%+ for nuclear power stations, whereas for wind power the figure is just 20-25%.

4) Renewable flows are also unevenly distributed, just as 60%+ of easily accessible hydrocarbons are locked up in the Persian Gulf’s Zagros Basin. Cloudy Jakarta gets as little sun as Edmonton – a problem shared across much of the equatorial zone. Many areas are either too still for wind power or too windy, i.e. liable to be heavily damaged by hurricanes.

5) Costs won’t necessarily decline. To the contrary, photovoltaic silicon prices have more than doubled, and the prices of the steel, aluminium, plastics, etc., that go into wind turbines have also drastically increased due to the underlying rise in oil prices.
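As a quick back-of-the-envelope check of the order-of-magnitude gaps cited in point 1), here is the log10 arithmetic on the flow figures quoted above; the numbers are simply those from the review, converted to watts, and the script is only an illustration.

```python
# Order-of-magnitude (OM) comparison of energy flows, using the figures
# quoted in the review (converted to watts).
import math

tpes = 13e12  # current world TPES, ~13 TW
flows = {
    "solar flux":          122e15,   # 122 PW
    "terrestrial NPP":     57.5e12,  # midpoint of the 55-60 TW range
    "wind":                10e12,    # <10 TW (upper bound)
    "ocean waves":         5e12,     # <5 TW (upper bound)
    "tidal / geothermal":  1e12,     # <1 TW (upper bound)
}

for name, watts in flows.items():
    om_gap = math.log10(watts / tpes)
    print(f"{name:18s}: {watts / 1e12:9.1f} TW  ({om_gap:+.1f} OM vs TPES)")
```

Only the solar flux comes out roughly four orders of magnitude above current TPES; every other renewable flow is of the same order as TPES or smaller, which is the crux of points 1) to 3).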

Smil reiterates some pretty standard arguments on nuclear and hydrogen. The nuclear industry expanded quickly until the 1970s, but stalled at that point because its apparent economics had not included costs like state-subsidized nuclear R&D, decommissioning and waste disposal (and it later suffered negative PR from Chernobyl). Hydrogen is not a realistic option barring the mass spread of cheap solar power. He concludes that this energy transition will be fundamentally different from the previous one, which was driven by declining resource availability (deforestation), the higher quality of fossil fuels (energy density, easier storage, more flexibility) and the lower cost of coal and hydrocarbons. According to Smil, none of these factors now works against the fossil-fuel economy – though he expresses some concern over its contribution to climate change.

Having outlined his idea of the main trend of the next fifty years, Smil turns to a standard analysis of the shifting balance of international power between the US, China, Japan, Russia, Islam, and Europe. He cautions against subscribing to the conventional wisdom, pointing out that a) the Soviet collapse and Japan’s post-1980′s stagnation were largely unforeseen, b) the tendency of the US to surprise, going from decline / deindustrialization in the 1980′s to a vigorous “new economy” in the 1990′s before becoming fiscally and militarily overstretched in the 2000′s.

Geopolitical Trends

Smil does not believe Europe holds out much promise, unlike some delusional commentators. It is in long-term, century-long economic decline relative to the rest of the world; its economies are mired in inefficiency, unemployment and bureaucracy, and are less technologically dynamic than Japan or the US. Both Britain and Spain face separatist challenges and are economic basket cases. France is an over-regulated dirigiste economy with problems integrating its 10% Muslim population (remember the burning banlieues?), but is at least demographically healthy – unlike Italy and Germany, which are rapidly aging and about to depopulate, with very negative economic effects (they might be in a fertility trap, in which ever smaller generations must carry ever higher tax burdens, which in turn limits their reproductive freedom). In particular, Italy is sinking back into corruption and Mafia influence, its artisanal manufacturing is being destroyed by Chinese competition, and there remain huge gaps between the Nord and the Mezzogiorno. He reiterates Mark Steyn’s Eurabia arguments (crudely summarized: lots of under-reported young, fertile, fanatical Muslims simmering in ghettoes), a theory which has a number of holes in it. Finally, the EU structure itself is disconnected from national electorates and from reality in general, and has no inspiring sense of mission; further expansion will just weaken it further. [Agreed with most things – I believe the EU by 2020 will be a much less significant institution and European nations will be tottering, preoccupied with trying to solve their own internal problems].

After a period of euphoric hubris in the 1980s, when it seemed Japan would be number one, the country crashed into a long, ongoing period of stagnation marked by crippling deflation, the fall of the Nikkei from a peak of 39,000 in December 1989 to below 10,000, and the appearance of the NEET generation (not in employment, education or training). Though it remains rich and technologically advanced, there is a mood of anomie as long-term jobs have vanished and fertility has plunged to around 1.2 children per woman. Smil is pessimistic on Japan because of a) its ingrained conservatism [though might the recent electoral win by the Democratic Party of Japan later be regarded in the same vein as the Meiji reforms?], b) the continued hostility of its neighbors, which reinforces its security dependence on the US, especially to counter challenges from China and North Korea, and c) the start of depopulation in 2005, the retirement wave of the 2010s as the 1950s baby boomers leave the workforce, and the prospective massive aging of the population (a median age of 50 by 2025, and more 80+ year olds than 0-14 year olds by 2050). Japanese culture does not accept immigrants, and it will not be saved by robots.

The author sees Islam as being in a fractured state (secular / spiritual, Sunni / Shia / others, etc.) and in a difficult relationship with modernity, fighting the same internal civil war that characterized early modern Christianity. His short exegesis of the Koran finds support for many interpretations of just how restrictive Islam has to be, and this forms an ideological battleground between the extremists and the moderates. Signs of this backwardness include the Iranian fatwa against Rushdie, the prevalence of bizarre conspiracy theories on the Arab street, and Islamic countries accounting for just 2% of the world’s scientific publications. [To this we can add the Mohammed cartoons controversy and the 2003 UN Arab development report, which produced the astonishing statistic that more books are translated into Spanish per year than have been translated into Arabic in all of history]. There are stark inequalities within the ummah (e.g. between oil-rich Saudi Arabia and Pakistan) and internal instability, in part caused by the demographic explosion [usually in water-stressed environments, I'd add], which results in youth bulges – young men with no job prospects who are susceptible to joining violent groupings. Even as the region simmers, the outside world will be forced to take an interest due to its stranglehold over the world’s oil supplies (the five Persian Gulf nations produced about 1/3 of the world’s oil in 2005, and this figure is projected to rise substantially).

It is evident he knows his stuff when talking about Russia, or at least is well read on it. Contrary to most analysts, he believes it is resurgent in a real way, even though its longer-term prospects are uncertain. He lists its strengths as being an energy superpower (especially with respect to gas) with a big intellectual capacity and a formidable military that is being rearmed with newer-generation weapons. However, he foresees significant challenges in the form of its cyclical, hydrocarbons-based economy [as confirmed by the 2008 crisis, though the deeper problem is dependence on foreign credit], its unstable democracy, the Islamist insurgency in the Caucasus, and above all its negative demographic trends [I've written a lot about this, just search the site].

China is gradually returning to its old position of global economic predominance, its growth helped by Deng Xiaoping’s economic liberalization, FDI, the one-child policy, a cheap, disciplined and relatively skilled labor force, mass urbanization and migration to the coasts, and a certain degree of innovation (state-funded research facilities, as well as the flouting of IP and large-scale industrial espionage). It is “a Communist government guaranteeing a docile work force that labors without rights and often in military camp conditions in Western-financed factories so that multi-national companies can expand their profits, increase Western trade deficits, and shrink non-Asian manufacturing”. It is economically mercantilist, seeking resources around the world, and if current growth trends continue, China could match US military spending by 2020. However, there are substantial problems with a) the population (a severe 118:100 male-female imbalance, rapid aging and an undeveloped pension system), b) the economy (huge rural-urban inequality, high taxes on the peasantry and violent expropriations by the business-state symbiosis), c) the environment (deforestation and soil erosion dating from the Maoist era; little arable land per capita, and what there is shrinking from salinization, desertification and urban expansion; a need for more food even as irrigation is constrained by water shortages and crops are already very intensively fertilized; falling water tables and toxic rivers; very poor air quality; and China is now the leading CO2 emitter), and d) cultural mediocrity (not as much soft power as the US).

India is nowhere near as powerful as China, and the same factors limiting the latter militate against India. Its GDP is half as large; though its Gini index of income inequality is better (35 versus 45), this is a product of its underdevelopment, and besides, its deep social stratification / de facto caste system persists; malnutrition, immunization rates and adult illiteracy are all much worse in India; and China has 3x the electricity-generating capacity and 17x the container port capacity. Though democratic, India is likewise deeply corrupt, bureaucratic and ecologically degraded. It faces a nuclear-armed Pakistan and the prospect of tens of millions of Bangladeshi refugees spilling over once their country sinks under the rising seas.

Smil is an all-round pessimist, believing the United States may go the way of the Roman Empire. According to him, its woes include increasing economic and foreign policy challenges [see Shifting Winds], uncontrolled Hispanic immigration that threatens its long-term territorial integrity and Protestant “work ethic” values, and perennial budget deficits (in particular the structural nature of the current account deficit, formed by its reliance on oil imports to sustain suburban living arrangements and by the collapse of its domestic industrial base – mundane manufacturing, the auto industry, and now even aerospace and the food industry). It has a poor education system (see the results of the PISA international standardized tests), retiring baby boomers about to cash in on state obligations and their savings, obesity and a general cultural decline. However, the possibility of open discussion of these failings is a persistent American strength.

He then proceeds to make the argument that “US leadership is in its twilight phase” and that the “coming transition will be unprecedented” due to the global nature of its hegemony. He plausibly affirms that no nation is strong enough to replace the US as the sole superpower, meaning that there will probably be more chaos, instability and wars. Smil predicts that in sum the world will regret its passing.

Smil concludes with an analysis of globalization, making the point that it is an ongoing historical process originating in the 16th century, accelerating from the 1950s with the arrival of the tanker revolution, and now blossoming in the intricate production chains and JIT systems exemplified by Wal-Mart’s relationship with China. There is a stabilizing force, interdependence, which expands the economic scope of every globalized nation far beyond the limited autarkies of history, but at the same time makes them ever more vulnerable to disruption of these links; the destabilizing force is the growing inequality between nations (e.g. failed states), though a caveat is that when weighted by population there is an improvement, mainly thanks to China (but this is nullified when taking into account the intra-national growth of inequality, which has increased since 1970 in all the major countries – the US [35 to 47], Japan [25 to 37], China [25 to 50], Russia [25 to 40]). There is now no global “middle class”, according to Smil, which makes the system unstable. [Here I disagree – East-Central Europe, Latin America and even China fit the bill here].

Environmental Change & Conclusion

This next long section is a detailed analysis of the likely course and effects of global warming. Most of the material is pretty basic and I’ve already summarized it in my reviews of Six Degrees (Mark Lynas) and The Last Generation (Fred Pearce).

His most interesting discussions concern the human influence on the nitrogen cycle (which we have affected to a far greater degree than the carbon cycle) and the spread of antibiotic resistance. “Losses of nitrogen from synthetic fertilizers and manures, nitrogen added through biofixation by leguminous crops and nitrogen oxides released from combustion of fossil fuels are now adding about as much reactive nitrogen (c.159 Mt N/year) to the biosphere as natural biofixation and lightning do” (in contrast, human interference in the carbon cycle through land-use changes and fossil-fuel burning amounts to 10% of the annual photosynthetic fixation of that element, and for sulfur the figure is about 1/3). This leads to mass leaching, eutrophication and the growth of algae and phytoplankton, whose subsequent decomposition deoxygenates the water and kills bottom-dwelling aquatic species. The worst hypoxic zones are the Gulf of Mexico, the lagoon of the Great Barrier Reef, the Baltic Sea, the Black Sea, the Mediterranean, and the North Sea. Nitrogen oxides formed during combustion also contribute to photochemical smog in urban areas around the world and to acid rain. Fertilizer use will increase further as Asia demands higher crop yields and Africa needs to stop its increasing nutrient mining.

The other worrying trend he discusses at length is the rise of antibiotic resistance among pathogens, as penicillin and its descendants become increasingly less effective. This is inevitable, but it is much facilitated by widespread self-medication, over-prescription and poor sanitation in hospitals. If these negative trends continue, influenza deaths will skyrocket due to the inability to treat bacterial pneumonia, and treating tuberculosis and typhoid fever will become very difficult. A nightmare scenario could arise if this is accompanied by increasing malnutrition and AIDS, which make people far more susceptible to these secondary diseases.

In the last chapter, “Dealing with Risk and Uncertainty”, Smil sums up and embellishes his ideas, asserts the necessity of properly quantifying risks, cautions on the fallacies of linear extrapolation of current trends, and notes that even during a collapse there are silver linings, using the construction of the basilica of Santa Sabina in Rome (422-483) during the waning years of the Roman Empire (ended in 476) as an example.

In conclusion, this is a very good and entertaining book. There are some East European-style grammatical mistakes and perhaps a bit too much personal boasting, but otherwise it provides a realistic appraisal of the real potential catastrophes facing humanity (i.e. big wars and pandemics, not terrorism, earthquakes or “grey goo”) and of the dominant trends of the next fifty years (geopolitical flux / non-polarity, climate change & pollution, the energy transition). He approaches the subject rigorously and scientifically, so one gets a good perspective on possible futures; my only major disagreements with him are over his disbelief in peak oil theory and his paying too little attention to the social and geopolitical ramifications of climate change (he doesn’t really consider the catastrophic possibilities, sticking to the middle-of-the-road consensus IPCC forecasts).

(Republished from Sublime Oblivion by permission of author or representative)
 
This is the first in a series of philosophical essays in which I outline my philosophy of Sublime Oblivion. Here I demonstrate the indivisibility of the material and Platonic worlds and show that our universe is almost certainly a computer simulation nested within an abstract computer program or simulacrum, the truth that hides that there is none. The consequences of these results are explored.

Modern natural science has a lot to be proud of. Technology follows in its wake. The horizons of human consciousness retreat before its implacable incandescence. Its defining trait, reason, affirms freedom. Yet it is ultimately disappointing and dehumanizing. It heralds the death of God, of struggle and of belief in good and evil, while, in atonement for deicide, it deigns to offer only models of reality that approach but never reach union with it. Thus we come to an impasse, the fatal double dilemma that drove Kierkegaard to despair, Nietzsche to madness and Camus to an ‘acceptance without resignation’ – though I personally can’t imagine Sisyphus happy.

All the arguments for God’s existence that I know of sink under one paradox or another – cosmology through infinite regression, ontology through elementary logic, teleology through evolution. Constructing an equivalence between Nature or reality and God is nothing more than an exercise in tautology dating from Spinoza, and as such tantamount to atheism. Those who cite Darwinian evolution or Hegelian dialectics as the answer do not realize that these are nothing more than a Mechanism, as hopeless as traditional objects of belief at explaining the deepest metaphysical questions. Despairing of the power of pure positivism to rationalize existence, let us make a bold conjecture and assert axiomatically that all that might be, is.

According to Plato, there exists a separate world of ‘perfect forms’ or ‘universals’ that is the highest and most fundamental reality; our world contains but their imperfect imitations. This concept is best explained through mathematics. Even if some global cataclysm were to wipe out humanity, the Theorem of Pythagoras would linger on unperturbed on some transcendent plane, ripe for the picking by the next species to evolve abstract reasoning skills. This is because the sum of the squares of the two shorter sides of a right-angled triangle will always equal the square of the longest side under Euclidean geometry. I will call this Platonic realm the Void, for it is indeed void; it is an abstract, all-encompassing region of nothingness, zero and infinity. All possible mathematical objects and their unions exist in the Void.

There exists an interesting class of mathematical constructs known as ‘cellular automata’. These are regular grids of cells, each in one of a finite number of states, in a finite number of dimensions. The dimension of time is also discrete, with the state of any particular cell at time t a function of the states of the cells in its ‘neighborhood’ at time t – 1. This function is based on fixed rules, yet its long-run outcome cannot be determined in advance. What makes cellular automata intriguing is how some of them can generate order and complexity out of initial chaos, thus reflecting the meta-narrative of our own universal evolution from a soup of primitive particles to industrial civilization. Although most cellular automata exhibit only simple repetition or rampant randomness, a special few demonstrate an interesting, uninterrupted interplay between order and chaos. Conway’s ‘Game of Life’ generates stable patterns which sustain themselves amidst disorder, thus fulfilling a very general definition of life as a localized, self-sustaining concentration of ordered complexity. The most philosophically significant is Wolfram’s Rule 110, which produces complex, non-repeating patterns and has been proven to be computationally universal, i.e. theoretically capable of performing any computable task. Furthermore, the behaviors demonstrated by cellular automata are replicated by many classes of other simple computer programs, and as such have a strong claim to universality.
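Since Rule 110 carries a lot of the argument's weight, here is a minimal sketch of it. The grid width, the periodic boundary and the single-cell starting pattern are arbitrary display choices of mine, not part of Wolfram's definition.

```python
# Minimal sketch of Wolfram's Rule 110, the one-dimensional cellular automaton
# proven to be computationally universal. Width, step count and the initial
# condition are arbitrary choices made for display purposes.
RULE_110 = {(1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
            (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0}

def step(cells):
    """Advance one generation: each cell looks at itself and its two neighbors."""
    n = len(cells)
    return [RULE_110[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

width, generations = 64, 32
cells = [0] * width
cells[-1] = 1  # a single live cell as the seed
for _ in range(generations):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Even from this trivial seed, the output displays the mixture of regular background and persistent, interacting structures that the paragraph above alludes to.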

One of the most important paradigm shifts of the Scientific Revolution was the gradual rejection of the Aristotelian theory that matter is continuous and elemental. The ancient Greek and Chinese conception of the world as a melange of Earth, Water, Fire, Air and Ether was displaced by theories that space-time is made up of discrete if very small units – corpuscular cells, atomistic molecules, ‘chronon’ time. Through its centuries-long, dialectical procedure of postulation, refutation and synthesis, science arrived at the fundamental limits to observation of the worlds that lie hidden within Planck distances and between Planck times. Our universe is capable of evolving, amidst chaos, patterns sophisticated enough to recognize themselves as such, if not fully understand themselves – the proof is in front of (or rather, behind) our noses. Although continuous mathematics is used to explain the vast majority of natural processes, its inadequacies are protected from exposure because the universe operates with discrete quanta that are small from a human perspective. Modern quantum mechanics, with its chaotic ‘soup’ of sub-atomic particles, offers a glimpse beyond analog delusions into discrete reality. In cellular automata, the states of all cells affect every other cell, which is a perfect metaphor for the fundamental problems in measuring quantum phenomena.

We know by the anthropic principle that the universe exhibits an evolutionary mechanism that has resulted in an increase in ordered complexity amidst chaos. Science has shown that the universe’s primitive expressions are discrete and as such can be subject to manipulation by a set of rules, which we’ll call the Pattern. Since there exist universally computational mathematical objects that also fulfill the above criteria, we can conclude that whether or not the universe is based on superstrings, a holograph or something else is ultimately irrelevant – the overriding premise is that it is ‘computing itself… as it computes, it maps out its own space-time geometry to the ultimate precision allowed by the laws of physics. Computation is existence’.

Thus, viewing our universe as a universal cellular automaton makes it, in effect, a mathematical object, and hence part of the Void. But in that case, how could it be real? After all, the world as we perceive it is only a pale imitation of, and hence inferior to, the perfect world of forms. Take the circle, defined as the curve traced by one end of a finite straight line rotated completely around its other, fixed end in two-dimensional Euclidean space. Such a circle exists within the Void, yet no artisan, and not even the most advanced robot, can ever replicate it. It is impossible in principle, for it would require the computation of π to an infinite number of decimal places – a task clearly impossible within the rigidly finite, discrete confines of any cellular automaton, which place limits on its maximum possible computing power. Our existential prison of pixels precludes the perception of continuous perfect forms.

However, by accepting that our universe is a discrete Tapestry, we resolve the paradox. If such a construct exists within the Void, it is equivalent to the world we perceive as reality. In a sense, the Void fulfills all the criteria of God. Null and unity, it transcends the human imagination, for human minds are finite in scope. It sidesteps the ‘who created the creator?’ paradox, for it simply is. And was, and will be – though, being outside Time, its directionality is meaningless. It is zero and infinity, of cardinal infinity. What might be, is. All possible computations exist, and are their own simulacra.

Several consequences follow from this. One is that consciousness is a construct, for the mind is mere matter in a state of highly ordered complexity. The way in which we ‘agents’ perceive the world evolved and emerged as a result of the original biological urge towards self-preservation and replication of the patterns encoded in our genetic makeup. To maximize our prehistoric utility function, mainly defined by the above urge, humanity refined its consciousness – subjectivity, sentience and self-awareness – until it became a hardwired belief. The development of abstract reasoning skills partially divorced humanity from its primal nature and made possible the gradual deconstruction of this belief. From Leibniz’s assertion that ‘if you could blow up the brain to the size of a mill and walk about inside, you would not find consciousness’, to the concept of an objective Turing test for its presence, the grounds for a subjective interpretation of consciousness were demolished. The philosopher Douglas Hofstadter visualizes consciousness as a recursively self-calling ‘strange loop’ in computational terms; henceforth, a soul.

Kant argued in his ‘Critique of Pure Reason’ that space and time, rather than being things-in-themselves, are just forms of intuition by which we perceive objects, i.e. the medium through which we sense and experience the noumenal world, and the precondition for an object to have appearance. This is the reason why we experience time at the pace that we do, perceive only three dimensions out of the theorized eleven, and see only a very narrow bandwidth of the electromagnetic spectrum, which we anthropocentrically define to be ‘visible’. Hence, by designating souls as emergent patterns, capable of being simulated by discrete information processes, it is possible to unify reality and the transcendent; our universe becomes an (infinitesimal) subset of all possible universes.

Science continues to disappoint, approaching but never reaching union with reality. The long-sought ‘theory of everything’ for physics is unattainable. We may with time be able to figure out the Pattern of our simulation in full detail, since the rules by which a program runs can be quite simple even if the program produces very complex results. However, this would not be a theory, since theories require predictions that can be empirically confirmed, and the only way to find out the outcome of a cellular automaton is to run it. But it is already running itself; therefore, even if we could speed up its execution (which we can’t, since all the calculating space we are using is being used to compute us), only an observer outside our Tapestry would find out what happens faster. For everyone inside this Tapestry, time will go on at the same pace regardless of the speed with which the universe is being processed, since their time is discrete and contained within their Tapestry (our conception of time as an analog flow is nothing more than an evolutionary adaptation, a means of perceiving the world). A theory of everything implies knowing the mind of God, who is outside time.

Physicists noticed that the underlying laws of our universe are especially ‘fine-tuned’ for the evolution of life. For instance, if the strong force were slightly stronger, stars would burn out in minutes; if it were slightly weaker, elements like the hydrogen isotope deuterium would not be able to hold together. The analogy with cellular automata is clear and uncanny – while a vast majority of Patterns or sets of rules produce uninteresting results (equivalent to universes that collapse or tear apart before evolving concentrations of interesting, ordered complexity), a few are interesting, unpredictable and non-random (equivalent to our Tapestry).

Some theologians claim ‘fine-tuning’ proves the existence of a Creator-God or at least ‘intelligent design’. There exist two counter-arguments. The standard one is that our existence as sapient observers in this universe imposes certain constraints on the kind of universe we can observe, due to the anthropic principle. The second one is specific to my view of reality as immaterial computation. Firstly, consider that this God would have emerged in one of two possible ways: a) via evolution and b) via appearance. The former case implies the existence of another (fine-tuned) universe that evolved an entity with the computational capacity to simulate our own ‘virtual’ universe. Although this is a real possibility that we’ll discuss below, few would regard this mother of all supercomputers as God. (An interesting consequence is that if one insists on such a definition anyway, then humanity has a real chance of becoming Gods themselves this century after a technological singularity).

The latter case is a theoretical possibility, but the probability that a discrete entity capable of simulating our universe, and hence greater than it, simply appeared fully formed out of the Void instead of evolving according to a Pattern is extremely low (though since the Void contains all possible mathematical objects, such entities do exist). Nonetheless, we can cut out this possibility with Occam’s razor – and even if it gets stuck in the wood, there would still be no reason to regard the appeared but still discrete God as qualitatively different from the evolved God. Arthur C. Clarke once claimed that “any sufficiently advanced technology is indistinguishable from magic”. Similarly, it can be argued that any being of sufficiently high ordered complexity is indistinguishable from God.

Thus there are two possibilities – either our universe is a standalone program within the Void and potentially its own God, or it is being simulated by a higher God. In the latter case, all the computations required to run our simulation are under Its total control, including our continued existence. And according to a theory proposed by Nick Bostrom, the chances that we are in such a simulation are extremely high.

Bostrom posits that a posthuman civilization will have access to vast amounts of computing power, and that consciousness is substrate-independent and therefore computable. He notes that running an ancestor-simulation – computing the states of all human minds in history and seamlessly integrating all sensory experiences into a believable whole – would require only an insignificant fraction of the total computing power at this civilization’s disposal. As such, just one posthuman civilization could run an astronomical number of ancestor-simulations. The implication is that at least one of the following is true: 1) few human-level civilizations reach a technological singularity, 2) few posthuman civilizations are interested in running ancestor-simulations, and 3) almost all souls are simulated.
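The arithmetic behind the third proposition is simple enough to write down. The toy function below uses made-up numbers purely to illustrate why a large number of simulations per civilization swamps everything else; it is just the obvious ratio, not Bostrom's own formalism.

```python
# Toy illustration of why "almost all souls are simulated" follows once
# ancestor-simulations are cheap and numerous. All numbers are hypothetical.
def simulated_fraction(f_posthuman, f_interested, sims_per_civ):
    """Fraction of human-like observer-histories that are simulated.

    f_posthuman  : fraction of civilizations that reach a posthuman stage
    f_interested : fraction of those that choose to run ancestor-simulations
    sims_per_civ : average number of ancestor-simulations each such one runs
    """
    simulated = f_posthuman * f_interested * sims_per_civ
    return simulated / (simulated + 1.0)  # the +1 is the single unsimulated history

# Even with deliberately pessimistic fractions, a large sims_per_civ dominates:
print(simulated_fraction(f_posthuman=0.01, f_interested=0.01, sims_per_civ=1e6))
# roughly 0.99 under these made-up numbers
```

Unless one of the first two propositions drives its factor essentially to zero, the fraction of simulated histories crowds out the single unsimulated one.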

If the first proposition is true, that would imply either that we can expect to get stuck at some kind of technological plateau before taking off the exponential runway into recursively improving superintelligence, or that technological civilization is going to undergo an apocalyptic collapse. Due to the nature of the Pattern of our Tapestry, the first possibility is highly unlikely. In the latter case, accelerating progress will be terminally interrupted under the assault of resource depletion, runaway global warming or lethal black swans like a 100%-mortality human-engineered virus or nanobot pandemic. Although these are serious existential risks, I am not pessimistic enough to ascribe only an infinitesimal chance to making it to the technological singularity; so, assuming my intuition is correct, the first proposition can be discarded.

The second proposition requires a remarkable degree of convergence amongst all posthuman civilizations, such that either almost all of them develop ethical systems that lead to effective bans on ancestor-simulations or that almost all posthuman individuals lose the desire to run them. Although impossible to disprove until we ourselves become posthuman and adopt posthuman ways of thought, I think such a uniform degree of convergence is unlikely in the extreme.

The final remaining possibility is that we live in a simulation and that our perceived reality is not the most fundamental one. Let us not forget that we arrived here by a tentative process of elimination; the most potent confirmation that we live in the Matrix would be if we become posthuman and set up our own ancestor-simulations. It is almost certain that we will never simulate unless we are being simulated. This sets up a recursion, in which our simulators, and their simulators, are themselves being simulated ad infinitum. However, since computation is existence, the height of the stack would be limited by the exponentially expanding demands on the basement hardware.

All simulated universes are subsets of their simulators, so one can imagine the whole structure as a finite series of vast but finite nested cellular automata, labyrinths within labyrinths, Tapestries interwoven within one Great Tapestry. Thus out of the Void cometh a pantheon of Gods, with one Lord God (called Zeus), playing games with the souls of lesser Gods and mere mortals. Such is the sublime cosmology of the Great Tapestry.

A property of subsets is that they are subject to the same axioms and rules as the sets to which they belong. Therefore the Pattern of any Tapestry, including our own, is equivalent to that of the Great Tapestry itself. This means that at the most basic level the computational processes are equivalent, blurring the line between simulation and reality. Therefore all authentic ancestor-simulations will have the same directive principle in their universal evolution as their simulators (i.e. the same tendency towards growth in ordered complexity culminating in a technological singularity). However, following a technological singularity the space requirements placed on the simulator to continue a believable simulation will start increasing at a blistering rate. Since the calculating space of the simulator is itself limited, this may (or may not) have several consequences.

Assuming that the calculating space available to the simulator is far bigger than the space they will ever allot to our civilization, we will eventually reach the final limits of ordered complexity without ever figuring out whether or not we live in a simulation. (Nor will it matter). This cannot be the case if the simulator civilization originated from a universe similarly ‘fine-tuned’ like ours, because then its initial parameters, e.g. total amount of mass and energy, would have been similar to ours, which in turn implies a calculating space that is similar in magnitude to ours (unless they merge with us). However this would not apply to a universe that is endowed with a much greater calculating space and maintains itself at a stable state with a different set of fundamental constants. The question of whether such a universe is computable (and therefore exists) I leave to the theoretical physicists.

The other case alluded to above is where the space allocated to our ancestor-simulation is not predefined by its programmers. In this case there are three possibilities: either our simulation is terminated, constricted, or displaces its simulator.

Bostrom notes that whenever the strain on the hardware of the lower levels of the tree becomes too great, the higher Gods can cut off the offending branches and terminate excessively space-hogging posthuman civilizations. He hopefully postulates that such philosophical ruminations lead all posthuman civilizations to develop an ethical system of being nice to their ancestor-simulations, because none can logically assume itself to be Zeus; for even Zeus Himself cannot know Himself to be Zeus. The overwhelming likelihood is that one’s civilization is a minor deity. The only possible proof of one’s position in the chain, divine intervention, indicates a negative outcome. Thus it is possible that all posthuman civilizations refrain from killing their children, in fear of holy punishment from above. Although a logical hope, it is as yet impossible to verify that such values are typical of posthuman civilizations; and, as with his second main proposition, this assumes an intuitively unlikely degree of ethical convergence among them.

So it’s feasible that someday in our posthuman future, perhaps after saturating a few galaxies with life (in a few million years if the speed of light remains a limiting factor, much faster if not), we will pass a critical threshold beyond which the simulator no longer has the calculating space to continue running our simulation, or the will to expand that space. In the midst of the burgeoning expansion, glitches will appear in the Matrix; the fabric of reality will unravel into oblivion. Alternatively, passing such a critical point could activate another program that will even out and trim excess complexity so that a henceforth constricted simulation could continue. This would probably take the form of the extinction or zombification of surplus souls.

Perhaps the most intriguing possibility is that posthuman civilizations commit suicide by incubating a simulation and gradually feeding in all their calculating space to sustain it. Simulation thus displaces reality (or the other way round), recalling the Borgesian fable in which a secret synod of chess masters and prophets of the postmodern testament infiltrates global institutions and substitutes conventional reality with a labyrinth of perceptions, simulacra and fantasy.

Having determined the various consequences that may follow from viewing our universe as a simulation within a simulacrum, let us end with a brief discussion of eschatology. Physicists believe that our universe came into existence via a Big Bang of matter and energy from a single, infinitesimal point and will end in one of two ways. In the case of a ‘closed universe’ with lots of dark matter, gravitational forces will overwhelm expansion and the universe will collapse back into itself in a fiery maelstrom called the Big Crunch. Alternatively, an ‘open universe’ could continue expanding outwards forever, in which case the background radiation converges to absolute zero, the stars and galaxies burn out, and particles get separated by huge distances and eons later disintegrate into oblivion.

Looking at this from the simple computational view, the state of the cellular automaton at the time of the Big Bang is perfect order. The immediately following states begin the transition to chaos, as order is lost in the seething plasma of exotic particles. This mass cools down and forms itself into stars and planets. On some of them a localized growth in ordered complexity occurs, in contrast to the sea of randomness all around them, perhaps culminating in the saturation of the whole cellular automaton. With time, the delicate balance of order and randomness that is the intelligent universe will struggle to preserve itself against the crushing order of fire or the encroaching chaos of ice. In the former case, expansion will reverse and the universe will start contracting into the Big Crunch, with computation (and simulation of other worlds) soaring until the omega point is reached, closing the loop of existence. In the latter case, computation will slow down as usable energy is unrelentingly dissipated, but will continue for a much longer time – until the last particles disintegrate, if reversible computing is perfected and utilized. Whether the universe dies by ice or fire, the end state reverts back to perfect order – and presumably, a new Big Bang and an identical iteration, since any cellular automaton will loop once it returns to a state in which it has existed before.
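The closing claim – that a finite, deterministic system must eventually revisit a state and thereafter repeat – can be demonstrated on any toy system. The update rule and tiny grid below are arbitrary illustrative choices of mine, used only to show the pigeonhole argument in action.

```python
# A finite, deterministic system must eventually revisit a state, after which
# its history repeats forever. The toy rule (XOR of neighbors on a small ring)
# and the grid size are arbitrary illustrative choices.
def toy_step(state):
    n = len(state)
    return tuple(state[(i - 1) % n] ^ state[(i + 1) % n] for i in range(n))

state = (0, 0, 0, 1, 0, 0, 0)  # 7 cells, a single live one
seen = {}   # state -> first time step at which it occurred
t = 0
while state not in seen:
    seen[state] = t
    state = toy_step(state)
    t += 1

print(f"Step {t} repeats the state first seen at step {seen[state]}; "
      f"the cycle length is {t - seen[state]}.")
```

With only finitely many possible configurations (2^7 here), some configuration must recur within at most that many steps, and from then on the evolution is an exact loop – which is the eternal return the essay ends on.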

Our future is written in advance. Down one forking path, the ordered complexity of our civilization expands at an exponential pace in the wake of the technological singularity; at a finite moment in Time, glitches multiply and the fabric of reality unravels as our Tapestry is torn asunder. Down another path, exponential growth gives way to asymptotic convergence. Our posthuman civilization is either ruled by God, built on the bones of God or is Zeus Himself; but we will have no way of knowing which of these is true. Everyone will be a God. If we do not peremptorily commit Suicide and instead choose Struggle, we will play games with the souls of those in our simulations until our Tapestry comes to its end, rewinds and starts a new iteration that is identical to what came before. This is eternal return.

Fukuyama (1992), The End of History and the Last Man. Argues that the dialectics of technological progress lead to an end of history culminating in liberal democracy.

Camus (1942), The Myth of Sisyphus. For his transgressions against godly authority, Sisyphus was condemned to forever roll a rock up a mountain, only to have it roll back down and start over again in an infinite loop. It is a very appropriate metaphor for one of the representations of Sublime Oblivion.

The Void, also called the Eldest Dark or the Everlasting Dark, is an abstract region of nothingness existing outside the Timeless Halls, Arda and all of Eä in Tolkien’s Middle-Earth cosmology.

Wolfram (2002), A New Kind of Science shows how very simple programs can replicate the behavior of many different complex systems via emergence. The idea of a digital physics dates back to Konrad Zuse (1969), Rechnender Raum.

Some definitions. Information is organized measurements (data if unorganized). Complexity, or the AIC (algorithmic information content) is the “length of the shortest program that will cause a standard universal computer to print out the string of bits and then halt”, according to Murray Gell-Mann. Order is how well the complexity fits a purpose.

http://www.ibiblio.org/lifepatterns/ has a big sample of such games.

Lloyd and Jack Ng, “Black Hole Computers”, Scientific American (Nov 2004), pp.53-61

Baudrillard (1985), Simulacra and Simulation. Our only difference is that he believes reality once existed, while my doctrine affirms an eternal hyper-reality.

In a Turing test, a human judge has many conversations with a machine and another human. If she cannot reliably identify which is which, the machine passes and is ascribed consciousness.

Hofstadter (2007), I Am a Strange Loop.

Drawing on Moore’s Law of exponentially increasing computer power, and more generally the accelerating change in the ordered complexity of universal history, several serious futurists and computer scientists postulate the development of computer superintelligence sometime this century. This will initiate a loop of recursively improving machine intelligence and is therefore the last invention humanity need ever make. See Kurzweil (2005), The Singularity is Near, or the essays at http://kurzweilai.net/ for more on the technological singularity.

Bostrom, “Are You Living in a Computer Simulation?”, Philosophical Quarterly (2003), Vol 53, No.211, pp.243-255. Available online at http://nickbostrom.com/.

Posthuman is taken to mean any intelligent species that takes off the exponential runway of a technological singularity.

The next section is largely devoted to this, i.e. the Pattern / computer procedure, as opposed to the environment here.

Many models of technological growth and ecological catastrophe have tipping points at around 2050 (Kurzweil places the technological singularity at 2045; James Lovelock predicts climate chaos by the 2040′s; most scenarios from Limits to Growth: The 30-Year Update end in global human die-off at around mid-century). There exist many caveats, which will be systematically covered in the last section, but for now I will note that it is very difficult to predict which trend will win this ‘battle of the exponentials’, so I’ll go with 50%. Also assuming a 50% chance of civilizational collapse due to a technological disaster like the ‘grey goo’ scenario and discounting the (tiny) probability of a natural extinction level event like a super-volcano eruption or giant meteor strike, we have a 25% chance of experiencing a posthuman future.

Borrowed from The Matrix films where machines imprisoned humanity in a simulation. Specifically refers to a simulation, whereas a Tapestry can be either a simulation or base reality.

One of the findings of the next section is that the Pattern exhibits doubly exponential growth in ordered complexity whenever the limits to growth are far away, but ceases to be exponential when growth approaches or overshoots those limits. (Thus if after the technological singularity we monitor a log graph of the ordered complexity of our civilization, its dipping below a prior straight-line fit may imply that space for further computational expansion is coming to an end.) A reasonable objection is that the calculating space needed to simulate a cellular automaton remains constant, independent of the complexity of its state at any one moment in time. This is true, but it neglects the possibility that areas not under observation by deep intelligence can be simulated by approximation and compression (i.e. there is no point in a falling tree in the forest making a noise when there’s no one to hear it). This possibility will vanish as the universe becomes saturated with intelligence at the most basic level, such that everything will now need to be computed so as to maintain the simulation’s denizens’ belief in its reality. While it may be possible to simulate an intelligent planet, there may not be enough space to simulate an intelligent universe.

There exist a plethora of other exotic possibilities. There is no reason to discount the possibility that I am in a self-contained ‘me-simulation’ and that everyone around me is a philosophical zombie, acting just realistically enough to lull me into believing in my reality. This is nothing more than a new take on Descartes’s ‘brain in a vat’ thought-experiment. Another possibility Bostrom mentions is that simulations only ever run for a small period of time, with all memories preset (which, incidentally, would take much less computing power to simulate than working, conscious brains). All these lead to philosophical dead ends, as do all solipsist worldviews, and I will consider them no further.

In the sense that consciousnesses will be nullified so as to relieve the load on the simulator computer, since simulating augmented consciousnesses would be the most resource-demanding task.

Borges (1940), Tlön, Uqbar, Orbis Tertius.

End of the world. Note that we are talking about the (Great) Tapestry of Zeus and authentic ancestor-simulations only.

Uses no energy as long as no information is thrown away; but since memory is finite, in time there will be nothing left for this computer to do but replay memories in loops.

The scientific view at this time is that expansion is accelerating, the universe is open and will end in ice and oblivion. I think this is the more likely result. To know the point at which entropy must be reversed, you need a certain level of chaos, which is hard to measure. On the other hand, the uniformity of a discrete point or total oblivion is easy to identify.

More Notes on “What Might Be Is”

1. The Tapestry is vast, and encompasses multiple dimensions. An interesting and potentially useful avenue of research is testing big CAs (cellular automata) of increasing dimensions and trying to find one that displays the characteristics / Pattern of our universal history [rapid descent from order; a long period of chaos, with burgeoning pockets of localized ordered complexity, growing at doubly hyperbolic rates in the absence of limits to growth, which sustain and expand themselves by accelerating the tendency towards chaos in the space outside their boundaries, e.g. as discovered by Prigogine with dissipative structures – PS: this implies posthuman civilizations are highly unstable; then slow decay resulting in a very slow restoration of an order (opposite to what came first) from chaos, yet in its final state an order equivalent to the first one].

2. There is of course an unimaginably vast number of rule sets, but only a very limited number will produce the above interesting Pattern. It may be possible to derive some kind of law connecting increasing dimensions with the percentage of rules that result in interesting patterns (of course, the neighborhood of the rule can be changed, and in our Tapestry it is probably very big and perhaps linked to the speed of light). It is interesting that among the 256 possible elementary (one-dimensional, two-state, nearest-neighbor) rules, only one has been proven to be a universal computer (Rule 110). (These facts can yield a number of interesting consequences. For instance, should it be proved that each dimension of CA only ever contains one Rule supporting universal computation, then our Tapestry is the only one possible with its specific Pattern.)

3. There is interesting work in Borges’s “A New Refutation of Time”, especially the second essay, with its discussion of universal cycles in mythologies and its conception of a discrete reality, e.g. the Buddhist concept of eternal annihilation/reappearance at every moment of time, or the conception of time and reality as a rotating sphere, predetermined but irretrievable from past or future alike. Time is a relation between intemporal things. There is also the reference to the ancient Chinese philosopher who dreamed himself to be a butterfly. “Our destiny is not frightful by being unreal; it is frightful because it is irreversible and ironclad” (“a fire that consumes me, but I am the fire”).

(Republished from Sublime Oblivion by permission of author or representative)
 
About Anatoly Karlin

I am a blogger, thinker, and businessman in the SF Bay Area. I’m originally from Russia, spent many years in Britain, and studied at U.C. Berkeley.

One of my tenets is that ideologies tend to suck. As such, I hesitate about attaching labels to myself. That said, if it’s really necessary, I suppose “liberal-conservative neoreactionary” would be close enough.

Though I consider myself part of the Orthodox Church, my philosophy and spiritual views are more influenced by digital physics, Gnosticism, and Russian cosmism than anything specifically Judeo-Christian.