This was a reading earlier in the year. It is being posted now because I forgot to do so back then.
Manuel DeLanda’s A Thousand Years of Nonlinear History is divided into three parts, each covering mainly European history from 1000 to 2000 AD. The first part is an account of urban development, the second is a genetic history of Europe, and the third is a history of language. Three basic theses tie the parts together. First, DeLanda argues that the historical processes in each case are entirely material; cities, genetics and language can all be described in terms of matter-energy flows. Second, there is a connection between human institutions and natural structures (such as geological strata) that is not merely metaphorical. Third, the processes are nonlinear—that is, there are no successive stages, only apparently successive moments that can coexist and affect one another.
The Infiltration of Physics into History
The book opens with this line: “. . . all structures that surround us and form our reality (mountains, animals and plants, human languages, social institutions) are the products of specific historical processes.”
Nineteenth-century thermodynamics gave us time’s arrow and irreversible historical processes; evolution showed that living things are not embodiments of essences but piecemeal constructions. Both, however, shared the assumption of a single final outcome: thermal equilibrium or the fittest design. Then, history stops. The physicist Ilya Prigogine changed thermodynamics by showing that the classical theory was valid only for closed systems near equilibrium. Pump in energy—i.e. move a system far from equilibrium—and more outcomes become possible.
Instead of a single stability, there are co-existing forms of varying complexity — static, periodic, and chaotic attractors. Switching from one stable state to another is called a bifurcation, and minor fluctuations can play a role in the outcome. To understand a current stability, then, we need to know the history of its bifurcations. Attractors and bifurcations are present in any nonlinear system, that is, any system in which there is feedback between components. Whether the system is made of molecules or of organisms, as long as there is energy flowing through it, there will be “endogenously generated stable states, as well as sharp transitions between states. . .” As biology incorporated nonlinearity, the notion of the fittest design fell apart.
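DeLanda does not give equations, but the vocabulary of attractors and bifurcations can be made concrete with a standard toy model. The sketch below (my illustration, not from the book) iterates the logistic map, x ← r·x·(1−x); as the control parameter r is turned up, the long-run behaviour bifurcates from a static attractor (one fixed point) to a periodic attractor (a two-point cycle) to a chaotic attractor:

```python
# Logistic map: a minimal nonlinear system exhibiting attractors and bifurcations.
def attractor(r, x0=0.2, burn_in=1000, sample=8):
    """Iterate x -> r*x*(1-x), discard transients, return the settled states."""
    x = x0
    for _ in range(burn_in):          # let the system fall onto its attractor
        x = r * x * (1 - x)
    states = []
    for _ in range(sample):           # then record where it ends up
        x = r * x * (1 - x)
        states.append(round(x, 4))
    return sorted(set(states))

# r = 2.8: a single fixed point (static attractor)
# r = 3.2: past a bifurcation, a two-point cycle (periodic attractor)
# r = 3.9: many distinct values (chaotic attractor)
for r in (2.8, 3.2, 3.9):
    print(r, attractor(r))
```

The point of the toy model is DeLanda’s: which stability the system settles into is not given in advance but depends on the parameter history that carried it through its bifurcations.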
He says that “. . .much as history has infiltrated physics, we must now allow physics to infiltrate human history.” Just as a chemical compound like water undergoes phase transitions (ice, water, vapour) at critical points of temperature, “so a human society may be seen as a ‘material’ capable of undergoing these changes of state as it reaches critical mass in terms of density of settlement, amount of energy consumed, or even intensity of interaction.” For example: hunter/gatherer groups lived far apart and generated few calories, making them comparable to a gas. With the development of basic agriculture came an increase in stability, comparable to a liquid. Once surpluses were stored and redistributed, leading to governments, human society became comparable to a crystal.
These are not stages; just as with water, they can all coexist, one added to the other. There can also be variations on the states — the Huns and Mongols first domesticated animals, not plants, and so were more like a river than a pool. When they did reach a solid state under Genghis Khan, it was more glass than crystal. In other words, “human history did not follow a straight line, as if everything pointed towards civilized societies as humanity’s ultimate goal. On the contrary, at each bifurcation alternative stable states were possible, and once actually achieved, they coexisted and interacted with one another.”
There is much more to matter than these simple phase transitions. Forms of matter-energy have the potential for self-organization. There are coherent waves called solitons (e.g. tsunamis and lasers), stable states called attractors which sustain coherent cycles, and finally nonlinear combinatorics, which are combinations of solitons and attractors. These nonlinear combinatorics are the source of novelty.
Any description of humans requires intentional entities like beliefs and desires, since they affect institutions. What matters is not always the plan, but the unintended collective consequences of human decisions. Self-organized matter cannot be analyzed only analytically, piece by piece. There are emergent properties: properties of the whole that arise from interactions between the parts and that the parts do not possess on their own. Analyzing self-organized systems as mere aggregates cannot account for these interactions.
In the introduction, I claimed that there were three guiding theses, but the first holds pride of place:
“In a very real sense, reality is a single matter-energy undergoing phase transitions of various kinds, with each new layer of accumulated ‘stuff’ simply enriching the reservoir of nonlinear dynamics and nonlinear combinatorics available for the generation of novel structures and processes.”
Philosophy and Simulation: Emergence
In his book Philosophy and Simulation, DeLanda gives a brief description of how emergence functions, and how it is different from a more common sense notion of causality.
The classic example of causality in physics is a collision between two objects. The overall effect is additive, even if many molecules are involved. No surprises, nothing new is produced, just addition. But when two molecules interact chemically, a new entity can emerge—as when hydrogen and oxygen interact to form water. Water has properties not possessed by its components: hydrogen and oxygen are gases at room temperature, while water is liquid. Adding either element to a fire fuels it, while adding water extinguishes it.
The fact that novel properties could arise was taken to have philosophical implications for the nature of scientific explanation. The absence of novelty in physical interactions was supposed to mean that their effects could be reduced to general principles or laws.
But the synthesis of water does something new; not an absolute novelty in the sense of something that had never existed before, but novelty in the relative sense that something emerges that was not in the interacting entities taken as causes. This led some philosophers to believe that emergent effects could not be explained, or “that an effect is emergent only for as long as a law from which it can be deduced has not yet been found.” This went on to become a full-fledged way of thinking in the 20th century: emergence was intrinsically unexplainable, and ideas like the élan vital appeared. It ended in a kind of mysticism: emergent facts had to be accepted as brute facts, with a sort of natural piety.
Emergence was a suspect concept; “It was only the passage of time and the fact that mathematical laws like those of classical physics were not found in chemistry or biology—or for that matter, in the more historical fields of physics, like geology or climatology—that would rescue the concept from intellectual oblivion.” There were no simple laws from which all effects could be deduced; the classical axiomatic dream withered.
Today, a scientific explanation is not identified with a logical operation, but with a more creative effort to show the mechanisms that produce a given effect. The early emergentists dismissed this, because they could not imagine something more complex than a clockwork mechanism. But other physical mechanisms are nonlinear. There are technological examples: steam engines, thermostats, and transistors. Outside technology, there are many examples in chemistry and biology.
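A concrete illustration (mine, not DeLanda’s) of an emergent effect explained by a mechanism is Conway’s Game of Life: each cell follows a trivially simple local rule, yet populations of cells exhibit stable and periodic patterns that no individual cell possesses, and the mechanism producing them is fully specifiable:

```python
from collections import Counter

def step(cells):
    """One tick of Conway's Game of Life; `cells` is a set of live (x, y) cells."""
    # Count the live neighbours of every cell adjacent to a live cell.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth with exactly 3 live neighbours; survival with 2 or 3.
    return {c for c, n in counts.items() if n == 3 or (n == 2 and c in cells)}

# A "blinker": three live cells in a row. No single cell oscillates, but the
# whole flips between a horizontal and a vertical bar with period 2 -- an
# emergent periodic pattern, fully explained by the local rule above.
blinker = {(0, 1), (1, 1), (2, 1)}
assert step(blinker) == {(1, 0), (1, 1), (1, 2)}   # horizontal bar becomes vertical
assert step(step(blinker)) == blinker              # and back: a period-2 oscillation
```

The blinker’s oscillation is objectively a property of the whole, yet nothing about it has to be accepted with “natural piety”: the mechanism explains it without explaining it away.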
The big epistemological difference between now and the early 20th century is that emergence does not have to be accepted as a brute fact. It can be explained without being explained away. The ontological status remains the same; it still refers to something that is objectively irreducible.
What sort of entities are irreducible? The original examples were things like life and mind, but these are only reified generalities, not real things. Even if one had no problem believing in such things, “it is hard to see how we could specify mechanisms of emergence for life or mind in general, as opposed to accounting for the emergent properties and capacities of concrete wholes like a metabolic circuit or an assembly of neurons.”
DeLanda is attempting to explain the existence of concrete wholes, “the identity of which is determined historically by the processes that initiated and sustain the interactions between their parts. The historically contingent identity of these wholes is defined by their emergent properties, capacities, and tendencies.”
There is a difference between properties and capacities. Take a knife. Sharpness is a property, and we can see it with a cross section of the blade. Sharpness is emergent since the atoms must be arranged in a certain way to produce the sharp shape.
The knife also has the capacity to cut things. This is different from the property of sharpness because the capacity to cut may never actually be used. A capacity may remain only potential. When the capacity does “become actual it is not as a state, like the state of being sharp, but as an event, an event that is always double: to cut-to be cut. The reason for this is that the knife’s capacity to affect is contingent on the existence of other things, cuttable things, that have the capacity to be affected by it.” Properties exist without reference to anything else, while capacities are always relational: a capacity to affect requires something else with the capacity to be affected.
There is a complex ontological symmetry between properties and capacities. Capacities depend on properties: a knife has to be sharp to cut. On the other hand, the properties of a whole depend on interactions between its component parts, each of which must have its own capacities. Another distinction must be made between emergent properties and tendencies. Again, a knife has the property of solidity, stable within a range of temperatures. Past certain critical points, however, it liquefies, or even gasifies. These tendencies are also emergent; a single atom is neither solid, liquid, nor gas. There has to be a population of atoms. Like capacities, tendencies do not need to be actual to be real, and when they become actual, it is as an event – to liquefy.
The difference between tendencies and capacities is that tendencies are typically finite, while capacities need not be. Basically, we can list all the states matter could be in (solid, etc) and the possible ways it could “flow” (uniformly, periodically, turbulently). But capacities, dependent upon interaction with other entities, need not be limited: cutting, killing, hitting, scratching – all dependent on the capacity-to-be-affected of the other entity.
Since tendencies and capacities do not need to be actual to be real, we might call them possibilities. But a possible event is almost indistinguishable from a real event, the only difference being a lack of reality. Rather, we need “a way of specifying the structure of the space of possibilities that is defined by an entity’s tendencies and capacities.” (5) Our ontological commitment ought to be to the objective existence of that structure and not only the possibilities themselves since the latter only exist as a list in a mind.
The original emergentists thought things like “Space-Time”, “Life”, “Mind” and a sense of the divine were emergent properties, arranged in a pyramid of progressively ascending grades. It was not supposed to be a teleological sequence, but it certainly looked that way. To avoid this, DeLanda says we need a different image:
“. . .that of a contingent accumulation of layers or strata that may differ in complexity but that coexist and interact with each other in no particular order: a biological entity may interact with a subatomic one, as when neurons manipulate concentrations of metallic ions, or a psychological entity interact with a chemical one, as when subjective experience is modified by a drug.”
In short, DeLanda uses nonlinearity and emergence to generate a wholly materialistic account of the world, from the prebiotic soup to geological strata to banking institutions.