Quantum Theory

By Chris Clarke

Overview

Quantum Theory (often called Quantum Mechanics or Quantum Physics – the terms are used differently by different authors) is an extension of physics designed to cover the behaviour of microscopic objects. Physics as it was before Quantum Theory is called Classical Physics. In some versions Quantum Theory includes Classical Physics as a special case. From the start the theory was subject to controversy and developed into a wealth of different forms, mostly agreeing at the level of practical calculation but disagreeing wildly as to interpretation. The question “what is quantum theory?” is therefore a difficult one.

Both Classical and Quantum Physics describe how the observable properties of a system change with time. The “system” (which here means “thing”) can be anything from an atom to the universe; its properties are quantities like position, momentum, energy, the internal arrangements of its parts and so on.

In Classical Physics there is a set of properties for any given system (namely the positions and velocities of all its parts) which completely determines its time-development and its properties at any later time. In Quantum Physics there is no such complete set of properties. Instead, at any given time there are many different possible sets of properties, any one of which can be observed; but it is not possible to observe all the properties simultaneously. For instance, position and velocity cannot be observed simultaneously; the first gives a particle-picture, the second a wave-picture. The existence of different possible sets of properties is called complementarity.

Properties at a later time cannot (except in special circumstances) be determined by observing properties at an earlier time. Only their probabilities are fixed by the earlier observation. This indeterminism is the basis of the continual openness of the universe to new possibilities. When combined with complementarity it may provide a notion of free creativity in the universe (see Quantum Logic below).

The term “observed” means different things in different versions: e.g. “manifested,” “recorded by a macroscopic instrument,” “brought to (human?) consciousness” and so on. The last possibility links quantum theory with theories of mind. At any given time there is a well-defined specification of the probability of observing any given property. This collection of probabilities is fixed by (or in some versions is identical with) the quantum state, but this state is not itself observable. Interpretations differ as to whether the state is real or a mathematical abstraction, with profound consequences for the whole notion of reality in physics.

The earliest interpretations, dating from workers in Copenhagen, used a two-tier world: a small system obeying non-Classical Physics and an observing laboratory obeying Classical Physics. The many pre-1965 theories tend to call themselves “The Copenhagen Interpretation.” Later interpretations tried to achieve a more unified view. This historical development introduced a succession of alternative structures: the collapse of the state, many worlds, environmental diffusion and so on. Although the early version lives on as a practical rule for the working physicist, for those researching into the foundations of the subject these early versions have been superseded by an interpretation called the histories interpretation, which makes far fewer metaphysical assumptions.

Systems with infinitely many degrees of freedom (in particular, fields such as the electromagnetic field) are described by quantum field theory, whose states can all be constructed out of a special state of the field in question called the vacuum for that field. Thus the vacuum is not some special new entity that brings into being electrons, protons and so on. Rather, the electron vacuum is a special state of the electron quantum field, the proton vacuum a special state of the proton quantum field, and so on. One can combine these together to produce a single vacuum, which is a special state of the combined particle fields. The vacuum has zero energy (except in Dirac’s theory, which enjoyed brief popularity).

Some basic concepts

Bearing in mind that many of these concepts belong to earlier versions of the theory that are no longer regarded as essential, some of the central ideas are as follows.

Quantum mechanics is usually regarded as a more general type of mechanics than traditional classical physics. The different system of mechanics is required (at least) when the size of a typical action in a system (the product of a typical energy and a typical time) becomes so small that it is comparable to a fundamental constant of nature called Planck’s constant, which has a very small value when measured in the usual laboratory-scale units. Traditional (Copenhagen) Quantum mechanics thinks of processes as having three phases: Preparation, Evolution and Observation.
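For rough orientation (the figures here are my own order-of-magnitude illustrations, not from the original text): an electron in a hydrogen atom has a typical energy of about 2 × 10⁻¹⁸ joules and a typical orbital time of about 10⁻¹⁶ seconds, giving an action of roughly 10⁻³⁴ joule-seconds, which is indeed comparable to Planck’s constant (about 6.6 × 10⁻³⁴ joule-seconds). A pendulum with an energy of 1 joule swinging once a second has an action of about 1 joule-second, some 10³³ times bigger, which is why quantum behaviour is invisible at the laboratory scale.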

In Preparation a microscopic system (such as an electron) is prepared (for instance, by being emitted from a heated wire and then projected into a vacuum by appropriate electric fields). Evolution then takes place while the system is left undisturbed – for instance, the electron may pass into an apparatus where a numerical result will be measured. Evolution is a deterministic process, governed by a precise equation called Schrödinger’s equation. Finally, Observation is the actual measurement of the numerical result, which is indeterministic. If the system is not destroyed by the measurement (a photon, for instance, is destroyed when it is absorbed in a photographic emulsion) then after the measurement the system can go on to a further apparatus, with the first measurement serving as a preparation for the second.

By taking a wider view of what is meant by a system in quantum mechanics, one can regard every measurement (or observation) as a preparation of some new system, so that the distinction between preparation and observation collapses, and we have a single event that we can call manifestation. The basic concept of a process is then a sequence of manifestations, with evolution taking place between each one. Such a sequence is called a history. The deterministic evolution and the indeterministic observation are combined into a single principle that prescribes the probabilities for all the possible histories that might occur.

Quantum logic

At an even more basic level, quantum theory is just a very general way of talking about processes. At this level, before one introduces the particular laws of particular processes, the theory is called Quantum logic. A “logic” in this sense is a mathematical structure consisting, at least, of a collection of entities called propositions and a number of operations which combine propositions so as to produce new ones, including the operations “and”, “or” and “not”. Propositions correspond to quantum measurements that have a yes/no answer, such as “is the electron inside this box?” The rules of the operations on propositions are the same as for conventional logic (known as Boolean Logic) except that the distributive law “{A and (B or C)} is equivalent to {(A and B) or (A and C)}” is replaced by a rather technical weaker condition called orthomodularity. This makes it possible for a quantum logic to contain a number of different Boolean Logics which are incompatible with each other. These can be interpreted as different “frames of reference”, such as the wave representation or the particle representation of quantum states. Quantum logic makes sense of “Both/and” thinking: an electron is both a wave and a particle – but not within the same frame of reference.
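A standard illustration of how the distributive law fails (my example, not in the original text): let A be “the particle is in the left half of the box”, B “its momentum is positive” and not-B “its momentum is negative”. Since “B or not-B” is always true, “A and (B or not-B)” is just A. But position and momentum cannot both be sharp, so “A and B” and “A and not-B” are each the always-false proposition, and so therefore is their disjunction “(A and B) or (A and not-B)”, which is not equivalent to A.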

The way we actually think, particularly in our more creative moments, is better described by quantum logic than by classical logic, because we can creatively move from one frame of reference to another, devising new ways of thinking about things. Quantum logic represents a creative approach to a creative universe, while classical logic represents a rigid approach to a deterministic universe.

Connectivity and the Aspect experiment

Perhaps the most important aspect of quantum theory is that quantum states are usually not states of single particles, but states of systems of many particles, states that cannot be reduced to statements about the individual particles. When this happens the particles are said to be entangled. The practical consequence is that the unpredictable responses of the various particles are linked to each other, even if they are widely separated in space, giving a fundamental connectivity to the universe.

An experiment, performed by Alain Aspect, verified a particular case of this discussed in the early days of the theory by Einstein, Podolsky and Rosen. It demonstrates a kind of distant connectivity between widely separated photons (particles of light). The two photons are created by passing a single photon into a specially synthesized crystal that splits it into two daughter photons, each with half the energy of the first. The two daughter photons are then allowed to travel apart (recently the distance has been taken to 40 km) and simultaneously each is passed through a prism which measures its polarization in a particular direction. The two photons respond at random to the measurements, and statistics are collected which show that there are correlations between the responses of the two photons. The key to the analysis is an argument, due to John Bell, which demonstrates that the particular nature of the correlations, and the way they vary with the angle of the prisms measuring the polarization, implies that it is impossible for the correlations to arise from any individual properties of the two photons, whatever these properties might be: the photons have to react in a coordinated, connected way. This crucial experiment indicates that the universe is connected at a much deeper level than previously thought.

[Figure 1: Aspect’s experimental arrangement]

Figure 1, from one of Aspect’s papers, shows the source of the two photons, which move in opposite directions until they are a distance L apart, when they enter the measurement part of the experiment. Here two high-speed light “switches”, C1 and C2, direct each photon to one of two possible polarization measuring devices (here represented as a polaroid filter, rather than a prism, followed by a photomultiplier tube PM which detects the photons). The “switches” are activated after the photons set off, so that there is no possibility of the photons being influenced as they leave by the nature of the filters that they will encounter. The filters are pre-set at four different angles, and statistics of the number of “coincidences” – events where two of the photomultipliers fire simultaneously – are collected for a variety of combinations of angles.

The physicist John Bell introduced an argument that demonstrated that the pattern of coincidences that was observed could not possibly be produced if each photon responded independently: they had to be connected in some way. Bell’s argument is quite mathematical, but there is a simpler version due to Mermin which can be understood with a little effort. In this version the directions of the prisms/filters (determining the direction of the polarization that a photon has if it passes straight through the prism) are restricted to just three possibilities, symmetrically arranged. (The effect of the prism is unchanged by a 180-degree rotation, so the three distinct orientations of the prisms are defined by three positions of a line drawn through the prism, as shown in Figure 2 below.)

[Figure 2: Positions of either prism. Figure 3: Relative angles from A to B]

If we denote the two prisms by A and B, then the combined effect of the prisms depends only on the relative angle from the line representing A to that representing B, which is either 0, 60 or 120 degrees (see Figure 3).

The probabilities for what B does depend on what A does and on the relative angle. The statistics that are observed are as follows:

Probability of B transmitting when A is transmitting:

Relative angle   B transmits: Yes   B transmits: No
0                1                  0
60               1/4                3/4
120              1/4                3/4

Probability of B transmitting when A is not transmitting:

Relative angle   B transmits: Yes   B transmits: No
0                0                  1
60               3/4                1/4
120              3/4                1/4

The fact that the two tables are different shows that there is a correlation between whether the photons go through A and whether they go through B. But this is not necessarily evidence of connection between the two: it could be that they have a coordinated “plan” of response laid down at the time they are emitted from the common source, with the response depending on the possible angles of the prisms encountered by the photons. The plan can be randomly chosen in advance, but there can be no variation from it once the photons have set off, otherwise one would not get a guaranteed agreement in the case where the prisms are in line.

The key step in the argument now follows. There is a result called Bell’s theorem showing that no such plan, in which each photon responds separately once it has left, can work. I am going to prove the special case of this result as it applies to the set-up here. Because of this result we must assume that the photons are responding in a coordinated way to the conditions of the prisms.

The proof is straightforward but does require a little care.

Consider one example of a possible plan. In any plan A and B have to respond in the same way to each angle of a prism, so as to get agreement when the relative angle is zero (alignment). A possible plan of response might be:

Angle (absolute)   0     60    120
Response           Yes   No    No

Can we show that every plan has some feature that is not reflected in what is observed? Such a feature can be found by looking at the pattern of agreements (Ag) and disagreements (Dis) on this plan:

                 Angle of B
Angle of A       0     60    120
0                Ag    Dis   Dis
60               Dis   Ag    Ag
120              Dis   Ag    Ag

The proportion of agreements to disagreements is 5:4.

It turns out that every plan gives more agreements than disagreements: any plan either gives the same response at all three angles (agreeing in all nine cases) or gives one response at one angle and the opposite response at the other two (agreeing, as in the example above, in five cases out of nine).
How does this compare with observation? The previous tables show that the proportions are actually:

Relative angle   Agree   Disagree
0                1       0
60               1/4     3/4
120              1/4     3/4

The average of the proportions in the “Agree” column is 1/2, as is that in the “disagree” column. So if we carry out a series of trials where the angles are set at random each time, then we will on average get equal proportions of disagreements and agreements, contradicting the possibility that the response is happening according to a plan.

This proves (the special case of) Bell’s theorem.
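For readers who like to see the counting checked by machine, here is a minimal Python sketch (my own illustration, not part of the original argument) that enumerates all eight possible plans and confirms that each one agrees on at least five of the nine combinations of prism settings:

    # A brute-force check of the Mermin version of Bell's theorem. A "plan"
    # assigns a fixed Yes/No response to each of the three prism angles;
    # both photons must follow the same plan to guarantee agreement when
    # the prisms are aligned.
    from itertools import product

    ANGLES = [0, 60, 120]

    def agreement_fraction(plan):
        """Fraction of the 9 (angle of A, angle of B) settings on which the
        two photons agree, if both respond according to the same plan."""
        agreements = sum(plan[a] == plan[b] for a, b in product(ANGLES, repeat=2))
        return agreements / 9

    # Enumerate all 2**3 = 8 possible plans.
    for responses in product(["Yes", "No"], repeat=3):
        plan = dict(zip(ANGLES, responses))
        print(plan, "agreement:", agreement_fraction(plan))

Every plan prints an agreement of 5/9 or 9/9, so the average over randomly chosen settings is at least 5/9; the observed value of 1/2 (the average of 1, 1/4 and 1/4) lies below this bound, which is what rules the plans out.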

We can think of each photon as responding to a context that is non-local: the context includes the other photon and the other prism. Randomness is thus context-bound.

What we see is that, in the area that is most mechanistic – particle physics – we get an affirmation of what we suspect from our everyday experience anyway: that the universe is fundamentally connected across space, that context is non-local; and that randomness is bound up with context in the way that we recognise from the concept of freedom.

Quantum Entanglement

by Chris Clarke

The explanation for everything?

In 1972 Freedman and Clauser succeeded for the first time in preparing two particles which exhibited a strange condition, predicted by quantum theory, called ‘entanglement’. The condition had been discussed theoretically by Einstein and co-workers in 1935, and at that time they argued that, because such a thing was obviously impossible, there must be something wrong with quantum theory. Freedman and Clauser’s work, and subsequent more detailed experiments that fully confirmed the prediction of quantum theory, triggered a tide of speculation. There seemed no limit to the mysteries that might now be explained using this new phenomenon: telepathy, consciousness, healing … all were examined. Now that the production of entangled particles has become almost routine technology, it is perhaps a good time to take stock of what we have learnt. I shall indicate a range of possible positions, which I shall characterise as the sceptical, the liberal and the cosmological.

But first, what is entanglement? All the observations have been made on very simple microscopic particles, so I must ask the majority of readers, who are not normally interested in such things, to bear with me while I discuss these physics experiments. I will widen the discussion before long. Briefly, the essence of the idea is the production of pairs of particles which, though separated by a large distance, show correlations in their behaviour that are inexplicable on a basis of old (non-quantum) physics. To give more detail, let me describe a typical experiment, which uses the particles that make up light (particles called photons).

Entanglement experiments use a property of light called polarisation, to do with the direction in which the fields that constitute light are vibrating. (Polaroid sunglasses filter out light with a particular direction of polarisation.) It is possible, using a special optical material, to split a single photon into two so-called daughter photons. These two are allowed to travel apart (by more than 10km in some experiments) and then the directions of polarisation of the two photons are measured simultaneously. Two points emerge from analysing the results:

– The direction of the polarisation of either particle is not fully determined before the measurement takes place; it must involve a partly random response of the particle to the measuring apparatus.
– There is a correlation between the results of the measurements on the two particles. For example (and depending on the arrangement of the measuring apparatus) it might be that if particle A is measured to have its polarisation pointing vertically, it is then more likely that the same result will be obtained for particle B.

Could it be that the particles are, as it were, pre-programmed when they are split to respond in this way? (For example, it might be that the split always results either in both particles vibrating horizontally, or both vibrating vertically.) A detailed argument by the renowned theorist of foundational physics John Bell demonstrated that no ‘pre-programming’ could explain the observed results. In other words, the particles were responding spontaneously, but in an interconnected manner.

There is a huge literature expanding the sketch I have just given into detailed arguments: exploring possible loopholes, closing them, finding new ones … I will not go into these here. My concern is rather with the question: if the ideas just presented are taken at face value, what are their implications? Quantum theory presents a very precise, and by now almost universally accepted, mathematical account of what is happening, in which entanglement corresponds to a particular mathematical form for the expression describing the pair of particles. What are the possible translations of this mathematics into words and pictures?

Let me describe some of the key assertions of the conventional verbal translation of the quantum mechanical account.

1. The properties of particles, properties which are the objects of experimental investigation, do not exist independently of the observation. Rather, they arise in the process of the interaction between the particle and the experimental apparatus. It is even misleading to think of them as ‘properties of particles’ at all: they are aspects of an event of measurement.

2. Spatial separation, and to some extent separation in time, are irrelevant to the correlations produced by entanglement. Spatial and temporal relations do not enter into the calculations at all; the particles could be anywhere.

3. Entanglement is the general rule; any interaction at any time in the past will entangle particles, so that very special conditions have to hold in order to produce particles that are not entangled. The achievement of Clauser and others actually lay not in the mere fact of producing entanglement, but in producing an entanglement that was of such a form that it could be examined experimentally. This point will be crucial below when I come to discuss the wider implications of this work.

Points 1 and 2 here carry an important philosophical message that challenges how we normally think about the universe, which is in terms of definite and separated things located in space. Point 1 undermines the definiteness of ‘things’. It is not saying that physical entities are merely figments of our cultural assumptions (though this may indeed be the case): physicists behave as if they are dealing with what might in some sense be called ‘reality’. But this inverted-commas-reality is what the philosopher of physics D’Espagnat called veiled. What we experience, either in the artificial setting of a laboratory or in normal moment-to-moment life, is quite distinct from what physicists regard as the foundation of the material universe, namely the abstract entities called particles and fields. And I should add that, while the connection between particles and experience is clear in the case of the laboratory, it remains in many respects obscure and controversial at the level of ordinary life.

It is point 3 that is vital for the wider implications of this subject. On the face of it, it would seem, for example, that we could use pairs of entangled particles for an instant communication system that operated independently of distance – something that would be highly reminiscent of telepathy. (Some authors have even written of one particle ‘instantaneously changing its state’ when the other is measured, for which there is no justification at all.) More generally, it suggests that the world, rather than being a collection of isolated particles pushing each other around, is more like an intricate web of subtle interconnections. But how far can we take this picture?

Physics is now pushing the idea of a web of quantum connections very far indeed. A significant new branch of what is sometimes called ‘Quantum Information Theory’ has emerged, covering the ways in which information can be transmitted through a mixture of entangled states and classical information transfer. The whole subject has moved out of the realm of speculation and is now supported by increasingly elaborate laboratory experiments using chains of entangled pairs of particles that verify the theory in great detail. One point that emerges from this work is that information cannot be transmitted by entangled states alone because the correlations that are observed are not ones that the user can control in order to insert information; rather, the spontaneity of the particles’ responses is an essential part of the account. In other words, quantum communication always has to involve an ordinary communication channel (such as a telephone) and a quantum channel (such as entangled particles) working in tandem. So instantaneous communication (telepathy, in the sense in which it is usually conceptualised) is impossible by this means. On the other hand empathy, in the sense of remote beings producing synchronistically related behaviour, is a possibility.

When it comes, however, to the role of entanglement in ordinary life, outside the laboratory, the situation starts to look a lot less clear. Let me put the sceptical position first. If the entanglement that is present everywhere is actually to make a difference, then the systems and organisms of the natural world need to use it in some way. The discussions of quantum information theory assume that one can prepare a pair of entangled particles, put them in two boxes, and hand one to each of two observers who take them away for later communication. But what sort of ‘box’ does a natural organism have that can preserve a quantum state in pristine condition? The laboratory experiments using photons cannot be a precise replica of what happens in a living organism: the only known way to ‘store’ a photon in a living system is to absorb it into the electromagnetic structure of a molecule, which is such a turbulent system that the details of the state would rapidly be lost. The only known ‘box’ is the microtubule, studied by Stuart Hameroff, that I will describe shortly.

To help us understand the problems that weigh against entanglement being effective in living organisms, I shall describe the way in which almost all particles are affected by a phenomenon, heavily researched over the last 20 years, called decoherence. This is concerned with a ‘hidden property’ of particles, namely phase. This is easy to understand in the case of a wave on water travelling past a buoy, when the buoy moves regularly up and down (with an additional regular oscillation in the direction of the wave). Here the phase of the wave at this place and at a given time is the point that the buoy is currently at in its cycle. All particles are thought of as associated with a similar wave-aspect and they carry a phase, but in general this is behind D’Espagnat’s veil: there is no ‘buoy’ that can reveal it and it is deduced only indirectly, through phenomena (in particular, interference) that are analogous to those shown by waves.

Decoherence theory is about the way that the environment interacts with entangled particles so as to affect the relation of their phases. It turns out that the nature of the correlations between measurements on entangled particles is completely dependent on this phase relation. But the phases are exquisitely sensitive to perturbations by the environment, whose influence can therefore completely scramble the correlations produced by entanglement. All that is required for this to happen is that the particle states involved in the entanglement are sufficiently different for them to interact with a perturbation in different ways. If we are considering an entanglement involving the position of a large body or even a large molecule, then the slightest perturbing factor will produce enormous effects on the phase, leading to very rapid decoherence indeed.

So, to summarise the sceptic’s case: if we consider two particles in different places then their states will in general be entangled. But, with the exception of the particular behaviour exemplified by the polarisation of photons in the laboratory, the way in which the particles are entangled, and hence the nature of any correlation between them, will be completely random, so that in practice their responses will be independent. If we consider, instead of single particles, larger systems of many particles, then the situation becomes even worse because of their greater interaction with the environment. Thus entanglement can have no effect outside the laboratory.

I now want to suggest that this argument, while defining important limits to what can happen, is not completely conclusive. Historically, we have constantly found nature to surpass our own ingenuity in evolving its own subtle ways of implementing effects which we have to implement by brute force. If, as I think, there is circumstantial evidence for entanglement playing a role in organisms, then there is a case for searching biological systems to discover how they might do it, even when we cannot imagine this in advance. So let us move on to discuss some areas where more solid evidence for the role of entanglement in living systems might be found, moving on from the sceptic’s position to what might be called the liberal position.

This position proposes that entanglement – or at least something very like it – may play a role within an organism, as part of its internal communication and control system. In this context, Hameroff has drawn attention to the possible role of microtubules: tubes forming a ‘microskeleton’ inside each living cell, made of a regular arrangement of protein molecules. Because of their small size, and the way they are shielded by the structure of the surrounding water, these tubes could support internal vibrations whose states were well protected from decoherence by the environment. Microtubules might thus form a good ‘box’ for storing quantum states. For this to be effective, however, the microtubules need to communicate with each other by conventional means: both in order to set up states with a known entanglement (recall the need for a classical communication system alongside the quantum one) and also to keep refreshing the entanglement as decoherence penetrates the tubules and randomises the correlations. Hameroff, in collaboration with Roger Penrose, achieves this classical communication through a novel scheme of physics in which an aspect of gravitation, yet to be worked out in full detail, intervenes so as to realise correlated manifestations at separated microtubules, in a process that is closely linked to consciousness, with co-ordination happening via ‘gap junctions’ in the microtubules. The many technical details of all this make it a very uncertain area: in 2001, for instance, Guldenagel and co-workers produced a mouse with no gap junctions but apparently normal behaviour; calculations of the length of time that the entanglement can survive decoherence are difficult and contested; Penrose’s theory is still at a very speculative stage, and it is unclear how crucial it is to the co-ordination of the microtubules; and, at the end of all this, it is not all that clear just what the microtubules are supposed to do once they have got their act together. The liberal position leads to lots of interesting scientific research, but in terms of the big questions of life it is not in the top league.

So the sceptical and liberal positions lead to a rather provocative situation. On the one hand, entanglement seems to be consonant with some of our deepest experiences: of the connectivity of the world, of the reality of synchronicity. Yet on the other hand it is hard to see how entanglement can act so as actually to deliver the goods. Are we somehow looking at things in the wrong way?

Before trying to answer this question, let me link it with another issue in which quantum theory promises much but somehow fails to deliver, namely that of the nature of mind. Many writers – including, as I have mentioned, Penrose – have associated quantum processes with mind (sometimes using in addition the word ‘consciousness’). We like to believe that our minds make decisions using some approximation to the formal structure of logic first described by Aristotle. But in reality, and fortunately, this is not so: the power of our thought actually lies in a process that goes significantly beyond that logic, namely our ability to hold many different conceptual frameworks conjecturally together until a creative resolution emerges. And this is essentially the definition of quantum logic (the logic governing quantum systems), rather than Aristotelian logic. Is it just a coincidence that minute particles and higher mammals (let us not be too anthropocentric) share the same perverse logic? Or could it be that, as Gregory Bateson argued – with a rather careful information-theoretic definition of ‘mind’ – all natural systems exhibit mind; and, moreover, the effect of mind is described by quantum logic?

The difficulty with linking quantum theory with these very suggestive correspondences lies in finding a way in which quantum effects can move from the microscopic, where we know they reign supreme, to the larger scale of living organisms. But could it be that this ‘bottom-up’ approach (building the large out of smaller sub-units) inevitably leaves something out? Moreover, when we examine the sceptical and liberal approaches just outlined, it looks very much as though we are trying to extend quantum theory to the large-scale realm, while at the same time working within metaphysical assumptions about space, time and reality that automatically exclude quantum theory from that realm. Are there alternatives to this approach?

This brings me to what I call the cosmological position, which I support myself. The idea is that we regard the whole universe as a quantum system, and allow top-down influences (from the large to the small) as well as bottom-up influences. I was led to this by having devoted most of my work in physics to the large-scale structure of the universe, so that a cosmological perspective always comes naturally to me. Such a perspective radically alters one’s view of quantum theory: decoherence is the losing of quantum information to the environment; but the universe as a whole has no environment. Cosmologically, information is never lost (even, if we are to believe Hawking’s recent claims, in the presence of black holes). This suggests (and there are loopholes!) that the universe remains coherent: it was, is and always will be a pure quantum system. The non-coherence of medium scale physics – non-coherence ‘for all practical purposes’, as John Bell used to say – is only an approximate consequence of our worm’s-eye view.

When we take this viewpoint (following lines that have been explored, more conservatively, by Chris Isham and others) we find that there is a whole layer of physics revealed that is taken for granted as part of the metaphysics of laboratory physics, a layer that appears formally as the interplay of different logical structures associated with different organisms, but which we might identify subjectively as an interplay of different structures of meaning experienced by these organisms. This layer is independent of the dynamical layer investigated by laboratory physics, in the sense that, once a structure of logic/meaning emerges, then the dynamics of quantum theory operates within it without constraint, so that laboratory physics is not affected. Conversely, the outcome of this dynamics can help to shape the possible structure of logic/meaning, but it does not determine it. There is a freedom present at this level which points to a whole noetic dynamics of the universe.

This leads to the picture that I presented in Living in Connection, in which the world is a nested lattice of quantum organisms. We can see this at work in our own being. My ego (the subjective ‘I’) is a subsystem of my whole body-mind, and I can thereby sense both my relationships with the vaster patterns of meaning of the planet and beyond, which go into constituting ‘me’, and also the well-being of my body which in turn gives direction and meaning to the smaller scale processes that support it. It is this that could solve the problems of decoherence, though here I am speculating beyond what has actually been demonstrated. Certainly there is a constant interplay between the coherence which each system receives from the greater ones in which it is contained, and the processes of decoherence which make it behave, in relation to its environment, as if it were a classical system. Thus entanglement within a specific quantum state, having a function in the organism and in the greater whole, could be maintained by a top-down influence.

In this nesting of systems, the buck stops with the cosmos as a whole, which shares some of the properties of what many call ‘the mind of God’, though I always use the g-word with trepidation. If I accuse Penrose and Hameroff of being short on details, then I am much more guilty of that myself. But the more I live with this picture, the more I think it makes sense both of physics and of human experience, including the experience of the mystics. Entanglement may be an explanation of the major paranormal experiences that many of us have encountered, but we will only arrive at a justification of this by a theoretical and experiential investigation of the cosmological level.

Further reading

Clarke, Chris, ‘Quantum Mechanics, Consciousness and the Self’, in Science, Consciousness and Ultimate Reality, ed. David Lorimer (Imprint Academic, 2004) An extended account of the view given here.

Clarke, Chris, Living in Connection (Creation Spirituality Books, Warminster, 2002) An exploration of the spiritual principles linked with this position.

Omnès, Roland, Quantum Philosophy: Understanding and Interpreting Contemporary Science. Trans. Arturo Sangalli (Princeton University Press, Princeton NJ, 1999) An alternative view of the same area.

 

Chris Clarke was Professor of Applied Mathematics, and now Visiting Professor, at the University of Southampton.

Emergence

By Michael Colebrook

Everything is best understood by its constitutive causes. For, as in a watch or some such small engine, the matter, figure and motion of the wheels cannot well be known except it be taken asunder and viewed in parts.

Thomas Hobbes

Emergence means complex organizational structure growing out of simple rules. Emergence means stable inevitability in the way certain things are. Emergence means unpredictability, in the sense of small events causing great and qualitative changes in larger ones. Emergence means the fundamental impossibility of control. Emergence is a law of nature to which humans are subservient.
Robert Laughlin, A Different Universe.

In effect, there seems to be no end to the emergence of emergents. Therefore, the unpredictability of emergents will always stay one step ahead of the ground won by prediction. As a result, it seems that emergence is now here to stay.

Jeffrey Goldstein

The fundamental premise relating to emergent phenomena is that wholes can contradict Hobbes’ dictum and be more than the sums of their parts. In any system for which this is true, that element which constitutes the ‘more than’ is an emergent property of the whole system and has to be regarded as the outcome of a creative process. Emergent phenomena cannot be described in terms of a chain of causality. They do not have a cause. They just happen. Both Ian Stewart and Stuart Kauffman (see References) emphasise that there is as yet no sound scientific theory of emergence.

It might be thought that, given these criteria, emergent phenomena were relatively rare and special features of the created order. In fact we are surrounded by them. It could be claimed that the human sensory system was designed specifically to provide awareness of emergent phenomena. Physics tells us that a chair is made up of a collection of very, very small particles separated by distances that are enormous compared to their size. By far the largest constituent of a chair is empty space. And yet, we can see and feel a chair and we are confident that if we should sit on one, we would not finish up on the floor. The solidity of solids is an emergent phenomenon and is a function of the relationships between the particles that physics tells us about. The solidity of solids is not an illusion, it is as real as the particles which constitute the solid. The properties of solids such as density, hardness, strength, elasticity etc. cannot be reduced to the properties of constituent particles because they depend on the relationships between them. The properties of solids are as real and fundamental with respect to solids as are the properties of the so-called fundamental particles.

There can be layers of emergent properties. Forests exhibit emergent properties based on relationships between living organisms. Living organisms show emergent properties based on relationships between complex chemicals. Complex chemicals show emergent properties based on the relationships between atoms. Atoms show emergent properties based on the relationships between sub-atomic particles.

Given the ubiquity of emergent processes and the way in which they are organised into progressive sequences, it can be argued that they are the means by which the universe creates itself.

If this makes it sound too easy to create a universe, emergent properties are not entirely free to make things up as they go. They are subject to significant constraints, such as the universal conservation laws which state that matter and energy can neither be created nor destroyed. Emergent processes can only proceed by creating new patterns of relationship using pre-existing materials. This is one of the reasons why it took 10,500,000,000 years for life to emerge on planet Earth. As D H Lawrence rightly says:

The history of the cosmos
is the history of the struggle of becoming.

And the history of the struggle of becoming is substantially the history of emergent phenomena. It is one of the paradoxes of modern science that in spite of everything that is known and understood about the cosmos there is no clear general theory of emergence. This is because science has on the whole adopted Hobbes’ dictum and has been concerned primarily with seeking explanations of phenomena in terms of the behaviour of their parts. The study of emergent phenomena requires a top-down as opposed to the Hobbesian bottom-up approach.

One of the concepts that has been useful in attempts to give shape to studies of emergent phenomena is that of symmetry. In terms of the scientific definition of symmetry the surface of a still body of water exhibits perfect two-dimensional symmetry because it is the same in all possible directions in a two dimensional plane. If you drop a stone into the water the result is the formation of a set of circular ripples which are symmetrical about a centre (the point where the stone was dropped), these ripples exhibit radial symmetry in that they look the same in all directions but only from the central point. They exhibit a reduced symmetry compared with that of the undisturbed surface. Although there is an apparent increase in ordered pattern on the surface of the water the original symmetry has been broken. In general, increase in order and pattern is associated with broken symmetries. A good example of the emergence of order associated with symmetry breaking is provided by Langton’s Ant. The apparently chaotic pattern produced by the first 10,000 moves by the ant can be said to show perfect two-dimensional symmetry (at least away from the edges and in a statistical sense) in that it looks the same in all directions. The highway represents a breaking of this symmetry to produce a more ordered state. Both the chaotic and the ordered states are intrinsic features of the behaviour of Langton’s Ant which is governed by specific but very simple rules. The ordered state represents an emergent property associated with symmetry breaking. It needs to be stressed that symmetry breaking is a descriptive feature of emergent phenomena. It does not constitute an explanation or even a contribution towards one.
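For concreteness, here is a minimal Python sketch of Langton’s Ant (my own illustration; I am assuming the standard rules, which the text does not spell out: at a white cell the ant turns right, at a black cell it turns left, then it flips the colour of the cell and moves forward one square):

    def langtons_ant(steps):
        black = set()                # coordinates of the black cells
        x, y = 0, 0                  # the ant's position
        dx, dy = 0, 1                # the ant's heading
        for _ in range(steps):
            if (x, y) in black:      # black cell: turn one way, repaint white
                dx, dy = dy, -dx
                black.remove((x, y))
            else:                    # white cell: turn the other way, paint black
                dx, dy = -dy, dx
                black.add((x, y))
            x, y = x + dx, y + dy
        return black

    print(len(langtons_ant(11000)), "black cells after 11000 steps")

Run for about 10,000 steps the pattern of black cells looks statistically uniform; soon after that the ant locks into the 104-move highway cycle described above.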

Another feature of emergent phenomena is coherence. When a stone is dropped into still water, some of its energy is transferred to the water and results in the formation of a series of waves which involve the coherent behaviour of millions and millions of water molecules. Due to the viscosity of the water the waves soon dissipate, their energy being converted to heat. The waves are caused by the energy transferred by the falling stone but the form of the waves is the result of internal and intrinsic relationships between the molecules of water. With Langton’s Ant the highway represents a coherent pattern of behaviour of the ant, in this case a loop of 104 moves, compared with the previous chaos.

In this case the pattern of the highway does not dissipate because there is a continuous input of energy from the computer on which the ant programme is running.

Another form of cellular automaton involving simple rules and devised by John Horton Conway (see Conway’s Game of Life) can exhibit a wide range of coherent behaviours involving closed loops and moving patterns as well as chaotic and ordered sequences of moves. As with Langton’s Ant the outcome of any but the simplest of initial patterns is unpredictable and small changes can produce very different outcomes. One of the interesting features is that it is possible to establish an initial state consisting of a set of moving patterns called gliders which interact to produce a pattern which then generates a stream of gliders, coming close to being a self-reproducing system.

What these cellular automaton games show is that systems with simple rules can produce unpredictable results and some of these can take the form of quite complex emergent features.

Let me finish this brief introduction to emergent phenomena with an example from the real world. Take a bundle of fibres and spin them together, over and under each other, and you get yarn. Take yarns and spin them together, over and under each other, and you get string. Spin strings together, over and under each other, and you get rope. This is an age-old craft; the earliest preserved bit of rope was found in the caves of Lascaux and dates from about 15,000 BC.

Yarn, string and rope possess properties of length, strength, flexibility and durability which are potentially present within a bundle of fibres but their manifestation depends on the internal structural relationships between the fibres. Yarn and string and rope are more than the sums of their parts, they are emergent entities, and over and under is an emergent rule or principle. As often happens, emergent entities provide opportunities for further emergent phenomena. And in this case these involve the same rule of over and under together with developments of it.

The classic Ashley Book of Knots contains diagrams and descriptions of 3854 things that can be done with rope and string, virtually all of which involve some version of over and under.

Woven materials are another example of emergence based on yarns combined according to the rule of over and under. Starting with a simple weave of over one and under one, there is an almost infinite variety of other options as the strict form of the rule is relaxed to allow other combinations of overs and unders. Weavers experiment and probe the limits of the constraints imposed by the basic rule. But it can never be totally neglected or the resulting material will simply fall apart.

The rule of over and under applied to fibres is an example of emergence essentially created by human ingenuity in which emergence builds on emergence and in which the application of a simple principle can lead to an almost endless variety of outcomes.

Probably the most significant of all emergent phenomena is the mind that you are using to read and, I hope, understand this account of emergent phenomena. It has taken ‘the struggle of becoming’ some fourteen billion years to produce it.

References

Clifford Ashley. The Ashley Book of Knots (Faber & Faber, 1947).
Brian Goodwin. How the Leopard Changed Its Spots (Phoenix, 1995).
Erich Jantsch. The Self-Organising Universe (Pergamon Press, 1980).
Michael Colebrook. How Things Come To Be (GreenSpirit Press, 2006).
Stuart Kauffman. At Home in the Universe (Viking, 1995).
Stuart Kauffman. Investigations (OUP, 2000).
Ricard Solé & Brian Goodwin. Signs of Life (Basic Books, 2000).
Ian Stewart & Jack Cohen. Figments of Reality (CUP, 1997).

Dynamical Systems

by Chris Clarke

Key Concepts

A continuous dynamical system is any physical thing that can be regarded as having states which can be specified (in a small enough region of the space of all states) by a collection of numbers, and where the change in the system with time is given, in terms of these numbers, by a definite rule. For example, a pendulum is a dynamical system, and its state at any time is specified by the position of the pendulum (a point on a circle, if the pendulum is allowed to go all the way round) and the speed of the pendulum bob at that time, which is a number. If we decide on one fixed direction round the circle in which to measure the speed, then this number can vary over all negative and positive numbers (negative numbers meaning that the bob is moving backwards relative to the chosen direction). If we draw the circle representing position in one plane and the line representing the speed at right angles to this plane, then we can represent the state space of the pendulum by a cylinder.
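As a concrete illustration, here is a minimal Python sketch (my own, assuming a pendulum of unit length in unit gravity, with no friction) in which the state is the pair of numbers (angle, speed) and the definite rule is a pair of differential equations stepped forward in small increments of time:

    import math

    def pendulum_rule(theta, omega):
        """The rule: rates of change of the state (angle, speed)."""
        return omega, -math.sin(theta)

    def evolve(theta, omega, dt=0.001, steps=10000):
        """Follow the state for steps * dt units of time (Euler's method)."""
        for _ in range(steps):
            dtheta, domega = pendulum_rule(theta, omega)
            theta, omega = theta + dtheta * dt, omega + domega * dt
        return theta, omega

    # Starting near the bottom of the swing, the state traces a closed
    # loop on the cylinder of states described above.
    print(evolve(theta=0.5, omega=0.0))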

If the rule depends only on the current state, and not on the time, the system is called autonomous. Note that a dynamical system is, mathematically speaking, deterministic because it is governed by precise rules, though practically it may be impossible to predict its motion. By contrast a quantum system is not deterministic, because of the measurement phase of its evolution (see the paper on Quantum Theory).

As the state of a dynamical system varies in time it traces out a path in state-space called a trajectory. The only exception is when the state is at an equilibrium point, which is a special case when the rule specifying the dynamical system requires that the state does not change at all with time. For the pendulum the equilibrium points are the states with zero speed at the top and bottom of the circle of positions. In these cases the trajectory consists of just a single point. Thinking of dynamics in this pictorial way moves the subject from the arena of numbers and equations into the arena of shapes and forms.

A system in some region of its space of states is called dynamically unstable if a small variation in the initial conditions can lead to a difference which, on some region of the trajectory, starts growing exponentially. (Strictly this is called Lyapunov instability.) In cases like this the system is practically unpredictable, even if theoretically its motion is determined by a precise rule. The Lorenz System (see below) provides a good example of this. The combination of dynamical instability with quantum theory demonstrates that, however one interprets quantum theory, the universe is fundamentally unpredictable on a large scale as well as a small scale. Dynamical instability is sometimes called “the butterfly effect”: a butterfly flapping its wings in Brazil can cause a hurricane in Bengal.
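In symbols (my gloss, not in the original text): two trajectories that start a small distance δ(0) apart separate as |δ(t)| ≈ |δ(0)|e^(λt); the instability is present when the exponent λ, called a Lyapunov exponent, is positive, and prediction then fails after a time of order 1/λ however small the initial uncertainty.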

Chaos. The term was first used of a dynamical system whose behaviour showed no predictable pattern. Various attempts have been made to specialize the term to say something more positive – for instance, to describe behaviour where a trajectory wanders round a region of the space of all states, coming arbitrarily close to any specified state (a situation also called ergodic).

An attractor is a trajectory in the state-space of a dynamical system which has the property that any state sufficiently close to the attractor will move towards it. If the attractor is neither a (stable) equilibrium point nor a closed loop (called a limit cycle) then it is called a strange attractor.

The Lorenz System

The system here is a very simplified model of convection in a gas, which Lorenz was studying in order to understand weather systems. It produced the first example of a strange attractor to be identified and explored in detail. While convection is in reality a complicated motion of the entire gas which would involve millions of parameters to specify the position of each part of the gas at all accurately, in this model the state of the system is described by just three parameters, and the equations governing them are reduced to a very simple form. The picture below (generated by Dynamics Solver, by Juan M Aguirregabiria) shows the way in which two of these parameters vary with time.

Two trajectories are shown, in red and blue, starting close together in the inner part of the loops on the left hand side. The system illustrates the “butterfly effect” in which a small variation in the initial conditions produces a large difference in the eventual behaviour. The trajectories stay close together as they pass through a stable region. After performing three loops of a quasi-cyclical motion (nearly returning to their starting point, but drifting away slightly) they enter an unstable region. Thereafter the two behave quite differently, even though they started close together. The red trajectory veers off to the right hand region, where it completes one loop and then returns to the left and makes a further loop there. The blue trajectory completes one more loop in the left region before moving to the right, where it takes up a quasi-cyclical motion in the right hand region.
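The three equations behind this picture are simple enough to integrate in a few lines. Here is a minimal Python sketch (my own; I use the standard textbook parameter values, which may not be exactly those used to generate the figure) that follows two trajectories from almost identical starting points and prints their growing separation:

    def lorenz_step(x, y, z, dt=0.001, sigma=10.0, rho=28.0, beta=8.0/3.0):
        """Advance the three Lorenz parameters by one small Euler time step."""
        return (x + dt * sigma * (y - x),
                y + dt * (x * (rho - z) - y),
                z + dt * (x * y - beta * z))

    a = (1.0, 1.0, 1.0)          # the "red" trajectory
    b = (1.0, 1.0, 1.000001)     # the "blue" one: a one-in-a-million difference

    for step in range(40001):
        if step % 10000 == 0:
            gap = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
            print(f"t = {step * 0.001:4.1f}   separation = {gap:.6f}")
        a = lorenz_step(*a)
        b = lorenz_step(*b)

The separation grows roughly exponentially until it is as large as the attractor itself, after which the two trajectories wander independently, exactly as in the behaviour described above.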

Emergence of Structure 

Dynamical systems have the property that structure can spontaneously emerge from them. Though this happens in continuous dynamical systems, it is easiest to study by reducing the dynamical system to a discrete one, in which time progresses by steps instead of changing continuously. We can get a discrete system out of a continuous one by taking “snapshots” at either a regular spacing, or when a trajectory cuts through a chosen surface. In the examples we are about to look at, the space of states is also discrete, being made up of the set of possible patterns on a grid. Again, this is for the sake of reducing the problem to something simple enough to handle. A system called “life” is described next. A simpler system called “Langton’s ant” is described elsewhere.

Conway’s Life (described in more detail below)

The “game” (for one player, with no choice at any stage!) is played on an infinite grid of squares, each one of which can be either black or white. The picture below (pictures of Conway’s “Life” are generated by the programme Life32, by Johan G. Bontes) shows part of a random pattern, imagined to continue indefinitely in all directions.

The pattern evolves one step at a time according to a deterministic rule: at each step, if a black square is surrounded by more than three other black squares it “dies” (turns white) through overcrowding, while if it is surrounded by fewer than two black squares it dies from loneliness. If a blank square is surrounded by exactly three black squares then a new black square is “born” there.
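The rule just stated is easy to express in code. Here is a minimal Python sketch (my own illustration) that applies one step of it to a set of black squares, so that the grid can be treated as effectively infinite:

    from collections import Counter

    def life_step(black):
        """One step of Conway's rule applied to a set of (x, y) black squares."""
        # Count, for every square, how many black neighbours it has.
        neighbours = Counter(
            (x + dx, y + dy)
            for (x, y) in black
            for dx in (-1, 0, 1) for dy in (-1, 0, 1)
            if (dx, dy) != (0, 0)
        )
        # Birth on exactly 3 neighbours; survival on 2 or 3.
        return {cell for cell, n in neighbours.items()
                if n == 3 or (n == 2 and cell in black)}

    # A "glider", one of the forms that moves steadily across the board.
    pattern = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
    for _ in range(4):           # after four steps the glider has shifted
        pattern = life_step(pattern)
    print(sorted(pattern))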

After 21 steps the above random pattern has evolved to

– which is starting to manifest the sort of grouping that we might call a pattern. After 62 steps quite stable structures have formed: static rocks, forms that oscillate regularly between two shapes, and forms that move steadily across the board. The picture below shows a snapshot of some of these.

Conway’s Game of Life

by Michael Colebrook

(The Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970.

The “game” is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves.)

The Story So Far

Self-replication in Conway’s Life has been a topic for discussion and research from the very beginning, over forty years ago now (!). The original purpose of Conway’s Life was to find a simplification of John von Neumann’s self-replicating machine designs, which used a CA rule with 29 states. A couple of non-constructive universality proofs for B3/S23 Life were completed very early on, though they were never published in detail — and my sense is that actual self-replicating patterns along the lines of these proofs would require something on the order of a planet-sized computer and a geological epoch or two to simulate a replication cycle.

The technology to build a Conway’s Life replicator out of stable parts has been available since at least 2004. A working pattern could certainly have been put together in a few years by a full-time Herschel plumber, with a high-energy glider physicist or two as consultants. But unfortunately there seem to be very few multi-year grants available for large-scale CA pattern-building — even for such obviously worthwhile Holy-Grail quests as this one!

In 2009, Adam P. Goucher put together a working universal computer-constructor that could be programmed to make a complete copy of itself. The pattern, however, is so huge and slow that it would have taken an enormous amount of work to program it to self-replicate — it would have been easier to come up with a new replicator design from scratch. Clearly, in hindsight, everyone was waiting for something better to come along.

The latest developments

Replicator Redux (from b3s23life.blogspot.com), January 11th, 2013

Play the game here:

http://www.math.com/students/wonders/life/life.html