Global Warming-The Real Questions

by Sky McCain

30 October, 2010

 

“Man has lost the capacity to foresee and to forestall.  He will end by destroying the earth.”  Albert Schweitzer

“Climate is essentially an emergent property of life’s interaction with its immediate environment.”  Peter Bunyard

Just as cancer cells appear to have no awareness that they are committing suicide by killing their host, we are doing the same in respect to Gaia.  Although we have not found a cure for cancer, I do hope that Gaia will be able to find a cure for her form of cancer.

I’ve written this because I think that we are getting our climate change information in short bursts of controversial statements and media hype.

“A greater obstacle to public communication has arisen with the politicization of reporting of global warming, a perhaps inevitable consequence of the economic and social implications of efforts required to alter the course of human-made climate change. We have the impression that the effect of politicization on communication of the science is aggravated by the fact that much of the media is owned by or strongly influenced by special economic interests.

The task of alleviating the communication obstacle posed by politicization is formidable. The difficulty is compounded by continual attacks on the credibility of scientists. Polls indicate that the attacks have been effective in causing many members of the public to doubt the reality of global warming.” (Hansen, 2010)

The books that I am familiar with on the subject, although written by highly qualified scientists, such as James Hansen, in my estimation, miss important aspects that would serve to fill the gaps left by the media. In this chapter, which is not intended to tell the whole story, I hope to tie up a few loose strands and fill in some gaps usually ignored by the typical news item.

Anthropogenic carbon emissions have altered natural climate cycles in gradual increments for the last several thousand years.  As more and more of our species left hunter-gatherer groups and took up agriculture, our footprints deepened.  Forests were cleared for agriculture, housing, war machines, weapons, shipbuilding and many other purposes.  Forest destruction has persisted to this day, and once-vast rainforests are threatened with total destruction, with no hope of natural regeneration, as their poor soil is depleted for export crops such as palm oil and soy beans.

Parallel with this destruction we are seeing a population increase that is totally out of control.  It seems that many people believe they have an inalienable right to have as many children as they choose, without any regard for the Earth’s carrying capacity, and there appears to be no serious discussion worldwide directed toward limiting human population.  Do we really have an energy crisis, or are we just feeling the results of runaway population growth?  The rise in population and the rises in temperature and atmospheric CO2 follow the same upward trend, locked into a regenerative feedback cycle: more people, more energy required, more energy expended, more CO2, not to mention unhealthy pollutants such as coal and diesel particulates.

It is only since the Industrial Revolution and the development of monitoring technology that we have had charts and diagrams that take temperature and CO2 out of the speculative realm and into the realm of certainty.  It is certain that the concentration of CO2 in the atmosphere is higher than it has been in at least the last 600,000 years, and that humans are the major cause.

There can be no doubt that anthropogenic greenhouse gas emissions such as carbon dioxide, nitrous oxide and methane, coupled with carbon particulates from brown-coal-fired electricity plants, contribute significantly to the greenhouse effect that has inhibited the sharp drop in global temperature seen at the close of the past three or four interglacial periods.  The global temperature pattern revealed by the Vostok and EPICA (Antarctic) ice-core samples resembles an upside-down icicle: around 8,000 years of warmth followed by around 110,000 years of cold (an ice age).  The icicle shape depicts a very quick rise, a very sharp peak, and a fall of 2 to 3 degrees centigrade within less than 10,000 years, before a gradual settling, over roughly another 10,000 years, into an ice age nearly 10 degrees colder than the peak.

Click here for a good visual of the Vostok findings

http://www.sahfos.ac.uk/climate%20encyclopaedia/images/img9.jpg

Click here for the Antarctic core samples visual

http://www.iceandclimate.nbi.ku.dk/images/images_research_sep_09/EPICA_with_current.PNG/

Humans are obviously an important part of nature, but since our ecological footprint has only been really significant for maybe 5 out of the last 10,000 years, I think we need to ask:  How did the Earth manage to sustain itself before we arrived on the scene?

Humans have dangerously stifled the planet’s natural means of regulating itself.  Gaia maintains a balance of forces using her life-forms and weathering processes.  Some life-forms vary the amount of reflected sunlight and the atmospheric CO2 and O2 content, whilst others sequester carbon.  The ocean absorbs vast amounts of CO2, but less and less as its temperature increases.

“Our study carried somewhat surprising results, showing that although the major impact of deforestation on precipitation is found in and near the deforested regions, it also has a strong influence on rainfall in the mid and even high latitudes,” said Roni Avissar, lead author of the study, published in the April 2005 issue of the Journal of Hydrometeorology.

http://news.mongabay.com/2005/0919-nasa.html

Moisture from forests is driven north in the Northern hemisphere and south in the Southern.  In the Northern hemisphere, for instance, the Amazon rainforest has in the past, through wind channels, brought welcome moisture northward into the arid regions of Mexico and the US Southwest.  Continuous and ruinous deforestation has already caused or intensified severe drought in these areas.  If the destruction continues, the American Southwest is doomed to a Sahara-like future and the wondrous mountain forests of the Chiricahuas, for instance, will be lost, probably forever.  I suppose you would have to have actually walked within this region to grieve for these forests as they suffer.  I only hope I can live long enough to see them once again, and that they shall not die before I do.

Glacier melt re-mineralizes the soil after each ice age.  There is an amazing array of self-regulation behaviours too lengthy to delve into here. However, our present and recent past global growth economy is destroying these mechanisms.  It is obvious that the fate of the Earth and our fate intermingle, for actually, as was proposed in previous chapters, we are the Earth and we are thus committing suicide.

Being at the top of the food chain does not mean that we can now assume Earth management responsibilities.

Several knowledgeable and well-meaning authors have written about how we should be stewards of the Earth.  A first thought might be to accept this idea, but on reflection I suggest that it is an extremely dangerous, anthropocentric view of our presence.  Stewardship means “responsible planning and management of resources.”  This view is dangerous simply because we lack the wisdom and understanding to control and manage “nature”, the word we use for the aggregate physicality of Gaia.  We must play in tune with Gaia’s rhythm or be selected out of the orchestra.

So it has been hot before

Climate change sceptics never tire of stating that the Earth was warmer millions of years ago.  Be that as it may, for the purpose of determining how our industrial pollution affects the present, we can forget about how the planet was 65 million years ago.  Yes, the planet was far warmer, and there was over ten times the amount of atmospheric CO2 around.  Why Gaia evolved into a series of cyclical glacial and interglacial periods is beyond determining with the instruments we have on hand, and thus beyond our understanding.  We must begin to appreciate that Gaia may just know what she is doing.  Perhaps the long glacial periods are necessary to counteract the increasing warming energy of the sun.  Again, we don’t know.

Of course, if we continue to look upon the planet as a piece of machinery that can be rebuilt and controlled, then there is very little hope for us.  Gaia is a living organism that we live in and function as part of.  Living organisms are complex, and Gaia more complex still.  Regardless of how the planet was 65 million years ago or 2 billion years ago, we have to work with the present extent of the Earth’s evolution.

Of course climate varies.  However, advances in scientific instruments and in our ability to read the past from soil and ice-core samples reveal that our present interglacial warming half-cycle differs significantly from past ones in two primary ways.  Atmospheric CO2 is steadily increasing and is higher than at any time recorded in the last 750,000 years; and although we are overdue for a cooling trend, global temperature has increased around one degree centigrade in the last 140 years.  It is appropriate to question why average temperature is increasing, but a far more important and largely ignored question is: why have we not begun the steep cooling trend seen in the Vostok core sample graph shown above and in the graph shown here?


*Note:  Contrary to the note in red within the graphic, the high in the present interglacial period is NOT 2°C higher than the previous four periods shown in the chart.  Actually, it is only a little more than 1°C higher.

First some background information. I’m afraid that many people think that climate is completely unpredictable and entirely erratic.  This is not true.  Let me explain why.

Why the heating effect from the sun varies

The fundamental driver of atmospheric temperature is, of course, the sun.  The amount of heat felt at the Earth’s surface depends primarily on the angle at which sunlight strikes it.  On a sphere, the greatest heating occurs at the point where the sun’s rays arrive perpendicular to the surface.  The heating effect tails off as the angle of incidence [the angle between the incoming rays and the perpendicular to the surface] increases and the sun’s rays are spread out over a greater area.  When the sun stands only 30 degrees above the horizon, for instance, its rays are spread over double the area they would cover arriving vertically, and thus the heat delivered per unit area is only one half.  Calculating the heating effect of the sun, called insolation, is much more complicated and variable, due to three aspects of the Earth’s orientation toward, and orbital path around, the sun.  The following explanation is only an introduction; full understanding requires detailed diagrams and more precise information than I am able to impart in this chapter.  The details are truly fascinating and are fully and clearly explained on the web at: http://en.wikipedia.org/wiki/Milankovitch_cycles.
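The spreading-out effect described above follows a simple trigonometric rule, and a few lines of Python can illustrate it.  This is only a sketch: `relative_insolation` is a hypothetical helper name of my own, and atmospheric absorption is ignored.

```python
import math

def relative_insolation(elevation_deg):
    """Fraction of the overhead-sun intensity received by a horizontal
    surface when the sun stands elevation_deg above the horizon.
    (Atmospheric absorption is ignored in this sketch.)"""
    return math.sin(math.radians(elevation_deg))

for elev in (90, 60, 30, 10):
    print(f"sun {elev:2d} degrees up: {relative_insolation(elev):.2f} of maximum")
```

At 30 degrees the value is exactly one half, matching the doubled-area example in the text.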

There are three major physical factors that affect insolation, and it is only their combination that varies insolation enough to trigger the significant rises and falls of global temperature behind the recent glacial/interglacial cycles, which have a period of around 120,000 years.

Obliquity

Basically, the equator receives maximum energy transfer (the sun’s rays arriving perpendicular) only twice a year, because the Earth is tilted.  It is this tilt, called obliquity, which causes what we call the seasons.  The northern hemisphere, for instance, receives sunlight at a smaller angle of incidence during what we call summer and a greater one during winter.  The tilt itself also varies slightly (from about 22.1 degrees to 24.5 degrees) in a cyclical manner, favouring stronger or weaker seasons in both hemispheres, whose seasons always remain opposite to each other.

This can be tricky to visualize.  Push a stick through a small ball of something soft so that it sticks out at both ends.  Color one end of the stick blue, say, and call it the North Pole; give the other end, the South Pole, a different color.  Hold the ball in one hand and tilt it by, for example, 30 degrees.  Then hold a somewhat bigger ball in the other hand to represent the sun.  Tilt the stick so that the North Pole leans toward the sun: you can see that you have maximum summer at the North Pole and maximum winter at the South Pole.  Move the tilted ball 180 degrees around the sun and you’ll note the opposite effect.  At the 90-degree and 270-degree points around the sun you can see the equinoxes, where the tilt has virtually no effect.
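For readers who prefer numbers to sticks and balls, the same tilt effect can be sketched with a crude formula for the sun’s noon height through the year.  The cosine model of the sun’s declination below is a rough approximation I am introducing purely for illustration, not a precise astronomical formula.

```python
import math

TILT = 23.44  # current axial tilt in degrees (it varies slowly over time)

def solar_declination(day_of_year):
    """Approximate declination of the sun (degrees) on a given day,
    using a simple cosine model with the December solstice near day 355."""
    return -TILT * math.cos(2 * math.pi * (day_of_year + 10) / 365.0)

def noon_elevation(latitude_deg, day_of_year):
    """Sun's elevation above the horizon at solar noon."""
    return 90.0 - abs(latitude_deg - solar_declination(day_of_year))

# Contrast midsummer and midwinter noon sun at 45 degrees north:
print(round(noon_elevation(45, 172)))  # late June: high sun (68)
print(round(noon_elevation(45, 355)))  # late December: low sun (22)
```

Evaluating the same two days at 45 degrees south swaps the two numbers, showing the hemispheres always in opposite seasons, as described above.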

Eccentricity

The factor that amplifies both warmer and cooler conditions is the shape of the Earth’s orbit around the sun.  You may know that something “out of round” is often called eccentric.  The Earth’s orbit is not only eccentric, but the amount of eccentricity varies over time in a cyclical manner.  During the thousands of years when the orbit is more nearly circular, the seasons are reinforced less strongly, because the difference between the Earth’s closest and farthest distances from the sun is smaller.  The opposite, of course, holds true when the orbit is more eccentric.
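The effect of eccentricity on sunlight intensity can be sketched directly from the inverse-square law: the closest and farthest distances of an orbit of eccentricity e scale as (1 − e) and (1 + e).  The sample eccentricity values below are commonly quoted approximations, not figures from this chapter’s sources.

```python
def insolation_ratio(eccentricity):
    """Ratio of solar intensity at perihelion (closest approach) to
    aphelion (farthest point).  Intensity scales as 1/distance^2, and
    the two distances scale as (1 - e) and (1 + e)."""
    return ((1 + eccentricity) / (1 - eccentricity)) ** 2

# Earth's eccentricity cycles roughly between ~0.005 and ~0.058;
# today it is about 0.0167.
for e in (0.005, 0.0167, 0.058):
    print(f"e = {e:.4f}: perihelion/aphelion intensity ratio {insolation_ratio(e):.3f}")
```

A near-circular orbit (small e) gives a ratio close to 1, meaning almost no seasonal amplification from orbital distance, exactly the situation the chapter later says we are entering.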

Precession

The last and most difficult factor to grasp is called precession.  My simple explanation for the purposes of this discussion is that, because the Earth is not perfectly round, it does not spin like a fast-moving top but wobbles slightly, the way a slowing top does.

The effect of this wobble on our climate is subtle.  You may need to study precession more deeply elsewhere to fully grasp the concept.  The wobble not only adds to or subtracts from the effect of the tilt; it also means that the seasons do not begin and end at the same points of the orbit throughout the precession cycle, because the wobble shifts where the maximum and minimum tilt toward the sun occur, so the equinox points drift relative to our solar-based calendar.  How might that affect insolation?  The start of winter at a given geographical position, for instance, makes a complete cycle approximately every 23,000 years.  (Note that none of the Earth’s orbital cycles is of exactly equal duration from one cycle to the next.)  If winter in the northern hemisphere, which is mostly land, coincides with the Earth’s orbital position near the farthest point from the sun, then those winters are colder, which enhances the build-up of ice.  If this is synchronous with a cooling effect from the other two factors mentioned above, then it is likely that the planet will remain in an ice age.
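One way to get a feel for the roughly 23,000-year figure given above is to compute how far the seasons drift around the orbit over time.  A minimal sketch (the helper name is my own; the period is the chapter’s approximate figure):

```python
CLIMATIC_PRECESSION_PERIOD = 23_000  # years, the approximate figure given above

def season_shift_degrees(years_elapsed):
    """Degrees of the orbit by which the solstices and equinoxes have
    drifted relative to a fixed point of the orbit such as perihelion."""
    return 360.0 * (years_elapsed % CLIMATIC_PRECESSION_PERIOD) / CLIMATIC_PRECESSION_PERIOD

# Half a cycle ago the drift was a full half-turn: the hemisphere that
# now has winter near the sun's closest approach then had winter near
# the farthest point instead.
print(season_shift_degrees(11_500))  # → 180.0
```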

Again, the opposite or interglacial period, may occur where warm winters are in synch with a warming effect out of the other two factors.

Click here for an interesting graphic on precession

 

What matters is how the orbital cycles line up with each other

Since the periods of these three cycles differ, they are constantly aiding and opposing one another’s effect on insolation.  The Milankovitch theory is that when they move into an aiding position they trigger the start of an interglacial period, and when the alignment passes they set up conditions favourable for a sudden decrease in insolation and average global temperature.  The resulting cycle is around 120,000 years and correlates well with ice-core samples.

Of course, as with most scientific theories, there are detractors who wish to trash the whole idea.  One only needs to consider the numbers of people, including scientists, who don’t accept the theory of evolution.  One reason the theory can be attacked is that factors such as absorption of CO2 by trees, clouds formed by the transpiration of trees, variations in the amount of CO2 held in the oceans, snow cover, the numbers of phytoplankton, the amount of dimethyl sulphide and more must all figure into the equation.  With so many variables, every global warming theory is open to criticism, and those opposed will bring their ulterior motives into the picture.
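As a toy illustration of how the three cycles drift in and out of alignment, one can sum three sine waves with roughly the Milankovitch periods.  This is emphatically not a climate model: real insolation curves weight and phase the cycles very differently, and the equal weights here are a deliberate simplification of my own.

```python
import math

# Approximate cycle lengths in years.  Equal weights are a deliberate
# simplification; a real insolation curve weights these very differently.
PERIODS = (41_000, 23_000, 100_000)  # obliquity, precession, eccentricity

def combined_signal(year):
    """Sum of one unit sine wave per orbital cycle."""
    return sum(math.sin(2 * math.pi * year / p) for p in PERIODS)

# Scan a million years, sampled every millennium, for the moment when
# the three toy cycles reinforce one another most strongly.
best_year = max(range(0, 1_000_000, 1000), key=combined_signal)
print(f"strongest sampled alignment near year {best_year:,}, "
      f"signal {combined_signal(best_year):.2f} (maximum possible 3.0)")
```

Most sampled millennia sit well below the maximum; only occasionally do all three waves reinforce, which is the qualitative point of the trigger argument above.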

There is more to the story

As if there were not enough variables to contend with, I must add that cyclical variations in air and ocean currents are affected by orbital forcing explained above and in turn also affect the amount of heat absorbed.

Scientists, notably Professor André Berger, have been able to plot the various orbital cycles and produce a table going back a million years.  The table also contains insolation figures and, thankfully, is available for download from the internet.  An analysis of it reveals two rather worrying facts which, owing to the complexity of the material I have tried to shed some light on above, have not been explained by the media.

One: the event that should have triggered a cooling cycle, the minimum of tilt that favours cooling, has passed its nadir, and the tilt is now increasing again.  Another such minimum is not due for around 40,000 years.

Two: eccentricity is now nearly as low as it has ever been in the last million years, and it will fall lower still over the next few thousand years, so the Earth’s orbit is almost circular.  This means that the reinforcing coincidences of cycles that in the past triggered colder winters and global cooling will be missing for many thousands of years.

Professor Berger proposes that we will not see another ice age for many thousands of years.

“Today’s comparatively warm climate has been the exception more than the rule during the last 500,000 years or more. If recent warm periods (or interglacials) are a guide, then we may soon slip into another glacial period. But Berger and Loutre argue in their Perspective that with or without human perturbations, the current warm climate may last another 50,000 years. The reason is a minimum in the eccentricity of Earth’s orbit around the Sun.”  (Berger, A. and Loutre, M., 2002, pgs 1287 – 1288)

Much of the climate change literature points out that conditions adverse to life-forms like us, such as increased desertification and the flooding of coastal lowlands such as Miami Beach and Bangladesh, may occur regardless of our efforts to cut anthropogenic greenhouse gas emissions.  Perhaps those authors read Berger and Loutre, quoted above.

So how are humans connected with the variations of orbital factors?

You might well point out that these factors, taken simply as stated, operate regardless of human actions.  True, but that is not the whole story.

Next we must briefly consider how much of the sun’s energy is reflected straight back into space (referred to as albedo), how surface materials vary (some not only absorb sunlight differently but also hold heat longer than others), and how the content of the various layers of the atmosphere affects global temperature.  As most of you know, our atmosphere impedes the escape of re-radiated solar energy by absorbing it and radiating heat back downward.  We have named this the greenhouse effect.  Without it, the Earth would be perhaps 30 degrees cooler and life as we know it would not be possible.  Our scientists, armed with sensitive instruments, have documented the heat-absorbing properties of the atmospheric gases and aerosols, and especially of carbon particulates from diesel and coal fuels.  Climate sceptics like to point out that water vapour is the strongest greenhouse gas.  So what?  Water vapour responds to temperature rather than driving it: a warmer atmosphere simply holds more of it, amplifying whatever warming is already under way.
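The “perhaps 30 degrees cooler” figure can be checked with a standard back-of-envelope energy balance: sunlight absorbed after albedo losses, spread over the whole sphere, must equal the heat the planet radiates under the Stefan–Boltzmann law.  A minimal sketch, using commonly quoted values for the solar constant and albedo (not figures from this chapter’s sources):

```python
SOLAR_CONSTANT = 1361.0   # W/m^2 arriving at Earth's distance from the sun
ALBEDO = 0.3              # fraction of sunlight reflected straight back
SIGMA = 5.670e-8          # Stefan-Boltzmann constant, W/m^2/K^4

def equilibrium_temp_no_atmosphere():
    """Effective temperature of an airless Earth: absorbed sunlight,
    spread over the full sphere (factor 4), balances thermal radiation."""
    absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4
    return (absorbed / SIGMA) ** 0.25

t_eff = equilibrium_temp_no_atmosphere()
observed_mean = 288.0  # K, roughly 15 degrees centigrade
print(f"no-atmosphere estimate: {t_eff:.0f} K")
print(f"greenhouse warming:     {observed_mean - t_eff:.0f} K")
```

The sketch lands near 255 K, about 33 degrees below the observed mean, in line with the rough figure quoted in the text.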

To put it simply, we are upsetting the balancing and regulating capabilities of the living Earth by emitting such a large volume of gases, aerosols and carbon particulates that their heat-absorbing properties have increased the greenhouse warming effect dramatically.  Regardless of the arguments of climate sceptics, there are simply no factors other than anthropogenic ones that account for the resulting rise in CO2 and average global temperature.  A graph of CO2 and global temperature over the last 600,000 years shows the two moving virtually in step.  Our media has had a field day describing the possible results of excessive global warming; enough has already been said, so I’ll not go into that subject, except to suggest that overall the IPCC has underplayed the tune and understated the possibility of runaway global warming.

The Real Questions

The media has drawn our attention away from what I suggest is the real question.  So, we have global warming and that is a problem.  We have identified several factors above that are causing the temperature to rise.  However, I think we are blind to the most significant question:

What, given the factors we have identified and considering Gaia’s ability to self regulate over time, does Gaia have available to reverse the warming if we continue with our industrial expansion?

As we saw above, the Milankovitch cycles have already passed the point of maximum cooling influence, yet our interglacial period has not come to an end.  I support Dr James Hansen’s suggestion that we may well skip the next ice age.  If we do, will the Milankovitch cooling factors overcome the warming factors still present 120,000 years from now?  If not, how hot will the Earth be?

So what?  You may still feel that, since the Earth has been a lot warmer in the past, there is nothing to be concerned about.  Yes, but don’t forget that the sun has been steadily putting out more heat over its lifetime.  Dr James Lovelock’s Gaia theory was prompted by his wonder at how cool the Earth had stayed relative to the increased strength of the sun’s energy over the last 2 to 3 billion years.  Lovelock and Harding have listed and explained the various positive and negative feedback mechanisms that come to the fore as Gaia self-regulates.  Obviously, Gaia’s efforts have brought us to where we are now, with the incredible diversity and growth that came as polar ice diminished.  Not only have humans caused a major drop in diversity and an increase in extinctions, but our technology and our economic system of expansion based on the Earth’s resources have set up atmospheric and oceanic conditions outside the parameters we have studied in the last 600,000 years.  Although we certainly cannot answer the question above, we can attempt to discover what resources were available, say, 120,000 years ago at the end of the last interglacial period.

At the beginning of each interglacial period as the ice receded from the land, vast numbers of trees spread north and performed a carbon sequestering service.  They also released water vapor which stimulated cloud cover that increased the albedo effectively taking the place, as far as albedo is concerned, of the miles and miles of ice that had melted. With that negative feedback firmly in place and the orbital forcing factors favouring cooling, the downward cycle of Gaia’s temperature was assured and triggered the end of the interglacial period.

Unfortunately for all, these natural feedback factors have been destroyed by humans.  Over thousands of years, millions of trees have been chopped down to build armadas, commercial shipping, other implements of war, and shelter, as if they were useless for anything other than serving human greed.

“Apart from the profligate burning of fossil fuels and releasing the earth’s long-term carbon and energy storage depot that has taken millions of years to lay down, deforestation has been the main contributor to the increase in carbon dioxide in the atmosphere that has resulted in global warming. Energy capture and storage is absolutely essential for the survival of the planet, just as energy capture and storage is necessary for the survival of individual organisms.”  (Ho, M.W., 2008, pg. 81)

That’s why I emphasize that we are asking the wrong question, perhaps too late.  How will Gaia halt the present positive feedback loop?  Could it be the halting of the Atlantic current because of the rapid melting of northern ice?  If the Atlantic Ocean cools sufficiently to absorb enough CO2 to counteract the other positive feedbacks, then possibly global cooling will be triggered.

We have James Lovelock to thank for prompting research into how the Earth self-regulates its temperature.  How severely have we weakened Gaia’s ability to self-regulate effectively?  When and if Gaia achieves temperature stability, what kind of environment will we have to adjust to?  We simply cannot answer these two questions.  We don’t know and we don’t even know if we can ever know.  No sane gambler would play a game with such dismal odds.  Perhaps our leaders are insane.

References

Bunyard, P. Gaia, Climate and the Amazon. Independent Science Panel, 2005

Ho, M.W. “Oceans and Global Warming” Science and Society 31 (2006b): 11-13

Ho, M.W. “Global Warming is Happening” Science and Society 31 (2006c): 23-24

 

Global Warming – The Real Questions

Global Warming-The Real Questions

30 October, 2010

Sky McCain

 

“Man has lost the capacity to foresee and to forestall.  He will end by destroying the earth.”  Albert Schweitzer

“Climate is essentially an emergent property of life’s interaction with its immediate environment.”  Peter Bunyard

Just as cancer cells appear to have no awareness that they are committing suicide by killing their host, we are doing the same in respect to Gaia.  Although we have not found a cure for cancer, I do hope that Gaia will be able to find a cure for her form of cancer.

I’ve written this because I think that we are getting our climate change information in short bursts of controversial statements and media hype.

“A greater obstacle to public communication has arisen with the politicization of reporting of global warming, a perhaps inevitable consequence of the economic and social implications of efforts required to alter the course of human-made climate change. We have the impression that the effect of politicization on communication of the science is aggravated by the fact that much of the media is owned by or strongly influenced by special economic interests.

The task of alleviating the communication obstacle posed by politicization is formidable. The difficulty is compounded by continual attacks on the credibility of scientists. Polls indicate that the attacks have been effective in causing many members of the public to doubt the reality of global warming.” (Hansen, 2010)

The books that I am familiar with on the subject, although written by highly qualified scientists, such as James Hansen, in my estimation, miss important aspects that would serve to fill the gaps left by the media. In this chapter, which is not intended to tell the whole story, I hope to tie up a few loose strands and fill in some gaps usually ignored by the typical news item.

Anthropogenic carbon emissions have altered natural climate cycles for the last several thousand years in gradual increments.  As more and more of our species left hunter gathering groups and engaged in agriculture, our footprints deepened.  Forests were cleared for agriculture, housing, war machines, weapons and shipbuilding and many other purposes.Forestdestruction has persisted to this day and previously massive rainforests are threatened with total destruction with no hope of natural regeneration as the poor soil is depleted for export crops such as palm oil and soy beans.  Parallel with this destruction we are seeing a population increase that is totally out of control.  It seems that many people believe they have an inalienable right to have as many children as they choose without regard whatsoever for the Earth’s carrying capacity. There appears to be no serious discussions world wide directed toward seriously limiting human population.  Do we really have an energy crisis or just feeling the results of runaway population growth? The rise in population and the rise in temperature and atmospheric CO2 follow the same upward trend locked into a regenerative feedback cycle.  More people, more energy required, more energy expended, more CO2 not to mention unhealthy pollutants such as coal and diesel particulates. It is only since the Industrial Revolution and the development of monitoring technology that we have charts and diagrams that take temperature and CO2 out of the speculative realm to the home of certainty.  It is a certainty that the percentage of CO2 in the atmosphere is higher than we have seen in at least the last 600,000 years and that humans are the major cause.

There can be no doubt that anthropogenic greenhouse gas emissions, such as carbon dioxide, nitrous oxide and methane coupled with carbon particulates from brown coal fired electricity plants, contribute significantly to the greenhouse effect that has inhibited the sharp drop in global temperature seen in the past 3 or 4 interglacial periods.  The graphic picture of the global temperature pattern revealed by the Vostok and EPICA (Antarctic) earth-core samples, resembles an upside down icicle of warmth for around 8,000 years followed by around 110,000 years of cold (known as ice age). This upside down icicle depicts a very quick rise, very sharp peak and a very sharp fall of 2 to 3 degrees centigrade within less than 10,000 years before gradually settling into an ice age of nearly 10 degrees colder than the peak in around 10,000 more years.

Click here for a good visual of the Vostok findings

http://www.sahfos.ac.uk/climate%20encyclopaedia/images/img9.jpg

Click here for the antartic core samples visual

http://www.iceandclimate.nbi.ku.dk/images/images_research_sep_09/EPICA_with_current.PNG/

Humans are obviously an important part of nature, but since our ecological footprint has only been really significant for maybe 5 out of the last 10,000 years, I think we need to ask:  How did the Earth manage to sustain itself before we arrived on the scene?

Humans have dangerously stifled the planets natural means of regulating itself.  Gaia maintains a balance of forces using its life-forms and weathering process.  A few life-forms vary the amount of reflected sunlight, atmospheric CO2 and O2 content, whilst others sequester carbon. The ocean absorbs vast amounts of C02 but less and less as its temperature increases.

 

“Our study carried somewhat surprising results, showing that although the major impact of deforestation on precipitation is found in and near the deforested regions, it also has a strong influence on rainfall in the mid and even high latitudes,” said Roni Avissar, lead author of the study, published in the April 2005 issue of the Journal of Hydrometeorology.

http://news.mongabay.com/2005/0919-nasa.html

Moisture from forests is driven north in the Northern hemisphere and south in the Southern hemisphere.  In the Northern hemisphere, for instance, the Amazon rainforest has in the past, through wind channels, brought welcome moisture northward into the arid regions of Mexico and the US Southwest.  The continuous and ruinous wood chopping has already caused and/or enhanced severe draught in these areas.  Since the destruction continues, the American Southwest is doomed to a Sahara-like future and the wondrous mountain forests of the Chiricaua’s, for instance, will be lost, probably forever.  I suppose you would have had to actually walked within this region to grieve for them as they suffer.  I only hope I can live long enough to see them once again and hope they shall not die before I do.

Glacier melt re-mineralizes the soil after each ice age.  There is an amazing array of self-regulation behaviours too lengthy to delve into here. However, our present and recent past global growth economy is destroying these mechanisms.  It is obvious that the fate of the Earth and our fate intermingle, for actually, as was proposed in previous chapters, we are the Earth and we are thus committing suicide.

Being at the top of the food chain does not mean that we can now assume Earth management responsibilities.

Several knowledgeable and well-meaning authors have written about how we should be stewards of the Earth.  A first thought might accept this idea but with more reflection, I suggest that it is an extremely dangerous anthropocentric view of our presence.  Stewardship means “responsible planning and management of resources.”  This view is dangerous simply because we have not the wisdom and understanding to control and manage “nature”, the word we use for the aggregate physicality of Gaia. We must play in tune with Gaia’s rhythm or be selected out of the orchestra.

So it has been hot before

Climate change sceptics never tire of stating that the Earth was warmer millions of years ago. Be that as it may, we can forget about how the planet was 65 million years ago for the purpose of determining how our industrial pollution affects the present.  Yes, the planet was far warmer and there was over 10 times the amount of atmospheric CO2 around.  Why Gaia evolved to a series of cyclical glacial and interglacial periods is beyond determining with the instruments we have on hand and thus beyond our understanding. We must begin to appreciate that Gaia may just know what she is doing.  Perhaps the long glacial periods are necessary to counteract the increasing warming energy of the sun.  Again, we don't know.  Of course, if we continue to look upon the planet as a piece of machinery that can be rebuilt and controlled, then there is very little hope for us.  Gaia is a living organism which we live in and function as part of.  Living organisms are complex, and Gaia more complex still.  Regardless of how the planet was 65 million years ago or 2 billion years ago, we have to work with the present extent of the Earth's evolution.  Of course climate varies.  However, our advances in scientific instruments and our ability to read the past from soil and ice core samples reveal that our present interglacial warming half-cycle differs significantly from the past in two primary ways.  Atmospheric CO2 is steadily increasing and is higher than anything recorded in the last 750,000 years, and although we are overdue for a cooling trend, global temperature has increased around one degree centigrade in the last 140 years.  It is appropriate to question why average temperature is increasing, but a far more important and largely ignored question is: why have we not begun the steep cooling trend seen in the Vostok core sample graph shown above and in the graph shown here?


*Note:  Contrary to the note in red within the graphic, the high in the present interglacial period is NOT 2°C higher than the previous four periods shown in the chart.  Actually, it is only a little more than 1°C higher.

First some background information. I’m afraid that many people think that climate is completely unpredictable and entirely erratic.  This is not true.  Let me explain why.

Why the heating effect from the sun varies

The fundamental driver of atmospheric temperature is, of course, the sun.  The amount of heat felt on the Earth's surface depends primarily on the angle of incidence of sunlight [the angle between the incoming rays and the perpendicular to the surface].  Obviously, on a sphere, the greatest heating occurs where the surface faces the sun squarely and the rays strike perpendicularly.  The heating effect tails off as the angle of incidence increases and the sun's rays are spread out over a greater area.  When the sun stands only 30 degrees above the horizon, for instance, its rays are spread over double the area they would cover arriving perpendicularly, and thus the heat delivered is only one half.  Calculating the heating effect of the sun, called insolation, is much more complicated and variable due to three aspects of the Earth's orientation toward, and orbital path around, the sun.  The following explanation is limited to just an introduction.  Full understanding requires detailed diagrams and more precise information than I am able to impart in this chapter. The details are truly fascinating and are fully and clearly explained on the web at: http://en.wikipedia.org/wiki/Milankovitch_cycles.
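The fall-off of heating with the sun's angle is just the cosine rule for spreading rays over a larger area; a minimal sketch (the function name is mine, for illustration only):

```python
import math

def insolation_factor(incidence_deg):
    """Fraction of maximum surface heating for a given angle of incidence,
    measured between the incoming rays and the perpendicular to the surface.
    Intensity falls off as the cosine of this angle."""
    return math.cos(math.radians(incidence_deg))

# Rays striking perpendicularly deliver full intensity; at 60 degrees from
# the perpendicular (the sun 30 degrees above the horizon) the same energy
# is spread over twice the area, so heating is halved.
print(insolation_factor(0))   # 1.0
print(insolation_factor(60))  # approximately 0.5
```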

There are three major physical factors that affect insolation and it is only their synthesis which varies insolation enough to trigger the significant rise or fall of global temperature that results in the recent glacial/interglacial cycles with a period of around 120,000 years.

Obliquity

Basically, the equator only receives maximum energy transfer (when the sun's rays are perpendicular) twice a year because the earth is tilted.  It is this tilt – called obliquity – which causes what we call the seasons.  The northern hemisphere, for instance, receives sunlight at a smaller incidence angle during what we call summer and a greater one during the winter.  The tilt also varies slightly (from about 22.1 degrees to 24.5 degrees) in a cyclical manner that favours either warming or cooling in both hemispheres, but always opposite to each other.

This can be tricky to visualize.  You might push a stick through a small ball of something soft so that it sticks out at both ends.  Color one end of the stick blue, for instance, and name it the North Pole; give the other end, the South Pole, a different color.  Hold it in one hand and tilt it by, for example, 30 degrees.  Then hold a somewhat bigger ball in the other hand to represent the sun.  Tilt the stick so that the North Pole leans toward the sun.  You can see that at the beginning you have maximum summer at the North Pole and maximum winter at the South Pole.  Move the tilted ball 180 degrees around the sun and you'll note the opposite effect.  At the 90 degree and 270 degree points around the sun you can see the equinoxes, where the tilt has virtually no effect.

An interesting graphic on obliquity can be found at: http://en.wikipedia.org/wiki/Milankovitch_cycles

Eccentricity

The factor that amplifies both warmer and cooler conditions is the shape of the earth's orbit around the sun.  You may know that something that is "out of round" is often called eccentric.  The Earth's orbit is not only eccentric, but the amount of eccentricity varies over time in a cyclical manner.  During the thousands of years when the orbit is more circular, the intensity of the seasons is less because the Earth's distance from the sun changes little between the closest and farthest points of the orbit.  The opposite, of course, holds true when the orbit is more eccentric.

An interesting graphic on eccentricity can be found at: http://en.wikipedia.org/wiki/Milankovitch_cycles
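How much eccentricity matters follows from sunlight falling off as the inverse square of distance. A rough sketch, using illustrative round values for a near-circular and a more eccentric orbit (the function name and the particular eccentricities are my own choices, not figures from the text):

```python
def insolation_swing(e):
    """Ratio of sunlight received at closest approach (perihelion) to that
    at farthest (aphelion), for orbital eccentricity e. Distance ranges
    from (1 - e) to (1 + e) times the mean, and intensity goes as 1/r^2."""
    return ((1 + e) / (1 - e)) ** 2

# Near-circular orbit versus a markedly more eccentric one:
print(insolation_swing(0.017))  # roughly 1.07: seasons differ little
print(insolation_swing(0.058))  # roughly 1.26: perihelion seasons far stronger
```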

Precession

The last and most difficult factor to grasp is called precession.  My simple explanation for the purposes of this discussion is that, because the Earth is not a perfect sphere, it does not spin like a steady fast-moving top but wobbles slightly, the way a slowing top does.

The effect of this wobble on our climate is subtle.  You may need to study precession more deeply elsewhere to fully grasp the concept.  Wobble not only adds to or subtracts from the amount of tilt; it also means that the beginning and end of the seasons do not hold to the same point in the Earth's orbit over the course of the precession cycle.  The reason is that wobble shifts the orientation of the tilt, so that the equinox points drift relative to our solar-based calendar.  How might that affect insolation?  The start of winter at a given geographical position, for instance, makes a complete cycle approximately every 23,000 years. (Note that none of the Earth's orbital cycles are of equal duration over a period of many cycles.) If winter in the northern hemisphere occurs mostly over land while the Earth's orbital position is near the farthest point from the sun, then those winters are colder, which enhances the build-up of ice. If this is synchronous with a cooling effect from the other two factors mentioned above, then it is likely that the planet will remain in an ice age.

Again, the opposite may occur: an interglacial period in which warm winters are in synch with a warming effect from the other two factors.

An interesting graphic on precession can be found at: http://csep10.phys.utk.edu/astr161/lect/time/precession.html

 

It is how the orbital cycles line up with each other

Since the periods of these three cycles differ, they are constantly aiding and opposing one another's effect on insolation.  The Milankovitch theory is that when they move into a mutually aiding position they trigger the start of an interglacial period, and when the shortest cycle falls far enough out of step they set up a condition favourable for a sudden decrease in insolation and average global temperature.  The resultant cycle is around 120,000 years and correlates favourably with ice core samples.  Of course, as with most scientific theories, there are detractors who wish to trash the whole idea.  One only needs to consider the numbers of people, including scientists, who don't accept the theory of evolution.  One reason for this is that factors such as absorption of CO2 by trees, clouds formed because of transpiration by trees, variations in the amount of CO2 being held in the oceans, snow cover, the numbers of phytoplankton, the amount of dimethyl sulphide and more must figure into the equation.  With so many variables, all global warming theory is open to criticism, and those opposed will bring their ulterior motives into the picture.
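The drifting in and out of phase can be illustrated with a toy superposition (emphatically not a climate model): three unit cycles with the commonly quoted periods of roughly 41,000 years (tilt), 23,000 years (precession) and 100,000 years (eccentricity) only occasionally reinforce one another.

```python
import math

PERIODS = (41_000, 23_000, 100_000)  # years: tilt, precession, eccentricity

def combined(t_years):
    """Sum of three unit cycles; ranges from -3 (all opposing each other)
    to +3 (all reinforcing)."""
    return sum(math.cos(2 * math.pi * t_years / p) for p in PERIODS)

print(round(combined(0), 2))       # 3.0 -- all three cycles reinforcing
print(round(combined(50_000), 2))  # much weaker: the cycles partly cancel
```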

There is more to the story

As if there were not enough variables to contend with, I must add that cyclical variations in air and ocean currents are affected by orbital forcing explained above and in turn also affect the amount of heat absorbed.

Scientists, particularly Professor André Berger, have been able to plot the various orbital cycles and produce a table going back a million years.  This table also contains insolation figures.  Thankfully, the tables are available for download from the internet.  An analysis of the tables reveals two rather worrying facts which, owing to the complexity of the material I have tried to throw some light on above, have not been explained by the media.

One: minimum tilt, the event that favours triggering a cooling cycle, has passed its nadir, and tilt is now increasing.  Another such minimum is not due for around 40,000 years.

Two: eccentricity is now nearly as low as it has ever been in the last million years and will become even lower over the next few thousand years.  So the Earth's orbit is almost circular.  This means that the reinforcing coincidences of cycle overlaps that in the past have triggered colder winters and global cooling will be missing for many thousands of years.

Professor Berger proposes that we will not see another ice age for many thousands of years.

“Today’s comparatively warm climate has been the exception more than the rule during the last 500,000 years or more. If recent warm periods (or interglacials) are a guide, then we may soon slip into another glacial period. But Berger and Loutre argue in their Perspective that with or without human perturbations, the current warm climate may last another 50,000 years. The reason is a minimum in the eccentricity of Earth’s orbit around the Sun.”  (Berger, A. and Loutre, M., 2002, pgs 1287 – 1288)

Much of the climate change literature points out that conditions adverse to life-forms like humans – increased desertification, the flooding of coastal lowlands such as Miami Beach and Bangladesh – may occur regardless of our efforts to cut down on anthropogenic greenhouse gas emissions.  Perhaps those authors read Berger and Loutre above.

So how are humans connected with the variations of orbital factors?

You might well point out that these factors taken simply as stated operate regardless of the human actions.  Yes, but this is not the whole story.

Next we must briefly consider how much of the sun's energy is reflected back out into space (referred to as albedo), how surfaces vary – some materials not only absorb more than others but also hold heat longer – and how the content of the various layers of the atmosphere affects global temperature.  As most of you know, our atmosphere impedes the escape of reflected solar energy by absorbing it and radiating heat back down.  We have named this the greenhouse effect.  Without this property, the earth's temperature would be perhaps 30 degrees cooler and life as we know it would not be possible.  Our scientists, armed with sensitive instruments, have documented the heat-absorbing properties of the atmospheric gasses and aerosols, and especially of the carbon particulates from diesel and coal fuels.  Climate sceptics like to point out that water vapour is the strongest greenhouse gas.  So what?  We can't drink ocean saltwater, but fish live in it.  What's the point?
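The "30 degrees cooler" figure can be checked on the back of an envelope: balance the sunlight an airless Earth absorbs against what it radiates as a black body (the Stefan-Boltzmann law). A sketch, with standard round values:

```python
# Temperature of an Earth with no greenhouse effect, balancing absorbed
# sunlight against black-body radiation.
SOLAR_CONSTANT = 1361.0  # W/m^2 arriving at Earth's distance from the sun
ALBEDO = 0.3             # fraction of sunlight reflected straight back
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W/m^2/K^4

absorbed = SOLAR_CONSTANT * (1 - ALBEDO) / 4  # averaged over the whole sphere
t_no_greenhouse = (absorbed / SIGMA) ** 0.25

print(round(t_no_greenhouse))        # about 255 K, i.e. roughly -18 C
print(round(288 - t_no_greenhouse))  # about 33 degrees below today's mean
```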

To put it simply, we are upsetting the balancing and regulating capabilities of the living Earth by emitting such a large volume of gasses, aerosols and carbon particulates that their heat absorption capabilities have increased the greenhouse warming effect dramatically.  Regardless of the arguments of climate sceptics, there are simply no factors other than anthropogenic ones that would account for the resulting rise in CO2 and average global temperature.  A graph of both CO2 and global temperature over the last 600,000 years reveals that they are virtually synchronous.  Our media has had a field day describing the possible results of excessive global warming.  Enough has already been said, so I'll not go into that subject.  I suggest that overall the IPCC has underplayed the tune and understated the possibility of runaway global warming.

The Real Questions

The media has drawn our attention away from what I suggest is the real question.  So, we have global warming and that is a problem.  We have identified several factors above that are causing the temperature to rise.  However, I think we are blind to the most significant question:

What, given the factors we have identified and considering Gaia’s ability to self regulate over time, does Gaia have available to reverse the warming if we continue with our industrial expansion?

As we saw above, the Milankovitch cycles favour cooling to their maximum extent, yet our interglacial period has not come to an end.  I support Dr. James Hansen's suggestion that we may well skip the next ice age.  If we do, will the Milankovitch cooling factors overcome the warming factors that are still present 120,000 years from now? If not, then how hot will the Earth be?

So what? You may still feel that since the Earth has been a lot warmer in the past there is nothing to be concerned about.  Yes, but don't forget that the sun is steadily putting out more heat.  Dr. James Lovelock's Gaia Theory was prompted by his realisation of how much cooler the earth had stayed relative to the increased strength of the sun's energy over the last 2 to 3 billion years. Lovelock and Harding have listed and explained the various positive and negative feedback mechanisms that come to the forefront as Gaia self-regulates.  Obviously, Gaia's efforts have brought us to where we are now, with incredible diversity and growth as polar ice diminished.  Not only have humans caused a major drop in diversity and an increase in extinction, but our technology and our economic system of expansion based on Earth's resources have set up atmospheric and oceanic conditions outside the parameters we have studied in the last 600,000 years.  Although we certainly cannot answer the question above, we can attempt to discover what resources were available, let's say, 120,000 or so years ago at the end of the last interglacial period.

At the beginning of each interglacial period, as the ice receded from the land, vast numbers of trees spread north and performed a carbon-sequestering service.  They also released water vapor, which stimulated cloud cover that increased the albedo, effectively taking the place, as far as albedo is concerned, of the miles and miles of ice that had melted. With that negative feedback firmly in place and the orbital forcing factors favouring cooling, the downward cycle of Gaia's temperature was assured, and the end of the interglacial period was triggered.

Unfortunately for all, these natural feedback factors have been destroyed by humans.  Millions of trees over thousands of years have been chopped down to build armadas and commercial shipping, other implements of war, and shelter for humans, as if they were useless for anything other than serving human greed.

“Apart from the profligate burning of fossil fuels and releasing the earth’s long-term carbon and energy storage depot that has taken millions of years to lay down, deforestation has been the main contributor to the increase in carbon dioxide in the atmosphere that has resulted in global warming.1 Energy capture and storage is absolutely essential for the survival of the planet, just as energy capture and storage is necessary for the survival of individual organisms.”  (Ho, M. W., 2008, pg. 81)

That’s why I emphasize that we are asking the wrong question, perhaps too late.  How will Gaia halt the present positive feedback loop?  Could it be through the halting of the Atlantic Current because of the rapid melting of northern ice?  If the Atlantic Ocean cools sufficiently to absorb enough CO2 to counteract the other positive feedbacks, then possibly global cooling will be triggered.

We have James Lovelock to thank for prompting research into how the Earth self-regulates its temperature.  How severely have we weakened Gaia’s ability to self-regulate effectively?  When and if Gaia achieves temperature stability, what kind of environment will we have to adjust to?  We simply cannot answer these two questions.  We don’t know and we don’t even know if we can ever know.  No sane gambler would play a game with such dismal odds.  Perhaps our leaders are insane.

1 Ho, M.W. “Oceans and Global Warming” Science and Society 31 (2006b): 11-13

Ho, M.W. “Global Warming is Happening” Science and Society 31 (2006c): 23-24

Bunyard, P. Gaia, Climate and the Amazon, 2005, Independent Science Panel

 

Quantum Theory

By Chris Clarke

Overview

Quantum Theory (often called Quantum Mechanics or Quantum Physics – the terms are used differently by different authors) is an extension of physics in order to cover the behaviour of microscopic objects. Physics as it was before Quantum Theory is called Classical Physics. In some versions Quantum Theory includes Classical Physics as a special case. From the start the theory was subject to controversy and developed into a wealth of different forms, mostly agreeing at the level of practical calculation but disagreeing wildly as to the interpretation. The question “what is quantum theory?” is therefore a difficult one.

Both Classical and Quantum Physics describe how the observable properties of a system change with time. The “system” (which here means “thing”) can be anything from an atom to the universe; its properties are quantities like position, momentum, energy, the internal arrangements of its parts and so on.

In Classical Physics there is a set of properties for any given system (namely the positions and velocities of all its parts) which completely determines its time-development and the properties at any later time. In Quantum Physics there is no such complete set of properties. Instead at any given time there are many different possible sets of properties, any one of which sets can be observed; but it is not possible to observe all the properties simultaneously. For instance, position and velocity cannot be observed simultaneously; the first gives a particle-picture, the second a wave-picture. The existence of different possible sets of properties is called complementarity.

Properties at a later time cannot (except in special circumstances) be determined by observing properties at an earlier time. Only their probabilities are fixed by the earlier observation. This indeterminism is the basis of the continual openness of the universe to new possibilities. When combined with complementarity it may provide the notion of free creativity in the universe (see Quantum Logic below).

The term observed means different things in different versions: e.g. “manifested,” “recorded by a macroscopic instrument,” “brought to (human?) consciousness” and so on. The last possibility links quantum theory with theories of mind. At any given time there is a well defined specification of the probability of observing any given property. This collection of probabilities is fixed by (or in some versions is identical with) the quantum state, but this state is not itself observable. Interpretations differ as to whether the state is real or a mathematical abstraction, with profound consequences for the whole notion of reality in physics.

The earliest interpretations, dating from workers in Copenhagen, used a two-tier world: a small system obeying non-Classical Physics and an observing laboratory obeying Classical Physics. The many pre-1965 theories tend to call themselves “The Copenhagen Interpretation.” Later interpretations tried to achieve a more unified view. This historical development introduced a succession of alternative structures: the collapse of the state, many worlds, environmental diffusion and so on. Although the early version lives on as a practical rule for the working physicist, for those researching into the foundations of the subject these early versions have been superseded by an interpretation called the histories interpretation, which makes far fewer metaphysical assumptions.

Systems with infinitely many degrees of freedom (in particular, fields such as the electromagnetic field) are described by quantum field theory whose states can all be constructed out of a special state of the field in question called the vacuum for that field. Thus the vacuum is not some special new entity that brings into being electrons, protons and so on. Rather the electron vacuum is a special state of the electron quantum field, the proton vacuum a special state of the proton quantum field, and so on. One can combine these together to produce a single vacuum, which is a special state of the combined particle fields. The vacuum has zero energy (except in Dirac’s theory which enjoyed brief popularity).

Some basic concepts

Bearing in mind that many of these concepts belong to earlier versions of the theory that are no longer regarded as essential, some of the central ideas are as follows.

Quantum mechanics is usually regarded as a more general type of mechanics than traditional classical physics. The different system of mechanics is required (at least) when the size of a typical action in a system (the product of a typical energy and a typical time) becomes so small that it is comparable to a fundamental constant of nature called Planck’s constant, which has a very small value when measured in the usual laboratory-scale units. Traditional (Copenhagen) Quantum mechanics thinks of processes as having three phases: Preparation, Evolution and Observation.

In Preparation a microscopic system (such as an electron) is prepared (for instance, by being emitted from a heated wire and then projected into a vacuum by appropriate electric fields). Evolution then takes place while the system is left undisturbed – for instance, the electron may pass into an apparatus where a numerical result will be measured. Evolution is a deterministic process, governed by a precise equation called Schrödinger’s equation. Finally, observation is the actual measurement of the numerical result, which is indeterministic. If the system is not destroyed by the measurement (destruction of a photon, for instance, arising when it is absorbed in a photographic emulsion) then after the measurement the system can go to a further apparatus, with the first measurement serving as a preparation for the second.
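The three phases can be sketched numerically for the simplest possible system, a two-state one. The choice of Hamiltonian here (H proportional to the Pauli matrix sigma-x, with units where Planck's reduced constant is 1) is my own illustrative assumption, not taken from the text:

```python
import math

# Preparation: the system starts in a definite state, here |0> = (1, 0).
psi = (1 + 0j, 0 + 0j)

# Evolution: Schrödinger's equation gives deterministic unitary evolution.
# For the illustrative Hamiltonian H = sigma_x, exp(-iHt) acts as
# (a, b) -> (cos t * a - i sin t * b, -i sin t * a + cos t * b).
def evolve(state, t):
    a, b = state
    return (math.cos(t) * a - 1j * math.sin(t) * b,
            -1j * math.sin(t) * a + math.cos(t) * b)

# Observation: the measurement outcome is indeterministic; the theory fixes
# only the probabilities, the squared amplitudes of the evolved state.
a, b = evolve(psi, math.pi / 4)
probs = (abs(a) ** 2, abs(b) ** 2)
print(probs)  # equal chances, 0.5 each, of the two possible outcomes
```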

By taking a wider view of what is meant by a system in quantum mechanics, one can regard every measurement (or observation) as a preparation of some new system, so that the distinction between preparation and observation collapses, and we have a single event that we can call manifestation. The basic concept of a process is then a sequence of manifestations, with evolution taking place between each one. Such a sequence is called a history. The deterministic evolution and the indeterministic observation are combined into a single principle that prescribes the probabilities for all the possible histories that might occur.

Quantum logic

At an even more basic level, quantum theory is just a very general way of talking about processes. At this level, before one introduces the particular laws of particular processes, the theory is called Quantum logic. A “logic” in this sense is a mathematical structure consisting, at least, of a collection of entities called propositions and a number of operations which combine propositions so as to produce new ones, including the operations “and”, “or” and “not”. Propositions correspond to quantum measurements that have a yes/no answer, such as “is the electron inside this box?” The rules of the operations on propositions are the same as for conventional logic (known as Boolean Logic) except that the distributive law “{A and (B or C)} is equivalent to {(A and B) or (A and C)}” is replaced by a rather technical weaker condition called orthomodularity. This makes it possible for a quantum logic to contain a number of different Boolean Logics which are incompatible with each other. These can be interpreted as different “frames of reference”, such as the wave representation or the particle representation of quantum states. Quantum logic makes sense of “Both/and” thinking: an electron is both a wave and a particle – but not within the same frame of reference.
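The failure of the distributive law can be seen in a toy model (my own construction, not the full Hilbert-space lattice): take propositions to be subspaces of a plane – the zero subspace, the whole plane, or a line through the origin – as arises for a single polarization measurement, with "and" as intersection and "or" as span.

```python
# Propositions modelled as subspaces of the plane: the zero subspace,
# the whole plane, or a line through the origin at a given angle.
ZERO, PLANE = "zero", "plane"

def line(angle_deg):
    return ("line", angle_deg % 180)

def meet(p, q):          # "and": the intersection of the two subspaces
    if p == q: return p
    if p == PLANE: return q
    if q == PLANE: return p
    return ZERO          # two distinct lines meet only at the origin

def join(p, q):          # "or": the smallest subspace containing both
    if p == q: return p
    if p == ZERO: return q
    if q == ZERO: return p
    return PLANE         # two distinct lines together span the whole plane

A, B, C = line(0), line(45), line(135)

lhs = meet(A, join(B, C))            # A and (B or C)
rhs = join(meet(A, B), meet(A, C))   # (A and B) or (A and C)
print(lhs)  # ('line', 0): B or C is the whole plane, so the meet is A itself
print(rhs)  # 'zero': A meets B, and A meets C, only at the origin
# lhs != rhs -- the distributive law fails for these propositions
```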

The way we actually think, particularly in our more creative moments, is better described by quantum logic than by classical logic, because we can creatively move from one frame of reference to another, devising new ways of thinking about things. Quantum logic represents a creative approach to a creative universe, while classical logic represents a rigid approach to a deterministic universe.

Connectivity and the Aspect experiment

Perhaps the most important aspect of quantum theory is that quantum states are usually not states of single particles, but states of systems of many particles, states that cannot be reduced to statements about the individual particles.  When this happens the particles are said to be entangled.  The practical consequence is that the unpredictable responses of the various particles are linked to each other, even if they are widely separated in space, giving a fundamental connectivity to the universe.

An experiment, performed by Alain Aspect, verified a particular case of this discussed in the early days of the theory by Einstein, Podolsky and Rosen. It demonstrates a kind of distant connectivity between widely separated photons (particles of light). The two photons are created by passing a single photon into a specially synthesized crystal that splits it into two daughter photons, each with half the energy of the first. The two daughter photons are then allowed to travel apart (recently the distance has been taken to 40 km) and simultaneously each is passed through a prism which measures its polarization in a particular direction. The two photons respond at random to the measurements, and statistics are collected which show that there are correlations between the responses of the two photons. The key to the analysis is an argument, due to John Bell, which demonstrates that the particular nature of the correlations, and the way they vary with the angle of the prisms measuring the polarization, implies that it is impossible for the correlations to arise from any individual properties of the two photons, whatever these properties might be: the photons have to react in a coordinated, connected way. This crucial experiment indicates that the universe is connected at a much deeper level than previously thought.


Figure 1

In Figure 1, from one of Aspect’s papers, the source of the two photons is shown; the photons move in opposite directions until they are a distance L apart, when they enter the measurement part of the experiment. Here two high-speed light “switches”, C1 and C2, direct each photon to one of two possible polarization measuring devices (here represented as a polaroid filter, rather than a prism, followed by a photomultiplier tube PM which detects the photons). The “switches” are activated after the photons set off, so that there is no possibility of the photons being influenced as they leave by the nature of the filters that they will encounter. The filters are pre-set at four different angles, and statistics of the number of “coincidences” – events where two of the photomultipliers fire simultaneously – are collected for a variety of combinations of angles.

The physicist John Bell introduced an argument demonstrating that the pattern of coincidences that was observed could not possibly be produced if each photon responded independently: they had to be connected in some way. Bell’s argument is quite mathematical, but there is a simpler version due to Mermin which can be understood with a little effort. In this version the directions of the prisms/filters (determining the direction of the polarization that a photon has if it passes straight through the prism) are restricted to just three possibilities, symmetrically arranged. (The effect of the prism is unchanged by a 180-degree rotation, so the three distinct orientations of the prisms are defined by three positions of a line drawn through the prism, as shown in Figure 2 below.)


Figure 2 Positions of either prism   Figure 3  Relative angles from A to B

If we denote the two prisms by A and B, then the combined effect of the prisms depends only on the relative angle from the line representing A to that representing B, which is either 0, 60 or 120 degrees (see Figure 3).

The probabilities for what A does depend on what B does and on the relative angle. The statistics that are observed are as follows:

Probability of B transmitting when A is transmitting:

  Relative angle    B transmits: Yes    No
  0                 1                   0
  60                1/4                 3/4
  120               1/4                 3/4

Probability of B transmitting when A is not transmitting:

  Relative angle    B transmits: Yes    No
  0                 0                   1
  60                3/4                 1/4
  120               3/4                 1/4
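These probabilities are not arbitrary: for entangled photons, quantum theory predicts that the chance of B's result matching A's is the squared cosine of the relative angle between the prisms. A quick check (the function name is my own):

```python
import math

# Probability that B's result agrees with A's, for entangled photons
# measured with prisms at the given relative angle: cos^2 of that angle.
def agree_probability(relative_angle_deg):
    return math.cos(math.radians(relative_angle_deg)) ** 2

for angle in (0, 60, 120):
    print(angle, agree_probability(angle))
# relative angles 0, 60 and 120 give 1, 1/4 and 1/4, matching the tables
```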

The fact that the two tables are different shows that there is a correlation between whether the photons go through A and whether they go through B. But this is not necessarily evidence of connection between the two: it could be that they have a coordinated “plan” of response laid down at the time they are emitted from the common source, with the response depending on the possible angles of the prisms encountered by the photons. The plan can be randomly chosen in advance, but there can be no variation from it once the photons have set off, otherwise one would not get a guaranteed agreement in the case where the prisms are in line.

The key step in the argument now follows. There is a result called Bell’s theorem showing that no such plan, in which each photon responds separately once it has left, can work. I am going to prove the special case of this result as it applies to the set-up here. Because of this result we must assume that the photons are responding in a coordinated way to the conditions of the prisms.

The proof is straightforward but does require a little care.

Consider one example of a possible plan.  In any plan, A and B have to respond in the same way to each angle of a prism so as to get agreement when the relative angle is zero (alignment), so a possible plan of response might be:

  Angle (absolute)   0     60    120
  Response           Yes   No    No

Can we show that every plan has some feature that is not reflected in what is observed?  Such a feature can be found by looking at the pattern of agreements (Ag) and disagreements (Dis) on this plan:

  Angle of A \ Angle of B    0      60     120
  0                          Ag     Dis    Dis
  60                         Dis    Ag     Ag
  120                        Dis    Ag     Ag

The proportion of agreements to disagreements is 5:4.

It turns out that every plan gives more agreements than disagreements.
How does this compare with observation? The previous tables show that the proportions are actually

  Relative angle    Agree    Disagree
  0                 1        0
  60                1/4      3/4
  120               1/4      3/4
The average of the proportions in the “Agree” column is 1/2, as is that in the “disagree” column. So if we carry out a series of trials where the angles are set at random each time, then we will on average get equal proportions of disagreements and agreements, contradicting the possibility that the response is happening according to a plan.

This proves (the special case of) Bell’s theorem.
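Since the tables above supply all the numbers, the argument can also be checked mechanically. The following Python sketch (a toy check, not part of the original argument) enumerates every possible plan, that is, every Yes/No assignment to the three angles, and compares the worst-case agreement fraction over the nine settings with the quantum average computed from cos² of the relative angle:

```python
import itertools
import math

ANGLES = (0, 60, 120)

def agreement_fraction(plan):
    """Fraction of the nine (angle A, angle B) settings on which a fixed
    response plan makes the two photons give the same answer."""
    agree = sum(plan[a] == plan[b] for a in ANGLES for b in ANGLES)
    return agree / 9

# Every possible plan assigns a Yes/No response to each of the three angles.
plans = [dict(zip(ANGLES, responses))
         for responses in itertools.product((True, False), repeat=3)]
worst = min(agreement_fraction(p) for p in plans)
print(worst)      # 5/9: every plan agrees on more settings than it disagrees

# The quantum prediction: agreement probability cos²(relative angle),
# averaged over the nine settings.
quantum = sum(math.cos(math.radians(a - b)) ** 2
              for a in ANGLES for b in ANGLES) / 9
print(quantum)    # approximately 0.5: agreements and disagreements balance
```

The enumeration confirms the text: no plan can get the agreement fraction below 5/9, yet the observed average is 1/2.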

We can think of each photon as responding to a context that is non-local: the context includes the other photon and the other prism. Randomness is thus context-bound.

What we see is that, in the area that is most mechanistic – particle physics – we get an affirmation of what we suspect from our everyday experience anyway: that the universe is fundamentally connected across space, that context is non-local; and that randomness is bound up with context in the way that we recognise from the concept of freedom.

 

 

Quantum Entanglement

by Chris Clarke

The explanation for everything?

In 1972 Freedman and Clauser succeeded for the first time in preparing two particles which exhibited a strange condition, predicted by quantum theory, called ‘entanglement’. The condition had been discussed theoretically by Einstein and co-workers in 1935, and at that time they argued that, because such a thing was obviously impossible, there must be something wrong with quantum theory. Freedman and Clauser’s work, and subsequent more detailed experiments that fully confirmed the prediction of quantum theory, triggered a tide of speculation. There seemed no limit to the mysteries that might now be explained using this new phenomenon: telepathy, consciousness, healing … all were examined. Now that the production of entangled particles has become almost routine technology, it is perhaps a good time to take stock of what we have learnt. I shall indicate a range of possible positions, which I shall characterise as the sceptical, the liberal and the cosmological.

But first, what is entanglement? All the observations have been made on very simple microscopic particles, so I must ask the majority of readers, who are not normally interested in such things, to bear with me while I discuss these physics experiments. I will widen the discussion before long. Briefly, the essence of the idea is the production of pairs of particles which, though separated by a large distance, show correlations in their behaviour that are inexplicable on a basis of old (non-quantum) physics. To give more detail, let me describe a typical experiment, which uses the particles that make up light (particles called photons).

Entanglement experiments use a property of light called polarisation, to do with the direction in which the fields that constitute light are vibrating. (Polaroid sunglasses filter out light with a particular direction of polarisation.) It is possible, using a special optical material, to split a single photon into two so-called daughter photons. These two are allowed to travel apart (by more than 10km in some experiments) and then the directions of polarisation of the two photons are measured simultaneously. Two points emerge from analysing the results:

– The direction of the polarisation of either particle is not fully determined before the measurement takes place; it must involve a partly random response of the particle to the measuring apparatus.
– There is a correlation between the results of the measurements on the two particles. For example (and depending on the arrangement of the measuring apparatus) it might be that if particle A is measured to have its polarisation pointing vertically, it is then more likely that the same result will be obtained for particle B.
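These two points can be imitated by a toy sampler. The sketch below is illustrative only and does not model the physics: it simply draws A’s outcome at random (point 1) and forces agreement with probability cos² of the relative angle (point 2), the rule consistent with the polarisation statistics described here. The function name and angles are choices for the example, not anything from the experiments themselves.

```python
import math
import random

def measure_pair(angle_a, angle_b, rng):
    """Sample one joint polarisation measurement on a pair.
    Each outcome alone is a fair coin flip; jointly the pair agrees
    with probability cos^2(relative angle)."""
    a = rng.random() < 0.5                     # A's result: purely random
    p_agree = math.cos(math.radians(angle_a - angle_b)) ** 2
    b = a if rng.random() < p_agree else (not a)
    return a, b

rng = random.Random(1)
results = [measure_pair(0, 30, rng) for _ in range(100_000)]
agree_rate = sum(a == b for a, b in results) / len(results)
print(agree_rate)   # close to cos^2(30 degrees) = 0.75
```

Each individual record looks like coin-tossing; only when the two records are compared does the correlation appear.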

Could it be that the particles are, as it were, pre-programmed when they are split to respond in this way? (For example, it might be that the split always results either in both particles vibrating horizontally, or both vibrating vertically.) A detailed argument by the renowned theorist of foundational physics John Bell demonstrated that no ‘pre-programming’ could explain the observed results. In other words, the particles were responding spontaneously, but in an interconnected manner.

There is a huge literature expanding the sketch I have just given into detailed arguments: exploring possible loopholes, closing them, finding new ones … I will not go into these here. My concern is rather with the question, if the ideas just presented are taken at face value what are their implications? Quantum theory presents a very precise, and by now almost universally accepted, mathematical account of what is happening, in which entanglement corresponds to a particular mathematical form for the expression describing the pair of particles. What are the possible translations of this mathematics into words and pictures?

Let me describe some of the key assertions of the conventional verbal translation of the quantum mechanical account.

1. The properties of particles, properties which are the objects of experimental investigation, do not exist independently of the observation. Rather, they arise in the process of the interaction between the particle and the experimental apparatus. It is even misleading to think of them as ‘properties of particles’ at all: they are aspects of an event of measurement.

2. Spatial separation, and to some extent separation in time, are irrelevant to the correlations produced by entanglement. Spatial and temporal relations do not enter into the calculations at all; the particles could be anywhere.

3. Entanglement is the general rule; any interaction at any time in the past will entangle particles, so that very special conditions have to hold in order to produce particles that are not entangled. The achievement of Clauser and others actually lay not in the mere fact of producing entanglement, but in producing an entanglement that was of such a form that it could be examined experimentally. This point will be crucial below when I come to discuss the wider implications of this work.

Points 1 and 2 here carry an important philosophical message that challenges how we normally think about the universe, which is in terms of definite and separated things located in space. Point 1 undermines the definiteness of ‘things’. It is not saying that physical entities are merely figments of our cultural assumptions (though this may indeed be the case): physicists behave as if they are dealing with what might in some sense be called ‘reality’. But this inverted-commas-reality is what the philosopher of physics D’Espagnat called veiled. What we experience, either in the artificial setting of a laboratory or in normal moment-to-moment life, is quite distinct from what physicists regard as the foundation of the material universe, namely the abstract entities called particles and fields. And I should add that, while the connection between particles and experience is clear in the case of the laboratory, it remains in many respects obscure and controversial at the level of ordinary life.

It is point 3 that is vital for the wider implications of this subject. On the face of it, it would seem, for example, that we could use pairs of entangled particles for an instant communication system that operated independently of distance – something that would be highly reminiscent of telepathy. (Some authors have even written of one particle ‘instantaneously changing its state’ when the other is measured, for which there is no justification at all.) More generally, it suggests that the world, rather than being a collection of isolated particles pushing each other around, is more like an intricate web of subtle interconnections. But how far can we take this picture?

Physics is now pushing the idea of a web of quantum connections very far indeed. A significant new branch of what is sometimes called ‘Quantum Information Theory’ has emerged, covering the ways in which information can be transmitted through a mixture of entangled states and classical information transfer. The whole subject has moved out of the realm of speculation and is now supported by increasingly elaborate laboratory experiments using chains of entangled pairs of particles that verify the theory in great detail. One point that emerges from this work is that information cannot be transmitted by entangled states alone because the correlations that are observed are not ones that the user can control in order to insert information; rather, the spontaneity of the particles’ responses is an essential part of the account. In other words, quantum communication always has to involve an ordinary communication channel (such as a telephone) and a quantum channel (such as entangled particles) working in tandem. So instantaneous communication (telepathy, in the sense in which it is usually conceptualised) is impossible by this means. On the other hand empathy, in the sense of remote beings producing synchronistically related behaviour, is a possibility.
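Why entangled states alone cannot carry a message can be seen in miniature. In any description in which each outcome alone is a fair coin flip and agreement occurs with probability cos² of the relative angle, B’s local statistics are independent of whatever angle A chooses. The short sketch below (illustrative arithmetic only; the helper name is invented for this example) makes that explicit:

```python
import math

def b_marginal(angle_a, angle_b):
    """Probability that B's photon answers 'yes', given that A's answer
    is a fair coin flip and the pair agrees with probability
    cos^2(relative angle)."""
    p_agree = math.cos(math.radians(angle_a - angle_b)) ** 2
    # B says yes either by agreeing with A's 'yes' or disagreeing with A's 'no'.
    return 0.5 * p_agree + 0.5 * (1 - p_agree)

# Whatever angle A chooses, B's local statistics are unchanged, so A cannot
# encode a message in the choice of setting; the correlation only shows up
# when the two records are brought together over an ordinary channel.
print([b_marginal(a, 0) for a in (0, 30, 60, 90)])   # 1/2 for every setting
```

The expression collapses to 1/2 for every setting, which is the no-signalling point made in the paragraph above.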

When it comes, however, to the role of entanglement in ordinary life, outside the laboratory, the situation starts to look a lot less clear. Let me put the sceptical position first. If the entanglement that is present everywhere is actually to make a difference, then the systems and organisms of the natural world need to use it in some way. The discussions of quantum information theory assume that one can prepare a pair of entangled particles, put them in two boxes, and hand one to each of two observers who take them away for later communication. But what sort of ‘box’ does a natural organism have that can preserve a quantum state in pristine condition? The laboratory experiments using photons cannot be a precise replica of what happens in a living organism: the only known way to ‘store’ a photon in a living system is to absorb it into the electromagnetic structure of a molecule, which is such a turbulent system that the details of the state would rapidly be lost. The only known ‘box’ is the microtubule, studied by Stuart Hameroff, that I will describe shortly.

To help us understand the problems that weigh against entanglement being effective in living organisms, I shall describe the way in which almost all particles are affected by a phenomenon, heavily researched over the last 20 years, called decoherence. This is concerned with a ‘hidden property’ of particles, namely phase. This is easy to understand in the case of a wave on water travelling past a buoy, when the buoy moves regularly up and down (with an additional regular oscillation in the direction of the wave). Here the phase of the wave at this place and at a given time is the point that the buoy is currently at in its cycle. All particles are thought of as associated with a similar wave-aspect and they carry a phase, but in general this is behind D’Espagnat’s veil: there is no ‘buoy’ that can reveal it and it is deduced only indirectly, through phenomena (in particular, interference) that are analogous to those shown by waves.

Decoherence theory is about the way that the environment interacts with entangled particles so as to affect the relation of their phases. It turns out that the nature of the correlations between measurements on entangled particles is completely dependent on this phase relation. But the phases are exquisitely sensitive to perturbations by the environment, and so the influences of this can completely scramble the correlations produced by entanglement. All that is required for this to happen is that the particle states involved in the entanglement are sufficiently different for them to interact with a perturbation in different ways. If we are considering an entanglement involving the position of a large body or even a large molecule, then the slightest perturbing factor will produce enormous effects on the phase, leading to very rapid decoherence indeed.
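The scrambling effect can be caricatured numerically. In the sketch below (a toy model, not a calculation from quantum mechanics proper) the phase is kicked by Gaussian noise and the surviving interference term, the average of cos φ, is estimated by sampling. As the noise grows the average falls off rapidly, in line with the theoretical value exp(-σ²/2):

```python
import math
import random

def visibility(phase_noise, trials=200_000, seed=2):
    """Average interference term <cos(phi)> when the environment kicks
    the phase phi with Gaussian noise of the given strength (radians)."""
    rng = random.Random(seed)
    return sum(math.cos(rng.gauss(0.0, phase_noise))
               for _ in range(trials)) / trials

# As the perturbation grows, the phase-dependent correlations wash out:
# the average interference term decays roughly as exp(-noise^2 / 2).
for noise in (0.0, 0.5, 2.0):
    print(noise, visibility(noise))
```

With no noise the interference is perfect; modest noise already reduces it; strong noise all but erases it, which is the decoherence story in one line of arithmetic.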

So, to summarise the sceptic’s case: if we consider two particles in different places then their states will in general be entangled. But, with the exception of the particular behaviour exemplified by the polarisation of photons in the laboratory, the way in which the particles are entangled, and hence the nature of any correlation between them, will be completely random, so that in practice their responses will be independent. If we consider, instead of single particles, larger systems of many particles, then the situation becomes even worse because of their greater interaction with the environment. Thus entanglement can have no effect outside the laboratory.

I now want to suggest that this argument, while defining important limits to what can happen, is not completely conclusive. Historically, we have constantly found nature to surpass our own ingenuity in evolving its own subtle ways of implementing effects which we have to implement by brute force. If, as I think, there is circumstantial evidence for entanglement playing a role in organisms, then there is a case for searching biological systems to discover how they might do it, even when we cannot imagine this in advance. So let us move on to discuss some areas where more solid evidence for the role of entanglement in living systems might be found, moving on from the sceptic’s position to what might be called the liberal position.

This position proposes that entanglement – or at least something very like it – may play a role within an organism, as part of its internal communication and control system. In this context, Hameroff has drawn attention to the possible role of microtubules: tubes forming a ‘microskeleton’ inside each living cell, made of a regular arrangement of protein molecules. Because of their small size, and the way they are shielded by the structure of the surrounding water, these tubes could support internal vibrations whose states were well protected from decoherence by the environment. Microtubules might thus form a good ‘box’ for storing quantum states. For this to be effective, however, the microtubules need to communicate with each other by conventional means: both in order to set up states with a known entanglement (recall the need for a classical communication system alongside the quantum one) and also to keep refreshing the entanglement as decoherence penetrates the tubules and randomises the correlations. Hameroff, in collaboration with Roger Penrose, achieves this classical communication through a novel scheme of physics in which an aspect of gravitation, yet to be worked out in full detail, intervenes so as to realise correlated manifestations at separated microtubules, in a process that is closely linked to consciousness, with co-ordination happening via ‘gap junctions’ in the microtubules. The many technical details of all this make it a very uncertain area: in 2001, for instance, Guldenagel and co-workers produced a mouse with no gap junctions but apparently normal behaviour; calculations of the length of time that the entanglement can survive decoherence are difficult and contested; Penrose’s theory is still at a very speculative stage, and it is unclear how crucial it is to the co-ordination of the microtubules; and, at the end of all this, it is not all that clear just what the microtubules are supposed to do once they have got their act together.
The liberal position leads to lots of interesting scientific research, but in terms of the big questions of life it is not in the top league.

So the sceptical and liberal positions lead to a rather provocative situation. On the one hand, entanglement seems to be consonant with some of our deepest experiences: of the connectivity of the world, of the reality of synchronicity. Yet on the other hand it is hard to see how entanglement can act so as actually to deliver the goods. Are we somehow looking at things in the wrong way?

Before trying to answer this question, let me link it with another issue in which quantum theory promises much but somehow fails to deliver, namely that of the nature of mind. Many writers – including, as I have mentioned, Penrose – have associated quantum processes with mind (sometimes using in addition the word ‘consciousness’). We like to believe that our minds make decisions using some approximation to the formal structure of logic first described by Aristotle. But in reality, and fortunately, this is not so: the power of our thought actually lies in a process that goes significantly beyond that logic, namely our ability to hold many different conceptual frameworks conjecturally together until a creative resolution emerges. And this is essentially the definition of quantum logic (the logic governing quantum systems), rather than Aristotelian logic. Is it just a coincidence that minute particles and higher mammals (let us not be too anthropocentric) share the same perverse logic? Or could it be that, as Gregory Bateson argued – with a rather careful information-theoretic definition of ‘mind’ – all natural systems exhibit mind; and, moreover, the effect of mind is described by quantum logic?

The difficulty with linking quantum theory with these very suggestive correspondences lies in finding a way in which quantum effects can move from the microscopic, where we know they reign supreme, to the larger scale of living organisms. But could it be that this ‘bottom-up’ approach (building the large out of smaller sub-units) inevitably leaves something out? Moreover, when we examine the sceptical and liberal approaches just outlined, it looks very much as though we are trying to extend quantum theory to the large-scale realm, while at the same time working within metaphysical assumptions about space, time and reality that automatically exclude quantum theory from that realm. Are there alternatives to this approach?

This brings me to what I call the cosmological position, which I support myself. The idea is that we regard the whole universe as a quantum system, and allow top-down influences (from the large to the small) as well as bottom-up influences. I was led to this by having devoted most of my work in physics to the large-scale structure of the universe, so that a cosmological perspective always comes naturally to me. Such a perspective radically alters one’s view of quantum theory: decoherence is the losing of quantum information to the environment; but the universe as a whole has no environment. Cosmologically, information is never lost (even, if we are to believe Hawking’s recent claims, in the presence of black holes). This suggests (and there are loopholes!) that the universe remains coherent: it was, is and always will be a pure quantum system. The non-coherence of medium scale physics – non-coherence ‘for all practical purposes’, as John Bell used to say – is only an approximate consequence of our worm’s-eye view.

When we take this viewpoint (following lines that have been explored, more conservatively, by Chris Isham and others) we find that there is a whole layer of physics revealed that is taken for granted as part of the metaphysics of laboratory physics, a layer that appears formally as the interplay of different logical structures associated with different organisms, but which we might identify subjectively as an interplay of different structures of meaning experienced by these organisms. This layer is independent of the dynamical layer investigated by laboratory physics, in the sense that, once a structure of logic/meaning emerges, then the dynamics of quantum theory operates within it without constraint, so that laboratory physics is not affected. Conversely, the outcome of this dynamics can help to shape the possible structure of logic/meaning, but it does not determine it. There is a freedom present at this level which points to a whole noetic dynamics of the universe.

This leads to the picture that I presented in Living in Connection, in which the world is a nested lattice of quantum organisms. We can see this at work in our own being. My ego (the subjective ‘I’) is a subsystem of my whole body-mind, and I can thereby sense both my relationships with the vaster patterns of meaning of the planet and beyond, which go into constituting ‘me’, and also the well-being of my body which in turn gives direction and meaning to the smaller scale processes that support it. It is this that could solve the problems of decoherence, though here I am speculating beyond what has actually been demonstrated. Certainly there is a constant interplay between the coherence which each system receives from the greater ones in which it is contained, and the processes of decoherence which make it behave, in relation to its environment, as if it were a classical system. Thus entanglement within a specific quantum state, having a function in the organism and in the greater whole, could be maintained by a top-down influence.

In this nesting of systems, the buck stops with the cosmos as a whole, which shares some of the properties of what many call ‘the mind of God’, though I always use the g-word with trepidation. If I accuse Penrose and Hameroff of being short on details, then I am much more guilty of that myself. But the more I live with this picture, the more I think it makes sense both of physics and of human experience, including the experience of the mystics. Entanglement may be an explanation of the major paranormal experiences that many of us have encountered, but we will only arrive at a justification of this by a theoretical and experiential investigation of the cosmological level.

Further reading

Clarke, Chris, ‘Quantum Mechanics, Consciousness and the Self’, in Science, Consciousness and Ultimate Reality, ed. David Lorimer (Imprint Academic, 2004) An extended account of the view given here.

Clarke, Chris, Living in Connection (Creation Spirituality Books, Warminster, 2002) An exploration of the spiritual principles linked with this position.

Omnès, Roland, Quantum Philosophy: Understanding and Interpreting Contemporary Science. Trans Arturo Sangalli (Princeton University Press, Princeton NJ, 1999) An alternative view of the same area.

 

Chris Clarke was Professor of Applied Mathematics, and now Visiting Professor, at the University of Southampton.

Emergence

By Michael Colebrook

Everything is best understood by its constitutive causes. For, as in a watch or some such small engine, the matter, figure and motion of the wheels cannot well be known except it be taken asunder and viewed in parts.

Thomas Hobbes

Emergence means complex organizational structure growing out of simple rules. Emergence means stable inevitability in the way certain things are. Emergence means unpredictability, in the sense of small events causing great and qualitative changes in larger ones. Emergence means the fundamental impossibility of control. Emergence is a law of nature to which humans are subservient.
Robert Laughlin, A Different Universe.

In effect, there seems to be no end to the emergence of emergents. Therefore, the unpredictability of emergents will always stay one step ahead of the ground won by prediction. As a result, it seems that emergence is now here to stay.
Jeffrey Goldstein

The fundamental premise relating to emergent phenomena is that wholes can contradict Hobbes’ dictum and be more than the sums of their parts. In any system for which this is true, that element which constitutes the ‘more than’ is an emergent property of the whole system and has to be regarded as the outcome of a creative process. Emergent phenomena cannot be described in terms of a chain of causality. They do not have a cause. They just happen. Both Ian Stewart and Stuart Kauffman (see References) emphasise that there is as yet no sound scientific theory of emergence.

It might be thought that, given these criteria, emergent phenomena were relatively rare and special features of the created order. In fact we are surrounded by them. It could be claimed that the human sensory system was designed specifically to provide awareness of emergent phenomena. Physics tells us that a chair is made up of a collection of very, very small particles separated by distances that are enormous compared to their size. By far the largest constituent of a chair is empty space. And yet, we can see and feel a chair and we are confident that if we should sit on one, we would not finish up on the floor. The solidity of solids is an emergent phenomenon and is a function of the relationships between the particles that physics tells us about. The solidity of solids is not an illusion, it is as real as the particles which constitute the solid. The properties of solids such as density, hardness, strength, elasticity etc. cannot be reduced to the properties of constituent particles because they depend on the relationships between them. The properties of solids are as real and fundamental with respect to solids as are the properties of the so-called fundamental particles.

There can be layers of emergent properties. Forests exhibit emergent properties based on relationships between living organisms. Living organisms show emergent properties based on relationships between complex chemicals. Complex chemicals show emergent properties based on the relationships between atoms. Atoms show emergent properties based on the relationships between sub-atomic particles.

Given the ubiquity of emergent processes and the way in which they are organised into progressive sequences, it can be argued that they are the means by which the universe creates itself.

If this makes it sound too easy to create a universe, it should be said that emergent processes are not entirely free to make things up as they go. They are subject to significant constraints, such as the universal conservation laws, which state that matter and energy can neither be created nor destroyed. They can only proceed by creating new patterns of relationship using pre-existing materials. This is one of the reasons why it took 10,500,000,000 years for life to emerge on planet Earth. As D H Lawrence rightly says:

The history of the cosmos
is the history of the struggle of becoming.

And the history of the struggle of becoming is substantially the history of emergent phenomena. It is one of the paradoxes of modern science that in spite of everything that is known and understood about the cosmos there is no clear general theory of emergence. This is because science has on the whole adopted Hobbes’ dictum and has been concerned primarily with seeking explanations of phenomena in terms of the behaviour of their parts. The study of emergent phenomena requires a top-down as opposed to the Hobbesian bottom-up approach.

One of the concepts that has been useful in attempts to give shape to studies of emergent phenomena is that of symmetry. In terms of the scientific definition of symmetry, the surface of a still body of water exhibits perfect two-dimensional symmetry because it is the same in all possible directions in a two-dimensional plane. If you drop a stone into the water, the result is the formation of a set of circular ripples which are symmetrical about a centre (the point where the stone was dropped). These ripples exhibit radial symmetry in that they look the same in all directions, but only from the central point. They exhibit a reduced symmetry compared with that of the undisturbed surface. Although there is an apparent increase in ordered pattern on the surface of the water, the original symmetry has been broken. In general, an increase in order and pattern is associated with broken symmetries. A good example of the emergence of order associated with symmetry breaking is provided by Langton’s Ant. The apparently chaotic pattern produced by the first 10,000 moves by the ant can be said to show perfect two-dimensional symmetry (at least away from the edges and in a statistical sense) in that it looks the same in all directions. The highway represents a breaking of this symmetry to produce a more ordered state. Both the chaotic and the ordered states are intrinsic features of the behaviour of Langton’s Ant, which is governed by specific but very simple rules. The ordered state represents an emergent property associated with symmetry breaking. It needs to be stressed that symmetry breaking is a descriptive feature of emergent phenomena. It does not constitute an explanation or even a contribution towards one.
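Langton’s Ant is simple enough to reproduce in a few lines. The Python sketch below uses one common formulation of the rules (the exact step at which the highway appears depends slightly on conventions, but it is around move 10,000); it runs the ant and then checks that its motion has become periodic, which is the signature of the emergent highway:

```python
def langtons_ant(steps):
    """Simulate Langton's Ant on an unbounded grid (set of black cells).
    At a white cell: turn right, blacken it, move forward.
    At a black cell: turn left, whiten it, move forward.
    Returns the list of ant positions after each step."""
    DIRS = [(0, 1), (1, 0), (0, -1), (-1, 0)]   # up, right, down, left
    black = set()
    x, y, d = 0, 0, 0
    path = []
    for _ in range(steps):
        if (x, y) in black:
            d = (d - 1) % 4
            black.remove((x, y))
        else:
            d = (d + 1) % 4
            black.add((x, y))
        x, y = x + DIRS[d][0], y + DIRS[d][1]
        path.append((x, y))
    return path

path = langtons_ant(12_000)
# After roughly 10,000 chaotic steps the ant settles into the "highway":
# a repeating 104-step cycle with a fixed diagonal displacement.
a, b, c = path[11_000], path[11_104], path[11_208]
step1 = (b[0] - a[0], b[1] - a[1])
step2 = (c[0] - b[0], c[1] - b[1])
print(step1 == step2)   # True: the motion has become periodic
```

The emergent order is detected purely from the ant’s own positions: nothing in the two rules mentions a highway.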

Another feature of emergent phenomena is coherence. When a stone is dropped into still water, some of its energy is transferred to the water and results in the formation of a series of waves which involve the coherent behaviour of millions and millions of water molecules. Due to the viscosity of the water the waves soon dissipate, their energy being converted to heat. The waves are caused by the energy transferred by the falling stone but the form of the waves is the result of internal and intrinsic relationships between the molecules of water. With Langton’s Ant the highway represents a coherent pattern of behaviour of the ant, in this case a loop of 104 moves, compared with the previous chaos.

In this case the pattern of the highway does not dissipate because there is a continuous input of energy from the computer on which the ant program is running.

Another form of cellular automaton involving simple rules and devised by John Horton Conway (see Conway’s Game of Life) can exhibit a wide range of coherent behaviours involving closed loops and moving patterns as well as chaotic and ordered sequences of moves. As with Langton’s Ant the outcome of any but the simplest of initial patterns is unpredictable and small changes can produce very different outcomes. One of the interesting features is that it is possible to establish an initial state consisting of a set of moving patterns called gliders which interact to produce a pattern which then generates a stream of gliders, coming close to being a self-reproducing system.
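Conway’s rules are equally easy to state in code. The sketch below implements one generation on an unbounded grid and follows a glider, the best-known moving pattern, through four generations, after which it reassembles itself one cell diagonally displaced:

```python
from collections import Counter

def life_step(cells):
    """One generation of Conway's Game of Life on an unbounded grid;
    `cells` is the set of (x, y) coordinates of live cells."""
    # Count how many live neighbours each coordinate has.
    counts = Counter((x + dx, y + dy)
                     for (x, y) in cells
                     for dx in (-1, 0, 1) for dy in (-1, 0, 1)
                     if (dx, dy) != (0, 0))
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {c for c, n in counts.items()
            if n == 3 or (n == 2 and c in cells)}

# A glider: five live cells that, after four generations, reappear one
# cell diagonally displaced: a coherent pattern that moves.
glider = {(1, 0), (2, 1), (0, 2), (1, 2), (2, 2)}
cells = glider
for _ in range(4):
    cells = life_step(cells)
print(cells == {(x + 1, y + 1) for (x, y) in glider})   # True
```

Motion is nowhere in the rules; the glider’s travel is an emergent property of the pattern as a whole.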

What these cellular automaton games show is that systems with simple rules can produce unpredictable results and some of these can take the form of quite complex emergent features.

Let me finish this brief introduction to emergent phenomena with an example from the real world. Take a bundle of fibres and spin them together, over and under each other, and you get yarn. Take yarns and spin them together, over and under each other, and you get string. Spin strings together, over and under each other, and you get rope. This is an age-old craft; the earliest preserved bit of rope was found in the caves of Lascaux and dates from about 15,000 BC.

Yarn, string and rope possess properties of length, strength, flexibility and durability which are potentially present within a bundle of fibres but their manifestation depends on the internal structural relationships between the fibres. Yarn and string and rope are more than the sums of their parts, they are emergent entities, and over and under is an emergent rule or principle. As often happens, emergent entities provide opportunities for further emergent phenomena. And in this case these involve the same rule of over and under together with developments of it.

The classic Ashley Book of Knots contains diagrams and descriptions of 3854 things that can be done with rope and string, virtually all of which involve some version of over and under.

Woven materials are another example of emergence based on yarns combined according to the rule of over and under. Starting with a simple weave of over one and under one, there is an almost infinite variety of other options as the strict form of the rule is relaxed to allow other combinations of overs and unders. Weavers experiment and probe the limits of the constraints imposed by the basic rule. But it can never be totally neglected or the resulting material will simply fall apart.

The rule of over and under applied to fibres is an example of emergence essentially created by human ingenuity in which emergence builds on emergence and in which the application of a simple principle can lead to an almost endless variety of outcomes.

Probably the most significant of all emergent phenomena is the mind that you are using to read and, I hope, understand this account of emergent phenomena. It has taken ‘the struggle of becoming’ some fourteen billion years to produce it.

References

Clifford Ashley, The Ashley Book of Knots (Faber & Faber, 1947).
Brian Goodwin, How the Leopard Changed Its Spots (Phoenix, 1995).
Erich Jantsch, The Self-Organising Universe (Pergamon Press, 1980).
Michael Colebrook, How Things Come To Be (GreenSpirit Press, 2006).
Stuart Kauffman, At Home in the Universe (Viking, 1995).
Stuart Kauffman, Investigations (OUP, 2000).
Ricard Solé & Brian Goodwin, Signs of Life (Basic Books, 2000).
Ian Stewart & Jack Cohen, Figments of Reality (CUP, 1997).

Dynamical Systems

by Chris Clarke

Key Concepts

A continuous dynamical system is any physical thing that can be regarded as having states which can be specified (in a small enough region of the space of all states) by a collection of numbers, and where the change in the system with time is given, in terms of these numbers, by a definite rule. For example, a pendulum is a dynamical system, and its state at any time is specified by the position of the pendulum (a point on a circle, if the pendulum is allowed to go all the way round) and the speed of the pendulum bob at that time, which is a number. If we decide on one fixed direction round the circle in which to measure the speed, then this number can vary over all negative and positive numbers (negative numbers meaning that the bob is moving backwards relative to the chosen direction). If we draw the circle representing position in one plane and the line representing the speed at right angles to this plane, then we can represent the state space of the pendulum by a cylinder.
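To make the idea concrete, here is a minimal sketch (my own, with illustrative names and a deliberately crude Euler time-step, not part of the original paper) of the pendulum as a dynamical system: the state is the pair (angle, speed), and the definite rule gives the rate of change of each number.

```python
import math

G_OVER_L = 9.81  # gravity divided by pendulum length (assumed 1 metre)

def rule(angle, speed):
    """The definite rule: the rate of change of the state (angle, speed)."""
    return speed, -G_OVER_L * math.sin(angle)

def trajectory(angle, speed, dt=0.001, steps=5000):
    """Trace a path through state space by stepping the rule forward in time."""
    path = [(angle, speed)]
    for _ in range(steps):
        d_angle, d_speed = rule(angle, speed)
        angle += d_angle * dt
        speed += d_speed * dt
        path.append((angle, speed))
    return path

# Released from rest at a small angle, the state traces a near-closed loop
# on the cylinder, circling the point of zero angle and zero speed.
path = trajectory(angle=0.1, speed=0.0)
```

Plotting the pairs in `path` would draw the trajectory directly on the cylinder described above, moving the subject from numbers into shapes.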

If the rule depends only on the current state, and not on the time, the system is called autonomous. Note that a dynamical system is, mathematically speaking, deterministic because it is governed by precise rules, though practically it may be impossible to predict its motion. By contrast a quantum system is not deterministic, because of the measurement phase of its evolution (see the paper on Quantum Theory).

As the state of a dynamical system varies in time it traces out a path in state-space called a trajectory. The only exception is when the state is at an equilibrium point, which is a special case when the rule specifying the dynamical system requires that the state does not change at all with time. For the pendulum the equilibrium points are the states with zero speed at the top and bottom of the circle of positions. In these cases the trajectory consists of just a single point. Thinking of dynamics in this pictorial way moves the subject from the arena of numbers and equations into the arena of shapes and forms.

A system in some region of its space of states is called dynamically unstable if a small variation in the initial conditions can lead to a difference which, on some region of the trajectory, starts growing exponentially. (Strictly, this is called Lyapunov instability.) In cases like this the system is practically unpredictable, even if theoretically its motion is determined by a precise rule. The Lorenz system (see below) provides a good example of this. The combination of dynamical instability with quantum theory demonstrates that, however one interprets quantum theory, the universe is fundamentally unpredictable on a large scale as well as a small scale. Dynamical instability is sometimes called “the butterfly effect”: a butterfly flapping its wings in Brazil can cause a hurricane in Bengal.

Chaos. The term was first used of a dynamical system whose behaviour showed no predictable pattern. Various attempts have been made to specialize the term to say something more positive – for instance, to describe behaviour where a trajectory wanders round a region of the space of all states, coming arbitrarily close to any specified state (a situation also called ergodic).

An attractor is a trajectory in the state-space of a dynamical system which has the property that any state sufficiently close to the attractor will move towards it. If the attractor is neither a (stable) equilibrium point nor a closed loop (called a limit cycle) then it is called a strange attractor.

The Lorenz System

The system here is a very simplified model of convection in a gas, which Lorenz was studying in order to understand weather systems. It produced the first example of a strange attractor to be identified and explored in detail. While convection is in reality a complicated motion of the entire gas which would involve millions of parameters to specify the position of each part of the gas at all accurately, in this model the state of the system is described by just three parameters, and the equations governing them are reduced to a very simple form. The picture below (generated by Dynamics Solver, by Juan M. Aguirregabiria) shows the way in which two of these parameters vary with time.

Two trajectories are shown, in red and blue, starting close together in the inner part of the loops on the left hand side. The system illustrates the “butterfly effect” in which a small variation in the initial conditions produces a large difference in the eventual behaviour. The trajectories stay close together as they pass through a stable region. After performing three loops of a quasi-cyclical motion (nearly returning to their starting point, but drifting away slightly) they enter an unstable region. Thereafter the two behave quite differently, even though they started close together. The red trajectory veers off to the right hand region, where it completes one loop and then returns to the left and makes a further loop there. The blue trajectory completes one more loop in the left region before moving to the right, where it takes up a quasi-cyclical motion in the right hand region.
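The behaviour just described can be reproduced in a few lines of code. The sketch below is my own illustration (not the Dynamics Solver programme used for the picture): it steps the Lorenz equations, with Lorenz's classic parameter values, for two starting states a millionth apart and records how far they drift from one another.

```python
# A crude Euler-stepped sketch of the Lorenz equations.
# sigma, rho and beta are Lorenz's classic parameter values.

def lorenz_step(state, dt=0.001, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

def run_pair(a, b, steps):
    """Advance two states together, tracking their greatest separation."""
    max_dist = 0.0
    for _ in range(steps):
        a, b = lorenz_step(a), lorenz_step(b)
        dist = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
        max_dist = max(max_dist, dist)
    return a, b, max_dist

# Two starting states differing by one part in a million in one parameter:
a, b, max_dist = run_pair((1.0, 1.0, 1.0), (1.0, 1.0, 1.000001), 25000)
# Over the run their separation grows from a millionth to something of the
# order of the attractor's own size: the butterfly effect in miniature.
```

Both trajectories remain on the attractor throughout; it is only their relative positions that become utterly different.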

Emergence of Structure 

Dynamical systems have the property that structure can spontaneously emerge from them. Though this happens in continuous dynamical systems, it is easiest to study by reducing the dynamical system to a discrete one, in which time progresses by steps instead of changing continuously. We can get a discrete system out of a continuous one by taking “snapshots” at either a regular spacing, or when a trajectory cuts through a chosen surface. In the examples we are about to look at, the space of states is also discrete, being made up of the set of possible patterns on a grid. Again, this is for the sake of reducing the problem to something simple enough to handle. A system called “life” is described next. A simpler system called “Langton’s ant” is described elsewhere.

Conway’s Life

The “game” (for one player, with no choice at any stage!) is played on an infinite grid of squares, each one of which can be either black or white. The picture below (Pictures of Conway’s “Life” are generated by the programme Life32, by Johan G. Bontes) shows a part of a random pattern, imagined to continue indefinitely in all directions.

The pattern evolves one step at a time according to a deterministic rule: at each step, if a black square is surrounded by more than three other black squares it “dies” (turns white) through overcrowding, while if it is surrounded by fewer than two black squares it dies from loneliness. If a white square is surrounded by exactly three black squares then a new black square is “born” there.
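The rule can be written out directly as a short program. The sketch below is my own illustration, not the Life32 programme used for the pictures; the board here is a small fixed grid of 0s (white) and 1s (black) with dead edges, rather than the infinite grid of the real game.

```python
# One step of Conway's rule on a rectangular grid of 0s and 1s.

def step(grid):
    """Apply one step of the rule to every square at once."""
    rows, cols = len(grid), len(grid[0])

    def neighbours(r, c):
        # Count black squares among the (up to) eight surrounding cells.
        return sum(grid[i][j]
                   for i in range(max(0, r - 1), min(rows, r + 2))
                   for j in range(max(0, c - 1), min(cols, c + 2))
                   if (i, j) != (r, c))

    # A black square survives with two or three neighbours (neither
    # overcrowded nor lonely); a white square is born with exactly three.
    return [[1 if (grid[r][c] and 2 <= neighbours(r, c) <= 3)
             or (not grid[r][c] and neighbours(r, c) == 3) else 0
             for c in range(cols)]
            for r in range(rows)]

# A "blinker": three black squares in a row flip between a horizontal
# and a vertical bar on every step.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
```

The blinker is the simplest of the regularly oscillating forms that emerge from random starting patterns.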

After 21 steps the random pattern above has evolved to

– which is starting to manifest the sort of grouping that we might call a pattern. After 62 steps quite stable structures have formed: static rocks, forms that oscillate regularly between two shapes, and forms that move steadily across the board. The picture below shows a snapshot of some of these.

Conway’s Game of Life

by Michael Colebrook

(The Game of Life, also known simply as Life, is a cellular automaton devised by the British mathematician John Horton Conway in 1970.

The “game” is a zero-player game, meaning that its evolution is determined by its initial state, requiring no further input. One interacts with the Game of Life by creating an initial configuration and observing how it evolves.)

The Story So Far

Self-replication in Conway’s Life has been a topic for discussion and research from the very beginning, over forty years ago now (!). The original purpose of Conway’s Life was to find a simplification of John von Neumann’s self-replicating machine designs, which used a CA rule with 29 states. A couple of non-constructive universality proofs for B3/S23 Life were completed very early on, though they were never published in detail — and my sense is that actual self-replicating patterns along the lines of these proofs would require something on the order of a planet-sized computer and a geological epoch or two to simulate a replication cycle.

The technology to build a Conway’s Life replicator out of stable parts has been available since at least 2004. A working pattern could certainly have been put together in a few years by a full-time Herschel plumber, with a high-energy glider physicist or two as consultants. But unfortunately there seem to be very few multi-year grants available for large-scale CA pattern-building — even for such obviously worthwhile Holy-Grail quests as this one!

In 2009, Adam P. Goucher put together a working universal computer-constructor that could be programmed to make a complete copy of itself. The pattern, however, is so huge and slow that it would have taken an enormous amount of work to program it to self-replicate — it would have been easier to come up with a new replicator design from scratch. Clearly, in hindsight, everyone was waiting for something better to come along.

The latest developments

Replicator Redux (from b3s23life.blogspot.com) January 11th, 2013

 

Play the game here:

http://www.math.com/students/wonders/life/life.html