John Woods


John Woods of ESE played a key role in the Nobel Peace Prize-winning Intergovernmental Panel on Climate Change (IPCC) as lead author of the first assessment's crucial chapter on modelling transient climate change due to greenhouse gases. The Nobel Peace Prize 2007 was awarded to the IPCC and Al Gore for their efforts to build up and disseminate greater knowledge about man-made climate change, and to lay the foundations for the measures that are needed to counteract such change.

In a lecture in the Department of Earth Science and Engineering, Imperial College London, entitled “IPCC and all that: Linking science and government for climate research and policy”, given on the occasion of the 2007 Nobel Peace Prize award, Prof. Woods highlighted the difficulties of assessing climate data and models for governments, and the key differences between unbiased, authoritative advice on the one hand and scientific research and debate on the other.

Below, you can read John Woods’ lecture, or you can download it as a PDF file.

IPCC and all that

Linking science and government for climate research and policy[1]

John Woods
Department of Earth Science and Engineering, Imperial College London

Summary

The award of the 2007 Nobel Peace Prize to Al Gore and the IPCC (Intergovernmental Panel on Climate Change) has highlighted the significance of climate change for humanity. John Woods was a lead author in the IPCC’s first scientific assessment in 1989. That publication was targeted at governments around the world. It influenced their policy on climate change in two ways. First, it informed their debate on how best to limit climate change and how to prepare for its impacts. Second, it established priorities for research aimed at reducing the uncertainties in climate prediction.

The challenge for IPCC members was to achieve a consensus across the scientific community about what was known and not known about how climate would change in the 21st century. The goal was to position that consensus as close as possible to the edge of the unknown. The subject advanced rapidly, so it became necessary to update the assessment every five years – the fourth has just been published.

The IPCC scientists are personally involved in research aimed at reducing uncertainty in advice to governments on climate change; they do this by improving climate prediction. Their research involves global experiments requiring advanced facilities for observation and modelling. Climate scientists had to learn how to convince governments to allocate the billions of pounds needed for these experiments. John Woods will describe how this was achieved for the biggest experiment so far, the World Ocean Circulation Experiment (WOCE), which he co-chaired. Success depended on linking the United Nations and the International Council for Science, respected by governments and scientists respectively. WOCE transformed our knowledge about how the ocean influences climate. It led to permanent monitoring by the Global Ocean Observing System.

Opening remarks

The award of the 2007 Nobel Peace Prize to Al Gore was intended to send a message from Europe to the United States, which continues to be in a state of political denial about climate change due to greenhouse gas pollution. Al Gore is a serious politician, but many people fear that his Oscar-winning film “An Inconvenient Truth” has over-hyped the serious issue of climate change. Sharing the Nobel Prize with the IPCC was presumably intended to counter that concern by providing gravitas.

What I want to achieve in this lecture is to explain that gravitas. I want to show why the work of the IPCC was of Nobel quality. This will not be a lecture on the science of climate change. Rather it will address the difficult problem of how the science community can interact effectively with all governments around the world. I shall summarize the novel procedures invented by IPCC for that purpose. And I shall show how the IPCC reports have underpinned not only political actions designed to reduce and cope with climate change, but also funding for massive global experiments aimed at reducing the considerable uncertainties in climate prediction.

The challenge

When it was launched in 1987, the IPCC was an unprecedented experiment in building a consensus on scientific advice to governments. To be successful it had to draw on the scientific community wherever its members were.

The subject of anthropogenic climate change is too complex for any one individual to know everything. Such knowledge as existed lay in the hands of thousands of research scientists each working in his or her own niche. The assessment had to weave a coherent pattern from these disparate sources. That was the task of the lead authors for the chapters of the report. They reported to one of the three Panel chairmen and to the IPCC chairman. Each chapter of the draft report was debated by a plenary meeting of the authors. It was then opened for review by an international list of some one thousand experts.

By the time the final report was submitted to governments it had been reviewed by pretty well everybody who was anybody in the climate prediction world. Inevitably there were a few mavericks who continued to ride their hobby horse – sorry, to use their independent scientific judgement. But this minority was never sufficient to discredit the messages in the report. Governments accepted that the assessments by IPCC were a reliable summary of what was known and not known about climate change due to greenhouse gas pollution. That was assured by the carefully-constructed consensus.

The World Climate Research Programme

The IPCC had only two years between its establishment in 1987 and the first report, due in 1989. You may wonder how it was possible so quickly to identify and recruit hundreds of scientists to this business of advising governments. Many were academics who felt the IPCC was an unwanted distraction from their day job of teaching and research. The quality of any organisation can be judged by its ability to recruit top-rate people, and the IPCC succeeded in recruiting the best in the world. How was that done?

The answer is that the UN sponsors (WMO and UNEP) had been working for years with the International Council of Scientific Unions (ICSU) to promote scientific research on climate prediction. In 1980 those international bodies had launched the World Climate Research Programme (WCRP). The WCRP continued work started by the Global Atmospheric Research Programme (GARP), which played a key role in developing weather forecasting as we know it today.

Predicting climate change requires global computer models of the Earth system. These did not exist in 1989 when the IPCC was preparing its first report. But the members of the WCRP were planning such models and designing global experiments to collect the data needed to support them during the research phase. Some far-sighted members were even beginning to think about the permanent monitoring system that would be needed to support operational climate prediction in the 21st century.

When the WCRP started in 1980 very few scientists worked on truly global problems. Geophysicists studied the Earth’s rotation, with the associated issues such as magnetism and tidal friction. But Earth remained the Cinderella of planetary science. Meteorologists had begun to use global models for weather forecasting, and in 1979 GARP had promoted the Global Weather Experiment to collect essential data. But 99% of terrestrial and marine scientists worked on small-scale local processes. The scientific discipline of Earth System Science in which those processes are linked in global models had not yet been born.

During the 1980s the WCRP articulated the need for Earth System Science and promoted the global modelling and experiments needed to predict climate change. It was successful in making Earth System Science a popular career choice for young scientists, and for not-so-young scientists who switched to researching global problems. I joined the movement in 1976, as a member of the 12-man Joint Organising Committee (JOC) for GARP, and subsequently of the Joint Scientific Committee (JSC) for the WCRP. Before my own Damascene conversion to global research I had been working for the navy on ocean turbulence at scales of centimetres.

The WCRP was very successful in building a scientific community to work on climate prediction as an academic problem. Individuals usually had two motivations. The first was scientific: learning how to address a complex inter-disciplinary system. The second was ethical: working on a problem that had major implications for humanity. The members of that scientific community met regularly to plan ambitious research projects on a global scale to address the challenge of predicting climate change.

The IPCC was able to move fast because it could recruit from that pool.

Complexity of the Earth System

Before moving on to discuss how the IPCC worked, I thought it would be helpful to give a flavour of the inter-disciplinary complexity of Earth System Science and of the modelling behind climate prediction. I shall do this through a case study.

The scientific problem was to understand how the Milankovitch effect can produce massive changes in climate during the ice age cycle. As you know, the Milankovitch effect describes how the gravitational pull of the other planets disturbs the Earth’s orbit, modulating the global distribution of seasonally-varying solar radiation. The magnitude of that modulation is only about 10 W/m2. Sir John Mason had shown that the Milankovitch radiation is too small to produce the massive changes in polar ice caps, sea level and climate, at least when the Earth System is represented by a model of the atmosphere alone. A more elaborate model was needed, but what were the missing processes?

The relevance to the IPCC is that the rise in carbon dioxide during the twentieth century is expected to produce greenhouse radiation of about 10 W/m2, the same level as the Milankovitch effect. The processes responsible for the large-amplitude climate change in the ice ages are likely to be important in models used to predict climate change in the 21st century. The solution came in a theory published in 1990: too late for the first IPCC report, but assessed in detail in the third report, by which time the idea had been assimilated into climate models.

Plankton Multiplier

According to this theory the large amplitude of climate change during each ice age depends on oceanic plankton changing the atmospheric concentration of carbon dioxide. Plankton consume carbon dissolved in the surrounding seawater. That carbon sink is replaced by carbon dioxide entering the ocean from the atmosphere. The oceanic demand for carbon dioxide depends on the annual production of plankton. If the plankton production declines, the carbon dioxide concentration in the atmosphere rises, and the planet warms. That is half the story; the other half links plankton growth to radiation entering the ocean.

That link is provided by the sensitivity of ocean turbulence to solar heating. (I had discovered that in the 1960s to explain why naval sonar did not work in the afternoon.) Increasing solar radiation by 5 W/m2 can halve the depth of the winter mixed layer. That halves the regeneration of nutrients in the winter, which in turn reduces the annual growth of plankton correspondingly. So if solar radiation increases, plankton growth decreases and the atmospheric concentration of carbon dioxide increases. In a nutshell, it is the sensitivity of ocean turbulence to solar radiation that allows the tiny Milankovitch effect to produce massive ice age swings in our climate.
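To make that chain of reasoning concrete, here is a minimal numerical sketch of the feedback. The only figure taken from the text is that 5 W/m2 of extra downward radiation halves the winter mixed-layer depth; every other coefficient, and the linear forms themselves, are illustrative assumptions rather than values from the theory.

```python
# Toy sketch of the "plankton multiplier" feedback chain described above.
# Only the "5 W/m2 halves the mixed layer" figure comes from the text;
# all other coefficients are illustrative assumptions.

def mixed_layer_depth(forcing_wm2, base_depth_m=200.0, sens_m_per_wm2=20.0):
    """Winter mixed-layer depth shallows as downward radiation increases."""
    return max(base_depth_m - sens_m_per_wm2 * forcing_wm2, 10.0)

def plankton_production(depth_m, base_depth_m=200.0):
    """Annual production scales with winter nutrient regeneration, taken
    here as simply proportional to mixed-layer depth (1.0 = unperturbed)."""
    return depth_m / base_depth_m

def co2_anomaly_ppm(production):
    """Less production -> weaker ocean carbon sink -> more CO2 in the air."""
    return 100.0 * (1.0 - production)   # illustrative scaling

for forcing in (0.0, 5.0, 10.0):        # extra downward radiation, W/m2
    depth = mixed_layer_depth(forcing)
    prod = plankton_production(depth)
    print(f"forcing {forcing:4.1f} W/m2: mixed layer {depth:5.1f} m, "
          f"production {prod:4.2f}, CO2 anomaly {co2_anomaly_ppm(prod):5.1f} ppm")
```

At 5 W/m2 the mixed layer halves, production halves, and the atmospheric CO2 anomaly grows, which in turn adds radiative forcing: a positive feedback.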

Ocean turbulence is equally sensitive to infrared radiation, which will rise by about 12 W/m2 as atmospheric carbon dioxide doubles in the 21st century. This pollution will weaken the turbulence, causing a decrease in plankton, and therefore a reduction in ocean uptake of carbon dioxide. The ocean takes up about half the pollution today, but the fraction will decline substantially during this century. So the plankton provide positive feedback in the Earth System. They will accelerate global warming. When I published that theory twenty-five years ago, it was the first end-to-end model of ice age climate change. And it was the first theory to suggest that the marine ecosystem produces positive feedback in climate. The theory suggested that plankton would have to be taken into account in predicting climate change due to pollution of the atmosphere.

That hit the headlines: The Times put it on the front page, and their headline writer named it the Plankton Multiplier. The reaction of the scientific community was two-fold. Many scientists followed their instinct and set out to refute the theory. Like Lady Bracknell, they consider it not only a duty but a pleasure to show that rival scientists are wrong. Remember Popper’s insight that it is never possible to prove a theory. But if a theory has an Achilles heel, it will eventually be revealed and the theory will be brought down. Until a theory is refuted, the jury remains out.

In the case of the Plankton Multiplier the jury is still out 25 years after the theory was published. So far nobody has found its Achilles heel (if it has one!). It has been around long enough to gain a certain respectability. The Hadley Centre includes the plankton ecosystem in its climate prediction model. Research councils and space agencies are spending hundreds of millions to gain more knowledge about the interaction between plankton and climate change. The original controversy has been muted by the noise of scientists jumping on the bandwagon. The IPCC now routinely assesses the state of knowledge about the Plankton Multiplier as a significant process in climate prediction.

Government reaction

I have recounted this story of the Plankton Multiplier to make a point that influenced the founders of the IPCC. Science thrives on debate, and it may take decades for a theory to become generally accepted. Meanwhile there will always be scientists willing to express scepticism and to warn governments that the theory is unproven and controversial. That is true for each of the many aspects of the complex Earth system that governs how our climate changes when it is perturbed, whether by the planets or by pollution.

Governments always want clear scientific advice on which to base policy. They don’t like it when the opposition puts up an expert who says the advice is uncertain because it rests on an unproven theory. That is a cheap shot because no theory can be proved. On the other hand there are some issues which are so important that they require government action, even when there is controversy among the scientific community.

And the scientific community sometimes gets it horribly wrong, as did our first Dean, T. H. Huxley, when he told the government that no amount of fishing could deplete fish stocks.

The IPCC

IPCC mission

The IPCC was established to replace reliance on the advice of one prominent scientist. The founders (WMO and UNEP) hoped that they could achieve such a consensus among the world scientific community that the inevitable mavericks could safely be ignored when governments developed policy for dealing with climate change.

The task was to answer two questions.
  1. Is the climate issue truly one of those urgent issues that have to be addressed, or is it just another example of scientists shouting to get more money for their research?
  2. And if it was a serious issue, what were the predictions and sensitivities that could guide government action?


Question 1

It has taken a long time to answer the first question. Many governments, led by the USA, say they remain unconvinced. They argue that there is still time for more research before deciding on action, which may be unaffordable. Last year David King, the government’s chief scientific advisor, was still making speeches to convince policymakers around the world that climate change is an issue requiring urgent action. No doubt his successor, our colleague John Beddington, will continue to do so. And only last year the Stern report set out the economic case for action sooner rather than later.

Question 2

The second question has two parts. The first concerns the rise in greenhouse gases in the atmosphere. The second concerns how the climate will respond to that rise.

Greenhouse gases: There is now irrefutable evidence that the atmospheric concentrations of greenhouse gases will reach a level that substantially exceeds any experienced in the history of the human species, and that the rise is faster than at any time in that period. This scientific evidence has been sufficient for governments to decide that action is needed to reduce the rise in greenhouse gases.

Global warming: But the predictions about how our climate will change as the result of that unprecedented rise in greenhouse gases are much less useful for governments seeking to frame policy. The reason is simple. Climate models all support the view that the global average temperature of atmosphere and ocean will rise. But that warming is of little consequence in itself, apart from some general indications. First, the warming of the ocean will cause the sea level to rise significantly during this century, causing flooding. Second, a warmer ocean means more evaporation and therefore more rainfall, but not at the same place. It also means more hurricanes.

Local climate change: The problem with climate models is that they cannot yet predict how the climate will change locally, which is the key prediction needed to spur governments into action. Of course, the climate models have improved over the years, and they benefit from the rise in power of super-computers, which allows the models to represent ever more of the complexity of the Earth system. The task of the IPCC has been to report on that progress every five years. The fourth report was published at the end of last year.

The IPCC process

I shall now spend a few minutes describing the IPCC process. As I mentioned at the beginning of my talk, the concept of a world-wide scientific assessment on a complex issue was unprecedented in any discipline. My own involvement was in the first IPCC assessment in 1989, when the method was still quite uncertain. We had to determine what was a world-wide consensus and work out how to achieve it.

Much of the credit is due to the first chairman, Bert Bolin from Sweden, and the first chairmen of Panel 1 (John Houghton, UK), Panel 2 (Yuri Izrael, USSR), and Panel 3 (Fred Bernthal, USA). I had served with Bolin on the JSC of the World Climate Research Programme. Many of the Lead Authors for the first IPCC report had been members of the WCRP.

Of course, there is a difference between formulating plans for scientific research and preparing advice for governments. The common factor is that the leading scientists tend to be involved in both functions. The success of IPCC depended on appreciating the difference between research and advice. We needed to articulate that distinction to the hundreds of research scientists who would be recruited to debate each issue to be included in the IPCC Report and review the draft text. Here is how we did it.

IPCC philosophy

Research scientists are motivated by what they do not know (yet). Their theories and experiments are designed to move forward the boundary between what is known and what is not. The boundary is blurred. My story about the Plankton multiplier illustrates how the transition may take decades. But the boundary does move forward leaving in its wake a body of knowledge that forms the paradigms of the subject. The task of the IPCC is to articulate what is known about climate change by examining the work of hundreds of specialists and fashioning it into a clear account designed to convince governments. Donald Rumsfeld neatly captured the task in his famous aphorism[2]:

“As we know, there are known knowns. There are things we know we know. We also know there are known unknowns. That is to say: we know there are some things we do not know. But there are also unknown unknowns, the ones we don't know we don't know.”


The IPCC needed to establish a consensus about
  1. what was known about climate change,
  2. what were the known unknowns, for which the WCRP could promote research, and
  3. what were the unknown unknowns: ideas that were still speculation.

The goal is to achieve agreement among all the leading climatologists about what is known. It would of course be easy to agree on material in well-regarded textbooks like Kendrew’s Climatology, published in the 1950s. The challenge is to get closer to the edge between known and unknown, so that the assessment will be up to date and therefore useful to governments.

That meant the consensus must be sought in the no-man’s land where scientists play out their controversies about the evidence and its interpretation. The IPCC position lies on the broad beach between the comfort zone of dry land and the intellectual risks of turbulent sea. The IPCC chairman decided to devote a significant part of the limited time to clarifying the state of scientific controversies about climate change, and to show what actions were being taken to resolve them. That decision opened the door to identifying priorities for research targeted at improving the value of future IPCC assessments. I shall return later to this important consequence of the IPCC.

Meanwhile, back to the IPCC process.

IPCC Panels

The Assessment was divided into three parts, each the responsibility of an IPCC Panel.

Panel 1 dealt with climate prediction, including scenarios for greenhouse gas pollution; Panel 2 with impacts; and Panel 3 with remedial actions.


The idea was to develop a logical chain, starting with Scenarios that would be used as boundary conditions for Climate predictions, which in turn provide the physical basis for predicting the Impact of climate change on humanity. Thus the sequence was

Scenario – Prediction – Impact – Response.

I shall follow that sequence in discussing the work of each panel.

This was a fifteen-year workflow extending over three Assessments. For the fourth report published in 2007, impact studies were based on the climate predictions published in the third assessment, which used scenarios published in the second assessment.

The first Assessment

For the first Assessment in 1989, the panels had to work in parallel rather than in sequence. So Panel 1 assessed climate predictions that had been based on simple scenarios which had not benefited from any thorough assessment. The report discussed how the workflow should be implemented in the future.

Scenarios for greenhouse gases

Panel 1 was charged with predicting the future concentrations of greenhouse gases year by year during the next century. The best-known greenhouse gas is carbon dioxide, which is emitted by industry and domestically. But, ton for ton, methane has a bigger effect on the Earth’s radiation balance. The agricultural source of methane rises at least as fast as the world’s population – faster where people become wealthier and eat better thanks to industrialization.

Panel 1 developed scenarios, each based on assumptions about demography and industrialization around the world. In 1989 there were few signs of the massive changes in the Russian economy or of the rapid development in China and India. Nevertheless such developments were a possibility during the next century, so they were factored into the scenarios. Other factors concerned the degree to which industrial and domestic practice would be changed to reduce carbon dioxide emissions during the next hundred years. The baseline scenario, called Business-as-usual, assumed that 1990 practices would continue throughout the 21st century. So far that has been quite realistic, given the refusal of the biggest polluters (the USA today, India and China in the future) to do anything about reducing carbon dioxide emissions. Only Europe has good intentions, but action is proving politically difficult.

This work depends on macro-economic models to translate scenarios for geo-politics and demography into scenarios for greenhouse gas emissions. The Panel identified the best available econometric models for this purpose and reported the results of runs with them. Needless to say, those models and numerical experiments had seldom been focused on the issues confronting the IPCC. That problem was addressed by spelling out what was needed, and by identifying priorities for future models, data collection and numerical experiments.

In 1980 there were very few institutes that addressed these problems. One was the International Institute for Applied Systems Analysis (IIASA), housed in the Palace of Laxenburg near Vienna. During the cold war IIASA defused tension by channelling ideas and data between East and West. It was a rather spooky place, funded partly by the CIA and KGB and manned by scientific Third Men. There I met the scientist who was reputed to be the model for Dr Strangelove. IIASA pioneered models that combined natural science and economics on a level playing field – just what was needed for the IPCC assessment of scenarios and impacts.

Since then, economists around the world have developed econometric models especially for the IPCC mission. But they continue to suffer from the limits of all macroeconometric prediction: the impossibility of closing the model rationally. Econometric models need to take account of people’s expectations about the future. If a week is a long time in global finance, try predicting it decades ahead. There are some straws to clutch at, such as the long lead time for introducing clean technology. For example, it is not unreasonable to assume that electricity generation by fusion will become important before the end of the 21st century. The econometric models reviewed in the fourth report embrace new technology. But IPCC scenarios are riddled with unknown unknowns.

Predicting climate change

The IPCC strategy is to use the scenarios as boundary conditions for climate modelling, mainly the pessimistic Business-as-usual scenarios.

Predictability

Climate is defined by the IPCC as the statistics of the weather in the atmosphere and ocean, together with the state of the land surface, including its use by man. The variables are almost all physical. These are the variables needed to diagnose impacts. The IPCC follows the pioneering work of L. F. Richardson, who invented numerical weather prediction. It assumes that the Earth system is usefully predictable for a century ahead. But that is one of the known unknowns. We know that the weather is not predictable beyond a month. The land surface may have useful predictability for a year ahead, but certainly not a decade. That leaves the ocean, which we know circulates slowly. We estimate that it will take a thousand years for 1950s bomb tritium to circulate the globe. But we do not know whether this slow change is usefully predictable. That is one of the biggest known unknowns of climate prediction.

The 2007 IPCC assessment was based on sophisticated modelling at leading laboratories like the Hadley Centre in the UK. The state of the art was very different in 1989 when we made the first assessment. Available supercomputers had less than 1% of the power of those used today, so the spatial resolution and complexity of the models were correspondingly limited. But equally important was the structure of the modelling community – today’s great climate modelling laboratories had not yet been founded. The massive computers at the Los Alamos and Lawrence Livermore laboratories had not yet been diverted from simulating nuclear explosions to simulating climate change. In the 1980s climate modelling was still largely in the hands of academics, like Bert Bolin, the IPCC chairman.

Climate predictions in those days were largely based on comparing the state of the global climate computed from two runs of atmospheric models with different but stationary concentrations of greenhouse gases, usually the current value and doubled CO2. Chapter 6 of the IPCC Assessment reviewed those runs. That was the end of an era. The first steps had just been taken towards the method that is now standard practice, namely using a scenario in which the greenhouse gases change continuously at a rate predicted by demographers and economists. I was a lead author for Chapter 7, which reviewed the first glimmerings of that new approach. There was not much to review, so we used our scientific judgement to assess the future prospects for such models.
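The difference between the two approaches can be made concrete with a toy model. Below is a minimal sketch of a transient run of the kind Chapter 7 anticipated: a two-box energy-balance model in which the mixed layer responds within years while slow deep-ocean heat uptake delays the approach to equilibrium. All parameter values are typical textbook magnitudes chosen for illustration, not numbers from the assessment.

```python
# Two-box energy-balance sketch of *transient* warming under a forcing
# that ramps up to the 2xCO2 value. All values are illustrative assumptions.

dt = 0.1                      # time step, years
lam = 1.2                     # climate feedback parameter, W m-2 K-1
gamma = 0.7                   # mixed-layer/deep-ocean exchange, W m-2 K-1
c_mix, c_deep = 8.0, 100.0    # heat capacities, W yr m-2 K-1

T = T_deep = 0.0              # temperature anomalies, K
for step in range(1401):      # integrate 140 years
    t = step * dt
    if step in (700, 1400):   # report at years 70 and 140
        print(f"year {t:5.1f}: T = {T:4.2f} K "
              f"(equilibrium response {3.7 / lam:4.2f} K)")
    F = 3.7 * min(t / 70.0, 1.0)   # forcing ramps to the 2xCO2 value over 70 years
    T += dt * (F - lam * T - gamma * (T - T_deep)) / c_mix
    T_deep += dt * gamma * (T - T_deep) / c_deep
```

Even 70 years after the forcing stops rising, the simulated warming remains well short of the equilibrium response, because the deep ocean is still taking up heat. That lag is precisely what the stationary two-run method could not capture.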

It is surprising that climate prediction in the 1980s was based on models of the atmosphere. Not so much surprising as astonishing! One of the most important achievements of meteorology in the 1970s was the discovery that the weather could not be predicted beyond about one week. So how can an atmospheric model be used to predict the climate 100 years ahead? Initially it was hoped that the weather’s one-week memory only concerned the phase of weather systems, and that their variance (one academic definition of climate) would have extended predictability. But that was – to put it politely – whistling in the dark. The problem lies in the boundary conditions.

For climate modelling, the atmosphere has two boundary conditions: the upper one is the greenhouse gas concentration, and the lower one is the temperature of the land and the sea. It was only in the 1990s that climate modellers began to include land, sea and air in a single coupled model.

The key message of the 1990 assessment was that doubling the concentration of greenhouse gases would increase global temperature by 3K during the 21st century. But that prediction was known to be at risk because the ocean heat uptake was assumed to remain linear as the climate changed. Nevertheless, governments accepted that the atmosphere was likely to warm, and much more rapidly than it had in the past, even at the end of the last ice age.
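For readers wondering where a figure of that order comes from, a back-of-envelope version can be written down. The logarithmic fit for CO2 forcing and the feedback parameter below are later, standard textbook values, assumed here for illustration rather than taken from the 1990 report:

```latex
% Equilibrium warming from a zero-dimensional energy balance
% (standard textbook constants, assumed here for illustration).
\Delta F = 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}
         \approx 5.35\,\ln 2 \approx 3.7\ \mathrm{W\,m^{-2}},
\qquad
\Delta T = \frac{\Delta F}{\lambda} \approx \frac{3.7}{1.2} \approx 3\ \mathrm{K}
```

where λ is the net climate feedback parameter. The transient figure quoted to governments also depends on how fast the ocean takes up heat, which is why the linearity assumption mattered.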

Impacts and remedies

Assessing the human impacts depends on developing a consensus on predictions about how climate would change in the 21st century. One of the least controversial impacts is a rise in sea level, which depends mainly on the ocean getting warmer (melting polar ice is a secondary contribution). We know that the additional heat stored in the ocean will vary regionally, and that the impact of climate change on ocean physics will change the pattern of that excess heat storage. This is difficult to predict, even with 21st-century climate models. In 1989 the scientific problem had not even been identified. So the predictions of sea level rise were based on a global average rise in sea surface temperature. The predictions were known to be right only in sign: the predicted magnitude (60 cm) had a large regional uncertainty.
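As an illustration of where a global-mean figure of that size can come from, consider a one-line thermal-expansion estimate; the expansion coefficient and the depth of the warming layer are illustrative textbook assumptions, not the Panel’s actual inputs:

```latex
% Thermosteric sea-level rise for a column of depth H warming by Delta T
% (alpha and H are illustrative assumptions).
\Delta h \approx \alpha\,\Delta T\,H
         \approx \left(2\times10^{-4}\ \mathrm{K^{-1}}\right)
                 \times \left(3\ \mathrm{K}\right)
                 \times \left(1000\ \mathrm{m}\right)
         = 0.6\ \mathrm{m}
```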

Other impacts of climate change concern flooding (the biggest natural cause of human misery) and agriculture (drought and temperature). In 1989 Panel 2 had to admit that predictions of local change in these impacts were uncertain in sign as well as magnitude. It was hoped that Panel 1 would be able to provide more reliable information in successive IPCC assessments.

Remedies

I do not intend to discuss the options for remedial action, whether in reducing the sources of greenhouse gas pollution or in coping with the changes that are predicted.

Political response to IPCC 1990

Most governments, led by the USA, were unimpressed. After all, 3K is less than the annual range of temperature. There was an official feeling of “so what?” One way to answer that scepticism was to ask what the climate had been like in the past when the global air temperature changed by 3K in a century. The warming at the end of the Younger Dryas provided the best available surrogate, but it was not clear enough to provide guidance for government policy. The general conclusion was that only numerical modelling of the kind that had been successful for weather forecasting could provide the solid information needed.

The immediate response of governments was to fund national climate modelling centres equipped with state-of-the-art computers and manned by meteorologists and programmers experienced in coding weather forecast models. The Hadley Centre in the UK is one of the handful of centres that lead the business of climate prediction today.

At the political level, Margaret Thatcher launched an international campaign to get climate change onto the international agenda. Her targets were the G7, the OECD, and the UN (where Crispin Tickell was the UK Ambassador).

President Mitterrand hosted a conference on Planet Earth at the Elysée Palace to celebrate the 200th anniversary of the French Revolution, and put climate change on the G7 agenda that year. I chaired the oceanography debate.

This influenced the Second World Climate Conference in Geneva, where I gave a keynote address that sought to establish the ocean as the principal element in climate predictability.

One important outcome was the establishment of the United Nations Framework Convention on Climate Change. The World Bank set up the Global Environment Facility, with $150M per year to support projects under the climate convention.

I shall not rehearse the other activities in inter-governmental circles that were stimulated by the first IPCC report. There was a lot of talk based on the IPCC.

IPCC and research priorities

I mentioned earlier that the IPCC reports have been important in securing funding for WCRP projects. They convinced governments that those projects hit the nail on the head: if successful, they would reduce the uncertainties in climate prediction. They would foster a Rumsfeld drift from known unknowns to known knowns.

Determining priorities

The first debates about priorities for climate prediction research took place in the JOC for GARP in the mid-1970s. The first step was to invent a criterion for deciding which of the multitude of processes in the Earth system should be featured in models designed for climate prediction. After much debate we decided that only those processes capable of perturbing the system by more than 10 W/m2 should be included. They were defined as the signal; all other processes were regarded as noise. Needless to say, that upset many distinguished scientists whose careers had been devoted to processes that ended up below the salt. But, as we explained, that was only for the purpose of climate prediction. There was much lobbying from researchers who felt they might be missing the boat for new funding streams. Nevertheless prioritisation was essential, and that criterion continued to guide the WCRP when it was created in 1980.

Principal targets for research

The “signals” were then ranked in importance by the heat flux criterion. Two stood out above all the others.

  1. Interaction between clouds and radiation, and
  2. Ocean circulation

Not only would they play a major part in climate prediction modelling, but they were aspects of the Earth system about which we had little hard data.

Ocean circulation

A small group[3] (called cuckoo – don’t ask) was established to explore what could be done to improve our knowledge about ocean circulation. Diagnosing the existing database had revealed that the wind carried only half of the heat transported from the tropics to high latitudes; the other half is carried by the ocean. This transport is needed to balance the geographical difference between heating by the sun and cooling by thermal radiation: the former is much stronger in the tropics, the latter more uniformly distributed. In the 18th century it was believed that the Gulf Stream carried the ocean heat transport in the Atlantic, so any change in the Gulf Stream was expected to have a big effect on Europe’s climate. James Rennell (1785) suggested that this is what happened during the Little Ice Age. We now know that the role of ocean currents in climate is more complicated than that, but the Gulf Stream syndrome was a good starting point.

Carl Wunsch, Bob Stewart and I proposed a WCRP experiment to measure the ocean currents everywhere – from pole to pole, top to bottom – with an accuracy sufficient to diagnose the heat transport. We called it the World Ocean Circulation Experiment (WOCE). Our idea was to collect a snapshot of the state of the ocean, as weather forecasters do every day for the atmosphere. Given the resources available to us, it actually took twelve years to complete WOCE (1990-2002), so the snapshot suffered from what photographers call focal-plane distortion: the ocean currents were changing during those twelve years.

It took ten years to plan WOCE in detail, to convince oceanographers around the world to drop what they were doing – mainly studying small-scale processes – and work on WOCE for the next twenty years (including analysis), and to get funding for the tools and staff.

New technology

WOCE was a hybrid between old and new technology, ships and satellites.

The new technology was in space, which alone gives rapid global coverage. One space instrument was essential: the radar altimeter, which measures the distance between the satellite and the sea surface to an accuracy of a few centimetres. The orbit of the satellite was measured equally accurately by laser. The result was a map of the elevation of the sea surface relative to the geoid, which is determined by the underlying geology and is therefore essentially static over the duration of WOCE. Any changes in elevation represent changes in the hydrostatic pressure field that drives the ocean currents. So the radar altimeter was the equivalent of the meteorologist’s barometer: it could map the areas of high and low pressure. The Gulf Stream is driven by a pressure head of two metres, which can easily be mapped by the altimeter. So too can the cyclones and anticyclones that make up the weather inside the ocean: they dimple the surface by a few tens of centimetres.
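The dynamical step from surface elevation to surface current is geostrophic balance: the pressure gradient implied by the sea-surface slope is balanced by the Coriolis force. The sketch below applies it to the two-metre Gulf Stream pressure head mentioned above; the current’s width and latitude are my own assumptions for illustration.

```python
import math

# Surface geostrophic current from a sea-surface slope: v = (g / f) * d(eta)/dx.
# The 2 m head is from the text; the 100 km width and 38 deg N latitude
# are illustrative assumptions.

g = 9.81                # gravitational acceleration, m s-2
OMEGA = 7.2921e-5       # Earth's rotation rate, rad s-1

def coriolis_parameter(lat_deg):
    """f = 2 * Omega * sin(latitude)."""
    return 2.0 * OMEGA * math.sin(math.radians(lat_deg))

def geostrophic_speed(head_m, width_m, lat_deg):
    """Surface current speed implied by an elevation change across a current."""
    return (g / coriolis_parameter(lat_deg)) * (head_m / width_m)

v = geostrophic_speed(2.0, 100e3, 38.0)
print(f"surface current ~ {v:.1f} m/s")   # about 2.2 m/s, a realistic peak speed
```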

But the pressure field only allows us to compute the velocity of the currents at the top of the ocean. To measure the change with depth it was necessary to have an instrument equivalent to the meteorologist’s radiosonde, which gives a profile of temperature and humidity. Oceanographers do that by lowering an instrument (called a CTD, for conductivity, temperature and depth) from a stationary ship. Historically, oceanographic research ships had done that on cruises, taking about a day for each station. It is a slow and costly business and needs large research ships that can stay at sea for many weeks.

Accuracy is critical – the temperature must be measured to one thousandth of a degree. A test revealed that only three laboratories could achieve the accuracy needed for WOCE. I shall not bother you by detailing the hard work needed to build WOCE. Oceanographers from 30 nations contributed. In the end they collected 97% of the specified measurements at the required accuracy. The result was the first complete map of the ocean circulation. The massive data set provides the essential “truth” needed to judge the performance of the ocean part of climate models.

Funding

The IPCC played a crucial role in securing the billions of dollars needed for WOCE. IPCC reports endorsed the WCRP plan for WOCE, pointing out that lack of reliable data about ocean circulation would be a critical constraint in climate prediction. Remember that a model of the atmosphere can only make useful predictions about one week ahead. So climate prediction for a hundred years ahead depends on the much slower evolution of ocean currents. It takes about one thousand years for chemical tracers such as bomb tritium to be circulated all round the world ocean. So if climate can be predicted for decades ahead, that will depend on being able to predict how ocean currents change, and how they transport heat and chemicals. To be honest, we do not yet know whether the ocean is predictable in this way; our models are not yet good enough to answer that crucial question. The IPCC has been very careful to make that point in its reports. But predicting the ocean is the only hope for predicting the climate, so it deserves top priority for research funding. That convinced governments and secured the funding for WOCE.

Conclusion

I have tried to give you something of the flavour of the IPCC: the challenge, methods, results and consequences.

After twenty years and four Assessments the IPCC has settled down to a routine. The philosopher of science, Thomas Kuhn, would have said that the 1989 revolution has led to normal science in which much good work is being done, but the excitement has drained away. There is now an IPCC industry, underpinned by vigorous scientific research in all disciplines.

The first assessment was unprecedented. It provided an adrenaline rush. No organisation had previously tried to build a worldwide consensus on a major issue in a way that would be accepted by governments everywhere as the definitive statement of the issue. In 1989 there was no recipe for making a global assessment. In Donald Rumsfeld’s terminology, we were entering a business where there was a high likelihood of unknown unknowns. Thanks to the genius of Bert Bolin, John Houghton and the other panel chairmen, the process worked. A consensus was achieved, and the few maverick voices were unable to shake the confidence of governments in the result.

It was not the task of the IPCC to perform scientific research, merely to report what had been done – the known knowns. But the report went much further than that. It identified what needed to be done to improve matters, to reduce the known unknowns. And by scanning the literature widely, it picked up early warnings of topics being researched in academe that had not previously been on the radar of climate prediction – the unknown unknowns. I mentioned one example: the positive biological feedback due to the sensitivity of ocean turbulence to greenhouse radiation.

Rutherford said that science was about converting mysteries into commonplaces. The IPCC has tracked that evolution as the Earth System community has grown and WCRP projects have been completed. There has been a Rumsfeld drift: unknown unknowns have become known unknowns, and known unknowns have become known knowns. The role of the IPCC is to pinpoint the boundaries between those three states, to show which aspects of the Earth System lie in each, and to explain what that means for climate prediction.

This involves a massive task of discriminating between signal and noise. Every scientist sees his or her own research as signal. The IPCC has to classify most of it as noise for the purpose of constructing advice to governments. The WCRP had developed a litmus test for that task: only those Earth system processes that produce an impact of 10 W/m2 were classified as signal; the rest were noise. I cannot tell you how angry that made many very good scientists – some famous academicians – who had devoted their careers to studying processes that were deemed to lie below the salt. I still bear the scars from those debates. The IPCC had to take on the thick skin of the WCRP pachyderms.

The IPCC benefits greatly from the work of the WCRP, which concentrated on the known unknowns, such as ocean circulation. The IPCC reports show how ignorance about these topics is holding back climate prediction. The reports raise expectations that global projects promoted by the WCRP could reduce the uncertainties about climate change in the 21st century. That provided clear priorities for funding and for the recruitment of gifted young scientists, and it greatly improved the willingness of governments to fund the WCRP global experiments. I have shown how that worked out in the case of WOCE, which cost at least two billion dollars.

Where do we stand today, following four IPCC assessments over twenty years? Focusing scientific research on priority issues for climate prediction has led to rapid progress, so successive Assessments at five-yearly intervals have had much to report that is new. There is now little doubt that the atmosphere and ocean will warm substantially during the twenty-first century, and that significant and rapid warming will occur even if the emissions of greenhouse gases are substantially reduced.

The continuing rise in computer power has permitted climate models that contain more of the Earth System processes. And some of the unknown unknowns have been brought in from the cold. I mentioned plankton. Another example is sulphur, which has been shown to cool the atmosphere significantly. But the models have not yet delivered reliable predictions about regional changes in climate of the kind that governments need: especially the global pattern of drought and flood, and the frequency of extreme events like hurricanes and storm surges. Predictions of local climate change remain problematic.

But the IPCC process has proved robust and effective. It has proved capable of weaving a clear message out of an immensely complex subject of Earth system science, which extends across many scientific disciplines. The message meets the needs of both governments and scientists. I believe that the IPCC was fully worthy of the 2007 Nobel Prize for Peace.

My own involvement in science-government interaction in climate change

1976-79 Global Atmospheric Research Programme

1979-84 Climate Change and the Ocean

1980-86 World Climate Research Programme

1984-86 World Ocean Circulation Experiment

1987-90 International Geosphere-Biosphere Programme

1989 Planet Earth meeting in Paris (I led on oceanography)

1989 Intergovernmental Panel for Climate Change

1990 Second World Climate Conference

1990-95 ICSU Advisory Committee on the Environment

1992-95 Global Ocean Observing System

1992-94 OECD Megascience Forum (Coordinator oceanography)

1994-97 World Bank Global Environment Facility (Chair, International Waters)

[1] Department Seminar, 29 January 2008
[2] Donald Rumsfeld, 12 February 2002, Department of Defense news briefing
[3] Committee on Climate Change and the Ocean (CCCO)

