- Sequestration of CO2 from the atmosphere can be modelled using a single exponential decay constant of 2.5% per annum. There is no modelling need to introduce a multi-time-constant model such as the Bern model.
- The Bern model, favoured by the IPCC, uses four different time constants; combining these produces a decay curve that is not exponential but that also matches the atmosphere to emissions.
- The fact that both single-exponential-decline and multi-time-constant models of emissions can be made to fit the atmospheric evolution of CO2 means that this approach does not provide proof of process. Either or neither of these models may be correct. But combined, both models do provide clues as to the rate of the CO2 sequestration processes.
A couple of weeks ago Roger Andrews had a post called The residence time of CO2 in the atmosphere is …. 33 years? that stimulated a lot of high-quality debate, and for me a lot of new information came to light. This is the first part of an X-part series of posts summarising what we know, aiming to zero in on “the truth” about CO2 sequestration rates from the atmosphere.
In this post (Part 1) I look at a single-time-constant exponential decline model and compare it with Roger’s model and with the Bern model favoured by the IPCC and the climate science community. The Bern model uses four different time constants, ranging from fast to slow to infinite, and this post illustrates how this works. I am not a mathematician and prefer visual illustration of mathematical equations.
This post has been significantly delayed since I could not get my Excel model to produce the same results as Roger’s. Our models now broadly agree, though they may still produce slightly different results.
In Part 2 I will discuss the atomic bomb 14C data and the proportional reservoir growth model put forward by Phil Chapman.
Why half-life is important
We hear all the time from the climate science community that even if we stop burning fossil fuels (FF) today, the CO2 we have already produced will still be in the atmosphere for centuries to come. We have already lit a long, slow-burning fuse that will lead us to climate Armageddon. How much of this is actually true?
What we know reasonably well is that in 1965 the mass of CO2 in the atmosphere was roughly 2400 Gt (billion tonnes) and today (2010) it is roughly 2924 Gt. That is an increase of 524 Gt. We also know that we have added roughly 1126 Gt of CO2 to the atmosphere through burning FF and deforestation (emissions model from Roger Andrews). So while Man’s activities may have led to a rise in CO2, the rise is only 46% of that expected from our emissions. Earth systems have already removed at least 54%. How is this reconciled with the warnings of climatic meltdown?
To understand this requires understanding of the very complex carbon cycle, but in short, some of our emissions have been dissolved in ocean water and some have been taken up by enhanced forest and plant growth. Both of these enhanced uptakes are brought about by the increased partial pressure of CO2 in the atmosphere.
Understanding exponential decline and half-life
In the context of atmospheric CO2, imagine a slug of CO2 added to the atmosphere, like manmade FF emissions, and how it may decline via sequestration into the oceans and trees. If 5% of the initial slug is absorbed in the first year, 5% of the remaining 95% the following year, and so on, then the decline curve would be like that shown in Figure 1. The half-life is the time it takes for 50% of the initial slug to be removed. In the case of 5% per annum decline it turns out that the half-life is about 13 years (t1 in Figure 1). After another 13 years (t2) another 50% of what was there after t1 is removed, and so on. As a rule of thumb, after 5 half-lives have passed there is hardly anything left of the original slug.
Figure 1 This chart illustrates how a pulse of 13.3 billion tonnes (Gt) of CO2 injected into the atmosphere in 1965 would decay if 5% of the remaining CO2 is removed each successive year. After 13 years (t1) 50% of the pulse has been sequestered. 50% of the remainder is sequestered in the following 13 years (t2) and so on.
The residence time is defined as follows:
Residence time = half-life / 0.693 (0.693 being the natural log of 2). For example, a 13-year half-life corresponds to a residence time of about 18.8 years.
My Excel spreadsheet model has the exponential decline rate as the main input variable, where:
P2 = P1*r
P1 = initial amount
P2 = amount remaining after 1 year
r = the annual retention factor. For example, for an annual decline of 5%, r = 0.95
My spreadsheet gives essentially the same result as the general continuous decline formula:

P(t) = P0*e^(-rt)
P0 = initial amount
P(t) = the amount remaining after time t
t = time in years
r = the continuous decay rate (for small rates, approximately equal to the annual decline; strictly r = -ln(0.95) ≈ 0.051 for a 5% annual decline)
It also allows me to estimate half-life from the output. Therefore, in this discussion I will stick to using decline rate and half-life as illustrated in Figure 1 and, where possible, avoid using the more abstract residence time term.
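For readers who prefer code to spreadsheets, here is a minimal Python sketch of the same recursion (a stand-in for the Excel model, not the model itself):

```python
import math

# A single pulse declining by a fixed fraction of the remainder each year,
# i.e. the P2 = P1*r recursion above applied repeatedly.
def decay_series(pulse_gt, annual_decline, years):
    """Amount of the pulse remaining at the end of each year."""
    r = 1.0 - annual_decline                    # retention factor, e.g. 0.95
    return [pulse_gt * r ** t for t in range(years + 1)]

series = decay_series(13.3, 0.05, 70)           # the 13.3 Gt pulse of Figure 1
half_life = math.log(0.5) / math.log(0.95)      # years until 50% remains
print(f"half-life ~ {half_life:.1f} years")     # ~13.5 years, as in Figure 1
```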
Single time constant, multi-pulse model for the atmosphere
This may sound complicated but I hope to make it simple to understand. Roger already laid the groundwork with a chart that shows the same as Figure 2. In Figure 2, the single pulse declining at 5% per annum (Figure 1) is the layer labelled as (1) in 1965 (Figure 2). The next year there is a new pulse (2) that declines at the same rate, the next year another pulse (3), and so on. The size of each annual pulse equals emissions for that year. So we have multiple pulses, but they all decline at the same rate of 5% per annum. After 13 years, half of pulse one is gone, and so forth. In the 16 years shown in Figure 2 a total of 289 Gt of CO2 is added to the atmosphere, but sequestration has removed 79 Gt, meaning that only 210 Gt remain, which is the height of the 1980 column.
Figure 2 In his earlier post, Roger produced a chart near-identical to this, and one reason for reproducing it here is to show that we are both singing from the same spreadsheet. The pulse shown in Figure 1 is that labelled as number (1) on the chart. The next year there is a new pulse, scaled to the emissions model, that also decays at 5%, and so forth. Because of sequestration into the oceans and biosphere, the amount of CO2 left in the atmosphere is always much lower than the amount we have added.
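In code, the multi-pulse bookkeeping is a one-line recursion: decay everything already in the atmosphere, then add the new pulse. The emissions series below is a hypothetical linear ramp standing in for the real emissions model, chosen only to illustrate the mechanics:

```python
# Multi-pulse, single-time-constant model: each year's emissions form a new
# pulse, and all pulses decay at the same fixed annual rate.
def atmosphere_from_pulses(annual_emissions_gt, annual_decline):
    r = 1.0 - annual_decline
    stock, remaining = 0.0, []
    for pulse in annual_emissions_gt:
        stock = stock * r + pulse    # decay the existing stock, add the new pulse
        remaining.append(stock)
    return remaining

# Hypothetical ramp from 13.3 Gt/yr in 1965 over 16 years (1965-1980):
emissions = [13.3 + 0.64 * i for i in range(16)]
print(round(sum(emissions)))                               # ~290 Gt added
print(round(atmosphere_from_pulses(emissions, 0.05)[-1]))  # ~210 Gt remaining
```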
We can now expand this model to a full time series, 1965 to 2010 and adjust the exponential decline rate that the model uses to produce a best fit between the model and the observed evolution of CO2 in the atmosphere (Figure 3). The atmosphere model is based on 750 Gt C in the atmosphere in 1998 (IPCC Grid Arendal) when the atmosphere had 367 ppm CO2. The C content of the atmosphere is then projected backwards and forwards from that date in proportion to annual CO2 concentrations from Mauna Loa. The data are then converted to Gt CO2 by multiplying by 44/12 (the molecular weight of CO2 / atomic weight of carbon).
Figure 3 The model is now expanded to include all years from 1965 to 2010. The black line (right-hand scale) is the atmosphere based on observed CO2 at Mauna Loa. The decline rate was adjusted to give this “best fit”. Notably, 1126 Gt CO2 has been added but only 516 Gt remains, which fits the overall observation that sequestration has removed ~54% of emissions. In detail, the fit of the emissions to the atmosphere is not as good as that achieved by Roger.
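As a sanity check, the atmosphere model described above reduces to a one-line conversion from concentration to mass (the ppm values below are approximate Mauna Loa readings):

```python
# Scale the 750 Gt C at 367 ppm anchor point (1998) by concentration,
# then convert carbon to CO2 mass via the 44/12 ratio.
def atmosphere_gt_co2(ppm):
    return 750.0 * (ppm / 367.0) * (44.0 / 12.0)

print(round(atmosphere_gt_co2(320)))   # ~2398 Gt: the ~2400 Gt 1965 figure
print(round(atmosphere_gt_co2(390)))   # ~2922 Gt: close to the 2924 Gt 2010 figure
```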
It was at this point that I encountered the first problem in trying to reconcile my model with Roger’s. The fit of additions to the atmosphere is nothing like as good as that achieved by Roger, and the half-life yields a residence time of 18.8 years, somewhat different from Roger’s 33 years. The problem with the model shown in Figure 3 is that it is built on a flat baseline that does not account for the decline (sequestration) of pre-1965 emissions.
Building decline into the pre-1965 emissions produces an excellent fit between emissions and the actual evolution of the atmosphere (black line), with a half-life of 27 years, equivalent to a residence time of 39 years. Closer to, but not exactly the same as, Roger’s result (Figure 4).
Figure 4 A single-time-constant, 2.5% per annum exponential decline model gives an excellent fit between emissions (LH scale, coloured bands) and the actual atmosphere (RH scale, black line). This confirms Roger Andrews’ assertion from a couple of weeks ago that it is possible to model sequestration of CO2 from the atmosphere using a single decline constant. The blue wedge at the bottom is the pre-1965 emissions stack, which is also declined at 2.5% per annum. The half-life of ~27 years is equivalent to a residence time for CO2 of 39 years.
A major conclusion of this post, therefore, is that emissions can be fitted to the observed evolution of the atmosphere using a single-time-constant model with a 2.5% per annum decline rate. Credit for this really has to go to Roger Andrews, if no one else has achieved it before. To achieve this fit, it is essential to have a model in which the longer-term emissions also decline. In my model, the emissions are initiated in 1910.
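Reusing atmosphere_from_pulses from the sketch above, the full-fit configuration looks like this (again with a hypothetical emissions ramp in place of the real 1910 to 2010 series):

```python
import math

# Emissions initiated in 1910, all pulses declining at 2.5% per annum:
emissions_1910_2010 = [2.0 * 1.03 ** i for i in range(101)]   # hypothetical ramp
stock = atmosphere_from_pulses(emissions_1910_2010, 0.025)
print(round(stock[55]), round(stock[100]))    # toy 1965 and 2010 stocks

half_life = math.log(0.5) / math.log(0.975)
print(f"half-life ~ {half_life:.1f} years")   # ~27.4 years, i.e. ~39 years residence
```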
At this point it is important to stress that matching emissions to observations assumes that all of the rise in atmospheric CO2 comes from emissions. As we shall see in Part 2, the atomic bomb 14C data suggest a much more rapid decline of 7% per year, which yields a half-life of ~10 years and creates the need for some of the increase in CO2 to come from other sources. I hope to show why the bomb data give a false picture.
This leads into consideration of the Bern model, which has multiple time constants. If it is possible to get a good model fit using a single time constant, why use four? Doing so has led to much debate on sceptic blogs, since it is difficult to conceptualise why different processes should discriminate between different parts of the overall CO2 budget. For example, Willis Eschenbach writing on WUWT:
So my question is, how do the sinks know the difference? Why don’t the fast-acting sinks just soak up the excess CO2, leaving nothing for the long-term, slow-acting sinks? I mean, if some 13% of the CO2 excess is supposed to hang around in the atmosphere for 371.3 years … how do the fast-acting sinks know to not just absorb it before the slow sinks get to it?

The Bern Model
I have not found it easy to find information on the Bern model simply through Google. And it is worth declaring that until a few weeks ago I had barely heard of it. I have this from Clive Best via email:
AR4 page 213 of WG1 defines the Bern model as

a0 + Σ(i=1..3) ai*exp(-t/τi), where a0 = 0.217, a1 = 0.259, a2 = 0.338, a3 = 0.186, τ1 = 172.9 years, τ2 = 18.51 years and τ3 = 1.186 years.
The term a0 is the fraction which remains forever in the atmosphere (τ = infinity), roughly 22%.
Of course it doesn’t stay in the atmosphere forever. It is eventually removed through rock weathering and the build-up of sediments on the ocean floor (τ > 1000 years).

This translates into the following:
| Time constant (τ) | % of annual pulse removed at that rate |
|---|---|
| 1.2 y | 18% |
| 18.5 y | 34% |
| 173 y | 26% |
| ∞ | 22% |
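The decay function quoted above is straightforward to code directly; here is a minimal sketch using the AR4 coefficients:

```python
import math

# Bern (AR4) impulse response: a0 is the fraction never removed on these
# time scales (tau = infinity); the other three terms decay exponentially.
A0 = 0.217
A = [0.259, 0.338, 0.186]
TAU = [172.9, 18.51, 1.186]   # years

def bern_fraction(t):
    """Fraction of a CO2 pulse still airborne t years after emission."""
    return A0 + sum(a * math.exp(-t / tau) for a, tau in zip(A, TAU))

for t in (0, 1, 20, 100, 500):
    print(t, round(bern_fraction(t), 3))   # 1.0 at t=0, tending towards 0.217
```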
The Bern model may therefore be described as a multi-pulse, multi-time-constant model. In real terms it says that certain processes sequester CO2 emissions very quickly (for example, solution into ocean water), some act more slowly (for example, removal by tree growth and soils) and some act very slowly (for example, removal of surface-water CO2 into the deep oceans).
Figures 5, 6, 7 and 8 show what these different time slices look like, each weighted according to the percentage of emissions it applies to. Note the variable Y-axis scales; the very fast slice accounts for virtually none of the accumulated CO2 growth. These models are built on a flat baseline and so are for illustrative purposes only.
Figure 5 The super-fast time constant removes most of the annual CO2 additions within 5 years. This is represented by the thin yellow band in Figure 9.
Figure 6 The second time constant of 18.5 years, applied to 34% of emissions, is the only one that resembles the single-time-constant, single-pulse model (Figure 3). Note that this slice is slightly convex-up while the next two charts are concave-up; combining the two provides a way of producing the observed linear increase in CO2.
Figure 7 With the spreadsheet model I’m using it is difficult to model the τ173-year slice, so I set the decline to 0.1% per annum. Over the 45-year time scale from 1965 to 2010 this makes little to no difference. While Figure 5 shows little carry-over from one year to the next, the τ173 slice shows virtually 100% carry-over. The concave-up style of this slice cancels the convex-up style of the τ18.5-year slice.
Figure 8 The τ∞ slice has its decline set to 0 and is virtually the same as the τ173 model shown in Figure 7.
Adding the four time-constant slices together produces the picture shown in Figure 9.
Figure 9 Combining the slices shown in Figures 5, 6, 7 and 8 produces the picture shown here. It is important to understand that this is a picture of what remains, not what went in. By tweaking the input variables of the Bern model or the atmosphere model it should be quite straightforward to produce a better fit. The pre-1965 emissions (underlying decline) are modelled as a single exponential, which is an approximation, since Bern is not an exponential decline model. It would be a lot of work to adapt my spreadsheet to handle the underlying decline in the proper way. Doing so would likely improve the fit. The purpose here is to illustrate how the model works.
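For completeness, the stacking in Figure 9 can be reproduced in code by reusing bern_fraction from the sketch above: each annual pulse is aged with the Bern response according to how long ago it was emitted (flat baseline, as in the figures).

```python
# Sum every annual pulse, weighted by the Bern fraction for its age.
def bern_atmosphere(annual_emissions_gt):
    """CO2 from emissions still airborne at the end of each year."""
    return [sum(pulse * bern_fraction(year - i)
                for i, pulse in enumerate(annual_emissions_gt[:year + 1]))
            for year in range(len(annual_emissions_gt))]

print(bern_atmosphere([13.3, 13.9, 14.5]))   # three hypothetical pulses
```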
So where does this leave our understanding? Since CO2 emissions can be matched to the evolution of the atmosphere using different modelling approaches, it is clear that matching model to observations provides no proof of the underlying process. In the Bern model, 48% of emissions (the 26% τ173 slice plus the 22% τ∞ slice) remain in the atmosphere for a long time. I do not believe that the model itself provides any evidence that this is the case.
In this comment to Roger’s post, Phil Chapman presented an idea of redistribution of a slug of CO2 between the fast reservoirs. The atmosphere began with 21% of the fast-reservoir CO2, and Phil argued that following multiple slugs, once equilibrium was reached, the atmosphere would end up with 21% of the increased amount of CO2 circulating between the fast reservoirs (assuming linear processes). In other words, 21% of emissions will remain in the atmosphere until the slow fluxes have time to remove them. This idea rhymes with the Bern model, and I am currently thinking along the lines of a model in which a pulse declines not to zero but to 21% of its original size, leaving the atmosphere 21% of cumulative emissions above the original baseline.
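A single-pulse version of that idea (my sketch, not necessarily Phil’s formulation) would simply decay towards a 21% floor rather than towards zero:

P(t) = P0*(0.21 + 0.79*e^(-rt))

so the fast processes remove 79% of the pulse at rate r while 21% remains until the slow fluxes act. Note the family resemblance to the Bern formula above, with 0.21 playing the role of a0.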
What about Willis Eschenbach’s enigma? At the moment I think it may be helpful to approach this from a different angle. We seem to know that different processes remove CO2 at different rates, and they are all acting simultaneously. Hence, maybe some of the slow processes get to CO2 emissions before the faster processes grab them? But it is possible that sequestration is dominated by fast processes, and simply that at equilibrium 21% of emissions may remain.
In Part 2 I hope to illustrate, using simple models, why the bomb 14C data cannot be used to model CO2 sequestration rates. I currently believe that, for the same reasons, natural variations in δ13C are unlikely to be useful tracers either. I will also take a closer look at Phil Chapman’s idea (I do not know if it is originally Phil’s) and see how it may be incorporated into a refined model.