Understandable Earth Science

A nice explanation of why there is no evidence for a global warming pause

Open Mind

UPDATE: A new post at RealClimate is very relevant, and well worth the read.


One day, a new data set is released. The rumor runs rampant that it’s annual average global temperature since 1980.

[Figure: the rumoured new data set – annual average global temperature since 1980, with a red trend line]

Climate scientist “A” states that there is clearly a warming trend (shown by the red line), at an average rate of about 0.0139 deg.C/yr. She even computes the uncertainty in that trend estimate (using fancy statistics), and uses that to compute what’s called a “95% confidence interval” for the trend — the range within which the true warming rate is 95% likely to lie; it can be thought of as the “plausible range” for the warming rate. Since 95% confidence is the de facto standard in statistics (not universal, but by far the most common), nobody can fault her for that choice. The confidence interval is from 0.0098 to 0.0159 deg.C/yr. She also…

View original post 1,029 more words
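To make the trend-and-confidence-interval idea from that post concrete, here is a minimal sketch in Python. The data are made up for illustration, and a naive ordinary-least-squares interval like this ignores the autocorrelation in real temperature series (exactly the sort of thing the post’s “fancy statistics” deal with), so treat it as a toy:

```python
import numpy as np
from scipy.stats import linregress, t

# Made-up annual global temperature anomalies (deg C), for illustration only
years = np.arange(1980, 2015)
rng = np.random.default_rng(42)
temps = 0.014 * (years - 1980) + rng.normal(0.0, 0.1, years.size)

res = linregress(years, temps)           # ordinary least-squares fit
tcrit = t.ppf(0.975, df=years.size - 2)  # two-sided 95% critical value
lo = res.slope - tcrit * res.stderr
hi = res.slope + tcrit * res.stderr

print(f"trend = {res.slope:.4f} deg C/yr, 95% CI = [{lo:.4f}, {hi:.4f}]")
```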

The IPCC 2014 synthesis report was approved on 1st November 2014.

The draft report (i.e. still needing copy-editing and formatting) is available here, if you want to read it. It is 116 pages long. The related press release is available here.

I thought it might be useful to summarise some of the more important points of the report for anybody who is interested but doesn’t have the time or inclination to wade through the entire document.

This first blog post explains the first diagram in the report. It is in the section “Topic 1: Observed changes and their causes” and it shows the real, measured changes that have been taking place on Earth due to global warming. The report states “Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, and sea level has risen.”

Figure 1.1 of the report presents 5 different sets of information – I have split the diagram up into individual parts and will explain each part separately. (Apologies for the poor quality of the diagrams – I used print screen to take them from the pdf of the report – hope the IPCC don’t mind me doing that – all images by IPCC.)

Panel (a): Global temperature change

Let’s start with panel (a). This shows the average temperature over time between the years 1850 and 2012. The temperature is plotted as the difference in temperature compared to the average (mean) of the years 1986–2005. So, looking at the diagram, between 1850 and 1900 the average temperature was ~0.6 °C colder than it was between 1986 and 2005, and nowadays it is about 0.2 °C warmer than the 1986–2005 average. The different coloured lines (black, blue and orange) correspond to different data sets, and in the bottom panel the grey boxes are an estimate of the uncertainty on the mean for one of the data sets. If you want to check out the data sources and how they are measured, the black line is data from the Met Office Hadley Centre and Climatic Research Unit, the blue line is from the NOAA (US National Oceanic and Atmospheric Administration) National Climatic Data Center, and the orange line is from the NASA Goddard Institute for Space Studies. Importantly, all 3 data sets show good agreement. Annoyingly, the current report doesn’t actually list the data sets, and the links it provides in the figure caption are broken, but the same graph is shown on the Met Office website, which lists the data sources.

The top panel shows the temperatures averaged over a single year, and you can see that yes, there is some variation – some years are colder than others. But then look at the bottom panel – this is the temperatures averaged over a decade. This is the more important diagram when you are looking at long-term trends, because averaging over decades smooths out any variability due to short-lived processes, like El Niño / La Niña events, cooling from volcanic eruptions, and temporarily reduced emissions during recessions. This bottom graph clearly shows that warming started in the early 1900s, paused during the mid 20th century, and then dramatically increased during the later part of the 20th century and into the 21st century.
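If you want to see why decadal averaging works so well, here is a minimal sketch in Python with made-up data – a small trend buried in large year-to-year noise (the numbers are mine, not the IPCC’s):

```python
import numpy as np

rng = np.random.default_rng(0)
years = np.arange(1850, 2010)  # 160 years, i.e. 16 whole decades
# Made-up series: a slow warming trend plus large year-to-year noise
annual = 0.008 * (years - 1850) + rng.normal(0.0, 0.15, years.size)

# Averaging each decade shrinks the noise roughly sqrt(10)-fold,
# so the underlying trend stands out clearly
decadal_means = annual.reshape(-1, 10).mean(axis=1)

for start, mean in zip(years[::10], decadal_means):
    print(f"{start}s: {mean:+.2f} deg C")
```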

Now, also consider that this data only goes up to 2012. Data from the UK Meteorological Office show that the average global temperature for last year, 2013, was 0.05 °C warmer than it was in 2012, and 2014 is already recording temperatures well above average (see here, here and here).

Summary: Average global temperatures are rising over time. Global warming is happening – absolutely no doubt about it!

Moving on to panel (b):

Global temperature change

This map shows which parts of the Earth are warming and which are cooling. You can see that the data set isn’t complete – we are missing much of the Arctic and Antarctic, large parts of the Pacific Ocean, and some central continental areas. These areas were excluded because the data record was less than 70% complete and/or the first and last 10% of the time period had less than 20% data availability – i.e. the map is based on data that is robust and unlikely to be influenced by random outliers at the beginning and end of the time period. The little “+” symbols indicate grid squares where the data shows a statistically significant warming trend. Coloured squares without a “+” are not as statistically robust. Even without a complete global map of data, we can see that most places are warming (yellow, orange, red, purple). Continental areas have warmed by as much as 2.5 °C over the 111 years between 1901 and 2012. The only place that seems to have cooled (pale blue) at all is a small patch of the north Atlantic (which I would hazard a guess is related to changes in thermohaline circulation of the Gulf Stream). The data on this map is surface temperature and it is derived from the orange data set (NASA) in panel (a).
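Those inclusion criteria are easy to express in code. Here is a minimal sketch (my own, not from the report) of the check applied to one grid square, where grid_series is a hypothetical array of yearly values with NaN marking missing years:

```python
import numpy as np

def passes_completeness(grid_series: np.ndarray) -> bool:
    """Apply the inclusion criteria described above to one grid square:
    >70% of the record present overall, and >20% present in both the
    first and last 10% of the time period."""
    n = grid_series.size
    present = ~np.isnan(grid_series)
    tail = max(1, n // 10)  # first/last 10% of the record
    return (
        present.mean() > 0.70
        and present[:tail].mean() > 0.20
        and present[-tail:].mean() > 0.20
    )

# Example: a 112-year record (1901-2012) with a sparse early period
series = np.full(112, np.nan)
series[40:] = 1.0  # data only from 1941 onwards
print(passes_completeness(series))  # False: too sparse, especially early on
```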

Summary: The temperature rise over the last 111 years was not evenly distributed across the Earth. The vast majority of the Earth’s surface has seen a rise in temperatures.

Panel (c):

Change in Sea Ice

This shows the extent of summer Arctic sea ice since 1900 (averaged for each year over 3 months) and the extent of summer Antarctic sea ice since the late 1970s (averaged for each year over 1 month). The areal extent of summer Arctic sea ice has almost halved since the 1950s and is still reducing. At the moment the Antarctic summer sea ice looks like it hasn’t changed much, but we don’t know how extensive the ice was earlier in the 20th century. Again, the different coloured lines represent different data sets, and they show good agreement (at least in pattern, if not in absolute value for Antarctica). Unfortunately the links in the current draft of the IPCC report don’t work, so I can’t tell you where the data comes from at the moment.

Summary: The areal extent of summer sea ice in the Arctic has almost halved since the 1950s.

Panel (d):

Global Sea Level Rise

This shows the change in global mean sea level between 1900 and 2010. Like panel (a), it is plotted as the difference in sea level relative to the average sea level between 1986 and 2005. So, looking at the graph, back in 1900 sea level was about 0.15 m (15 cm) lower than the average for 1986–2005, and in 2010 it was about 0.05 m (5 cm) higher than the average for 1986–2005. The 1986–2005 mean value is based on data from the longest running (i.e. most complete) data set.

Summary: Global mean sea level has risen around 20 cm since the year 1900.

Panel (e):

Change in precipitation

This map shows how on-land precipitation (rain and snow fall) has changed over time since 1951. The data used in this diagram were assessed and included using the same criteria as in panel (b) – i.e. data available for > 70% of the time period and with > 20% of the data available in the first and last 10% of the time period. The scale on this map is a bit confusing at first glance – it is precipitation in millimetres per year per decade. This isn’t an absolute change in value, like for the map in panel (b); it is a rate of change – a trend in annual precipitation. In simple terms, the darkest blue shade represents areas where, every decade, the amount of precipitation has increased by 50–100 mm per year – i.e. the average precipitation per year between 1961 and 1971 was between 50 and 100 mm higher than the average precipitation per year between 1951 and 1961, and the average precipitation per year between 1971 and 1981 was between 100 and 200 mm higher than 1951–1961. Looking at the map, large parts of the African and Asian continents are drying out and seeing much less precipitation (incidentally, note the area of North Africa that hasn’t seen much change in precipitation levels – remember that this is the Sahara Desert – it can’t dry out much more…), while Northern Europe, much of the mainland USA, the east coast of South America and northern Australia are all seeing increased precipitation.
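To see how those units behave, here is a tiny worked sketch in Python (the trend value, and the assumption that the map spans 1951–2010, are mine, purely for illustration):

```python
# Illustrative trend read off the map (darkest blue shade)
trend = 75.0  # extra mm of precipitation per year, gained each decade

decades = (2010 - 1951) / 10  # ~5.9 decades, assuming the map spans 1951-2010

# Cumulative change in annual precipitation over the whole mapped period
total_change = trend * decades
print(f"Annual precipitation ends ~{total_change:.0f} mm/yr above its 1951 level")
```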

Summary: Patterns of rain and snowfall are changing, with some areas receiving much more precipitation and some receiving less.

So, that is the end of Figure 1.1 of the IPCC 2014 synthesis report.

A nice summary highlighting some of the problems caused by ignorance about climate science.

Critical Angle

Geologists, especially those, like me, of a certain age, often have problems with climate science and the idea that humans may be triggering a massive and abrupt change in the climate. Global change, we were taught, occurred slowly and by commonplace mechanisms: sediment carried by water, deposited a grain at a time: erosion effected by water and wind, the hardest rocks slowly ground down crystal by crystal. The great features of the Earth—the canyons, mountains and basins—were built this way and owe their grandeur to Deep Time, geology’s greatest intellectual gift to human culture. In the face of the history of the natural world, geologists feel a certain humility at the insignificance of humans and our tiny lifespans. But we also feel some pride in the role of our subject in piecing together this history from fossils and outcrops of rock. It’s an amazing detective story: diligent scientists patiently working…

View original post 1,437 more words

Interesting article regarding a major challenge for the renewables industry – when storing energy ultimately costs more energy.

Brave New Climate

Pick up a research paper on battery technology, fuel cells, energy storage technologies or any of the advanced materials science used in these fields, and you will likely find somewhere in the introductory paragraphs a throwaway line about its application to the storage of renewable energy.  Energy storage makes sense for enabling a transition away from fossil fuels to more intermittent sources like wind and solar, and the storage problem presents a meaningful challenge for chemists and materials scientists… Or does it?


Guest Post by John Morgan. John is Chief Scientist at a Sydney startup developing smart grid and grid scale energy storage technologies. He is Adjunct Professor in the School of Electrical and Computer Engineering at RMIT, holds a PhD in Physical Chemistry, and is an experienced industrial R&D leader. You can follow John on twitter at @JohnDPMorgan. First published in Chemistry in Australia.

Several recent analyses of the…

View original post 1,723 more words

I enjoyed reading this account of an astrobiology field trip to some of Iceland’s least accessible areas. I’m also very jealous 🙂

dr. claire cousins

Landing into Keflavik airport has become a familiar sight, as the vast flat expanse of moss-covered lava flows stretches out into a horizon of cold grey clouds as we approach the runway. This is my ninth trip to Iceland, and for this trip we are sampling from various sites in the northeast – some old, some new. Our key target as always is Kverkfjoll – a dormant volcanic caldera that peeks out from the northern margin of Vatnajokull ice cap. Iceland is often referred to as ‘the land of ice and fire’, and the geothermal environments at the summit of Kverkfjoll epitomise this name. Here, scattered clusters of small fumaroles – which vent hot volcanic gas – interact with overlying ice and snow to produce localised and short-lived pools of geothermal meltwater, which provide a haven for microbes within an otherwise remote and frozen environment. These environments provide a fascinating analogue to hydrothermal environments…

View original post 1,620 more words

Is it too late for us to maintain our comfortable western lifestyles AND avert catastrophic climate change?

Despite all of the lobbying from climate sceptics, the vast majority of scientists have, for years, agreed that the release of CO₂ into the atmosphere by humans (both from burning fossil fuels and from cutting down forests for farmland) is the main cause of global warming, and that YES, planet Earth is warming much faster than it would without all that extra CO₂ in the atmosphere.

For years there has been a general agreement in the scientific community that we need to keep CO₂ concentrations in the atmosphere at less than 500 ppm (parts per million – that is the equivalent of 0.05%) if we want to avoid raising the average surface temperature of the earth by 2 °C, which is the maximum temperature rise the Earth can tolerate without irreversible, catastrophic (for our way of life and economy) climate change[1].

Back in 2004, when atmospheric CO₂ levels were ~375 ppm and global CO₂ emissions were ~7 billion tonnes of carbon per year, a paper published in Science by Pacala and Socolow[2] looked at our global CO₂ emissions, the rate at which they had been increasing, and how this affects the concentration of CO₂ in the atmosphere. They calculated that, if we were going to keep the atmospheric concentration of CO₂ below 500 ppm over the next 50 years, annual carbon emissions could not increase above 7 billion tonnes per year – i.e. they had to stay the same as in 2004. They also calculated what our carbon emissions would be in 50 years’ time if emissions grew “business as usual” – i.e. if we continued generating more power and emitting more CO₂ without making any effort to reduce emissions. They calculated that, by 2054, our carbon emissions would double to around 14 billion tonnes per year. That meant we needed to find a way to cut projected carbon emissions by 7 billion tonnes per year over the next 50 years, and we needed to start doing it quickly.

[Figure: Wedges1 – stabilised emissions (blue line) vs business-as-usual emissions (red line), after Pacala and Socolow, 2004]

This graph (after Pacala and Socolow, 2004) compares the maximum amount of CO₂ we can emit per year whilst avoiding raising the atmospheric CO₂ concentration above 500 ppm (blue line) and an estimate of how much CO₂ we will be emitting if we carry on as normal (red line). In short, CO₂ emissions needed to stay the same as they were in 2004 to avoid major climate change. Unfortunately, as the world population increases, and developing countries gain a better quality of life, more energy is consumed and more CO₂ is produced.

But there was hope! Pacala and Socolow pointed out that, even back in 2004, there were plenty of options for reducing our CO₂ emissions – they just needed scaling up. They identified 15 different ways that we could start to reduce our CO₂ emissions by increasing energy efficiency, decreasing energy use, changing land-use, switching to lower carbon power generation and capturing and storing CO₂ from power plants.

To make the task even less daunting, Pacala and Socolow came up with the concept of “stabilisation wedges” (if you look at the space between the red and blue lines on the above diagram, it is kind of wedge-shaped). This means that all we needed to do in the short term was to make small changes to reduce our CO₂ emissions, and gradually scale them up to make bigger changes in the long term. To make things even easier, this wedge was split up into 7 separate wedges that were each the equivalent of saving 1 billion tonnes of carbon per year after 50 years. All we needed to do was pick 7 existing CO₂ reduction methods from the list and scale them up so that in 50 years’ time, each of them would be reducing our carbon emissions by 1 billion tonnes every year. If we include these 7 wedges on the above diagram, it starts to look something like this:

[Figure: Wedges2 – the stabilisation triangle divided into 7 wedges, each growing to 1 billion tonnes of carbon per year after 50 years]

This was brilliant! Pacala and Socolow identified a very real and dangerous problem, but importantly they gave an achievable solution. We could avert catastrophic climate change, if we just started using and investing, NOW, in low-carbon technology.
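The arithmetic behind the wedges is simple enough to sketch in a few lines of Python (my own sketch of the numbers above, not code from the paper):

```python
import numpy as np

years = np.arange(0, 51)            # 50 years, starting from 2004
bau = 7.0 + 7.0 * years / 50        # business-as-usual: 7 -> 14 GtC/yr
wedge = years / 50                  # one wedge grows from 0 to 1 GtC/yr
stabilised = bau - 7 * wedge        # apply all seven wedges

print(stabilised[0], stabilised[-1])  # 7.0 7.0 -- emissions held flat
# Each wedge is a triangle: 50 yr x 1 GtC/yr / 2 = 25 GtC avoided, so the
# seven wedges together avoid ~175 GtC over the 50 years
print(7 * 50 * 1.0 / 2, "GtC avoided")
```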

So what happened?

Fast forward to 2013, when Davis and others published a paper in Environmental Research Letters[3]. They looked at recent CO₂ emissions and worked out whether the wedge system would still work. They showed that in 2010, just 6 years later, worldwide carbon emissions were already more than 9 billion tonnes – much higher than Pacala and Socolow predicted for “business as usual”. Not only had the world failed to stabilise CO₂ emissions, it had increased emissions much faster than predicted. Davis et al worked out what would happen to atmospheric CO₂ concentrations if we started implementing the stabilisation wedges NOW. First of all, they worked out that at least 9 wedges would be needed just to *stabilise* carbon emissions over 50 years (i.e. keep emissions at 9 billion tonnes per year). Then they looked at how this would affect the total amount of CO₂ in the atmosphere and calculated that there would still be more than 500 ppm CO₂ by the year 2049:

If, today, we were able to stabilise CO₂ emissions at the same level as they were in 2010, we would reach that scary threshold of 500 ppm CO₂, or a 2 °C global temperature rise in less than 40 years!

I am scared! I am angry!

10 years ago (I am writing in August 2014), greenhouse gases and climate change were already a big, scary problem, but it might have been possible to avoid the worst of the damage by starting to make relatively small changes and investing in the right kind of technology and development.

Today, it is too late for that. To keep the atmosphere below 500 ppm of CO₂, we need to not only stabilise CO₂ emissions, we need to drastically reduce them. This is because the concentration of CO₂ in the atmosphere responds to the total, cumulative amount of CO₂ we pump out, not just the rate we pump it out at – yes, if we pump it out faster, we increase the concentration faster, but even if we keep emission rates the same, we are still increasing the concentration, and eventually it will become too high. All of the CO₂ we have pumped out since the Industrial Revolution won’t just magically disappear overnight, especially while we continue to cut down forests to make way for farmland. Pacala and Socolow recognised this in 2004, but hoped that new technology would be developed over the following 50 years to not just stabilise, but reduce and maybe eliminate CO₂ emissions. The wedges concept was supposed to be a kickstarter – a way to start taking action to protect our climate NOW. It wasn’t supposed to be a magic wand that could be waved whenever we felt like it – it was a way of buying us time (50 years) to develop ways to stop emitting CO₂ completely.
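A deliberately crude toy model makes the point. Roughly 2.13 billion tonnes of carbon correspond to 1 ppm of atmospheric CO₂, and (very roughly) about 45% of each year’s emissions stays in the air. This crude version crosses 500 ppm later than Davis et al’s fuller carbon-cycle calculation, but the one-way climb under constant emissions is the point:

```python
GTC_PER_PPM = 2.13        # approx. GtC equivalent to 1 ppm of atmospheric CO2
AIRBORNE_FRACTION = 0.45  # rough share of each year's emissions staying airborne

conc = 390.0              # ppm, roughly the 2010 level
emissions = 9.0           # GtC/yr, held constant ("stabilised")

for year in range(2011, 2101):
    conc += emissions * AIRBORNE_FRACTION / GTC_PER_PPM
    if year % 20 == 0:
        print(year, f"{conc:.0f} ppm")  # climbs past 500 ppm and keeps going
```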

Here is what the CO₂ wedges diagram looks like today, adapted from Davis et al, 2013:

[Figure: Wedges3 – the wedges diagram redrawn with actual 2006–2013 emissions (red stars), adapted from Davis et al, 2013]

I have left on the 2004 “business as usual” prediction line so you can see just how much extra CO₂ we are emitting, compared to what was expected. The red stars show the annual global emissions from 2006–2013[4].

Davis et al worked out that we might now need as many as 31 wedges just to delay the 2 °C temperature rise until 2060. They also calculated that 1 wedge is the equivalent of creating ~1 TW (terawatt – 1 TW = 1 billion kW) of carbon-free energy. For comparison, the London Array, which is a wind farm of 175 wind turbines and one of the largest wind farms in the world, has a peak-capacity output of 630 megawatts[5] – that is 0.00063 TW. You would need 1600 London Array wind farms, all operating at maximum capacity all of the time, to create just 1 carbon wedge (also consider that most wind farms operate at a capacity factor of around one third, so 4800 London Array wind farms per C-wedge is more likely).
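Here is that back-of-the-envelope calculation written out (my arithmetic, using the figures quoted above):

```python
wedge_power = 1e12       # 1 TW of carbon-free power per wedge, in watts
array_peak = 630e6       # London Array peak capacity, in watts
capacity_factor = 1 / 3  # rough fraction of peak output a wind farm delivers

farms_at_peak = wedge_power / array_peak
farms_realistic = farms_at_peak / capacity_factor

print(f"~{farms_at_peak:.0f} London Arrays running flat out per wedge")  # ~1587
print(f"~{farms_realistic:.0f} at a one-third capacity factor")          # ~4762
```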

10 years ago we had an opportunity to prevent long-term damage to our infrastructure and quality of life by making small, gradual changes and sacrifices, and by investing in and developing low-C technologies.

We may have missed that opportunity. Why? Because we all sat back and ignored the problem. We elected governments who give jobs such as “Secretary of State for Environment, Food and Rural Affairs” to climate sceptics. We are more concerned about paying less money for our energy than investing in renewable and low-C energy. We are more concerned about how a wind-farm will alter the view in the British countryside than sea-level rise swallowing entire island-nations. We allow our governments to give in to lobbying from industry when they should be implementing measures that force us all to adopt low-C technology, not just so we can reduce our own emissions, but so that we can fully develop and share this technology with the developing world who can’t yet afford to develop it themselves.

As you can see from the title, I initially addressed this blog to the World Leaders who have failed to take appropriate action on preventing man-made climate change. However, we are all responsible. Each-and-every-one-of-us! The threat of climate change is real and it is scary! Please remember that when you are voting / switching energy provider / buying stuff.

 Footnotes and references:

[1] This is actually much higher than the “safe” concentration recommended in a 2008 paper by James Hansen and others published in the Open Atmospheric Science Journal (http://benthamopen.com/openaccess.php?toascj/articles/V002/217TOASCJ.htm). They noted that the last time there was more than 450 ppm CO₂ in the atmosphere was around 50 million years ago, back when Antarctica was completely ice-free, and they suggested that we need to limit CO₂ concentrations to a maximum of 350 ppm to avoid catastrophic, irreversible climate change over the next few decades.

[2] Pacala, S. and Socolow, R., 2004. Stabilization Wedges: Solving the Climate Problem for the Next 50 Years with Current Technologies. Science 305: 968-972 DOI: 10.1126/science.1100103

[3] Davis, S. J., Cao, L., Caldeira, K. and Hoffert, M. I., 2013. Rethinking Wedges. Environmental Research Letters 8: 011001. DOI: 10.1088/1748-9326/8/1/011001

[4] http://CO2now.org/ and the Carbon Dioxide Information Analysis Center http://cdiac.ornl.gov/GCP/carbonbudget/2013/

[5] http://www.londonarray.com/

Many ⁴⁰Ar/³⁹Ar dating publications use age spectrum and isotope correlation diagrams to interpret their data and calculate ages. These can be quite confusing if you don’t know how to interpret them so I have sketched some schematic examples to explain how they work.

First up, let’s look at an Age Spectrum:

In the diagram below I have drawn 2 different age spectra. The bottom, green spectrum is what we would expect to see if we had an ideal sample that has no excess-Ar, and the top, blue spectrum is what we might expect if the sample contained excess-Ar in fluid inclusions. Both of these “experiments” involved 7 heating steps where the temperature increased for each step and the gas released from each step was analysed. The data for each of those 7 steps is represented by one of the 7 boxes on the diagram.

Age Spectrum

The Y-axis (vertical) shows the ⁴⁰Ar/³⁹Ar age calculated for each step (these are schematic diagrams, so I have not put a scale on them). On an age spectrum, the ages are plotted as boxes to show how big the errors are on each step. On the green diagram I have also drawn age data points and error bars at the end of each box to help you visualise it better. Hopefully you can see that, on the green diagram, all the ages are very similar, but on the blue diagram the first three steps give older Ar-ages.

The X-axis (horizontal) shows the cumulative percentage of ³⁹Ar released in the experiment and each step is plotted sequentially, which means, for stepped heating experiments, temperature increases to the right. If you look at the width of the boxes you can tell which heating steps released the most ³⁹Ar (for the green diagram steps 4 and 6 are the widest, so we know these released the most ³⁹Ar). Remember that ³⁹Ar is produced from the K in the sample (during irradiation), so it maps out the K, which is the source of ⁴⁰Ar* and should usually be evenly distributed throughout the crystal; we can use this to understand something about where in the crystal the gas we analysed came from.

If a sample is well behaved and has no excess-Ar, all of the temperature steps should give the same ⁴⁰Ar/³⁹Ar age and this is what we see in the green diagram – all of the ages overlap within error. In this situation we can use all of the data to calculate a more precise age for the sample – that is represented by the dotted black line.

But what if there are fluid inclusions in the sample that add excess-Ar, like we discussed in the last blog? Well, it is quite common for these inclusions to break down and release their gas at relatively low temperatures.

Excess Ar in magmatic / fluid inclusions

This means that the ages we calculate from the first few temperature steps will be older than the later steps that release gas from the crystal lattice. You can see how this typically manifests in the blue age-spectrum, where the first 3 steps have older ages than the later steps. In this situation we can just discard the data from the steps contaminated with excess-Ar and calculate an age from the steps that give a nice flat, consistent spectrum. We call this part of the spectrum the plateau, because it is flat.
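As an illustration of how a plateau age gets combined from the accepted steps, here is a minimal sketch with invented numbers (real studies also require the plateau steps to carry a minimum fraction of the total ³⁹Ar, often around 50%, and use more sophisticated statistics):

```python
import numpy as np

# Invented step ages (Ma) and 1-sigma errors for a 7-step experiment;
# the first three steps are contaminated by excess-Ar
ages = np.array([12.8, 12.1, 11.4, 10.1, 10.0, 10.2, 10.1])
errs = np.array([0.4, 0.3, 0.3, 0.2, 0.2, 0.3, 0.2])

plateau = slice(3, 7)                        # discard the excess-Ar steps
w = 1.0 / errs[plateau] ** 2                 # inverse-variance weights
age = np.sum(w * ages[plateau]) / np.sum(w)  # weighted mean age
err = 1.0 / np.sqrt(np.sum(w))               # error on the weighted mean

print(f"plateau age = {age:.2f} +/- {err:.2f} Ma")
```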

So, hopefully you now know a bit more about what those strange block diagrams mean. Let’s move on to isotope correlation diagrams. These are basically just data points along mixing lines between 2 or more different things. Let’s start with a non-geological example. I am going to make a creamy chocolate coconut dessert. I’m going to make this by alternating layers of ganache, which is a mixture of cream and chocolate, and a pre-made coconut pudding that is a mixture of cream and coconut. Both of these ingredients contain cream. If I know the proportion of chocolate to cream in the ganache, and the proportion of coconut to cream in the coconut pudding, then I can calculate how much chocolate, coconut and cream there is in different mixtures of ganache and coconut pudding. I’ve done this in the table here.
[Table: proportions of chocolate, coconut and cream in the ganache, the coconut pudding, and mixtures of the two]

So, the ganache is 70% chocolate and 30% cream. The coconut pudding is 50% coconut and 50% cream. If the final dessert has half ganache and half coconut pudding, then it will have 35% chocolate, 25% coconut and 40% cream.

If I calculate some ratios of the ingredients, and then plot them up on some graphs, we can see that the composition of the ganache – coconut pudding mixtures all lie on a straight line between the compositions of the 2 main ingredients.

[Figure: ingredient ratios for the ganache–coconut pudding mixtures, which all fall on a straight mixing line between the two pure ingredients]

The x-axis is chocolate / cream (amount of chocolate divided by the amount of cream). That means, as you move to the right of the graph, you either have more chocolate, or less cream (or both).  That also means that, if you move to the left of the graph, you have less chocolate or more cream (or both). When you reach zero on the x-axis, that basically means you have no more chocolate <sad face>.

We know that there is no chocolate in the coconut pudding, so any data point that has 0 on the x-axis must be 100% coconut pudding, and the value it has on the y-axis (we call this the intercept, because it is where the line intercepts the axis) is the ratio of coconut to cream. We can see from the diagram that this ratio is 1; if you look at the table, you can see that the composition of coconut pudding is 50:50 coconut and cream; 50/50 = 1. Similarly, the value of the x-intercept is the composition of the ganache – we can see from the table that ganache is 70% chocolate and 30% cream; 70/30 = 2.3, which is what we can see on the graph.

Next, let’s reverse this. Pretend we don’t know the composition of our 2 basic ingredients – all we have are the compositions for 3 different mixtures of ganache and coconut pudding – they make up the 3 data points in the middle of the graph. If we draw a line through them and extrapolate it to both axes, we can calculate the compositions of our original ingredients.
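Here is a minimal sketch of that extrapolation in Python, using three made-up mixtures (25%, 50% and 75% ganache). Because both plotted ratios share the same denominator (cream), the mixtures really do fall on a straight line:

```python
import numpy as np

# Ratios for three made-up ganache / coconut pudding mixtures
x = np.array([0.389, 0.875, 1.500])  # chocolate / cream
y = np.array([0.833, 0.625, 0.357])  # coconut / cream

slope, intercept = np.polyfit(x, y, 1)  # straight line through the mixtures

y_intercept = intercept            # pure coconut pudding: coconut/cream
x_intercept = -intercept / slope   # pure ganache: chocolate/cream

print(f"coconut/cream in the pudding: {y_intercept:.2f}")    # ~1.00
print(f"chocolate/cream in the ganache: {x_intercept:.2f}")  # ~2.33
```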

OK, now let’s use this tool to help work out some ages.

Here are a couple of schematic isotope correlation diagrams that I sketched. They are both the same kind of diagram, but illustrating different types of samples.

Isochrons

On the x-axis, we have ³⁹Ar/⁴⁰Ar and on the y-axis we have ³⁶Ar/⁴⁰Ar. Recall that ³⁹Ar comes from the K in the sample, ³⁶Ar comes from the atmosphere, and ⁴⁰Ar can come from radiogenic ⁴⁰Ar*, from the atmosphere, or from excess-Ar (or, in many cases, all three). Data points are usually plotted as ellipses, to represent the analytical errors on both the x and y axes. A regression line is calculated to fit through the data points and this also gives the values of the x and y intercepts (shown as stars on this diagram). We use this diagram for 2 things: 1) to work out the ratio of ⁴⁰Ar*/³⁹Ar in the sample, so we can calculate an age, and 2) to identify the composition of any trapped Ar – is it atmospheric or is there excess-Ar present?

If we move to the left on this graph, that means we either have less ³⁹Ar or more ⁴⁰Ar. We know that ⁴⁰Ar* is always going to be associated with ³⁹Ar, because K is needed to produce the ⁴⁰Ar*. So that means, when we hit the y-axis, ³⁹Ar = 0 and all the ⁴⁰Ar must be non-radiogenic – it is either atmospheric or excess-Ar; the value of the y-intercept is the composition of the trapped Ar in the sample. If the trapped Ar is purely atmospheric, the y-intercept should have a value of 0.0033 (this has been measured on samples of air). If the value is smaller, that means there is more ⁴⁰Ar than we would expect in the atmosphere, which means excess-Ar is present.

So, how do we get the age? ³⁶Ar only comes from the trapped, atmospheric-composition gas in the sample. Moving down the graph means we either have less ³⁶Ar or more ⁴⁰Ar (or both), and when we hit the x-axis, that means there is no ³⁶Ar, and so no trapped Ar: all the ⁴⁰Ar at the x-intercept is radiogenic. We simply read off the ³⁹Ar/⁴⁰Ar* value and use this to calculate the age of the sample. Using this method it doesn’t matter whether excess-Ar is present or not, because the calculated age is completely independent of the trapped Ar composition.
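Here is a minimal sketch of that whole procedure with invented, noise-free data. The irradiation parameter J (normally measured by irradiating standard minerals of known age alongside the sample) is assumed for this example, and the age comes from the standard equation t = (1/λ)·ln(1 + J·⁴⁰Ar*/³⁹Ar):

```python
import numpy as np

LAMBDA = 5.543e-10  # total decay constant of 40K, per year
J = 0.01            # irradiation parameter -- assumed for this example

# Invented measurements for four heating steps (noise-free, for clarity)
x = np.array([0.10, 0.20, 0.30, 0.40])              # 39Ar/40Ar
y = np.array([0.00274, 0.00218, 0.00162, 0.00106])  # 36Ar/40Ar

slope, intercept = np.polyfit(x, y, 1)  # the regression (isochron) line

trapped = intercept            # 36Ar/40Ar of the trapped component
x_int = -intercept / slope     # 39Ar/40Ar where 36Ar reaches zero
r = 1.0 / x_int                # 40Ar*/39Ar, the purely radiogenic ratio

age = np.log(1 + J * r) / LAMBDA
print(f"trapped 36Ar/40Ar = {trapped:.4f} (air is ~0.0033)")
print(f"age = {age / 1e6:.1f} Ma")  # ~30.4 Ma with these numbers
```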

In the sketch diagrams I have illustrated a few examples. In the left diagram I am comparing how data would look, for 2 samples that are the same age, but one has excess-Ar (red) and the other doesn’t (blue). You can see that the blue data fall on a mixing line between air (y-axis) and the radiogenic component (x-axis), while the red data fall on a mixing line between the same radiogenic component and a trapped Ar composition with more ⁴⁰Ar than in air. If you were to calculate Ar-ages from each of the individual data points, the blue data would all give the correct age, but the red data would give ages that are too old.

In the right hand diagram I am comparing two samples that do not have excess-Ar but are different ages. This might happen if a volcano erupts explosively and the ash picks up some old crystals from a previous eruption. In this case, the data fall onto 2 mixing lines between air and different radiogenic components. The yellow data have an x-intercept that is a lower value than the green data – this means the yellow data contain more ⁴⁰Ar* and so they are older.

The lines that we calculate and extrapolate between the data points represent a range of isotopic compositions that are all associated with the same age, and so these diagrams are often called “isochron diagrams” instead of isotope-correlation diagrams. There are other ways of plotting and analysing the data to make isochrons, and the example here is actually called an “inverse isochron” to distinguish it from another diagram that plots ³⁹Ar/³⁶Ar against ⁴⁰Ar/³⁶Ar. I focussed on inverse isochrons for this blog entry because they seem to be the most popular way of displaying and analysing Ar-isotope data at the moment, but the principles of mixing between 2 different components (trapped and radiogenic Ar) are the same for all isochron diagrams.

So, I hope that if you now decide to pick up a paper about Ar-dating you’ll be able to understand a little bit about some of the strange diagrams in it.