Graphs of global temperatures in the media rarely show uncertainties, yet no measurement is completely accurate, and ±0.5C is a plausible minimum uncertainty for monthly average temperatures. The UK Met Office says there has been a rise in temperature of 0.8C since 1880 and that this rise is statistically significant. Uncertainties of ±0.5C mean this 'statistical significance' is not really significant at all.
HadCRUT4, the temperature record produced by the Met Office and the Climatic Research Unit at the University of East Anglia, looks like this
A 2006 paper by P. Brohan, J. J. Kennedy, I. Harris, S. F. B. Tett and Phil Jones estimated the uncertainty in a mean monthly temperature as about

0.2C/√60 ≈ 0.03C
The argument was based on an earlier paper by C. K. Folland, Phil Jones and others, which estimated the error in any single land temperature measurement to be 0.2C (or, more precisely, they estimated the 2σ measurement error to be 0.4C).
Brohan and mates argue
The random error in a single thermometer reading is about 0.2C [Folland et al 2001]; the monthly average will be based on at least two readings a day throughout the month, giving 60 or more values contributing to the mean.
So the error in the monthly average will be at most 0.2C/√60 = 0.03C and this will be uncorrelated with the value for any other station or the value for any other month.
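For reference, the arithmetic being quoted is just the textbook standard-error-of-the-mean formula applied to 60 readings. A two-line check, using only the figures from the quote:

```python
import math

# Folland et al.'s estimated random error in a single thermometer reading
single_reading_error = 0.2  # degrees C
n_readings = 60             # two readings a day for a 30-day month

# Brohan et al.'s claimed error in the monthly mean
claimed_error = single_reading_error / math.sqrt(n_readings)
print(round(claimed_error, 3))  # 0.026, quoted as ~0.03C
```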
Firstly, to nit-pick, they don't appear to have heard of February, which has fewer than 30 days.
Second, the claim that
if there are 60 separate measurements of temperature at a weather station, where the measurement error of the thermometer is ±0.2C, then the error in the monthly mean is 0.2C/√60
is simply wrong. It's so wrong that I won't hedge with weasel words like 'in my opinion'.
The reason it is wrong is that this formula applies to repeated measurements of exactly the same quantity, e.g. measuring the resistance of a single resistor. Why would anyone want to measure the resistance of the same resistor over and over again?
To increase the accuracy of the overall measurement!
Temperature measurements made at different times and on different days during a month are obviously not measuring the same thing, not least because the temperature usually changes over time, so the same quantity is not being repeatedly measured. If you're still not convinced: would you say the error in the mean of 60 temperature measurements at 60 different locations (separated in space) should be less than the error of any single measurement? Why should 60 measurements separated in time (when the temperature may have changed) be any different?
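A quick simulation illustrates the distinction. This is only a sketch: the ±0.2C instrument error is from Folland et al. as quoted above, the ~4C spread of monthly temperatures matches the November data below, and the normal distributions are illustrative assumptions.

```python
import random
import statistics

random.seed(42)
N = 60
instrument_error = 0.2  # degrees C, random error in a single reading

# Case 1: 60 readings of the SAME fixed temperature.
# Averaging really does beat down the instrument noise here.
true_temp = 7.0
same = [true_temp + random.gauss(0, instrument_error) for _ in range(N)]

# Case 2: 60 readings of a CHANGING temperature (spread ~4C, like the
# November data below). The uncertainty in the mean is now dominated by
# the natural variation of the temperature, not by the instrument.
varying = [random.gauss(7.0, 4.0) + random.gauss(0, instrument_error)
           for _ in range(N)]

for label, data in [("same quantity", same), ("varying quantity", varying)]:
    se = statistics.stdev(data) / N ** 0.5
    print(f"{label}: standard error of mean = {se:.3f} C")
```

The first case gives a standard error near 0.2/√60 ≈ 0.03C; the second gives something an order of magnitude larger, near 4/√60 ≈ 0.5C.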
This is usually taught in first year undergraduate science courses.
Here are two books which discuss uncertainties in measurements
At least one paper (by Pat Frank) has looked at the likely error in monthly means. Pat Frank used a mathematical argument to analyse systematic errors; he estimates a lower bound for the combined measurement and systematic error of ±0.463C.
I thought a simpler way would be to look at actual measurements from a weather station.
Nowadays people run weather stations as a hobby and publish their results on the internet.
I chose November 2013 from Martyn Hicks' weather site, which gives minimum and maximum temperatures for each of the 30 days of the month.
A histogram of the temperatures shows the data is roughly normally distributed.
In case the data disappears, here are the numbers
Day  Max Temp (C)  Min Temp (C)  Average Temp (C)
1  14.3  9.7  11.9 
2  13.3  8.5  10.4 
3  11.8  6.3  8.4 
4  12.5  4.3  7.2 
5  11.7  6.2  9.6 
6  13.9  9.7  11.7 
7  12.3  6.7  8.9 
8  7.9  5.1  6.4 
9  7.7  4.3  6.2 
10  11.5  3.9  6.9 
11  14.1  7.0  11.6 
12  13.8  3.9  9.6 
13  10.7  2.2  6.6 
14  11.2  4.7  8.2 
15  8.8  1.2  5.4 
16  8.1  2.0  5.5 
17  9.5  6.5  8.1 
18  9.2  3.7  7.4 
19  7.3  0.3  3.4 
20  8.9  0.2  5.5 
21  8.5  3.7  5.8 
22  7.2  1.5  3.9 
23  6.3  0.2  2.7 
24  6.9  1.5  4.3 
25  7.2  1.4  4.7 
26  6.2  1.4  3.1 
27  10.2  5.6  8.5 
28  10.4  7.4  8.8 
29  9.9  6.3  8.4 
30  8.9  2.0  5.5 
If there are N measurements t_i

Mean  μ = (∑ t_i)/N
Standard Deviation  σ = √( ∑(t_i − μ)² / (N − 1) )
Standard Error  σ/√N
The standard error is the uncertainty in the mean.
For Martyn’s November 2013 temperature data
Monthly Mean Temperature  7.1C
Standard Deviation  4.0C
Standard Error (uncertainty in monthly mean)  0.5C
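These figures can be checked with a short script using the table above. The mean and standard error reproduce the quoted values; the standard deviation comes out at about 3.9C, close to the quoted 4.0C (presumably a difference of rounding).

```python
import statistics

# Max and min temperatures (degrees C) for the 30 days of November 2013,
# from the table above
max_temps = [14.3, 13.3, 11.8, 12.5, 11.7, 13.9, 12.3, 7.9, 7.7, 11.5,
             14.1, 13.8, 10.7, 11.2, 8.8, 8.1, 9.5, 9.2, 7.3, 8.9,
             8.5, 7.2, 6.3, 6.9, 7.2, 6.2, 10.2, 10.4, 9.9, 8.9]
min_temps = [9.7, 8.5, 6.3, 4.3, 6.2, 9.7, 6.7, 5.1, 4.3, 3.9,
             7.0, 3.9, 2.2, 4.7, 1.2, 2.0, 6.5, 3.7, 0.3, 0.2,
             3.7, 1.5, 0.2, 1.5, 1.4, 1.4, 5.6, 7.4, 6.3, 2.0]

temps = max_temps + min_temps       # 60 readings for the month
mean = statistics.mean(temps)       # monthly mean
sd = statistics.stdev(temps)        # sample standard deviation (N - 1)
se = sd / len(temps) ** 0.5         # standard error of the mean

print(f"mean {mean:.1f}C, sd {sd:.1f}C, se {se:.1f}C")
```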
Just in case you are not convinced, and you still think the error in the monthly mean should be about 0.03C: what do you think the 0.5C standard error means?
I know this is a sweeping generalization, but what would the temperature graph look like with error bars of ±0.5C?
Hang on a minute, you're thinking: you only have a single estimate of error, for a single location and month.
Yes, but if you only have one error estimate then that is your best estimate (and also your worst). And, not that copying others makes anything right, this is almost exactly what Phil and chums did: they took a single estimate of error and applied it to every single month since about 1850, for every weather station around the world.
The error bars are shown in yellow. As the true temperatures could be anywhere within the error bars, and the range of the error bars is greater than the apparent increase in temperature, you cannot say there has definitely been an increase. You also can't say there hasn't. In fact there could even have been a fall, or a larger increase than 0.8C since 1880.
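To make the overlap point concrete: two values with ±0.5C error bars are indistinguishable whenever they differ by less than the combined span of their bars, i.e. by less than 1.0C. A minimal sketch (the 0.8C rise and ±0.5C bars are from the text; treating overlapping bars as 'indistinguishable' is a simplification, not a formal significance test):

```python
def bars_overlap(a, b, err=0.5):
    """True if the +/- err error bars around values a and b overlap."""
    return abs(a - b) < 2 * err

# Claimed rise of 0.8C since 1880, with +/-0.5C bars on each value:
print(bars_overlap(0.0, 0.8))  # True: 0.8 < 1.0, the bars overlap
```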
The claim is that the graph shows a definite increase in temperature, so the answer has to be
not proven.
It is very likely the errors shown are an underestimate, especially for 19th century data. A partial list of reasons:
- Older thermometers were less accurate and were read less accurately (Mark Cooper, TonyB at WUWT, TonyB at ClimateReason)
- There were fewer thermometers around the world in the 19th century, and the southern hemisphere was hardly covered at all (Clive Best)
- 70% of the world is ocean. Nowadays there are buoys which automatically collect some sea surface temperature measurements, but in the past (1880?) sea temperature was measured by ships hauling up buckets of sea water (WUWT)