By Mariachiara Fortuna | June 4, 2020
Usually, when looking at time series of observed temperatures, we see graphs like the one below. These graphs show temperature fluctuations over the past 150 years, during which temperatures have increased systematically. The length of the time series is a crucial feature for understanding temperature fluctuations, because temperatures show cycles that seem to persist for several decades: if we want to understand the amplitude of climate fluctuations, the limited length of the time series may compromise our understanding of the phenomenon.
Luckily, temperature data are at times available for periods longer than 150 years. In fact, the invention of the thermometer dates back to the seventeenth century, and from the end of the eighteenth century meteorological measurements became systematic and accurate enough to produce valuable recordings. This is, for instance, the case of the temperature time series from Uppsala, Sweden, whose records date back to 1722; from Berlin, dating to 1756; from Paris, dating to 1757; and of several others from European countries.
In recent years, moreover, there have been many attempts to reconstruct temperatures from earlier periods using proxy data, such as growth and density measures from tree rings, laminated sediment cores, annually resolved ice cores and others. These reconstructed data indicate that there have been very long local temperature trends over the last two millennia. For example, the reconstructed data by Moberg et al. (Nature, 2005) show that temperatures in the Middle Ages were almost as high as the current ones. In the 16th century the temperature dropped to a minimum in the so-called Little Ice Age, but then rose gradually to the current level, as visible in the figure below.
In a recently concluded analysis (How does temperature vary over time? Evidence on the stationary and fractal nature of temperature, forthcoming in the Journal of the Royal Statistical Society, Series A) we investigated the properties of temperature fluctuations from a statistical point of view. We did not take into account any other component of the complex object that we call “climate”, nor did we try to understand the impact of human activities on climate or temperature. We focused solely on temperature variations, analysing their cyclic fluctuations and trying to characterize their behaviour. To achieve this goal, we used the longest and cleanest monthly temperature time series available across large parts of the globe (96 weather stations in total), as well as the reconstructed data mentioned above. Data, code and results are all available on this website.
Using various statistical methods, as well as a statistical model that can be justified by theoretical arguments, we find that all the temperature time series exhibit what is called “long-term dependence”, a property that creates very long, persistent trends.
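To make the notion of long-term dependence concrete: a standard model for it is fractional Gaussian noise, whose autocovariance at lag k is (σ²/2)(|k+1|^2H − 2|k|^2H + |k−1|^2H) and decays hyperbolically when the Hurst exponent H exceeds 1/2. The sketch below is our own illustration (not code from the paper) contrasting this slow decay with the exponential decay of a short-memory AR(1)-type process:

```python
import numpy as np

def fgn_autocov(k, H, sigma2=1.0):
    """Autocovariance of fractional Gaussian noise at lag k.

    gamma(k) = sigma^2/2 * (|k+1|^{2H} - 2|k|^{2H} + |k-1|^{2H})

    For 1/2 < H < 1 this decays like k^{2H-2}: so slowly that the
    autocovariances are not even summable, the hallmark of long memory.
    """
    k = np.abs(k)
    return 0.5 * sigma2 * (np.abs(k + 1) ** (2 * H)
                           - 2.0 * k ** (2 * H)
                           + np.abs(k - 1) ** (2 * H))

lags = np.arange(1, 201)
long_memory = fgn_autocov(lags, H=0.8)   # slow, hyperbolic decay
short_memory = 0.6 ** lags               # AR(1)-style exponential decay

# At lag 100 the long-memory covariance is still clearly non-zero,
# while the short-memory one is numerically indistinguishable from zero.
print(long_memory[99], short_memory[99])
```

For H = 1/2 the same formula gives zero covariance at every non-zero lag (ordinary white noise), which is why H above 1/2 is the signature of a built-in memory.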
For example, from Figure 2 we see that this is the case for the reconstructed temperatures, which show a decreasing trend from approx. 1100 to approx. 1600. Our statistical analysis indicates that both such long trends and shorter ones are a typical pattern of temperature variation. This pattern is due in part to the fact that, according to our models, the temperature variations have what are called fractal properties. This means that the temperature variations are very irregular (cf. the figures above), and that the oscillation pattern is the same regardless of the time scale used. For example, the average oscillation pattern over a month will be the same as the oscillation pattern over a year, 10 years, or even 500 years, only with different amplitudes. In other words, there are “waves within the waves”: waves of different length and amplitude that share, in a certain (fractal) sense, the same qualitative properties.

Furthermore, it follows that the co-variation between temperatures at two different times decreases slowly with the distance between those times. One can say that the temperature process has a built-in memory that is naturally erased, but only slowly. This type of model was first used by the British hydrologist Harold Edwin Hurst in his studies of the Nile water flow in the 1950s. But it was the French-American mathematician Benoit Mandelbrot who introduced the fractal concept and first formalized Hurst's model. Mandelbrot later became famous for his articles and books discussing fractal phenomena and forms in nature, as well as models that can represent such properties.
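Hurst originally quantified this behaviour with what is now called rescaled-range (R/S) analysis: for a series with memory, the average rescaled range over windows of length n grows like n^H, so H can be read off as a log-log slope. Below is a minimal sketch of that classic estimator, written by us for illustration (the estimation methods used in the paper may well differ):

```python
import numpy as np

def hurst_rs(x, min_chunk=8):
    """Estimate the Hurst exponent H via rescaled-range (R/S) analysis.

    The series is split into chunks of size n (doubling n each round).
    For each chunk, the range of the cumulative mean-adjusted sums is
    divided by the chunk's standard deviation.  Since E[R/S] ~ c * n^H,
    H is the slope of log(mean R/S) against log(n).
    """
    x = np.asarray(x, dtype=float)
    N = len(x)
    sizes, rs_vals = [], []
    n = min_chunk
    while n <= N // 2:
        rs_chunk = []
        for start in range(0, N - n + 1, n):
            chunk = x[start:start + n]
            z = np.cumsum(chunk - chunk.mean())   # mean-adjusted partial sums
            r = z.max() - z.min()                 # range R
            s = chunk.std()                       # scale S
            if s > 0:
                rs_chunk.append(r / s)
        if rs_chunk:
            sizes.append(n)
            rs_vals.append(np.mean(rs_chunk))
        n *= 2
    slope, _ = np.polyfit(np.log(sizes), np.log(rs_vals), 1)
    return slope

rng = np.random.default_rng(42)
white = rng.standard_normal(4096)
# For a memoryless series the estimate should be near 0.5 (the classic
# estimator has a known small-sample upward bias); long-memory series
# such as temperatures give values clearly above 0.5.
print(round(hurst_rs(white), 2))
```

Values of H between 1/2 and 1 correspond exactly to the slowly decaying covariance described above: the closer H is to 1, the stronger the memory.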
The presence of this very strong built-in memory (which we found in both the observed and the reconstructed time series) makes temperatures very persistent. If the temperature is high one day, it is not certain, but likely, that it will also be high the next day*. Due to the fractal property, the same behaviour holds at larger scales, such as years or decades. The presence of a rising trend over a long period of time does not necessarily mean that this trend will continue: at some point it will reverse, as we see in Figure 2 for the decreasing trend from approx. 1100 to approx. 1600. Before that point, however, the trend can last for hundreds of years. As stated before, our study does not account for phenomena external to the temperature recordings per se. So, from a purely statistical perspective, we can conclude that air temperature data exhibit long-range dependence and fractal properties, which lead to persistent trends that can last for centuries.
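A simple way to see how long memory produces such long-lasting trends is to simulate a standard long-memory model, ARFIMA(0, d, 0) (for which H = d + 1/2), and compare how much its long-window averages wander relative to white noise. The sketch below is our own illustration under that model, not the analysis from the paper:

```python
import numpy as np

def simulate_arfima(n, d, rng, m=2000):
    """Simulate ARFIMA(0, d, 0), a standard long-memory model with
    Hurst exponent H = d + 1/2, via a truncated MA(infinity) filter
    with coefficients psi_j = psi_{j-1} * (j - 1 + d) / j."""
    psi = np.empty(m)
    psi[0] = 1.0
    for j in range(1, m):
        psi[j] = psi[j - 1] * (j - 1 + d) / j
    eps = rng.standard_normal(n + m)
    # keep only outputs where the full filter history is available
    return np.convolve(eps, psi, mode="full")[m:n + m]

def block_mean_std(series, block):
    """Standard deviation of non-overlapping block averages."""
    blocks = series[:len(series) // block * block].reshape(-1, block)
    return blocks.mean(axis=1).std()

rng = np.random.default_rng(0)
x = simulate_arfima(50_000, d=0.3, rng=rng)   # long memory, H = 0.8
w = rng.standard_normal(50_000)               # no memory

# For white noise, averaging 100 points shrinks fluctuations by ~10x;
# for the long-memory series the block averages shrink far less, i.e.
# long "local trends" in the averages remain pronounced at every scale.
print(block_mean_std(w, 100), block_mean_std(x, 100))
```

This is the mechanism behind the centuries-long trends in Figure 2: under long-range dependence, averages over ever longer windows keep drifting instead of settling down, so extended runs above or below the mean are the rule rather than the exception.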