Decadal scale coolings not all that unusual

Most people who spend much time on the blogosphere are well aware of claims that “Global Warming stopped in 1998” or similar remarks. Even though the 1998-2008 period contains most of the warmest years on the instrumental record (something that is very unusual), and all of its years are well above the traditional 1951-1980 (or 1961-90) climatologies, a key focus for skeptics has been the lack of upward slope in a linear regression over the 1998-2008 period.

It has also been emphasized by many that a traditionally defined climatology involves roughly 30 years of data, and so at least a few decades are needed to say much about the underlying trends in climate. However, there is not much in the peer-reviewed literature on the probability (or significance) of decadal flatlines, or coolings, superimposed on a long-term warming trend due to radiative forcing. This is the subject of an upcoming paper in Geophysical Research Letters by David Easterling and Michael Wehner, Is the Climate Warming or Cooling? (subscription required; no abstract is available since the paper has not yet been formally published). Their conclusion is that decadal time frames can yield slopes of either warming or cooling in a warming world, even in the later 21st century, and that nothing is odd about the 1998-2008 trend.

We can get a glimpse of how flimsy short-term trends are for climate change analysis. Here is the full NCDC record (base period 1961-1990; see Smith et al. 2005, which the authors follow).

[Figure: full NCDC global temperature record]

Sorry, not very good at clear images from Excel. A closer look at the 1970-2008 period reveals a steep upward trend that is statistically significant.

[Figure: NCDC record, 1970-2008, with linear trend]

A look at 1998-2008 reveals little trend.

[Figure: NCDC record, 1998-2008]

However, 1999-2008 looks much different, with a slope almost a factor of two greater, just from removing a single anomalous data point (the 1998 El Nino).

[Figure: NCDC record, 1999-2008]

The lack of trends over a small timeframe in the last several decades is not unusual. For instance, the 1977-1986 interval contains essentially no trend.

[Figure: NCDC record, 1977-1986]

The bottom line is that the modern anthropogenic period contains embedded intervals with no statistically significant trends, and such periods of one or even two decades can occur without significant rises in temperature. Furthermore, one can remove or add just a single year in these small time-frame analyses and get a much different picture of what is happening.
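
To make that single-year sensitivity concrete, here is a minimal sketch (my illustration, not the authors' code) that fits ordinary least-squares trends over the same windows discussed above. The 0.02 K/yr trend, the noise level, and the 1998-style spike are assumptions standing in for the real NCDC anomalies.

```python
# A rough illustration: OLS trends over different windows of a synthetic
# trend-plus-noise series; all parameters are assumed, not fitted to data.
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(1970, 2009)
temps = 0.02 * (years - 1970) + rng.normal(0, 0.1, years.size)
temps[years == 1998] += 0.2  # crude stand-in for the 1998 El Nino spike

def slope_per_decade(x, y):
    """Ordinary least-squares slope, converted to K per decade."""
    return 10 * np.polyfit(x, y, 1)[0]

for start in (1970, 1998, 1999):
    m = years >= start
    print(f"{start}-2008: {slope_per_decade(years[m], temps[m]):+.3f} K/decade")
```

Shifting the start year by one (1998 versus 1999) changes which side of the spike the window begins on, which is exactly the single-year sensitivity described above.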

The authors use observations and models to construct probability distributions for the likelihood of getting a decade of warming or cooling conditions in pre-industrial times, the modern era, and the 21st century. The likelihood of a cooling decade diminishes under a warming trend, especially in the extreme A2 scenario (which postulates a “business as usual” future), but it is still not impossible. It is like playing with loaded dice.

[Figure: probability distributions of decadal temperature trends]
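
For a flavor of the kind of calculation behind such distributions (an illustration of the idea, not the paper's method), one can simulate many realizations of an assumed warming trend plus AR(1) noise and count how often a ten-year slope comes out negative; the trend and noise parameters below are guesses for illustration.

```python
# Monte Carlo sketch: how often does a decade "cool" in a warming world?
import numpy as np

rng = np.random.default_rng(0)
n_years, n_sims = 10, 20000
trend = 0.02           # K/yr, an assumed forced warming rate
sigma, phi = 0.1, 0.5  # assumed noise std dev and lag-1 autocorrelation

t = np.arange(n_years)
cooling = 0
for _ in range(n_sims):
    noise = np.zeros(n_years)
    for i in range(1, n_years):
        noise[i] = phi * noise[i - 1] + rng.normal(0, sigma)
    if np.polyfit(t, trend * t + noise, 1)[0] < 0:  # negative 10-yr slope?
        cooling += 1
print(f"fraction of 'cooling' decades: {cooling / n_sims:.1%}")
```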

And the verdict from the paper:

“Therefore, it is reasonable to expect that the natural variability of the real climate system can and likely will produce multi-year periods of sustained “cooling” or at least periods with no real trend even in the presence of longterm anthropogenic forced warming. Claims that global warming is not occurring that are derived from a cooling observed over such short time periods ignore this natural variability and are misleading.”

140 responses to “Decadal scale coolings not all that unusual”

  1. I can’t help but think that it is sad how awaited this paper was…


  3. Everyone knows the overall trend has been warming since the end of the Little Ice Age. And true, under those scenarios, there isn’t anything unusual going on. That’s because there isn’t anything unusual going on. The question is runaway human induced global warming. That isn’t happening. The cooling trends do contradict that theory. Also, many of the predictions are now for a 30-year cooling trend.

  4. Tommy,

    “runaway human induced global warming” – runaway?

  5. 1998-2008 period contains most of the warmest years on the instrumental record

    Which goes back how far?

    Response– About 150 years for solid global-scale temperature measurements– chris

  6. There are several problems with this post. Firstly, the use of NCDC data instead of SST data is problematic. All land temperature variation is a product of SST variation, and I think that we can all admit that land temperature data has its problems. SST is much more reliable, for there is no need to try to account for urban/rural and coverage problems. Use of SST data will reveal a different trend from 1998 to the present.

    Response– I used the NCDC land+ocean product, not just land– chris

    These comparisons are inaccurate because ENSO is such a dominating presence in short-term temperature variation. It would be much more helpful to look at SST variation with the immediate effects of ENSO removed. When this is done, the 11-year trend of zero holds. This means that the flat trend is not the effect of the El Nino of the Century, but rather a product of underlying SST variation.

    Response– But the real world doesn’t remove ENSO and thus temperature products show an ENSO signature. The point was that small timescales don’t need to show statistically significant patterns…I never really made a suggestion as to why, although the oceans are obviously a reason– chris

    The fact that the PDO phases correlate with the phases of warming (positive) and stable (negative) temperature is notable. Starting in 1946, temperatures remained flat. And starting in 1976, temperature began rising. And starting after the 1997/8 El Nino (where the PDO went negative in my books), temperatures remained flat. This means that the PDO is influencing temperature strongly on a decadal time scale. The fact there are periods of flat and then rising temperatures indicates that there is a strong amount of natural variation in the system that is not being accounted for.

    Response– The PDO is an oscillation which does not influence global temperature trends. Comparing global temperature change with the PDO makes no sense based on how the PDO is defined. It involves subtracting off the global mean SST for each month, which amounts to subtracting off the global mean warming trend. The popular internet pictures of the PDO going in a “warm” or “cool” phase and comparing that to GMST say nothing physical about what is happening– chris
    So here are a few questions:

    Is it just a coincidence that phases of the PDO are consistent with either flat or increasing temperature anomalies?

    Response– But the PDO is defined as the leading PC of Pacific Ocean SST departures from monthly mean global average SSTs north of 20°N. See http://atmoz.org/blog/2008/08/03/on-the-relationship-between-the-pacific-decadal-oscillation-pdo-and-the-global-average-mean-temperature/

    You show that the trend from 1977-1986 is near-zero. Is it just a coincidence that the time frame is directly between the climate shift of 1976 and the El Nino of 1986/7, which drove long-term step changes in temperature in the Pacific, Indian, and North Atlantic Oceans?

    You write: “Even though the 1998-2008 period contains most of the warmest years on the instrumental record….”
    Is it just a coincidence that the warmest decade of the century followed the “El Nino of the Century?”

    Response– The 1998 El Nino is a spike. The long-term trend is due to radiative forcing, and there is very low probability of having the number of warmest years we’ve had in the last decade in a system characterized by just noise, but that changes when you introduce the trend– chris

    Your post hints (although it would do so better with the use of SST) toward a large, unrecognized role for the El Nino/Southern Oscillation (ENSO). I’ve written a post on this topic here: http://climatechange1.wordpress.com/2008/11/29/how-enso-rules-the-oceans/
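
The ENSO-removal idea debated in this comment can be sketched very simply. The following is a generic illustration, not the commenter's or the authors' method; both series are synthetic stand-ins for, say, the NCDC anomalies and an ENSO index such as Nino3.4, and the three-month lag is an assumption.

```python
# Generic sketch of "removing ENSO" by regression; synthetic placeholder data.
import numpy as np

rng = np.random.default_rng(1)
n = 360  # 30 years of monthly values
enso = np.convolve(rng.normal(0, 1, n), np.ones(12) / 12, mode="same")
temp = 0.015 / 12 * np.arange(n) + 0.1 * np.roll(enso, 3) + rng.normal(0, 0.05, n)

lag = 3  # assumed months by which temperature lags the ENSO index
y, x = temp[lag:], enso[:-lag]
beta = np.polyfit(x, y, 1)[0]   # regression of temperature on ENSO
residual = y - beta * x         # temperature with the ENSO signal removed
trend = 120 * np.polyfit(np.arange(y.size), residual, 1)[0]
print(f"ENSO-removed trend: {trend:+.3f} K/decade")
```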

  7. 1968-1978 Cooling
    1978-1998 Warming
    1998-2008 Cooling
    There is a bigger picture.
    http://climatechange1.wordpress.com

  8. Re Colose: The use of a pure SST dataset would introduce fewer uncertain variables.

    The point of my discussion of ENSO-removal is that it resolves much of the noise. If you want to discuss the underlying trends in the oceanic SST rise, ENSO removal makes it clear that there is no significant trend in temperature from 1998 to the present, from 1986 to 1996, and from 1977 to 1986. The reason temperature has risen to its current height is due to step changes during the 76/7, 86/7, and 97/8 El Ninos. ENSO is a radiative event (notably in the West Pacific and Indian Oceans); this is clear from regional responses to the three El Nino events along with changes in cloud cover. ENSO has also been responsible for modulating the Atlantic Meridional Overturning Circulation, which plays a significant role in the AMO.

    The PDO is a product of ENSO. The climate shift of 1976 was manifested in a step change in the ENSO index. While the PDO is positive, more El Nino events occur; since ENSO is a radiative oscillation, the period of time will see a rising temperature. While the PDO is negative, strong El Nino events do not occur, and temperatures remain flat. Variable cloud cover may also be associated with the PDO. Douglass has written a paper that will soon be published on radiative flux during the four climate periods (as defined by the PDO) of the 20th century. It oscillates between positive and negative depending on the mode of the PDO.

    I agree that the long-term trend is due to variable radiative forcing. I am not arguing that the mere presence of the 97/98 El Nino made the trend. I am implying that the El Nino had long-term effects on climate. All of this is summarized in the link that I gave earlier.

  9. The recent cooling is probably still too short a period to speak of a cooling trend. But oceans have also not warmed over the last 6 years (see e.g. Pielke Sr) and this would be an indication that the total amount of heat in the earth system has not increased since 2003.

    For me the most relevant aspect is how the climate modellers will take this recent cooling as an input to refine their models. It does appear that the cooling is atypical for their forecasts. Although I have no direct insight into the modelling community, I have not seen any signs of critical review of the models based on observations being “out of center” from predictions.

    Paul

  10. The authors are (correctly) arguing that if you only look at short periods in a longer noisy sequence, some of them will appear to have a statistically significant positive trend, while others will appear to have a statistically significant negative trend. Since both can’t be reflective of the true trend, the trends measured from short enough periods are spurious.

    So, let’s apply exactly the same argument but at a larger scale: i.e., how do we know the trend over the period of the instrumental record (1850-today) is not such a spurious short-term signal in the longer temperature record measured over millennia?
    Response– Because magicians can’t pull energy out of a hat– chris

  11. PaulVo, I don’t think the consensus is that cooling is happening.

  12. Chris,
    “magicians can’t pull energy out of a hat”
    We don’t need magicians.

    About 60% of the energy that reaches the Earth system actually gets through to, and is absorbed at, the surface. But that proportion varies on short and long time scales. ENSO involves a changing radiative balance as the proportion of solar energy reaching the surface changes. Unless we are prepared to properly account for that influence we have no hope of understanding climate change.

    The cooling since 1998 is a response to growing cloud cover associated with gradually falling upper troposphere temperature. Upper troposphere temperature actually peaked about 1981 and, despite some wild fluctuations directly associated with ENSO, has been declining ever since.

  13. Response– About 150 years for solid global-scale temperature measurements– chris

    At the risk of quibbling, using the term “150 years” is probably more descriptive than “on record.”

    Beyond that, though, say 100 years ago, what was the instrumented coverage in Russia, Africa, or Eurasia?

    If, as I suspect is the case, the coverage was spotty to none for a large portion of the Earth’s landmass, how can global-scale measurements be “solid”?

  14. “Response– Because magicians can’t pull energy out of a hat– chris”

    Oh, so heat can move around the climate system to give significant decadal temperature change, but not centennial? You have a proof of that Chris? I wasn’t aware that we were even close to determining the heat distribution of the climate system to that degree of accuracy.

    Response– It’s something that’s fairly obvious to anyone who learned about the basic statistics of climate, which has been defined by the WMO to be 30 years, a number people don’t pick out of hats. Depending on the characteristics of the base climate, one might detect trends on timescales slightly smaller or larger than this. The Holocene provides fairly good constraints on the magnitude of unforced variability. Energy has to go somewhere, and there’s no evidence that trends can happen for centuries in the absence of forcing. Models which simulate the present climate well don’t produce trends on long timescales in the absence of forcing, and they don’t significantly underestimate the climate variability. Why don’t we just say all of Earth’s history is noise if we’re going to make things up all day? The fact is climate is a boundary value problem that exhibits trends on climate timescales, with noise on all timescales that averages to zero.– chris

  15. Ok, so you don’t have any justification.

    If the Holocene is your evidence then the LIA and MWP both suggest natural variability on a longer timescale.

    Since it takes 1,000 years or more for the oceans to mix, and there’s good evidence that the ocean circulation can make a large difference to surface temperature, and there’s also good evidence to suggest small changes in net or regional forcing can sometimes have a disproportionate impact on ocean circulations, it is quite likely that trends could persist for a lot longer than 30 years without violating conservation of energy.

  16. >The Holocene provides fairly good constraints on the magnitude of unforced variability. Energy has to go somewhere, and there’s no evidence that trends can happen for centuries in the absence of forcing.

    I don’t get what you’re saying here. If something isn’t pushed it stays the same. The smaller scale changes aren’t due to forcing and are just weather?

    Seems like circular logic. What makes the one weather and the other climate?

    Response– Climate is the statistics of weather. And climate is clearly defined by a trend+noise. The longer the timescale, the more easily one can detect trends against background natural variations (characterized by El Nino, etc). You don’t need to be in equilibrium on very short timescales. Noise operates on all timescales; it doesn’t go away on “climate periods,” it’s just that there’s a point where it cannot swamp the statistical detection of trends. No climate period spontaneously generates centennial-scale trends without being forced, and there’s no example of a coupled ocean-atmosphere model that spontaneously generates changes as large as a doubling of CO2 with no forcing. If people here want to develop their own model to make a point, they are more than welcome to, but not much of this is controversial.

    If any possible-but-unproven internal oscillations were responsible for the modern day warming, we’d expect much different signatures in terms of ocean heat content changes and the TOA energy imbalance. Thus the argument is baseless. I don’t understand why people continue to make up nonsensical theories which are not physical, particularly when the GHG-induced paradigm provides the best predictive and explanatory power, and the CO2 RF can be calculated independently of what else is going on. — chris

  17. the CO2 RF can be calculated independently of what else is going on.

    And CO2 RF gives you 1C sensitivity. No one is arguing with that. But you need a whole lot more than that to justify alarm.

    No climate period spontaneously generates centennial-scale trends without being forced,

    Huh? LIA? MWP? Neither of those periods had anything like the forcing necessary to drive the degree of temperature change.

    and there’s no example of a coupled ocean-atmosphere model that spontaneously generates changes as large as a doubling of CO2 with no forcing.

    The current models build in a large multiplier for CO2 forcing, which gives far too high sensitivity for the instrumental record. So they cancel out the positive feedback with aerosols. Now that might be ok if they all used the same aerosol figures. But they don’t. Models with large sensitivity use larger aerosol forcing, models with less sensitivity use smaller aerosol forcing. All within the measurement error of aerosols (which is to say, huge), but unfortunately, it means the predictive power of the models is nearly zero. Read Kiehl. There’s also no example of a coupled ocean-atmosphere model that correctly models clouds either.

    Even Hansen admits that you cannot bound climate sensitivity from 20th century modeling because of the forcing uncertainties. Many climate scientists also agree that there is large uncertainty in water vapor feedbacks (clouds etc). So while centennial-scale variations have not been established, they are most certainly not excluded by current modeling as you claim.

    “If any possible-but-unproven internal oscillations were responsible for the modern day warming, we’d expect much different signatures in terms of ocean heat content changes and the TOA energy imbalance.”

    How so? We’re already seeing inexplicable TOA energy imbalance.

  18. You assume there is no such thing as internal radiative forcing. Can cloud cover not change on its own? That is a dubious claim. El Ninos are radiative events – not just a recirculation of water. Once again, this is shown in my link. I will be making another post soon showing this even more clearly. At this point, available, RELIABLE proxies do confirm the existence of a LIA and MWP. These must have involved radiative forcing.

    “and there’s no example of a coupled ocean-atmosphere model that spontaneously generates changes as large as a doubling of CO2 with no forcing.”
    How can you model random variations? How can you model processes (ENSO, PDO) whose basic physical nature you do not understand in order to predict them? If you are looking for hindcast predictions, Roy Spencer has done tons of work on cloud cover, random variation, and the PDO and used simple models that replicated most of the GW trend.

    Response– Are you actually being serious about Spencer? The guy doesn’t even know what the PDO is, let alone overturned its physical meaning to cause a long-term global temperature trend. I have seen his ability to make graphs and I’m not impressed. No, I believe in weather. Oscillations are just that, they aren’t global-scale, decadal scale trends. The idea the system randomly jumps to a new state due to unforced variability without anyone being able to detect it, while showing oppositely directed trends in OHC measurements, and consistent spatio-temporal patterns with CO2, etc is a rather dubious claim. You guys can continue to make up “what if” scenarios that have no predictive or explanatory power, but it doesn’t work. Period.

    I don’t deny some forcing involved with the LIA or MWP. The events are not very significant in terms of temporally synchronous and global-scale temperature change, especially the MWP, whereas the LIA clearly involves some volcanoes and solar decline. The magnitude of the proxy-measured climate over the last millennium is not inconsistent with realistic models taking these factors into account, and yes, the Holocene record and models provide a good constraint on any “unforced” magical jumps that may be lurking in the system.

    If you guys think this general understanding of climate science is all off the mark, why are you commenting here? Publish your results showing a paradigm which can explain the modern warming and how your results negate CO2 physics. That’s my last comment on this until you do so.- chris

  19. “Response– Are you actually being serious about Spencer? The guy doesn’t even know what the PDO is, let alone overturned its physical meaning to cause a long-term global temperature trend.”

    He disagrees with you, and you have turned this into a personal attack on him. Why turn polite arguments nasty?

    “Oscillations are just that, they aren’t global-scale, decadal scale trends. The idea the system randomly jumps to a new state due to unforced variability without anyone being able to detect it, while showing oppositely directed trends in OHC measurements, and consistent spatio-temporal patterns with CO2, etc is a rather dubious claim.”

    Once again, this is dependent on the assumption that there are no internal radiative oscillations. The 1986/7 and 1997/8 El Nino caused visible step changes in West Pacific, Indian, and N Atlantic SST. This is shown very clearly in my link. High, middle, and low cloud cover is highly dependent on ENSO in the Indian and Pacific Oceans. The data is available, and I’ll be posting on it soon, but you can find the same relationships I did.

    “I don’t deny some forcing involved with the LIA or MWP. The events are not very significant in terms of temporally synchronous and global-scale temperature change, especially the MWP, whereas the LIA clearly involves some volcanoes and solar decline. The magnitude of the proxy-measured climate over the last millennium is not inconsistent with realistic models taking these factors into account, and yes, the Holocene record and models provide a good constraint on any “unforced” magical jumps that may be lurking in the system.”

    If you are referring to the variation produced by those studies made by the same paleoclimate authors and using the same tree ring data, then you’re correct. However, I think Loehle 2008 would be a better place to go.

    “If you guys think this general understanding of climate science is all off the mark, why are you commenting here? Publish your results showing a paradigm which can explain the modern warming and how your results negate CO2 physics. That’s my last comment on this until you do so.- chris”

    Look at my post! There’s no need for it to go through peer-review, all data is available online, and the analysis is done fairly simply. You can do it yourself and come to the same conclusions! All arguments that I engage in on websites of this nature always conclude in the same way. The author of the site wants to see my work pushed through peer review for no reason. It’s merely presenting data that is already available!

  20. “If any possible-but-unproven internal oscillations were responsible for the modern day warming, we’d expect much different signatures in terms of ocean heat content changes and the TOA energy imbalance.”

    And what might that be? And how does it differ from what we see? What is it that you see?

  21. Chris,
    The atmosphere itself tests the greenhouse theorem on an annual basis. It does so at the tropical tropopause, where outgoing radiation meets ozone, producing a strong temperature maximum in August at that level. That maximum is in turn due to the seasonal heating of the northern land masses and a loss of global cloud cover. Global temperature peaks in August, even though the sun is furthest from the Earth at that time. That should tell you something about the importance of clouds in determining surface temperature.

    At the surface near the equator the temperature peaks in May as it does in the atmosphere all the way to 200hPa. For those unfamiliar with this way of referring to altitude, the surface has an air pressure of about 1000hPa and the tropopause 100hPa. At 100hPa (about 10km) 75% of the atmosphere is beneath you. So this atmosphere is actually very thin.

    I guess the radiation that is absorbed by ozone at the tropopause is coming from the near surface air as it is moved by the trades towards the equator. The surface could not be responsible for the radiation and temperature peak at the tropopause.

    The potential for downward transfer of energy from the tropopause where the maximum is in August to the 200hPa pressure level where the maximum is in May, is of course there, and if greenhouse theory were valid, we would see it. But it does not eventuate. I guess that testifies to the strength of the convectional force that cools the troposphere at all levels.

    Greenhouse theory is based on a misunderstanding of how the atmosphere works. The nature of the troposphere is apparent in the Greek derivation of the word ‘tropos’. Although I speak no Greek I believe it means ‘turning’.

    If one takes the trouble to actually look at the data from 1948 onwards the troposphere has not warmed at any level above the near surface layers that are in contact with a warmer ocean, layers that are warmed by surface contact, the release of latent heat of condensation and no doubt absorption of radiation by susceptible molecules.

    The warming in the tropics, where more energy is received than emitted is slight. Most of the extra energy received has gone into evaporation rather than increased sea surface temperature. So, the increase in temperature at the equator, at cloud level, where the latent heat is released is about three times that at the surface. The surface warming at high latitudes in winter is strong, amounting to about five degrees in both hemispheres. There has been no warming in summer. In fact Antarctica has cooled in summer. That winter warming at high latitudes, when radiation is at a minimum, should indicate the importance of energy transfer by the ocean.

    There is another explanation as to why the Earth warmed strongly between 1976 and 1983, more slowly until 2005 and has cooled since, and you will find it at http://climatechange1.wordpress.com
    The explanation you will find there actually fits the observed pattern of temperature change.

  22. Alan D. McIntire

    In response to erlhapp: If atmospheric pressure at sea level is about 1000hPa, then the fraction of atmosphere below 100hPa would have to be 90% by definition.

    From the context of your post,

    “At the surface near the equator the temperature peaks in May as it does in the atmosphere all the way to 200hPa. For those unfamiliar with this way of referring to altitude, the surface has an air pressure of about 1000hPa and the tropopause 100hPa. At 100hPa (about 10km) 75% of the atmosphere is beneath you. So this atmosphere is actually very thin.”, I’d guess those pressure zones are rounded, and the 75% would refer to the 200hPa zone, which covers 250hPa to 150hPa.

  23. Alan,
    By whose definition? Check it on Google. Part of the problem of estimating the volume of the atmosphere in the troposphere is that the pressure level and the height of the tropopause vary with latitude, diminishing as you approach the pole. The tropopause reaches the surface short of the pole in winter. So I imagine that the 75% estimate for the amount of the atmosphere below the tropopause would be an estimate that takes these factors into consideration.

    The real issue to address in my post is the lack of downward propagation of energy from the ozone rich tropopause to the 200hPa level. If there is no effective downward transfer, greenhouse theory is invalidated.

    This is not an issue that should be ignored. It is at the heart of the problem of validating greenhouse theory. But then again the greenhouse theorists have never been strong on observation.

    That’s why I consider that greenhouse theory is the central tenet of the green religion. It’s a belief, and beliefs are never subject to question. It’s like the virgin birth for Catholics.

  24. routine xrefs:
    atmoz.org/blog/2008/01/29/on-the-insignificance-of-a-5-year-temperature-trend/
    scienceblogs.com/stoat/2007/05/the_significance_of_5_year_tre.php

    As to

    > By whose definition? Check it on Google.

    That’s like saying “by whose definition, look it up in a book” or “… find it in a library” — what’s your actual source for the number you quote?

  25. Wow, Chris, I’m sure glad you’ve got Killfile implemented.
    You’ve become a venue for the self-publishers who have no peers.
    An honor, of a sort, I guess (wry grin). As long as people don’t respond to them.
    My mistake, above, doing so. Sorry.

  26. BTW Chris, if you want to educate yourself on the science behind centennial and millennial-scale climate oscillations, you might like to start with this Science article: A Pervasive Millennial-Scale Cycle in North Atlantic Holocene and Glacial Climates

    Apologies in advance for posting a heretical link. I believe Pope Innocent VIII blamed witches for the cold climate of the Little Ice Age (and burned them). That was 600 years ago. Today Hansen advocates violence against the executives of public utilities as punishment for the warmer climate of the 21st century.

    Plus ça change, plus c’est la même chose.

    Response– I’m very aware of the abrupt climate changes in the ice core record. It is you who are misusing them to fit your hair-brained idea about climate change– chris

  27. Erlhapp, some possibly awkward questions for you.

    In recent weeks I have repeatedly been confronting “AGW skeptics” who have pet theories of their own of how the greenhouse effect should work, or how convection should work, etc etc. There seem to be Galileos all over blogspot and wordpress, it’s quite amazing.

    These are becoming standard questions I ask:

    1) have you looked at the GCMs to check that the “greenhouse theory” says what you think it does? (eg have you looked at downward emission from the tropopause in GCMs to know that it is not consistent with observations?)

    2) Have you written a model of your alternative theory to verify that it is at least somewhat consistent with the climate when numbers are applied and doesn’t have some glaring hole?

    3a) Have you identified the specific part of the physics which the GCMs/greenhouse theory are getting wrong? I assume you have because you imply you know they are getting it wrong.

    3b) Can you figure out why scientists have missed a major part of the physics? A reason that makes sense, not a conspiracy theory. I mean if you are proposing something really really obvious then it’s going to be very very hard to imagine how everyone could have missed it.

    4) For yes to any of the above, why haven’t you published a paper reporting such sensational results? Why are you just posting it up on a blog, where to be honest no expert is going to scrutinize it? Even Chris is probably not going to want to spend even an hour going through it (like I said these pet theories are a dime a dozen recently), so it’s odd that you are even linking to it here.

  28. Response– I’m very aware of the abrupt climate changes in the ice core record. It is you who are misusing them to fit your hair-brained idea about climate change– chris

    I think you meant “hare-brained” as in “mad as a march hare”.

    But nevermind the terminology; that would be my hare-brained idea that changes in ocean circulation could be contributing significantly to late 20th and early 21st century warming. FTA:


    We know too little thus far to identify the origin of the 1470-year cycle. Its constant pacing across major stage boundaries, especially the last glacial termination, almost certainly rules out any origin linked to ice sheet oscillations. Rather, the close correlation of shifts in ocean surface circulation with changes in atmospheric circulation above Greenland is consistent with a coupled ocean-atmosphere process.

    You don’t say…

  29. “The tropopause reaches the surface short of the pole in winter.”

    !!!

    As much as the same debunked pieces of faulty information are repeatedly used in attempts to overturn our understanding of climate, occasionally a new such piece does come along that triggers a rapid movement of palm to forehead.

    At first I thought he was perhaps confusing some PV or potential temperature surface with the tropopause, but I believe it is probably just confusion of some zonal average temperature contour with the tropopause. But perhaps there is some anomaly graph being confused with the mean state. Hard to tell.

    Problems are not always as obvious as some of those in the musings from erlhapp, but one is still always best served to remember that web postings from random persons claiming to drive a stake into well-established scientific theories are most assuredly not doing so.

    The site linked by erlhapp has an impressive quantity of information, but from a quick glance it appears to include a lot of incorrect assumptions and at best dubious leaps of logic. Related to the subject here it seems often to find preferred meaning in short-term variation of (perhaps not even relevant) time series.

  30. Hank and Chris,
    Open your eyes guys. Be curious. You too can be learners.

  31. > an impressive quantity of information

    What’s impressive is not his, and what’s his is not impressive.

    >> “The tropopause reaches the surface short of the pole in winter.”
    >> !!!

    That certainly answers the question about what’s been happening to the polar bears. They’re blowing away.

  32. erlhapp writes:

    If there is no effective downward transfer, greenhouse theory is invalidated.

    Do a Google search for the word “pyrgeometer.”

  33. Let me see if I understand your reasoning for why climate is defined as a 30 year time scale. That the models simulating historical climate work best in creating trends over that time period, with the planet driven by external forcing every 30 years or so?

    So you would have to redefine the climate period if someone could come up with a physical model in which the historical temperature is a sine wave plus noise?

  34. Barton, RE: “Do a Google search for the word “pyrgeometer.””

    Can I take it that you are offering an entry to discussion? You could perhaps be a little more direct in pointing me towards your point of concern.

    I am happy to acknowledge that radiation is emitted in all directions. Observation informs me that it is ineffective in raising the temperature of the air in the layers beneath the point of emission.

    There are two instances worthy of consideration.

    Sea surface temperature has increased by less than one degree over 60 years between the equator and 10°N latitude and about half that south of the equator. However, at cloud level the rate of temperature increase is about three times as much. The extra insolation absorbed in tropical waters has driven evaporation rather than surface temperature increase. The energy is released as latent heat at cloud level, about 850hPa. It is absorbed by the air and radiated in all directions. However there is no evidence that it actually raises surface air temperature. The heat is convected upwards as soon as it manifests. If there is an energy return it might increase evaporation.

    The second example is perhaps more clear cut. At the tropopause there is sufficient ozone to cause a strong increase in temperature in July, time of peak warming in the northern hemisphere. But, there is no effective transfer to the 200hPa level where temperature peaks in May, as it does at the surface. Again, any tendency to warming of the lower layer seems to be overwhelmed by the agency of convection.

    The atmosphere is affected by many processes, one of which is the tendency of air to absorb warmth from radiating molecules in the vicinity. A more important influence seems to be the tendency for bodies of air so warmed to be rapidly displaced upwards. As they rise they cool via decompression.

    Theory or model is one thing. The next step is observation, to see whether theory/model is confirmed in the real world.

  35. gmo

    Perhaps you can find the tropopause for me in the midst of the Antarctic night when the air is at a temperature of minus 80°C and it is warmer than the surface.

  36. Hank Roberts

    Very rude, but a real person I presume.

    what’s your actual source for the number you quote?

    Answer: Wikipedia.

    “The troposphere is the lowest portion of Earth’s atmosphere. It contains approximately 75% of the atmosphere’s mass and almost all (99%) of its water vapor and aerosols. The troposphere is constantly convecting air.”

    I have seen others at 80%. It depends upon the definition you adopt. If it’s according to ozone content it will be different again.

    Hank, why not address the question rather than behave like a brat.

  37. erlhapp, you have said: “I am happy to acknowledge that radiation is emitted in all directions. Observation informs me that it is ineffective in raising the temperature of the air in the layers beneath the point of emission.”

    That seems to be a non-sequitur, or else a simple error. It’s hard to say. The simple fact of the matter is:

    (1) The atmosphere is being heated from the surface of the Earth, because it absorbs IR radiation. The atmosphere is in general cooler than the surface, and there is a net flow of heat from the surface to the atmosphere.

    (2) If we didn’t have an atmosphere, this planet would be one heck of a lot colder than it is at present. Hence adding an atmosphere means more heat at the surface.

    A lot of people get mixed up between two statements. The atmosphere is not warming the planet like a heater. It’s not a source of energy. It gives back to the surface less than it receives from the surface. In that sense, it is like a blanket, which is heated by your body, and yet keeps you warmer.

    There is a lot of radiant heat energy coming down to the surface from the atmosphere, which is directly measured. The quantity is large. It’s normal for the surface to be receiving well over 300 W/m^2 of heat energy coming down from the atmosphere. Remember, this can be measured. Its importance has been known for over 100 years, and it has been directly measured for over 50 years.

    You can give two equivalent ways of describing why an atmosphere leads to a warmer surface. You can say that the atmosphere makes it harder for the Earth to shed the heat it receives from the Sun. Or you can say that the surface has to be hot enough to get rid of both the solar input and atmospheric backradiation.

    What you CAN’T say is that atmosphere is an independent source of energy. The atmosphere gets most of its energy from the surface; it is giving back what it is given in the first place.

    So, when you say that the heat is “ineffective” at raising temperatures, what can you possibly mean? Take away that backradiation somehow — nearly impossible without just removing the atmosphere altogether — and you will indeed cause temperatures to plummet as the surface rapidly sheds heat directly into the coldness of space.

    None of this is “global warming” theory — which is about possible changes to the equilibrium. This is simply about why an atmosphere leads to a warmer surface at all. It’s really basic physics, known for well over a century.

    The “effective” radiating temperature of Earth as a planet radiating into space is obtained well below the tropopause, by the way. So it’s another non-sequitur to worry about IR from the tropopause to the surface. It’s really about the transmission of radiation all through the troposphere. Basic atmospheric physics deals with all levels of the atmosphere, but the basic greenhouse effect (a slightly misleading name for the warming effect of an atmosphere that absorbs IR radiation) does not depend on radiation from the tropopause to help heat the surface!

    Best wishes — Duae Quartunciae
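
The basic physics appealed to in this comment can be put in numbers. Here is a minimal worked example with standard textbook values (nothing in it is specific to this thread) comparing the Earth's effective radiating temperature with the observed surface temperature:

```python
# Standard textbook numbers: balance absorbed sunlight against blackbody
# emission, S(1 - a)/4 = sigma * T^4, and solve for T.
S = 1361.0       # W/m^2, solar constant
albedo = 0.30    # planetary albedo
sigma = 5.67e-8  # W/m^2/K^4, Stefan-Boltzmann constant

T_eff = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"effective radiating temperature: {T_eff:.0f} K")   # about 255 K
print(f"observed mean surface: ~288 K, about {288 - T_eff:.0f} K warmer")
```

That roughly 33 K difference is the warming effect of an IR-absorbing atmosphere described above; none of it requires the atmosphere to be an independent energy source.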

  38. Duae

    Thank you for a closely reasoned comment and your willingness to engage.

    Greenhouse theory posits that ozone is a greenhouse gas that absorbs energy, re-radiating it downwards towards the surface. No problem at all with that. No problem with any molecule absorbing and re-radiating energy in all directions from any position in the atmosphere. Ozone is a very strong absorber of radiation exiting the atmosphere despite its tiny concentration, much less than carbon dioxide. Its concentration increases quite abruptly at the tropopause. So, the tropopause becomes a good test bed for the theory.

    There is a problem however with the notion that such radiation is effective in materially raising the temperature of the air at any point below the point of emission. The power of convection in the gaseous medium ensures that this does not happen. Hence the lack of response at 200hPa.

    The ocean stores warmth. It covers 70% of the planet. Take away the ocean and you will produce the temperature regime that you see in a desert. That is, a dramatic reduction in temperature as soon as the sun goes down. For this reason daily thermal range is a function of distance from the sea. It’s called the ‘maritime’ effect.

    End of the day you have to look at the data. Above 700hPa there has been no increase in the temperature of the air in the period of record. Consult the record. It’s at http://www.cdc.noaa.gov/cgi-bin/data/timeseries/timeseries1.pl

    The increase in the temperature of the air below 700hPa is related to release of latent heat. More energy received at the surface drives evaporation. More energy received at the surface relates to the fact that slightly more radiation from the sun has run the gauntlet of atmospheric diversion that eliminates about 40% of incoming energy.

    There is a cycle of energy gain in the tropical oceans that accounts for the increase in winter temperatures at high latitudes. That cycle accounts for the change in temperature that we have witnessed recently… the small span of a human lifetime.

    A good day to you. I trust you are warm enough.

    Response– For one thing, there has been warming in all layers from the tropopause to the surface. The region above the tropopause cools with greenhouse gas increases and/or ozone loss, and it is (approximately) where convection doesn’t reach. Second, the enhanced greenhouse warming does not occur primarily through an increase in downward re-radiation (this is only one term in a changing surface energy balance), but rather through what is happening higher up in the atmosphere, where convection stirs heat rapidly (keeping the vertical temperature profile on some adiabat) and communicates warming to the surface. — chris

  39. Chris, in response to Erl’s comment: what dataset do you refer to in this statement: “there has been warming in all layers from the tropopause to the surface”?

    Response– There is discussion in CCSP 2006, “Temperature Trends in the Lower Atmosphere,” IPCC 2007, and many papers on the issue. One can look at some figures and references in this RealClimate post (at least as it pertains to the tropics). I had actually thought this was well known, sorry. — chris

  40. Ozone has two distinct roles, in the troposphere, and in the stratosphere.

    The ozone layer is “stratospheric ozone”. It is formed through interactions of UV light with oxygen. The net effect of these interactions is that UV light is absorbed, and a layer of ozone is maintained by these reactions.

    The absorption of UV light also heats up this layer. The stratosphere tends to be heated top down, from the Sun, rather than bottom up, from the surface, which is what we find down in the troposphere. Hence, up here ozone is a kind of anti-greenhouse gas; absorbing most of the energy at short wavelengths, not long wavelengths. The impact of thermal emissions from the ozone layer is almost completely negligible for other parts of the atmosphere. They help regulate the temperature of the stratosphere; but the air here is thin, which means the net energy flows are small. The most important consequence for surface temperatures is the absorption of a part of the UV spectrum.

    Ozone also exists in the troposphere, where it is a greenhouse gas in the usual sense; though much less important than the major greenhouse gases. It is a greenhouse gas because it absorbs thermal energy coming up from the surface. This is well below the tropopause; tropospheric ozone is mostly a consequence of “photochemical smog”, and it is formed near ground level, from interactions of sunlight with carbon monoxide, nitrogen oxides, and other molecules. Hence the term “photochemical”.

    Erlhapp, you are mistaken about the nature of energy transfers here. The greenhouse effect works mainly because infrared radiation from the surface is being absorbed into the atmosphere, and this occurs mainly well below the tropopause. The effect results in temperature increases all the way through this lower part of the atmosphere; and it certainly does not depend on thermal emissions from the stratosphere! Tropospheric ozone has its own contribution to the greenhouse effect. Stratospheric ozone does not. It exists up where the atmosphere is optically thin in IR wavelengths, and where thermal radiation is essentially free to escape out into space.

    The reason the tropopause stays at about the same temperature is because it has a temperature roughly corresponding to the theoretically expected “skin” temperature of the atmosphere, which is mostly independent of the surface. (This is horribly over-simplified, but it will do for now.) Below the tropopause, however, there is a steady increase in temperature as you approach the surface (the lapse rate).

    The greenhouse effect works because thermal emission into space is mostly from somewhere up in this part of the atmosphere, where temperature falls with altitude. The main signature of greenhouse warming, therefore, is not a change in temperature for the tropopause, but a change in altitude. With a strong greenhouse effect, the tropopause is found at a higher altitude, because it still has about the same temperature, but there is a larger difference between this skin temperature and the surface. This increase of altitude of the tropopause has been measured and studied; it is several hundred meters over recent decades. This is further confirmation of the increasing strength to the greenhouse IR absorption in the troposphere.

    See: Santer, B.D., Wehner, M.F., Wigley, T.M.L., Sausen, R., Meehl, G.A., Taylor, K.E., Ammann, C., Arblaster, J., Washington, W.M., Boyle, J.S. and Bruggemann, W., 2003: Contributions of anthropogenic and natural forcing to recent tropopause height changes. Science 301, 479–483.

    The ocean is indeed very important for regulating heat. It mainly works not as a blanket, but as a heat sink, to damp out the changes. It’s why the day night variation is so much less at sea than on the land. Latent heat effects are an important part of the transfer of energy from the surface into the atmosphere, and hence an important part of how the atmosphere is being heated from the surface.

    Here again you must be careful not to mix up the notion of “warming” as a measure of which way energy flows, and the notion of “warming” as a shift in the equilibrium balance to a new climate norm. The atmosphere is being warmed, all the time, in part by the effects of latent heat. This process transfers heat into the atmosphere. It does not serve to explain a change in the normal equilibrium temperature, which is what is addressed in the study of changes to climate.

    As for me personally… down here in Australia we’re moving into autumn. It’s very pleasant right now! Hope you all had an enjoyable Easter.

  41. Chris and Duae
    You can see the data at: http://climatechange1.wordpress.com/2009/02/04/a-cooling-story-involving-ozone-the-sun-and-the-sea/
    Figures 3 and 4 pertain to the lower and upper troposphere.

    Duae, have a look at figure 5, which shows you the variation in temperature at the tropopause in the tropics. There was a big jump, as you can see, in 1978 and a gradual decline to the present. That shows you the reaction of ozone to outgoing long wave radiation. The temperature at 100hPa has now returned to 1948 levels.

    Chris, I am not impressed with the supercilious tone at Real Climate. I have tried on a number of occasions to post there but never succeeded. It’s a closed society of true believers and they don’t like their little boat rocked. Congratulations, you seem to have an open door here.

  42. “Let me see if I understand your reasoning for why climate is defined as a 30 year time scale. That the models simulating historical climate work best in creating trends over that time period, with the planet driven by external forcing every 30 years or so?”

    MikeN seems to want to believe that the consensus view on climate is entirely based on models that he thinks are flawed and that certain climate modes of variability (“oscillations”) can explain most trends.

    The 30-year period is a rule of thumb that was developed over time from working with observational data of various types. For determining normals (averages) it has been found to be long enough not to be unduly influenced by individual extreme-value years, but not so long that it becomes too difficult to have enough data.

    A similar period is good for trends for similar reasons as this post illustrates. Underlying trends can be obscured by individual values over a short period, but with a longer timeframe such noise is less likely to have that influence.

    Suppose I start a new casino-type game with overall odds you do not know. If you play it just a few times, luck (noise) will be a major factor in how well you do. In say 5-10 plays you could come out ahead even though the advantage favors the house, or you could come out behind even though the odds were even. But if you play more times the luck is more likely to average out and give you results consistent with the mathematical long-term odds.
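
That casino analogy is easy to simulate. A minimal sketch, with an invented 49% win probability standing in for a small underlying "house edge" (the trend): over a handful of plays luck dominates, while over many plays the result converges toward the built-in odds.

```python
# Simulating the casino analogy; the 49% win probability is invented.
import numpy as np

rng = np.random.default_rng(7)
p_win = 0.49  # slightly unfavourable odds, like a small underlying trend

for n_plays in (5, 10, 100, 10000):
    wins = (rng.random(n_plays) < p_win).sum()  # each win pays +1, loss -1
    net = 2 * wins - n_plays
    print(f"{n_plays:>6} plays: net {net:+d} units")
```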

  43. erlhapp,

    I should have asked how you were defining the tropopause. I am guessing you simply are taking the tropopause to be a point where lapse rate goes negative (temperature increases with height).

    The WMO uses “the lowest level at which the lapse rate decreases to 2°C/km or less, provided that the average lapse rate between this level and all higher levels within 2 km does not exceed 2°C/km.” Other options are by potential vorticity or even chemical concentrations.

    Your description of Antarctic winter where “air is at a temperature of minus 80°C and it is warmer than the surface” is not representative of anything besides a most extreme and unlikely moment. The all-time record low temperature observation is -89°C at Vostok. But that does contribute to my thinking you are considering the tropopause simply a point of lapse rate going negative.

    There is a paper called “The Tropopause in the Polar Regions” by Zangl & Hoinka in Journal of Climate Volume 14 Issue 14. That was just the first article I found showing some data on the polar tropopause. Of course the tropopause is lower in polar regions than the tropics, but it is more like perhaps 10km or 400hPa. Intersecting the surface so that the surface is in the stratosphere does not occur by any reasonable definition.

    You seem to do much analysis using the NCEP reanalysis. That data is not perfect, but it does have tropopause level data. NOAA has a website where you can plot data. Try plotting tropopause level pressure. That is another way to see that the tropopause remains well up into the free atmosphere even in polar regions.

    A further note about the NCEP reanalysis. You seem to be taking them as absolute truth. As I said, that data has some issues. It is basically a model being run to “fill in” all the gaps of the spotty observational network. Data prior to 1958 is most questionable because the basis observations are much fewer. Data from before 1979 also does not have the benefit of the satellite data from then and later, which is a more significant factor in some regions than others.
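
For anyone who wants to try this on a profile, here is a minimal sketch of the WMO lapse-rate definition quoted above, applied to an idealized temperature profile. The 6.5 C/km troposphere topping out near 11 km is an assumption, and the 2 km check is simplified to an average of layer lapse rates.

```python
# Idealized profile: 6.5 C/km lapse rate up to ~11 km, isothermal above.
import numpy as np

z = np.arange(0, 20.5, 0.5)                          # height, km
T = np.where(z < 11, 288 - 6.5 * z, 288 - 6.5 * 11)  # temperature, K

lapse = -np.diff(T) / np.diff(z)  # C/km; positive means cooling with height
for i in range(lapse.size):
    band = (z[:-1] >= z[i]) & (z[:-1] < z[i] + 2)    # layers within 2 km above
    if lapse[i] <= 2 and lapse[band].mean() <= 2:
        print(f"tropopause near {z[i]:.1f} km")      # about 11 km here
        break
```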

  44. OK gmo, but why not a longer time period than 30 years?

    >MikeN seems to want to believe consensus view on climate is entirely based on models that he thinks are flawed and that certain climate modes of variability (”oscillations”) can explain most trends.

    No, that was someone else that posted that. I am trying to understand Chris’s response with regards to trends and climate periods.

  45. Erl, you refer me to a “big jump” in your plots. The particular jump you mention in your figure 5 is not real: it is an artifact of the dataset. You’re using the NCEP/NCAR reanalysis product. This is a pretty cool product, and I’ve used it myself from time to time. But you have to pay attention to the caveats.

    This dataset is based on a number of different sources of raw data, and then processed using a climate model. This processing smooths out gaps that occur between different data sources, but it does not correct all the systematic discontinuities perfectly. The sources of discontinuity are potentially any of the various sources of data used in the climate model; not just the temperature record directly. The real reason for the jump is likely to be the response of the model to encompass satellite data. It’s certainly not a real measurement.

    You should read the documentation. There’s a good description published when it first came out in 1996, which shows the nature of what you are using. It’s not data in the sense of measurement, but a highly processed set of calculations attempting to make a single consistent picture using data from many sources. See Kalnay et al. (1996), The NCEP/NCAR 40-Year Reanalysis Project, Bull. Amer. Meteor. Soc., Vol 77, pp 437–471.

    In fact, this lack of understanding of the nature of the data you are using pervades the whole page, but it is most drastic in your physically impossible speculations about your figure 5 and the putative jump.

    A much better way to see if there is a jump or not in reality is to use actual observational data from some continuous instrument records. The only source that works across the period where you think there is a jump is the radiosonde record. Here, for example, is a record for 1958-2005. The citation is: Sterin, A.M., 2007. Radiosonde Temperature Anomalies in the Troposphere and Lower Stratosphere for the Globe, Hemispheres, and Latitude Zones. In Trends Online: A Compendium of Data on Global Change. Carbon Dioxide Information Analysis Center, Oak Ridge National Laboratory, U.S. Department of Energy, Oak Ridge, Tennessee, U.S.A., and online data is available at CDIAC. Graphs are available here. There’s no jump. Neither is there a jump in any of the observational datasets used in the reanalysis. There are, however, discontinuities as different instrument records are merged. That’s what you have plotted.

    There’s a heap of other really fundamental problems with your page, which it is not appropriate to address on this blog. I am not going to spend a lot of time trying to persuade you of this. It will be obvious to anyone familiar with the physics of climate and the available data. You are not going to be able to fix it with a few minor corrections; the whole page is basically useless.

    No offense is intended, truly. I don’t expect you do be persuaded, but that’s up to you.
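
The kind of discontinuity under discussion can be checked with very simple statistics. A generic sketch (not how reanalysis teams actually diagnose such artifacts) compares the means on either side of a candidate breakpoint with Welch's t-test; the series below is synthetic, with a deliberate step inserted in 1978.

```python
# Welch's t-test across a candidate breakpoint; synthetic series with a step.
import numpy as np
from scipy import stats

rng = np.random.default_rng(3)
years = np.arange(1948, 2009)
series = rng.normal(0, 0.3, years.size)
series[years >= 1978] += 1.0  # deliberate artificial step at the satellite era

before, after = series[years < 1978], series[years >= 1978]
t_stat, p = stats.ttest_ind(before, after, equal_var=False)
print(f"estimated step: {after.mean() - before.mean():+.2f}, p = {p:.2g}")
```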

  46. Duae

    You say:

    “The real reason for the jump is likely to be the response of the model to encompass satellite data. It’s certainly not a real measurement.”

    Better take that up with Kalnay et al. I am sure that they have had plenty of opportunity to revise the dataset since 1996 should it have the faults that you purport.

    The satellite record begins in 1979 and I imagine that Kalnay et al used it to document 100hPa temperature.

    gmo, the location of the tropopause at the pole in winter is immaterial to the argument.

    Guys, you can be relevant if you take the trouble to look at the annual flux of monthly mean temperature at the tropopause, 150hPa, 200hPa and the surface at the equator or nearby, using whatever data-set you choose for whatever period you wish.

    The issue to address is the behaviour of the average monthly temperature at the 100hPa level by comparison with the levels immediately beneath the 100hPa level.

    Don’t get your knickers knotted about the inessentials.

    And don’t imagine for a moment that you can fluster me with this sort of comment:

    “In fact, this lack of understanding of the nature of the data you are using pervades the whole page”

    “There’s a heap of other really fundamental problems with your page”

    Manners, gentlemen. Try to be specific if you can. It’s much more helpful.

  47. Erl, the NCEP reanalysis folks are correcting stuff all the time, and this particular issue is a known problem. The 1978 discontinuity for stratospheric temperature in the NCEP reanalysis is already known and discussed in the scientific literature. By the nature of the way this data is calculated, it’s not at all simple to correct. I’m just telling you the nature of the data you are using. Personally, I don’t know where they are on fixing this problem, and since I am not using this data myself it’s not something I worry about. But they don’t need to be told about it.

    Of course the reanalysis used satellite data; that’s explicit in the documentation I have cited for you. The jump is an artifact of the integration of the satellite data with older records. It doesn’t show up at all in any continuous records.

    Note also – again, as I told you last time – this is not simply a combination of measured data. It’s the output of a climate model. That makes correction very subtle; changes to one part impact all the others as well. There’s some good discussion of precisely the jump you have plotted in your figure 5, in Huesmann, A. S., M. H. Hitchman, The 1978 shift in the NCEP reanalysis stratospheric quasi-biennial oscillation, Geophys. Res. Lett., 30(2), 1048, 10.1029/2002GL016323, 2003. Nor is this the only such paper. The jump is well known to those working in this field, and everyone knows it’s a consequence of the integration of satellite data. No-one is claiming a real jump, and there is no real jump in any measurement record across this period.

    I also recommend that if you are going to start to talk about “manners”, that you avoid remarks about “knickers in a knot”. There’s no call for it and no basis for it. Let’s avoid speculations about other people’s motives or feelings.

    Similarly, you should not take offense at being told your work is full of errors. This is not a personal remark at all – it is a legitimate comment on your work with no personal insult or lack of manners involved. I HAVE been specific on some of the problems: the confusion on the nature of ozone, on heat flow to the tropopause, the conflating of warming as the direction of energy flow with the idea of a shift in climate norm, and the confusing of modeling artifacts with measured trends. I simply note for the record that there are many more errors than I have space to address.

    I don’t expect you to be at all “flustered” by this. I fully expect that nothing will change, except perhaps cosmetically. I expect you will continue to treat corrections as a matter of “debate”, and that you will – as is your right – disagree about the evaluation of the technical merits in your writing. It will remain as bad as ever, and that’s not my problem. I’ve limited myself to commenting on matters you have raised here yourself, and even that is too much for the comment stream of another article entirely.

    There’s no aim to insult. If you find it personally belittling to be told that there are more errors in your work than I can take up explicitly, then that’s too bad. It’s legitimate non-personal non-insulting comment, given in good faith and with no intent of rudeness. I will certainly not attempt to itemize them. If you find any value in my specific corrections, that’s good. If not, that’s fine too, and what I expected. I don’t expect you to have a high view of my technical expertise either; and that troubles me not at all. I’m not an expert in any case.

    Best wishes – Duae Quartunciae

  48. Chris, I have a question about that last chart. What does the Kelvin on the x-axis represent? Is it the change in temperature to be expected over a decade, or per year?

    Response– It’s Kelvin/year– chris

  49. MikeN,

    There is nothing special about 30 years. Again, it is just a rule of thumb that has developed over time and been found useful for its purposes. For weather/climate data, given the magnitude of their inherent variations, 30 years works well.

    If you want to define a normal or average, then ideally you would have the entire population or all possible data from which to make the calculation. In reality you do not. You use what you have available, but you make sure to take enough so that luck or random noise is unlikely to pollute the results.

    Are you implying a period longer than 30 years may be needed because climate may feature cycles with longer periods, so that a 30-year interval may only catch movement in one direction along the supposed cycle’s sine wave? That is a reasonable idea on the mathematical side. But 30 years is likely enough to show the movement in that direction, while a shorter period is less likely to show even that movement. The idea of such long-period internal cycles is not supported by much evidence, so there is not considered to be a need to look at, say, 200 years. Long fluctuations, if found over say a 30-year interval, are likely indicative of external climate forcing, like changing insolation, aerosol effects, or greenhouse gases, which is probably the signal of interest.

  50. erlhapp,

    I think Duae has covered most of my concerns. You may think tropopause height does not explicitly matter for any particular argument you are making, but your misunderstandings relating to it seem representative of multiple issues. I still have not seen you define what you mean by tropopause height. That inclines me to believe that in your temperature analyses around the 100-200mb region you are mixing troposphere and stratosphere when you think you are keeping them separate.

    You appear to regularly make assumptions that are not accurate and build on them, often with more incorrect assumptions, and you also seem to be either unaware of, or to ignore, widely held theory. Instead you hypothesize mechanisms like solar fluctuations driving ozone or ion changes in the middle atmosphere through unrealistic processes. Your mathematical analysis seems to consist of averaging and eyeballing plots and graphs.

    You imply that CFCs do not matter for ozone loss. You apparently assume that the QBO observed in the stratosphere is driven by solar changes. That is in sharp contrast to the long-held theory that the QBO is driven by atmospheric waves generated in the troposphere that break and deposit momentum in the stratosphere. Similarly, sudden stratospheric warmings in the NH are understood to be due to dissipation of Rossby waves in the middle atmosphere, not some mechanism involving solar wind and UV. Why do you dismiss this and seemingly vast aspects of atmospheric dynamics? You seem to just declare it “unphysical” and demean it (yet you still make a point of criticizing the tone of those conversing with you here). Why do you claim relatively low concentration GHGs cannot influence climate, yet assert that the orders-of-magnitude-less-dense middle atmosphere drives changes in the denser troposphere?

    I agree that there are pervasive fundamental flaws in your page. On top of the issues with the data, you attempt to throw away current understanding built through decades of work with simple dismissals, innuendo, and hypotheses built mostly on themselves plus much misunderstanding. You can get kudos from others who are not well-informed on the subject simply by posting write-ups like yours, with the benefit of likely only being criticized in a limited amateur fashion, as in this thread. However, given how much you think the establishment is wrong, and how you think “the AGW guys” “must get out of the way”, you would seem obligated to submit your work to the peer-review and publishing process. That is how ideas compete in the science marketplace. I suggest it because, as much as you think you have to teach, maybe you could learn something from it.

  51. Duae,

    The dataset you cite is too aggregated to be useful.

    The supposed discontinuity in 1978 is not relevant. You can use the data post 1978 if you wish. I invite you to:

    “look at the annual flux of monthly mean temperature at the tropopause, 150hPa, 200hPa and the surface at the equator or nearby, using whatever data-set you choose for whatever period you wish.”

    That is the way to address the question of whether greenhouse theory passes the observational test below the tropopause or not. This is an annual test, not a test that relates just to 1978. Each year there is a climb in the temperature of the tropopause and the lower stratosphere to a maximum in August. Above the tropopause convection is weak and the temperature curves reflect that fact. Below the tropopause convection is strong and there is no response to the heating of ozone that could be attributable to downwelling radiation. Temperature is maximal in May even at 150hPa.

    Just by the way, the “discontinuity” in 1978 appears in lots of data including sea surface temperature in oceans south and north of the equator. It is normally described as the Great Pacific Climate Shift. It marks the transition from the weak solar cycle 20 to the strong cycle 21 and a run of continuous El Nino heating events from 1976 through to 1983. Sea surface temperature in the northern tropics abruptly moved to a new plateau about half a degree higher than previous levels. Mid latitude sea surface temperature in the northern hemisphere had been cooling since 1948 and after 1978 it began to warm. In the southern hemisphere sea surface temperature at high latitudes had been stable and in 1978 it also began to climb. The increase in sea surface temperature at high latitudes in both hemispheres is the most spectacular manifestation of the recent warming of the Earth that dates from 1978.

    Attacking the dataset because of a supposed discontinuity in 1978 is not a relevant response. The decline in 100hPa temperature since 1979 is documented in the satellite record. Are you saying that the best estimates of 100hPa temperature from radiosondes prior to 1979 were all falsely low?

    Response– Why do you insist on defending the indefensible? Duae has explained very clearly several errors/misunderstandings that you have, either concerning the data or the underlying physics of the subject. These concerns he had are well documented in the literature and are not news to the scientific community. Please stop this now.– chris

  52. Now the AGW community is recognizing there are long periods of natural warming and natural cooling.

    I would note there is no “forcing” built into the climate models to account for this. A really large volcano could result in up to 7 years of cooling but there is no forcing capable of a 10 year decline.

    This internal climate variability is thought of as simple “noise” in the modeling field.

    But there is a signal in the noise. How do you know the rest of your model is right when there is such a large noise signal?

    Your climate model run in 1999, using the 1998 temp data as your end-point base dataset, would have forced you down a certain road to hindcast/match that record, one completely different from using 1976 or 2008 as your end-point. I hope you see my point.

    And 30 years is, in fact, too short to do a proper analysis. One of the big drivers of natural climate variability has a longer cycle than 30 years (or at least the trends cannot be fully captured in a 30 year timeframe).

    The AMO, for example, can contribute up to +/- 0.3C of natural variability to global temperatures.

  53. OK, then why is A2 ‘extreme’? Business as usual doesn’t sound like it should be labeled as an extreme scenario. If I’m understanding the graph right, then I’m thinking that this paper doesn’t say what its authors intend it to say.

    It looks like under the A2 scenario, there is less than 5% chance of breakeven over the course of a decade and less than 1% chance of losing .2 degrees over one decade.

  54. Also what is the probability measuring? Chances of a specific decade, or chance of some decade over a century?

  55. What Chris doesn’t seem to realise is that most of the previous ‘flat’ periods are associated with cooling due to volcanoes. For example, the period he shows, 1977-1986, with cooling in the early 1980s, is related to the El Chichon eruption of 1982. Similarly there was a dip in global temperatures following Pinatubo in 1991. There is a good review paper, ‘Volcanic eruptions and climate’ by Alan Robock.

    Response– This behavior still occurs in models without volcanic eruptions, even in simulations for the whole 21st century. The point is that nothing about the 1998-2008 period is inconsistent with models or expectations.– chris

  56. So Chris, at what point would flat temperatures suggest that something is wrong in the models? Or would it never be inconsistent, merely being passed off as non-radiative noise? How many years of flat or cooling temperature would it take to suggest that there are internal radiative oscillations?

  57. Chris, isn’t that chart showing less than a 5% chance of a 10 year freeze?
    Isn’t calling that consistent with models or expectations similar to saying that a 1 degree C warming by 2010 is consistent with models and expectations?

  58. There are a lot of people who talk about climate models without even knowing what is included in them.

    This is the 2005 version of GISS Model E broken into the “GHG forcing” component and the “other forcing” components.

    I can break down the “other forcing” into its various components as well if someone wants – Volcanic, Aerosols, Solar, Land-Use, Net Other – but there is no other forcing that can overwhelm the 0.23C per decade trend of GHGs over a whole decade (except perhaps a VEI 7 volcano).

  59. Chris, I was talking about the real world, not computer model simulations.

  60. Erl explicitly asked me about a jump in temperature in his figure 5, back in this comment.

    Now that I have explained what causes that jump, he’s calling it “irrelevant”.

    None of his subsequent challenges are at all difficult; and they all involve further misunderstandings of the underlying physics or data. The effect of gases that interact strongly with thermal IR radiation (greenhouse gases) is to help raise temperatures in the troposphere by increased absorption, and depress temperatures in the stratosphere by increased emission.

    Explaining this in further detail is likely to benefit Erl about as much as explaining why his comments on the jump were wrong did. It’s not “debate” in any meaningful sense. So thanks for the exchange anyway, and I’ll keep an eye out for your wine next time I want a bottle. And goodbye.

  61. Actually, since weather and climate are chaotic systems, this paper does address the change in states, which the climate models do not.

    Click to access 2008GL037022_all.pdf

    I have seen other papers that have addressed this topic, and it all comes down to regular state changes for warming or cooling superimposed on the longer term trends.

    As for the numbers you started the post with: GISS, CRU, and UAH all show similar trends from 2001 to the present.

    Year   01   02   03   04   05   06   07   08   Linear (C/yr)
    GISS  .56  .67  .65  .59  .75  .64  .72  .55   +.0037
    CRU   .41  .46  .47  .48  .42  .41  .32  .36   -.0154
    UAH   .20  .31  .28  .20  .34  .26  .28  .05   -.0133

    That trend being: there has been no trend. I hate to break this to you, but if I get to “adjust” the actual global record, I too can make the trend look like anything I want.
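
    Those “Linear” slopes are just ordinary least-squares fits to the eight annual values. For anyone wanting to check them, here is a minimal Python sketch (purely illustrative; the numbers are typed in from the table above):

        import numpy as np

        years = np.arange(2001, 2009)  # 2001..2008
        giss = np.array([.56, .67, .65, .59, .75, .64, .72, .55])
        cru = np.array([.41, .46, .47, .48, .42, .41, .32, .36])
        uah = np.array([.20, .31, .28, .20, .34, .26, .28, .05])

        # np.polyfit returns [slope, intercept]; the slope here is in C per year
        for name, series in (("GISS", giss), ("CRU", cru), ("UAH", uah)):
            print(name, round(np.polyfit(years, series, 1)[0], 4))
        # prints GISS 0.0037, CRU -0.0154, UAH -0.0133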

  62. Vernon, that’s just silly.

    The point is that ten year windows (or eight year windows, if that is what you are using) will have their own distribution of trends. The mean warming trend seen over a randomly selected eight year window anytime over the last 30 years will be about 0.14, but the 2σ limits for natural variation in a randomly selected 8 year window are from about -0.4 to 0.7. I’m using HadCRUT3v monthly data for that calculation.

    You’re comparing apples and oranges. We EXPECT 8 year windows to show lots of variation, and it shows up all along the last 30 years. This is the period which shows up pretty clearly as a warming trend, shown in the main blog post above. Superimposed on the main trend is variation of roughly 0.1C above or below the main trend in any year.

    That’s precisely why it is statistically ignorant to pull out the last eight years, or the last ten years, and see it as the end of the trend of the previous 30. You want at least 3σ outliers before you suspect anything out of the ordinary.

    You certainly CAN falsify the trend with a suitably long window. Above it was asked how long you need a flatline temperature trend before you have falsified the hypothesis of a longer warming trend. Well, over the last 30 years, using monthly data from HadCRUT3v, I find it takes a 12 year lull before you are 2σ outside the normal variation of a short term trend, and a 16 year lull before you are 3σ outside normal variation.

    So if you can find a 16 year window in the last 30 where there’s a flatline trend, then there may be something going on other than the kinds of variation we’ve been seeing anyway over the last 30 years. (Which is NOT just from volcanoes.) Less than that, and statistically it’s still well within what we expect for an ongoing continuation of what we’ve been seeing over the last 30 years of warming.

    For an eight year window to be outside the 3σ range seen over the last 30 years, you need a very strong cooling signal; of about -0.3 degrees per decade. Observations could in principle be used to falsify expectations of the trend. The point is… THEY DON’T. The trend over the last 30 years remains consistent with what is seen in recent years, over any window you like.
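
    For anyone who wants to replicate this kind of window analysis, here is a rough sketch of the mechanics in Python. It is not my actual spreadsheet: it builds a synthetic monthly series (a 0.16 C/decade trend plus noise, standing in for HadCRUT3v) just to show how the distribution of short-window trends is collected. Note that overlapping windows are not independent, so this is descriptive rather than a formal significance test:

        import numpy as np

        rng = np.random.default_rng(0)
        months = np.arange(360)                    # 30 years of monthly data
        decades = months / 120.0
        anoms = 0.16 * decades + rng.normal(0.0, 0.1, months.size)

        win = 96                                   # 8-year window, in months
        slopes = np.array([
            np.polyfit(decades[i:i + win], anoms[i:i + win], 1)[0]
            for i in range(months.size - win + 1)
        ])                                         # slopes in C per decade

        lo, hi = slopes.mean() - 2 * slopes.std(), slopes.mean() + 2 * slopes.std()
        print(f"mean {slopes.mean():.2f}, 2-sigma range {lo:.2f} to {hi:.2f}")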

  63. Duae, sounds reasonable. Now the issue I have is that there are many different future projections for climate, depending on the model used and the amount of various feedback inputs. Some of these results are also outside the trend of the previous 30 years.

    Suppose there was a ten-year flatline, which seems to be the assumption behind this paper. Then wouldn’t this make the more extreme scenarios unlikely? In other words, the chances of a 6C increase by 2100 would have to drop substantially.

  64. Actually, that would only be true with HadCRUT3v, because you picked the one with the biggest uncertainty. CRU does not adjust for UHI; they increase the uncertainty. Both GISS and UAH have less uncertainty. The IPCC in the 4th AR states that there will be .22C per decade between 2000 and 2030, with an upper bound of .35C per decade and a lower bound of .09C per decade, and the current trend for 8 years is below the lower bound. Now I did not set the IPCC boundaries, they did.

  65. Vernon,

    Your assertion seems to be that since the global average temperature is supposedly outside the range of the IPCC scenarios, warming must not be occurring, at least as the IPCC describes it. You also seem to imply that the groups that publish global temperature indices adjust the results for some reason other than trying to produce the most accurate results for what they aim to calculate, but as I hope you are not saying that, I will not address it.

    There are places around the web where you can see people working out the statistics of short term trends and showing what all can be consistent with a continuing long-term trend. I would recommend this link as demonstrating how the very narrow 2001-2008 period is not in opposition to model projections:
    http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say

    For what exactly are you taking the 0.22(+/-0.13)C/dec for 2000-2030? Does that source indicate that the trend for any period at all during 2000-2030 will be within that confidence interval, or that the trend for the entire 2000-2030 period will be? What exactly are those “bounds” you mention? The HadCRUT3v linear trend for 2000-2003 was +0.0613C/yr, which is well outside the bounds you mention, so what is the significance of that alone and compared to your preferred 2001-2008 period?

  66. If you go above the bounds, then you should adjust your expectations upwards. If you go below the bounds, then you should adjust them downwards.

    Say you flipped a coin 10 times and it showed up 8 heads and 2 tails, instead of 50-50. This makes it slightly more likely that the coin is biased towards heads. If after 100 flips you get 80-20, then it is even more likely, and if after 1000 it is still 800-200, the chances are very good. More data would adjust your thinking. 8 heads and 2 tails doesn’t provide much, but it still is more likely to come from a coin that is biased heads than one that is biased tails.
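
    To put rough numbers on that (illustrative only; the 0.6 and 0.4 biases are arbitrary stand-ins for “biased heads” and “biased tails”), compare the likelihood of each result under the two hypotheses:

        from scipy.stats import binom

        for flips, heads in [(10, 8), (100, 80), (1000, 800)]:
            ratio = binom.pmf(heads, flips, 0.6) / binom.pmf(heads, flips, 0.4)
            print(flips, "flips:", ratio)
        # the likelihood ratio favouring the heads-biased coin
        # grows enormously as the sample gets larger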

  67. Vernon, NO. This is NOT a measure of uncertainty in a dataset. It is a measure of the real world variation, which is similar for different datasets, as they are measuring the same real world. You get pretty much the same thing with GISS and HadCRUT3v. I have not calculated with other data sets.

    Your comments on the IPCC and uncertainty mean that you STILL don’t get this simple matter of real world natural variation. The IPCC comment on trends is NOT a comment on the natural variation you get by picking one isolated short term trend. It is a measure of the long term trend and confidence limits on it.

    If you do the calculation with ANY dataset measuring surface temperatures, you should see about the same thing. A strong trend over the last 30 years, with high confidence, and with plenty of year to year variation that means a short term window will have trends above and below the persistent long term trend.

    Differences in datasets will mainly show up in short term numbers, not long term trends, and they have to do mostly with how regional numbers are aggregated for a global total.

    I’ve repeated the calculations with GISS and HadCRUT3v monthly anomalies. I’ve got it in a spreadsheet, so I can vary the parameters quickly and easily.
    HadCRUT3v: 30 year trend = 0.157 C/decade, +/- 0.015 (95% conf)
    HadCRUT3v: 10 year trend = 0.074 C/decade, +/- 0.068 (95% conf)
    HadCRUT3v: variation in 10 year trends over the last 30 years:
    HadCRUT3v: 3σ range is -0.139 .. 0.499; mean is 0.180 C/decade

    GISS: 30 year trend = 0.158 C/decade, +/- 0.018 (95% conf)
    GISS: 10 year trend = 0.173 C/decade, +/- 0.083 (95% conf)
    GISS: variation in 10 year trends over the last 30 years:
    GISS: 3σ range is -0.107 .. 0.466; mean is 0.180 C/decade

    Note that the biggest difference is the recent trend. That is because this is what depends most on the differences in calculating an individual year from regional data. Note especially the lower confidence when you calculate a trend in a shorter window. That’s basic statistics also.

    When you look at long term trends in different datasets measuring the same thing, they are pretty similar. If you are interested in such things, the GISS method appears to give a bit less variation in short term trend. The very close match of the mean for 10 year windows is probably partly coincidence. They should be close, but a bit of difference in datasets would be expected, at least in the third decimal place.
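
    For the record, the core of each of those lines is a plain least-squares fit with its standard error. A minimal sketch (assuming you have a monthly anomaly array loaded; note the error bar here is the naive OLS one, which understates uncertainty because monthly anomalies are autocorrelated):

        import numpy as np
        from scipy.stats import linregress

        def trend_with_ci(anoms):
            """OLS trend in C/decade with a naive 95% interval."""
            decades = np.arange(anoms.size) / 120.0   # monthly steps -> decades
            fit = linregress(decades, anoms)
            return fit.slope, 1.96 * fit.stderr

        # synthetic stand-in for 30 years of HadCRUT3v monthly anomalies
        rng = np.random.default_rng(1)
        fake = 0.157 * np.arange(360) / 120.0 + rng.normal(0.0, 0.1, 360)
        print(trend_with_ci(fake))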

  68. MikeN: good question! You ask: “Suppose there was a ten-year flatline, which seems to be the assumption behind this paper. Then wouldn’t this make the more extreme scenarios unlikely? In other words, the chances of a 6C increase by 2100 would have to drop substantially.”

    That’s subtle, because there are two different factors in the “6C” number. That’s a projection, and it depends on scenarios, and on uncertainty about sensitivity. I’m not well up on the scenarios, but mostly they involve how much we manage to reduce emissions, and estimating consequences. A “head in the sand do nothing” scenario involves an acceleration in greenhouse forcings, and an increase in the trend.

    That is, the “6C” is a projection of increasing trends, and we won’t be able to test that against any measurement of the existing trend.

    A 10 year lull doesn’t falsify anything, because as we’ve shown, the lulls are a normal part of the existing 30 year trend. And in any case, the lull doesn’t work so well now, because the trend over the last ten years is clearly up again. People who tell you different might be measuring the ten years up to about the end of 2007, which was nearly flat. The 10 year window has picked up since then, and this is seen in both the datasets I’m looking at.

    The last time there was a 10 year window that was actually negative was the window up to about mid 1997, just before the big El Nino spike of 1998.

  69. Duae – If the Earth cools or fails to warm for eleven years due purely to the re-organization of heat within the system (the radiative imbalance remains the same), then your point is correct. We have two questions we must answer. First of all, can the Earth’s recent flat trend be explained by internal non-radiative oscillations? How long must a flat trend persist to necessitate unrecognized internal radiative oscillations as a player? If the Earth does not warm, or cools, for 30 years, wouldn’t we have to recognize that something else is at play? That the radiative imbalance is being altered by internal forces?

    Secondly, has upper ocean heat content (the best proxy for TOA radiative imbalance) maintained its rise? OHC has been either falling or remaining flat (Loehle v Willis) since at least 2003.

    Craig Loehle wrote this in a comment on Marohasy’s blog: “So, of course it could cool for a decade due to natural causes, but then it could have warmed in the previous 2 decades from natural causes. Natural causes is a two-edged sword and one must be careful how one wields it. You can’t say recent cooling is natural but be 90% certain that 2 decades of warming is AGW.”

    Douglass also has a paper coming out that uses updated OHC data to show that four climate periods associated with different phases of the PDO in the past century also saw radiative imbalances of different fluctuating signs. This indicates that the PDO, an internal oscillation, may also be radiative. I’d argue that behind it all is ENSO, but that’s irrelevant to this discussion.

    Response– I just don’t get what is so hard to understand about this. 10 years of little trend is expected; a century of +0.8 C is not within the confines of internal variability. Multiple fingerprints and observations are not consistent with internally induced climate change and require an external forcing.

    Further, the effects of internal variability are not completely non-radiative, since they do influence water vapor and clouds, which influence SW/LW fluxes. You’re not going to understand any of this if you continue to get your background from disinformation artists like Marohasy and Loehle.– chris

  70. Guys,
    Good morning to you. Very sunny and warm here.

    There is obviously a lot of feeling behind your assertion that the temperature change in the last decade is within the bounds that might be expected from the statistical point of view.

    Let’s take a step backwards and ask ourselves what it is that is causing the temperature variation that we see. What is the countervailing force that can stall the temperature advance that is supposedly due to greenhouse gas? If the temperature flat-lines for 10 years we must acknowledge that there are important sources of variation that are not understood.

    Let’s ask ourselves if we can agree as to where the temperature change is occurring. That will surely help us to work out what might be responsible.

    Temperature is most volatile at high latitudes. It appears to me that, with all the variation that we have seen at high latitudes as we have measured it over the last 60 years (setting aside the question of how well we have actually measured it), today’s high latitude temperature lies within the bounds of what might be called ‘expected natural variation’. The variation that has occurred in the second half is no greater than in the first half. Perhaps you could spend some statistical effort in validating that assertion one way or the other.

    So, it seems to me that we should spend some effort in trying to work out what the source of that natural variation might happen to be. That is what I am trying to do. It’s tragic, in my opinion, that people have taken up entrenched positions.

    One more thing. I think we have to get away from this notion of a “global temperature”. I don’t think it is helping the analysis.

    Response– Duh. In a situation where, for example, you can mine some cold, deeper water from the ocean to the surface (which is not easy, since it takes energy to lift dense water), you can delay some warming for a bit. Eventually the extra energy has to go somewhere, and eventually the CO2 and other GHGs are going to overwhelm the natural ups and downs that have happened over the entire Holocene, since their forcing is ever-rising and cumulative. Climate change doesn’t take away weather; only the wild imaginations of skeptics take away weather. And the ironic thing is that the “warmists” supposedly ignore natural factors. — chris

  71. Carl asks: “First of all, can the Earth’s recent flat trend be explained by internal non-radiative oscillations?”

    It isn’t flat, Carl. It was almost flat up to the end of 2007, but the most recent 10 year trends are well back up again. I guess you could get a “flat” or even negative trend by cherry picking an even shorter window. That would be pretty silly, but it would work. Apart from that quibble — the answer to your question is “maybe”. If you rephrase to ask whether those oscillations are a contribution (rather than the whole explanation), then the answer becomes “of course”.

    Temperature doesn’t change at “random”; it takes a lot of energy to give even a tiny shift. There are all kinds of natural effects that can give a push one way or the other, which show up as “variation” for our purposes, because none of us pretends to be able to predict all those pushes. Oscillations, by definition, are not cumulative. There certainly are internal non-radiative oscillations, such as shifts in ocean currents that bring colder water to the surface or let colder water sink again. That’s an important part of the natural variation. There’s also good old “weather”, with variations in cloud cover and hence albedo, that can shift the balance one way or the other. In recent years the extended solar minimum probably also has an impact; you can find a solar signal in the datasets if you do it carefully; you need to look for periodic 11-year cycles, and it takes a bit more than a simple regression. But it’s there and it has a role as well.

    None of these are cumulative effects, and none of them work for explaining the long term multi-decade trend, which is one of the clearest signals seen in the data, and which is being driven mostly by a steadily increasing greenhouse effect.

    You ask “How long must a flat trend persist to necessitate unrecognized internal radiative oscillations as a player?”

    We already know that such oscillations are a player.

    If what you are really asking is what it would take to falsify the notion that the steady rise of the last 30 years has finished, then I answered that above. A flat trend for about 16 years would be a reasonable indication that the longer term trend may have moved into some new mode. Less than that and you are not outside the normal 3σ variation corresponding to what’s seen with the existing mode. Surprising perhaps, but easily confirmed by actually looking at the data.

    Real science does indeed look at all these various factors when looking at climate and weather. On the other hand, the elephant in the room you clowns are ignoring is that greenhouse effects are a big player as well. And, ironically, one of the easiest to quantify. The physics of thermal absorption and emission in the atmosphere is basic undergraduate thermodynamics, and it means that along with everything else, there is a steadily increasing forcing at work of a magnitude that is of a similar order to the 30 year trend we keep talking about here.

    CO2 in particular is increasing at about 2 ppm/year, which is log2(1+2/385) = 0.0075 of a doubling per year. The radiative forcing is 3.7 W/m^2 per doubling, and that is known with high confidence and good accuracy. The climate impact (sensitivity) is less well known, but it’s about 3C for that amount of forcing. +/- 50% or so. So CO2 alone is expected to give a trend of something like 0.2 C/decade, at present. Now where have we seen numbers of that kind of magnitude before?
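
    Spelling that arithmetic out (same round numbers as above: 2 ppm/yr on a 385 ppm base, 3.7 W/m^2 per doubling, roughly 3C of warming per doubling, and ignoring the ocean damping noted in the next paragraph):

        import math

        doublings_per_year = math.log2(1 + 2.0 / 385.0)      # ~0.0075
        forcing_per_year = 3.7 * doublings_per_year          # ~0.028 W/m^2 per year
        warming_per_decade = 3.0 * doublings_per_year * 10   # ~0.22 C/decade
        print(doublings_per_year, forcing_per_year, warming_per_decade)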

    The effect is moderated by damping effects from the ocean, and there are certainly lots of other smaller but still important impacts needed to get a full picture.

    What is faintly contemptible is the pretensions of you guys to be doing something useful while you keep asking trivial questions, writing blogs with lots of basic errors in statistics and empirical science, and all the time ignoring this particular impact for some inexplicable reason.

    Sigh.

  72. Duae, you are mistaken about the models and projections and forecasts, and just seem to be throwing words around.

    There are many different model projections, with different estimates for feedbacks. I am saying that if you run the models with a do nothing approach, you still have many different forecasts because of different possible feedbacks.

    It’s not just a yes/no answer on warming but rather how much. IF there were a decade of flatline, then this should create a change in the expectations with regards to which model forecast is correct.

  73. Re Chris’ response to me:

    “Response– I just don’t get what is so hard to understand about this. 10 years of little trend is expected; a century of +0.8 C is not within the confines of internal variability.”

    11 years of flat temperatures (even after adjusting for ENSO), with flattened (or falling) OHC levels, indicates that internal radiative forcings are at play. My argument is that these same forcings drove the warming from 1976 to the present. You give a small role to internal variability. So I ask again, how long must SSTs remain flat or falling before you recognize that there may be internal radiative forcing at play?

    “Multiple fingerprints and observations are not consistent with internally induced climate change and require an external forcing.”

    What fingerprints and observations are not consistent with internal radiative oscillations? The pattern and timing of warming SSTs since 1975 implicates two El Nino events (1986/7 and 1997/8) that did not coincide with volcanic eruptions. These events caused a reduction in cloud cover and altered circulation in the North Atlantic. The SST record in the Indian and West Pacific Oceans shows that warming only occurred during those two El Ninos (temperatures rose but never fell). I just wrote a very short post at climatechange1.wordpress.com that shows this. And what fingerprints are consistent with enhanced GH theory, and not with internal oscillations?

    “Further, the effects of internal variability are not completely non-radiative, since they do influence water vapor and clouds, which influence SW/LW fluxes.”

    So we disagree on the size of internal radiative oscillations? I ask you to take a look at the nature of the rise in SST since 1976 in my link above. Looking at the data, I have concluded that ENSO represents a strong, radiative, internal oscillation. So you have two arguments to make: that my interpretation of the data is wrong, and that the warming signal rules out internal oscillations.

    “You’re not going to understand any of this if you continue to get your background from disinformation artists like Marohasy and Loehle.”

    Why do you always insist on insults? This is science, not politics. We could argue all day about what we think of various scientists, but when it comes down to it, science must be judged on its logic, so why bother with the ad hominems?

  74. MikeN: I don’t know what you think I am mistaken about, as I agree with your latest comment, and said similar things myself in my reply to you (here).

    You asked about “6C”, and so I said: “That’s a projection, and it depends on scenarios, and on uncertainty about sensitivity.” The sensitivity is precisely about different feedback estimates; so I am here saying that you get different projections if you use models with different feedback responses. Now, you say: “There are many different model projections, with different estimates for feedbacks. I am saying that if you run the models with a do nothing approach, you still have many different forecasts because of different possible feedbacks.” It’s the same thing.

    I’m glad we agree on that. I may have missed your other comments, but all I saw was a question about just one particular projection: 6C, which uses the high end of sensitivity estimates. So I noted in my reply that this was asking about a value that is uncertain and depends on estimates for uncertain feedbacks (sensitivity).

    I’m writing short comments in a blog comment stream. I’m not just “throwing words around” but trying to pull out useful points that can be expressed quickly.

    Where I think you are still not quite following the nature of these projections is your comment on the implications of a short term lull or flatline. It does not alter the “6C” projection. It remains precisely as before, a high end sensitivity estimate with a “do nothing scenario” (like “A1FI”), for the end of this century.

    Your remarks about “adjusting expectations” are true enough for expectations based simply on extending an observed trend, but that says nothing about the 6C projection, based on a model that remains consistent with observed trends and variations. For what it is worth, I also think the “6C” should generally be given in parallel with the corresponding lower bound estimates, which would be something like “2.5C” for the same scenario and low end sensitivity estimates. The “best” estimate sensitivity would be around “4C”.

    Risk analysis techniques properly focus on the upper bounds; but for discussion of the basic science involved, upper and lower bounds are equally relevant and important; and that’s what we’re speaking of here.

    A “lull”, such as we have seen recently, is a real phenomenon; but it is not statistically out of line with all the real temperature change patterns seen in the last 30 years of increasing temperatures. More fundamentally: a lull does not make any difference to bounds on sensitivity. Observational constraints on sensitivity can be made when a strong and quantifiable source of short term variation shows up, like a big volcano, but you can’t say anything at all about sensitivity or feedback based on a lull where the short term forcings involved are not quantified.

    Models, including those with strong feedback, still produce “lulls” similar to what is observed recently. Strong feedbacks can even make them more likely, because of a stronger short term response to unknown transient forcings.

  75. Well, if it takes a 10 year trend to matter, then:
    Year   UAH    CRU
    1999  0.041  0.302
    2000  0.035  0.277
    2001  0.198  0.406
    2002  0.312  0.455
    2003  0.275  0.465
    2004  0.196  0.444
    2005  0.339  0.475
    2006  0.261  0.421
    2007  0.282  0.399
    2008  0.048  0.326

    UAH is 0.013C/decade
    CRU is 0.007C/decade

    Here are the 10 year trends, and I just do not see any warming.

    About the Arctic warming, it was shown a few years back that 35+ percent of the warming is due to black carbon, an aerosol, and not CO2 warming.

  76. Duae, OK, we are in agreement; I think there’s just a misunderstanding between the different words being used.

    My point is that the IPCC and others frequently issue a projection for warming, based on model runs and whatever else, of 2.3C-4.5C with 95% confidence or some range like that. Other prominent names suggest numbers like a 6C warming. A 6C warming is less likely to produce a flatline than a 2C warming, so if a flatline is produced then an estimate of 2.3C-4.5C or whatever it is should be recalculated, and a different probability estimate should be doable. My issue with this post and others like it is that people seem to be saying that a flatline is not unreasonable, therefore nothing has changed with regards to the science. However, the original calculations for the likely effects of global warming have to be adjusted in the presence of a flatline. It doesn’t invalidate a model forecast, but it makes it less likely compared to other forecasts.

  77. Vernon,

    Why not figure the trend for 1992-2001 also? I contend that period (just like the last 10 years) is unsurprising and no reason to disbelieve that the underlying long-term trend is about 0.2C/dec.

    I would also like to see the uncertainty with your trend values. Of course, how you estimate that uncertainty “error bar” would also be important to give, since you surely would not want to leave people assuming you used a model that does not make much sense, such as signal plus white noise.

  78. Wow, Vernon. You have just set a new record for me of total oblivion to the whole discussion. And in this topic, that’s a real achievement. The whole discussion seems to have gone in one ear and out the other without even slowing down.

    But what the hey.

    You need to multiply those trends by 10. What you have calculated is a trend in C/year. You are, in fact, obtaining precisely what I gave before. CRU is the HadCRUT3v dataset. I’m using monthly data, but the trend is the same. Here it is again:

    HadCRUT3v: 10 year trend = 0.074C/decade, +/- 0.068 (95% conf)

    The UAH figures give 0.13 C/decade, which is nearly up to the level of the trend over the last 30 years, of 0.16. Frankly, I tend to stick with HadCRUT3v; but I admit vast amusement at you posting a 0.13 C/decade trend and also saying you can’t see any warming. I don’t doubt it; but it’s more about the lack of seeing than the lack of trend!

    It helps, I guess, that both 1999 and 2008 were anomalously cold, and so the plot looks like a hump. Hence, for the determined contrarian who only sees what they want to see: the rise at the start is invisible and the drop at the end is all that matters. Take a shorter window to chop out the inconvenient bits, and you do actually get a measured cooling. You’ll need to use an eight year trend or less, at present. This would just make you look silly to people who know any statistics, but that’s the case already, so it’s no loss for you.

    The whole discussion has been explaining at some length that 10 year lulls don’t matter. They are an expected result when steady cumulative warming is combined with short term natural oscillations. The lull is real, and can often be associated with a cause (in this case, for example, we have the tail off from a big El Nino, and a longer than usual solar minimum, both of which probably contribute). But lulls don’t show an end to a longer cumulative trend until you move outside the range of expected natural short term variations.

    Finally, yes, of course the Arctic trend involves more than CO2. The Arctic trend is way up at a whopping 1.5C/decade or so; well above what greenhouse effects can contribute directly. There are regional factors for climate that mean different regions are either above or below the global trend. I’d say 35% is an underestimate. I expect that MOST of the Arctic warming is directly associated with local regional effects (aerosols, black carbon, and ice-albedo), with the greenhouse contribution being of about the same magnitude as the general warming for the rest of the planet.

  79. Thanks, MikeN; I appreciate it.

    When you said 6C, I immediately associated it with the A1FI scenario, and in the AR4 WG1 report this scenario is associated with a range of 2.4 to 6.4, with 4.0 as the best estimate.

    The range of 2.4 to 6.4 is a direct reflection of uncertainty about feedbacks, and the scenario involves a strong acceleration of prevailing trends, with unfettered growth of population and consumption. Note how wide this range is: there’s a factor of almost 2.7 between the high and low bounds. This is about right. You mention a range of 2.5 to 4.5, which would be based on a much tighter range of possible sensitivity values than the IPCC uses (and it is also probably for some less drastic scenario). The IPCC tends to give full range to what is uncertain on this point.

    We continue to disagree on the implications of a “lull” for such projections, I guess.

    Projections don’t depend on extension of a prevailing trend at all; and the sensitivity estimates don’t depend on lulls either. Just the reverse. The big spikes are much more useful. Also, high sensitivity is just as able to give a lull as low sensitivity; perhaps even more so, via sensitive overreactions to small short-term non-cumulative forcings.

    There are likely to be adjustments of some kind to theory if there is a statistically significant change in trends, but that isn’t the case here. Even then, it is never as simple as just trying to follow a trend. It’s a case of getting an understanding of the physics, and testing theory against observation. Just like in any other area of science, simply fitting a model to data is the least satisfactory level of understanding. Instead, you build a theory, and try to falsify it. If the observations are consistent with theory, you certainly don’t just try and over-fit the data! A more satisfactory result would be actual identification of physical causes for short term variation, and attempts to incorporate them in the theory.

    But this is a detail, I guess. Cheers — Duae Quartunciae

  80. 2.4-6.4 it is then.

    >Also, high sensitivity is just as able to give a lull as low sensitivity; perhaps even more so,

    If this is the case, then my argument is invalid. The charts above only show the probability of a lull with different CO2 scenarios. I don’t think this is the case though.

    If there is a large negative feedback factor, then this should produce a lower long-term trend, and without this factor I don’t see how you would be as likely or more likely to get large swings.

    So the statement I am making is: IF 1) there is a decade flatline, and IF 2) it is more likely to get a flatline in model scenarios that produce low warming projections than high warming projections, then the current uncertainty estimates and temperature projections from the IPCC have to be adjusted downwards with the new information, so that the 6.4C is no longer as likely as the 2.4C.

    Do you agree with this statement?
    1) was assumed at the outset, and is the whole point of this thread.
    2) is where we disagree.

  81. Duae

    You say: “It’s a case of getting an understanding of the physics, and testing theory against observation. If the observations are consistent with theory, you certainly don’t just try and over-fit the data! A more satisfactory result would be actual identification of physical causes for short term variation, and attempts to incorporate them in the theory.”

    I guess we are all interested in what you have to say about the causes of these short term variations and why such short term variations just cancel out to leave no trend. So, let’s have the physics of the short term variations. (For God’s sake, leave out volcanoes, black carbon, ice albedo, aerosols and coloured noise, will you.)

    You could have a go at the causes of world wide tropical sea surface warming events. You might even start with the 1978 climate shift. Perhaps you can predict the next El Nino for us or fill us in on your notion of the length of period necessary for internal oscillations to cancel each other out.

    Chris could perhaps chip in and expand on his notion of how dense, heavy, cold water can levitate to the surface, lowering the global temperature. He could explain for us where the light, warm stuff comes from too.

    Perhaps you have some illuminating stuff to tell us about clouds.

    Failing that, just quietly admit that you don’t have a clue. Your secret will be safe with us.

  82. Good grief, the Dunning-Kruger syndrome is running rampant through the blogosphere. Climate fraudit, wattswrongwithwatt, icecap and CO2science should have their computers scanned to see if they contain a DK virus, which appears to spread to people with big egos and low IQs.

    There is a rational explanation of “how dense heavy cold water can levitate to the surface lowering the global temperature”, but why should I waste my time explaining it to you, since your mind is completely blocked as far as accepting real science is concerned.

    >Climate fraudit … completely blocked as far as accepting real science is concerned.

    Let’s see, one of the latest posts at ClimateAudit is about how Mann turned a proxy upside-down, reversing its meaning, in order to build a temperature correlation. That is, smaller numbers meant warmer, but Mann flipped it upside down so that bigger numbers meant warmer, to show that the numbers were valid.

    But certain people would like to say that he is not accepting real science.

  84. Erl,

    You’ve really lost me here. What do you mean by “short term variations [that] just cancel out to leave no trend”? Do you mean variations in SST, and do you have a particular period in mind over which there is no trend? I’m asking in order to square your question with the discussion of trends in the comments above. And why would we want to leave volcanoes etc. out of an explanation of short-term variations? They seem like relevant short-term forcings to me.

    Making causal attributions about climate change doesn’t require predicting the timing of future El Nino events (to whatever degree of precision you may have in mind). It sounds as if you’re saying, in effect, that you won’t believe climate attribution until someone can fully explain the weather to you first…

  85. Erl raises a common gambit: “you guys don’t know everything”.

    Well, duh. It doesn’t follow that we don’t know anything. As an argument for ignoring the really elementary points of physics that we’ve now nailed down pretty solidly, it’s just ridiculous. You don’t need to be able to model every last rise and fall to see that there’s a lot of natural short term variation, and it’s not a big puzzle in statistics to see when a trend is statistically significant.

    The elephant-in-the-room here is basic thermodynamics of atmospheric radiation transfer and the greenhouse effect. This effect has been known for well over a century, and by now we can calculate the underlying radiative forcings from first principles to a good level of accuracy. It applies for any electromagnetic transmission through a gas: in a lab, in the Sun’s photosphere, in Earth’s atmosphere. The calculations are arduous, as they go line by line through the entire spectrum, but the principles are as solidly established as Maxwell’s laws themselves. It means that increasing greenhouse gas levels are necessarily a strong climate forcing over recent decades.

    That remains true whatever else is going on. The considerable uncertainty of sensitivity to forcing, and the impact of all kinds of other less well known forcings, gives no license whatsoever to the army of self-identified faux-experts who think any old ignorant speculation should be treated with respect just because there are gaps in our knowledge.

    MikeN: You ask about feedbacks. Since I’m going to be pointing out stuff we don’t know, I felt it worth making that brief aside on Erl’s little distraction. The thing is, what we call feedbacks are complex interactions of multiple processes. Physics is easiest when you can isolate some phenomenon of interest and describe its salient features. When you are addressing interactions, descriptions are generally more approximate and use some broad abstractions to get leverage on a big picture when you can’t work bottom up from fundamental laws.

    The feedback parameters are just such an abstraction for interacting processes, and “sensitivity” is an abstraction for dealing with lots of simultaneous feedbacks. But different feedbacks work on different time scales; they can be regional or local; and they can sometimes have mutual interactions that mess up the simple sum of feedback parameters that is used to relate several feedbacks to a single sensitivity value. In the literature there are multiple definitions of sensitivity, depending on the time scale you are working at. The short term climate sensitivity (transient response to a sudden forcing) is different from the long term equilibrium sensitivity (the net response after long settling time to a sustained forcing). Thinking in terms of a single number is misleading.

    The two easiest ways to get long term trends in global temperature are to apply a continuously increasing forcing, and to apply a very sudden strong forcing, and let the slope arise from the strong damping of response by the heat sink of the world ocean. The long trend we have now is from a continuously increasing forcing AND it is moderated by the world ocean damping. You can’t separate these signals just from the trend, and hence you can’t really estimate long term sensitivity from the trend either. That’s why I said the “spikes” are what is used to estimate sensitivity. The time it takes to recover from a big volcanic eruption can be used to constrain sensitivity … up to about the level of accuracy we’ve been using! 1.5 to 4.5 C per doubling. Doubling of CO2 is a common benchmark because the forcing is quite well defined.

    If sensitivity values are high, then there is a longer time delay involved in recovering from a big volcano. If you think about it, that is going to mean a longer lull showing up in the record. The principle generalizes. High sensitivity doesn’t only mean high sensitivity to the long term greenhouse forcing. It also means high sensitivity to the short term variations that mess up the trend on the scale of a decade or so, and which are behind lulls in the record.
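
    To illustrate that last point with a toy model (my own sketch, not anything from the literature under discussion): in a one-box energy balance model, C dT/dt = F(t) − λT, the recovery e-folding time after a spike forcing is τ = C/λ, and the sensitivity per CO2 doubling is S = 3.7/λ. Higher sensitivity means a smaller λ and therefore a longer recovery:

        def recovery_years(sensitivity_per_doubling, heat_capacity=8.4e8):
            """e-folding recovery time (years) of a one-box climate model.

            sensitivity_per_doubling: equilibrium warming (C) for 3.7 W/m^2.
            heat_capacity: J per m^2 per K; 8.4e8 is a rough, purely
            illustrative figure for a ~200 m ocean mixed layer.
            """
            lam = 3.7 / sensitivity_per_doubling     # W/m^2 per K
            tau_seconds = heat_capacity / lam
            return tau_seconds / 3.15e7              # seconds per year

        for s in (1.5, 3.0, 4.5):
            print(s, round(recovery_years(s), 1))
        # higher sensitivity -> longer recovery, hence longer visible lulls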

  86. >High sensitivity doesn’t only mean high sensitivity to the long term greenhouse forcing.

    I wasn’t suggesting that there is a single number that represents feedbacks. Rather, there are different possible feedbacks that lead to different projected temperatures. If there is a higher sensitivity to negative feedbacks, then this should lead to a lower long term trend.

    If I understand you correctly, you are suggesting one mechanism whereby a lull is as likely in a high warming scenario as a low scenario. That doesn’t strike me as entirely plausible, but I think I get your reasoning.

  87. Not much here to suggest that you know anything about any possible long term forcing other than greenhouse gases… and no way to demonstrate that the greenhouse notion is responsible for the climate change observed. Any departure from the supposed relentless warming is just ‘weather’ (Ian) or a “distraction” (Duae) and can only apparently be contemplated by people with big egos and low IQs (Ian).

    Kicking and screaming you will come.

    Perhaps we can go back to the following comment from Chris and ask him to elaborate:

    “If any possible-but-unproven internal oscillations were responsible for the modern day warming, we’d expect much different signatures in terms of ocean heat content changes”.

    What are the signatures in ocean heat content change that support your notion that the warming is due to greenhouse gases?

    Or failing that perhaps you can tell us about the signature in the atmosphere?

  88. My view of the sources of climate variability is different to that of Richard Lindzen. However, this statement from Richard is apt, particularly in relation to statements made here:

    …one must always remember that this is a political rather than a scientific issue, and in a political issue, public perception is important. Moreover, the temperature record does demonstrate at least one critical point: namely, that natural climate variability remains sufficiently large to preclude the identification of climate change with anthropogenic forcing. As the IPCC AR4 noted, the attribution claim, however questionable, was contingent on the assumption that models had adequately handled this natural internal variability. The temperature record of the past 14 years clearly shows that this assumption was wrong. To be sure, this period constitutes a warm period in the instrumental record, and, as a result, many of the years will be among the warmest in the record, but this does nothing to mitigate the failure of nature to properly follow the models. To claim otherwise betrays either gross ignorance or grosser dishonesty…

    Source: http://www.ecoworld.com/features/2009/04/20/global-warming-greentech/

    To have your model represent natural climate variability, or any sort of variability at all, you must first understand it.

  89. MikeN,

    Duae is referring to “sensitivity” as the amount of warming in response to the forcing from a doubling of CO2. If estimated sensitivity turns out to be higher than currently expected, then any current predictions for climate responses are too low.

    Your question about the effects of a “flat” decade has a Bayesian flavor, as if the recent flatter trend is new evidence that should update the probability of 6° – specifically, make it less likely. But the argument following from the GRL paper is that there’s nothing unexpected about flat spots appearing from time to time. In Bayesian terms, the conditional probability of a flat trend, given 6°, is close to one.
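
    To put toy numbers on that (every probability below is made up purely for illustration): if a flat decade is nearly as likely under high sensitivity as under low, then observing one barely moves the posterior.

    # Toy Bayes update; every probability here is an assumption for illustration.
    p_high = 0.5                # prior P(high sensitivity)
    p_flat_given_high = 0.10    # P(flat decade | high sensitivity)
    p_flat_given_low = 0.12     # P(flat decade | low sensitivity)

    p_flat = p_flat_given_high * p_high + p_flat_given_low * (1 - p_high)
    posterior = p_flat_given_high * p_high / p_flat
    print(f"P(high sensitivity | flat decade) = {posterior:.3f}")  # ~0.455, vs the 0.5 prior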

  90. Dunning Kruger strikes again. I must make sure that the virus scan on my computer can detect it since I don’t want to be embarrassed by outwardly displaying the symptoms shown by the AGW deniers on this site. I can never believe (till I actually read what they write) that people can be so ignorant and oblivious to their own ignorance. Why people, who have no formal training in science, can be so sure that they know far more than scientists who have spent all their working lives in the area really puzzles me. However, there are many reports of similar people who get jobs as janitors in hospitals who then pretend that they are doctors. I’m sure that psychologists have a name for it.

  91. Erl,

    There are two Ians on this thread – me and Ian Forrester. I didn’t mention egos and IQs. Regardless, aren’t you the one who requested civility earlier?

    Erl, you and I appear to be very far apart – unbridgeably so – in our beliefs about what sorts of things drive climate, and about how we know what we do. We even differ markedly on what it is that climate science has and hasn’t found.

    I give you credit, though, for putting in what must have been many hours of work to develop a scheme that you think supports your claims (on your website). Would you consider submitting it for review in a scientific journal? You’d want a journal with a section devoted to lit reviews/overviews or suggestive second looks at existing findings. Having been involved in all aspects of the review process over the years, I think submitting a paper might help you clarify some of your thinking for a scientific audience, and you might get some interesting and constructive feedback. If you do decide to submit, keep in mind that the modal first response from any good journal is some form of rejection. Good luck to you!

  92. Ian,
    Thank you for your constructive advice. As a person who does submit articles for publication I find that the likelihood of acceptance is inversely related to the extent of departure from the conventional wisdom. Nowhere is this more true than in the field of climate science. The major journals are in the hands of activists for the global warming cause, or of people who keep a weather eye on anything that might upset their career prospects. Minor journals are in the hands of a media that is reluctant to rock the boat. Anybody who suggests that the Southern Oscillation might be a response to solar variation is seen as a whacko.

    Over the past year I have seen a change in ideas. I don’t think that anyone would maintain today that tropical warming and cooling events are Earth heat budget neutral. It wasn’t long ago that El Nino was seen as due to the exchange of energy within the climate system. Influential people used to write about an exchange of energy between the atmosphere and the ocean as if the atmosphere could somehow store energy. People are now realizing that radiative influences are involved. By radiative influences I mean ‘changes in cloud cover that involve a greater level of solar radiation reaching the surface’.

    Much garbage gets published. Publication is no true measure of worth. Apart from that it is terribly slow.

    There is little room in the journals for the synthesis of ideas……..and it is important to do that sort of thing. If you have a suggestion I would be most grateful of course.

    Then, my modus operandi is different. It’s a bit old fashioned. Here is the introduction to an upcoming article to hopefully explain that point.

    Morphological analysis works backwards from the output towards the system internals. An example is ‘geomorphology’ that tries to deduce the forces at work sculpting the landscape via a study of the form of the landscape. Equally, climatologists could determine the forces shaping atmospheric performance via a study of the climate record. That is the method of analysis used here. It is rooted in observation rather than speculation and modelling. Modelling is frequently based upon the assumption that important parameters are stable and disturbances to the system are resolved via adjustment to reach a new equilibrium. Morphological analysis is very much more alive to the possibility of change in the system itself.

    Response– Essentially everything you have said so far is just wrong. Instead of going on about how the mainstream view is not receptive or open-minded, maybe you should re-examine your own thought system. You don’t understand the data, or the physics, as has been shown very clearly by Duae and others. Really, if you’re going to keep on posting unsubstantiated claims based on your own opinions (particularly after numerous efforts of correction), then do it somewhere else.– chris

  93. Chris, Ian Forrester offers nothing but insults, yet Erl Happ is disinvited due to ideas you disagree with?

    Amazing.

    Response– I’d be more than happy if erl wanted to start talking about actual science. The errors and misunderstandings he has go well beyond agreeing or disagreeing…he’s just wrong. That isn’t the big problem, so much as his insistence on pushing the issue ad nauseam. Repetition is an interesting tactic but it gets a little tiresome– chris

  94. Even though Erl is unlikely to benefit, I still think it is worth being a bit more explicit. Erl says: “Influential people used to write about an exchange of energy between the atmosphere and the ocean as if the atmosphere could somehow store energy.”

    Here’s a clue, for anyone. If you think you have found genuinely influential people (not isolated mavericks) describing conventional scientific ideas that you can falsify easily with elementary physics, then that is a good sign that YOU are the one not understanding something.

    Of course the El Nino Southern Oscillation (ENSO) is (amongst other things) an exchange of energy between atmosphere and ocean. This has never been expressed as if the atmosphere simply soaks up and stores the energy. Not now, not then, not ever. The atmosphere contributes to the transport of energy (weather). It has no long term storage effect. The ocean is both a store and a transport.

    Erl is — again — hampered by his own lack of understanding of the underlying physics, and much worse, hampered by not even knowing that he’s got so much to learn. Hence we have the bizarre notion of this kind of stuff being submitted to a journal — and the inevitable conspiracy notions for why it can’t get published!

    Submitting to a journal would help only if Erl had the capacity to learn from reviewer corrections. But this is not the proper role for academic review. Reviewers are busy, and review is not intended to be one-on-one basic education. Erl would get a lot more benefit from threads like this one, but only if he first grasped the fundamental point that he needs a lot of help. Given how poorly he’s followed the discussion here, he’s got no hope of getting much benefit from the short comments typically given in an academic review.

  95. Duae,

    I understand what you’re saying about the limited time of editors and reviewers. (In fact, here I am procrastinating on an interesting blog instead of finishing two reviews on my desk, among other things. :-)) His would be a non-standard paper, to say the least. But in cases like this I think it’s better to err on the side of inclusiveness toward the submitter.

    Erl seems like a classic autodidact; he’s picked up various facts without the context of a more systematic course of study or the judgments of mentors and colleagues over the years. His thoughts about journals are completely out of synch with my own experience, or the experience of anyone I know. But he would benefit just from trying to trim and sharpen his thoughts to fit a journal format, even if he never submits a paper. If he does submit something, it certainly wouldn’t be the first time any editor saw a submission of this type and a paper from him wouldn’t overload any journal’s review process.

    Even a desk rejection might be more persuasive to Erl than detailed blog comments. If it did go out for review, it’s the type of paper that reviewers would read until the first fatal flaw – fairly quick, and again likely to be more persuasive than blog interaction. Erl is especially resistant on this thread, but I would prefer to try to point him in a more fruitful direction.

  96. I agree with Duae that erlhapp has much to learn about rather basic science. I had thought that his trying to publish could lead to more contact with more knowledgeable people and thus prove helpful. But indeed it seems that he sees conspiracy where people do not accept his hand-waving and lining up kinks in graphs. At this point I am not really sure anything may be useful for showing that those half-baked hypotheses based on incomplete and faulty ideas simply do not compete with the long-established theories, which have had a whole lot of maths worked out, fit with wider data and theories, etc.

    The comment by Ray above signals to me again the importance of trying to keep simply wrong ideas from confusing or misleading people. If erlhapp were saying the sky is blue because there is water vapor in the atmosphere, that might be easier to counter, but as it is, his site demonstrates a huge fortress from which it appears he will not budge. Hopefully though in trying to correct those misconceptions others can be kept from latching onto them because they think they sound reasonable or just like ideas besides the consensus. That is a continual struggle in the efforts to learn and inform. Chris at least has editorial control over content. Trying to counter with the better information would be my ideal preference, but realistically that cannot always be done, and sometimes squelching the faulty is quite reasonable in a forum like this. I would like to think I gain useful experience from such exchanges.

  97. Duae,
    Another attempt at impressing the readers, who I imagine are few and far between. But the thing to illuminate is really the contribution of ENSO to the Earth’s heat budget, the mechanism behind the increased flux of energy into the tropical ocean that produces these tropical warming events AND, MOST IMPORTANT, the manner in which change in the ENSO system can produce net warming or cooling of surface temperatures at low and high latitudes over decadal and longer time scales.

    Ray,
    Point well made. What you see here is a reluctance to address the issues. The wriggling and the insults come in direct proportion to the level of discomfort.

  98. This is from way at the top: apologies, but I travel a lot.

    1998-2008 period contains most of the warmest years on the instrumental record

    Which goes back how far?

    Response– About 150 years for solid global-scale temperature measurements– chris

    I have attempted to verify that assertion, without success. Reliable Arctic temperature records go back no more than 130 years; for the Antarctic and the southern oceans, even less.

    Most notably, I stumbled on this: Ocean Circulation and Climate from Google Books. See in particular pages 45 – 58 or so.

    There is essentially no information about the oceans that is more than 100 years old, and most of what we do know dates from no earlier than the 1950s.

    Consequently, I do not see how you can justify your statement that the global temperature record is solid for 150 years, which brings into question the ability to conclude whether recent temperatures are unusual, or otherwise.

    ++++

    Completely OT. Modern airliners (anything built since about 1980) must have a very exact notion of air temperature for correct engine operation. Also, they very precisely (to within a degree and a knot) measure wind. All these observations are extremely specific for location and altitude; typically within 0.2 NM and 120 feet.

    What’s more, all this is available via datalink. That makes for tens of thousands of continuous temperature and wind samplings daily (although not so much for the southern hemisphere).

    Are climatologists using this data? If not, why not?

    • Duae:

      _____________
      Hence a common confusion. More thermal absorption results in a warmer surface… and this is because the surface is warming the atmosphere more efficiently. The same applies for different layers in the troposphere. It’s being heated from the bottom up.
      ______________

      Personally, I don’t think this is the best way of saying it. The surface cools more slowly when the atmosphere is heated up. If one puts two pots of water on the stove, heats one to 100C and leaves the other at room temperature, the 100C pot will cool faster than it would if the other one were heated to 90C.

      Cheers, 🙂
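
      A rough numerical sketch of the pots, if it helps (pure Stefan-Boltzmann exchange, ignoring convection and evaporation; the temperatures are just the ones from the analogy): the net radiative loss scales as the difference of fourth powers, so the same hot surface sheds heat far more slowly next to warm surroundings.

      SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/m^2/K^4

      def net_radiative_loss(t_surface, t_surroundings):
          """Net radiant power per unit area from a surface to its surroundings."""
          return SIGMA * (t_surface**4 - t_surroundings**4)

      t_pot = 373.0  # the 100C pot, in kelvin
      for t_env in (293.0, 363.0):  # room temperature vs a 90C neighbour
          loss = net_radiative_loss(t_pot, t_env)
          print(f"surroundings at {t_env - 273:.0f}C -> net loss {loss:.0f} W/m^2")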

  99. Erlhapp, your contention that radiation from an upper layer of air cannot warm a layer below it is just wrong. What do you think happens to the infrared light that is radiated downward? Does it disappear? If you think convection somehow negates the warming, do you have a quantitative estimate of the magnitude of each effect? For the Earth’s surface, 333 watts per square meter are absorbed from the atmosphere, but sensible and latent heat transfer upwards only amount to 97 watts per square meter, which would seem to vitiate your whole thesis right there.

    The fact is, a layer of atmosphere doesn’t “know” where an infrared photon it absorbs came from. Could be from above or below. The effect will be the same. Upper layers warm lower layers continuously, and the atmosphere warms the Earth. That’s why the Earth’s surface isn’t frozen over.

  100. Hi Barton. For completeness, we can add to your description the upward thermal flux as well.

    Overall, the atmosphere is being warmed from the surface. There’s more thermal radiation going up from the surface (about 396) than there is coming back from the atmosphere (about 333)… and then there’s that 97 in sensible and latent heat as well, which is also energy from surface to atmosphere.

    The atmosphere is being heated from the bottom up, all the way up to the tropopause. The reason a greenhouse effect results in higher temperatures is precisely because the energy from lower layers is going into heating upper layers, rather than simply radiating direct into the cold of space.

    Chris has a post giving an up to date diagram, at An update to Kiehl and Trenberth 1997. The diagram has just recently been formally published in Trenberth et al. (2009), BAMS, vol. 90, pp. 311-323. This must be the source of your figures as well.

    The net flow of heat energy is up from the surface and on out into space. The atmosphere makes it harder for the radiant heat to get out; and so the surface will heat up in the presence of an atmosphere, to whatever temperature is required to get energy back up through the atmosphere and into space at the rate it is being received. Equivalently, the atmosphere gives a large amount of backradiation, and so the surface has to be hot enough to shed this additional load on top of the solar input.

    When the concentration of greenhouse gases increases, a larger fraction of surface radiation is absorbed into the atmosphere; primarily from increased absorption along the wings of the spectral band where absorption occurs. The immediate effect is an increase in the backradiation and a drop in the outwards radiation (the forcing). The surface inevitably heats up as a result; and it will keep heating up until it is radiating the same energy as it receives.

    Hence a common confusion. More thermal absorption results in a warmer surface… and this is because the surface is warming the atmosphere more efficiently. The same applies for different layers in the troposphere. It’s being heated from the bottom up.
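
    The bookkeeping is easy to check with the round numbers from that diagram. A sketch (the absorbed-solar term of about 161 W/m^2 is my recollection of the Trenberth et al. figure, so treat it as an assumption):

    # Surface energy bookkeeping with the round numbers quoted above (all W/m^2).
    absorbed_solar = 161    # solar absorbed at the surface (assumed from the diagram)
    upward_longwave = 396   # thermal emission from the surface
    backradiation = 333     # longwave from the atmosphere back down to the surface
    sensible_latent = 97    # thermals plus evapotranspiration, surface to atmosphere

    net_longwave = upward_longwave - backradiation                  # 63, upward
    residual = absorbed_solar + backradiation - upward_longwave - sensible_latent
    print(f"net longwave, surface to atmosphere: {net_longwave} W/m^2")
    print(f"net energy into the surface: {residual} W/m^2")         # ~1, the small imbalance

    The net longwave flux is upward, and the small positive residual is the energy accumulating in the system, mostly in the ocean.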

  101. Oops, above response to “Hey Skipper” should be addressed to Duae:

    Cheers, 🙂

  102. Here is something for you guys to get your teeth into:

    The Gaping Hole in Greenhouse Theory

    Hey Skipper, you bring a very attractive note of realism to the discussion. I have read that the surface temperature record in the southern states of the US, the most advanced industrial country in the world, is regarded as unreliable prior to 1940. If so, where would one go for a more reliable record?

    In my search for good records of temperature change in the tropics I thought I had something useful with Darwin, but then I discovered that the location of the weather station had been changed three times since establishment in 1890 or thereabouts.

    gmo If you seemed to have an ounce of human decency about you I would be offended. Consider yourself squelched.

  103. Duae, to continue the issue of feedback sensitivities, the charts given above show that if you change the variable of carbon inputs, then a high warming model is indeed less likely to produce a flatline than a low warming scenario.
    So the physical mechanisms you describe disappear in the face of more carbon.

  104. When the concentration of greenhouse gases increases, a larger fraction of surface radiation is absorbed into the atmosphere; primarily from increased absorption along the wings of the spectral band where absorption occurs.

    Does that not also mean that less of the sun’s long wave radiation makes it to the surface in the first place?

  105. MikeN: we aren’t changing the variable of carbon inputs. That’s measured, and the forcing is known quite accurately. The uncertainty is almost all from unknown sensitivity. The high warming model, of 6.4, and the low warming model, of 2.4, are both using the same carbon input. There is no difference at all in the likelihood of a flatline, because sensitivity impacts natural variation in precisely the same way as it affects the greenhouse impact. You can’t infer sensitivity from the frequency of flatlines, and so, of course, you can’t constrain sensitivity that way either.

    Hey, Hey Skipper. The Sun is much hotter than the Earth, and so most of its radiant energy is shortwave, with proportionally very little in the long wave.

    For example; less than 1% of Earth’s thermal radiation is less than 5 microns, and less than half a percent of the Sun’s radiation is more than 5 microns.

    This is why the greenhouse effect works. It’s caused by gases that interact strongly with longwave radiation, which makes no meaningful difference to the solar input.
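
    Anyone can check those percentages by integrating the Planck function numerically. A sketch (only the constants and integration grid are assumptions):

    import numpy as np

    H, C_LIGHT, K_B = 6.626e-34, 2.998e8, 1.381e-23  # SI constants

    def planck(lam, temp):
        """Blackbody spectral radiance vs wavelength (m) at temperature (K)."""
        return (2 * H * C_LIGHT**2 / lam**5) / np.expm1(H * C_LIGHT / (lam * K_B * temp))

    def integrate(y, x):
        """Plain trapezoid rule, to avoid depending on any particular numpy version."""
        return np.sum(0.5 * (y[1:] + y[:-1]) * np.diff(x))

    lam = np.logspace(-7, -3, 20000)  # 0.1 micron to 1 mm
    for temp, label in ((5778.0, "Sun"), (288.0, "Earth")):
        b = planck(lam, temp)
        frac = integrate(b[lam > 5e-6], lam[lam > 5e-6]) / integrate(b, lam)
        print(f"{label}: {100 * frac:.2f}% of emission is longward of 5 microns")

    This should print roughly half a percent for the Sun and about 99% for the Earth longward of 5 microns, consistent with the numbers above.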

  106. erlhapp, I am sorry if you think that I have less than an ounce of human decency because I wish to make sure people know that your work does not show what you claim. However, calling me indecent because I call your conclusions wrong does not make your conclusions correct. How do you explain Venus with your model of how atmospheres work?

    Hey Skipper, someone else may be more familiar with the details, but I believe the solar longwave is decreased, though considerably less than the terrestrial longwave emission is. Talking about “solar longwave” is a little dangerous, I think, since people may be confused by the different wavelengths, spectra, etc. Assuming no confusion, my point is that the “longwave” from the sun overlaps almost not at all with the wavelengths of the “longwave” emitted by the earth’s surface – they are pretty much two different “longwave”s. And the absorption bands for at least CO2 I believe cover much more of the terrestrial “longwave” spectrum than the “longwave” portion of the solar spectrum, so that more CO2 does little to influence incoming solar “longwave” because it was not affecting it much to begin with.

  107. gmo
    tell me what’s wrong with my reasoning at: http://climatechange1.wordpress.com/2009/04/24/the-gaping-hole-in-greenhouse-theory/

    There is nothing wrong with radiative theory. However, radiation is not deterministic in terms of near surface temperature. When you wake up to that truth you might become interested in other sources of warming and cooling of planetary surfaces.

  108. I don’t think it is really appropriate to use Chris’ blog to comment on Erl’s articles. If anyone is really interested, it is probably best — for Erl as much as for anyone else — to place comments at Erl’s own blog, which he has linked above in this comment.

    I’ve put a comment there myself, which I am not really expecting to help, but that’s okay. Who knows whether one day something might get through. Good luck, Erl.

    Cheers — Duae Quartunciae

  109. Duae,
    You are most welcome to comment directly on my blog. Good to see that you will go out of your way to set me straight. All are welcome.

  111. Yes, there’s nothing unusual about this ten year trend, except for the fact that you have been forced to write about it because it is apparent to anyone.

    It may indeed “mean nothing”, but until the mercury starts going the other way, you’ve lost the initiative, and the momentum is with the “skeptics” or “deniers” or whatever word is currently in vogue.

    It’s hard to fight going uphill…

  112. Hushashi,

    It is not the so-called unusual feature of temperature trend of the last 10 years that forced scientists to write and blog about it.

    It is the fact that certain people tried to lift this short piece of the timeseries out of context to suit their political agenda that forced scientists to write and blog about it.

  113. Duae, sorry for the confusion. We agree about the difference between the 2.4 and 6.4 models having no difference in carbon inputs. However, look at the charts at the top of this thread that Chris provided. Those have different carbon inputs, and presumably different temperature outputs, and yet a substantial difference in probability of a flatline. So the physical mechanism you explain disappears if you add more carbon.

  114. Mike, those are different distributions for different carbon inputs and emission scenarios.

    You don’t fit the amount of carbon to the existing trend in observations. We know the amount of carbon in that case. It’s measured. It’s not a parameter you can adjust.

    At the risk of detracting from the main point, in bold, let me underline something really basic about how science works… ALL science. The models in science are not curve fitting. You don’t overfit your model to observations. Often you try to measure unknown quantities from observation, but when you’ve got a model, you use observation to falsify the model, not just tune your underlying physics to match. You certainly don’t adjust the independently observed carbon levels!

    It’s perfectly clear why we get flatlines. They are a part of natural variation. We can even identify with pretty good confidence the major cause of the most recent lull. It’s ENSO oscillations, with a high in 1998 and a low in 2008. This has models as well — as an oscillation.

    The frequency of flatlines over the last 30 years is not something that has any impact on the projection for A1FI scenarios. It doesn’t even have any impact on the relative likelihood of upper and lower bounds, because that depends on sensitivity, which is highly uncertain and can’t be inferred from the flatline frequency.

  115. Maybe I’ve misunderstood the charts. I interpret the last chart to mean probability of a flatline under different carbon scenarios. No idea what physical model or feedback variables they are using.

    >You don’t fit the amount of carbon to the existing trend in observations. We know the amount of carbon in that case. It’s measured. It’s not a parameter you can adjust.

    Yes it is, and that’s what the different scenarios mean. The carbon variable is future carbon emissions.

    I don’t have access to the full paper, but I’m guessing that they are using a certain model, or perhaps a collection of models, and then giving the probabilities for different carbon scenarios.

    Presumably the red chart is the ‘extreme’ scenario with more warming.

  116. OK I think I see your point. Looks like most of the other charts are historical.

  117. You got it; although it’s not actually historical, except for the one set based on observations.

    All the others are based on simulations. The pre-industrial control is based on an ensemble of long control simulations. The CMIP3 is an ensemble of simulations of the 20th century. The observational data is the actual 20th century observed record. Then there is an ensemble of simulations for the 21st century, assuming the A2 scenario for emissions. They’ve broken the latter into two parts, because the distribution changes over the course of the century, in this scenario.

    For comparison, the projected temperature rise under A2 is given as from 2 to 5.4 degrees; with best estimate 3.4 degrees. The scenarios cover a lot more than emissions; there’s a description here.

  118. So the purple and red charts are both future probability graphs based on future carbon emissions. They appear to both be SRES A2, but have different charts; the scenario that has more warming has a lower probability of a lull.

  119. Duae, I’m confused as to what the probability chart means. How do they simulate the 20th century? They should be able to give an exact probability for each decade in the century, so what are they simulating?

  120. The altered lull probability goes with accelerated emissions. That is, if we follow a scenario in which the rate of greenhouse warming increases over the present, then so also the frequency of flatlines will be reduced from the present.

    There’s nothing about a lull in the present (not a flatline; the 10 year trend is well and truly back to warming) to alter a projection for future emissions!

  121. PS. Omitted from the previous comment. The red and blue lines are the same scenario. The blue line says what is expected under this scenario for the first half of the 21st century. The red line says what is expected under this scenario for the entire 21st century.

  122. Mike asks: How do they simulate the 20th century? They should be able to give an exact probability for each decade in the century, so what are they simulating?

    One of the ways you check climate models is to run them in a case where you already know the answer.

    They simulate the twentieth century just by setting boundary conditions like volcanic eruptions and greenhouse concentrations and aerosol concentrations and solar input etc to the values observed; then run the model and see if it behaves in the same way as the real world behaves. Generally, it comes pretty close, and the models are improving. The models show random variations in much the same way as the real world does; but not exactly the same because models cannot simulate to total accuracy.

    In this case, they run “ensembles”, which means you simulate the 20th century many different times, using a range of models and very slight changes to initial conditions to pick up chaotic effects. You then look at all the temperature trends from all simulations, count up the frequency of slopes over all ten-year windows — and plot their distribution. It ends up similar to the observed distribution, as we should expect.

    It is not possible to calculate a probability directly. Models are unpredictable in advance, with chaotic behaviour similar to the real world. The advantage of a model is that you can actually run the model multiple times and see what happens, and get the frequencies from multiple runs.
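
    A toy version of that procedure, for anyone who wants to see the shape of it (the trend and noise parameters below are made-up stand-ins for a real GCM ensemble): many synthetic “runs”, every ten-year window pooled, and the frequency of flat-or-cooling decades counted.

    import numpy as np

    rng = np.random.default_rng(0)
    N_RUNS, N_YEARS = 100, 100
    TREND = 0.02            # C/year forced trend (assumed)
    SIGMA, PHI = 0.10, 0.6  # AR(1) noise amplitude and persistence (assumed)

    years = np.arange(N_YEARS)
    slopes = []
    for _ in range(N_RUNS):                    # the "ensemble"
        noise = np.zeros(N_YEARS)
        for t in range(1, N_YEARS):            # red noise stands in for internal variability
            noise[t] = PHI * noise[t - 1] + rng.normal(0.0, SIGMA)
        temps = TREND * years + noise
        for start in range(N_YEARS - 10):      # every ten-year window of every run
            win = slice(start, start + 10)
            slopes.append(np.polyfit(years[win], temps[win], 1)[0])

    flat = np.mean(np.array(slopes) <= 0)
    print(f"ten-year windows with zero or negative slope: {flat:.1%}")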

  123. >That is, if we follow a scenario in which the rate of greenhouse warming increases over the present, then so also the frequency of flatlines will be reduced from the present.

    So the more-warming scenario had a lower probability of a lull. As I understand you, there is only one future scenario in the chart, appearing twice, and all the others are historical. I still don’t understand a simulated 20th century.

  124. There is one future scenario considered, called “A2”, which corresponds roughly to the way social and political action is headed at present. They ran an ensemble of models to see what happens under the A2 scenario in the 21st century. These runs give you all kinds of information.

    They’ve picked out two bits of information to plot from that ensemble: the distribution of trend over any decade within 2001-2050; and the distribution of trend over any decade within 2001-2100.

    How can you understand the idea of modeling the 21st century, but not understand the idea of modeling the 20th century?

  125. OK, I think I understand it now. Seemed pointless to graph a probability for a simulation of something that’s already happened.

    I am still seeing a high warming scenario, less likely to produce warming than a low warming scenario, only this time the difference is the carbon levels rather than the feedback variables in the models.

  126. sorry less likely to produce flatlines.

  127. Simulating what has already happened is essential for checking that your model makes sense. Without this, there’s no basis for confidence that any of the other simulations are even meaningful.

    There’s only one scenario in that plot. I think you are mixing up your terminology. The scenario gives the anthropogenic impacts – such as carbon. A scenario also includes aerosols, land clearing, other greenhouse gases, black carbon, etc, etc; but carbon dioxide is the largest factor when it comes to temperature response.

    A scenario is not a “prediction”; but a possible goal, or objective. It can be a target for policy makers or international agreements to achieve – or avoid!

    The A2 scenario involves an ongoing increase in greenhouse gas emissions, and carbon in particular. We are now at about 8 Gt carbon emissions per year (which is already above the A2 scenario for 2009!) and increasing; the A2 scenario involves an ongoing increase in carbon emissions though the 21st century, up to about 28 Gt/year by 2100.

    Given this scenario, there is still plenty of uncertainty as to the resulting temperature change, and that uncertainty arises because we don’t know climate sensitivity. Hence the temperature gain to the end of the 21st century, with the A2 scenario, is anything from 2 to 5.4 degrees. That is not carbon related; it is sensitivity related, and there’s nothing at all about lull frequency in the present that can help you constrain that uncertainty, or indicate whether the high or the low end is more likely.

    You seem stuck in a rut about this, but it’s really not that complicated. We can’t predict short term variations well at all — that’s chaotic. We can predict longer term trends; but only to limited accuracy — that’s unknown sensitivity.

    The frequency of lulls in the present has much more to do with the magnitude of natural variations than with the long term gradient or sensitivity. I’ve said above that in science, you don’t overfit models to data. But if you ignore that principle and just go for a spurious curve fitting approach, then you might be able to adjust your model output to alter the frequency of lulls, but NOT by altering the long term trend. That’s a given observation and if you adjust away from that, your simulation is WORSE. You definitely don’t want to try and fit individual lulls, except by explicit modeling of the actual causes of lulls — ENSO most especially for the last few years. And that also makes no long term difference.

    The frequency of lulls in the present doesn’t help you constrain the sensitivity, and they do nothing to help constrain the amount of warming we can expect in the 21st century. The lull frequency we observe is completely unexceptional.

  128. I get that you verify your models by simulating existing data. What I don’t get is why that would be relevant in a paper about the frequency of lulls. We already know the exact probabilities of lulls for the past, so what is the point of a graph showing the chance of a lull in a simulation of the past? I guess it’s to show that the future lull probabilities have some meaning.

    >there’s nothing at all about lull frequency in the present that can help you constrain that uncertainty, or indicate whether the high or the low end is more likely.

    We disagree on this, because you say that the high sensitivity and high warming scenario is as likely or more likely to produce lulls. If this is not the case, then I think information is learned from this. I’m not suggesting models should be adjusted to fit to individual lulls, but rather that the higher feedback scenario becomes less likely, and the range should be adjusted.

    You have given a physical construct in which a high warming scenario is as likely to produce lulls. However, in the low warming 20th century, we see higher probabilities, this time because there was less carbon.

    If we had a graph of some more carbon scenarios for the 21st century, we would have a better comparison.

  129. Mike, you say “we disagree”. That’s not a particularly useful way to look at it. It’s not just that you and I represent two different approaches. The thing is, nobody, to my knowledge, tries to use this lull frequency in the way you suggest. You may never figure out why, but I can try explaining it. It’s not that I have a lot of personal authority. It is rather that I am explaining to you WHY none of the people who actually work on this stuff do what you are suggesting.

    You say: We disagree on this, because you say that the high sensitivity and high warming scenario is as likely or more likely to produce lulls. If this is not the case, then I think information is learned from this. I’m not suggesting models should be adjusted to fit to individual lulls, but rather that the higher feedback scenario becomes less likely, and the range should be adjusted.

    You’re misusing terms again. Higher feedback is not a different scenario; it is a different model. Try and keep it straight. This is not merely being pedantic about words; you are making errors in reasoning by mixing up observations and projections, models and scenarios.

    A high sensitivity is a high warming model. It is projected to give a stronger warming trend than a low sensitivity model, given the same scenario. This is the basis of the difference between 2 degrees, or 5.4 degrees, by the end of the 21st century in the A2 scenario.

    You are suggesting the observations in the recent past should be indicating the low end warming is more likely for the future. Specifically: if we only used data up to 2004, say, then you get some kind of notion of relative likelihood for temperature in 2100. But now that we have data up to 2008, you think the likelihood of lower temperatures in 2100 is reduced. You’re wrong.

    If we were merely extending the trend to 2100, then you’d be correct… although the mention of lulls is actively misleading. It’s not the lull that matters, but the slightly reduced long term trend that is being extended. But since the projections are not based on extending a trend, this whole line of thought is irrelevant.

    A common popular argument is based on the idea that the trend of the last ten years invalidates the longer term trend. This is the cry of the “warming has stopped” nitwits. I appreciate that you are a bit more sophisticated than that.

    The more sophisticated approach recognizes that we have a model for climate, and that estimates for the future are based on applying that model to a future scenario. The ONLY way to alter the relative likelihood of the 2.4 low end and the 5.0 high end is to alter the relative likelihood of different models that give those numbers. The largest factor for this, by far, is the sensitivity of the models.

    So can you alter the relative likelihood of different sensitivities given the recent observations? Answer… no, I’m afraid not.

    Basically, you see this by considering how different models might simulate what has already been observed! That is how you check a model. We know the carbon emissions, you can’t alter that, and in any case, that’s not a change to a model, but to the boundary conditions. (The scenario for the last 30 years.) But what is the major effect of altering sensitivity for a given set of forcings? It is to alter the main trend! There is a side effect of having more or fewer lulls, but only because the whole bell shaped distribution of decadal trends follows along when you alter the mean. And if you try to alter the main trend, you will get a WORSE fit to the data.

    Digging into more detail only makes this whole approach even worse. The lulls we see are not random in the sense of pure noise. They are short term consequences of factors that are not well modeled. In particular, the recent lull is pretty obviously primarily a consequence of the ENSO oscillations. Since we are at the bottom end of that right now… which also helps explain the recent “lull”, such as it is … we KNOW with high confidence that the long term trend to the present is going to be a slight underestimate! If you try to overfit to this trend, you are going to make the models worse.

    Bottom line. The current lull is entirely consistent with existing models. There’s nothing about it to suggest models need to be altered. The current lull doesn’t help one way or the other to distinguish between the high and low sensitivity models. Where you CAN distinguish is when you can get a much more accurate idea of the magnitudes of forcing involved in some short term variations. That’s much easier with the sharp cooling trend at a big volcano; and if you dig into the detail, what’s most useful is the recovery time. Ironically, the higher sensitivity is associated with a longer lull – but not because of a direct relationship between lull and sensitivity. It’s somewhat happenstance, because we are looking at a cooling spike to make the inferences. It’s been done, and this kind of work is why we know sensitivity is around 2 to 4.5 K/2xCO2.

  130. I agree with the middle, but you seem to have misunderstood what I’ve been saying. I’m not extending a trend to 2100, since that’s not how any of the models work.

    We have models A B C … Z with different feedbacks built into them, and all using the same carbon input scenario. Let’s say the ‘output’ 2100 temperature increase for these models is 0C .3C .6C … 7.5C
    By some level of calculation and/or guesswork, a confidence interval has been created saying we think the likely result is between 2.4C (model H) and 6.3C (model V).

    >The ONLY way to alter the relative likelihood of the 2.4 low end and the 5.0 high end is to alter the relative likelihood of different models that give those numbers. The largest factor for this, by far, is the sensitivity of the models.

    We agree on this, and I think the existence of a lull makes the less sensitive model more likely. We see that under the A2 scenario, a decadal flatline has a very low probability (looks like 4% and 8% for 2001-2050). Now, I’m not sure what model or models they are using for this.

    Let’s say that we have models H through V above, and we ran all with scenario A2. I think model H is more likely to produce a flatline than model V. Now it may be the probabilities of a flatline are so high for the first decade that the new calculations don’t change things much, but some change is in order.

    This probably isn’t a very difficult question for some other people to answer. I’ll see if I can get a definitive answer on flatline probability, but anyone who’s run the models can certainly calculate the flatline probabilities.

    >If you try to overfit to this trend, you are going to make the models worse.

    I’m not trying to overfit to any trend. You are stating that the current cause of the flatline is the result of something that is not modeled well. So again you have to make a judgment about how much ‘negative forcing’ is caused by ENSO or whatever else you think caused the flatline. Is it a 6.4C warming earth that has been balanced out by a large negative trend, or is it a 2.4C warming planet that has been balanced out by a small negative trend? Or something in between? I think the existence of a ten year flatline adds new information to try and decide which models are more likely, and that a 6.4C warming + large negative forcing is less likely. All of it hinges on the assumption that flatlines are more likely under one model than another.

  131. Mike, I give up. You go ahead and try and pick the model to get a better match with the lulls, and I assure you, you will actually get a WORSE match with the data. I’ve tried to explain why, and have apparently failed.

    If you were correct someone would have tried to calculate sensitivity from the frequency of lulls. They haven’t.

    Some folks have tried to calculate sensitivity from the overall trend in the last thirty years. That makes a bit more sense; although it has other problems of its own. But the lulls? Nope. Can’t work. And I guess I can’t explain why for you either.

  132. Duae, I think starting with the lulls, which is what this post is about, has made you misunderstand my reasoning. I would say that ANY ten year temperature trend should give you a better idea of sensitivity. I realize the models don’t operate as linear trends plus noise, but for the sake of argument let’s use that as an example. So the expectation for warming by 2100 is 2.4C to 6.4C, or about .44C per decade. If in the first decade, assuming the incorrect behaviour of a linear trend plus noise, you get warming of .44C, right down the middle, that would still make 6.4C less likely, as well as 2.4C.

    Now the actual behaviors in the models are not linear plus noise, so a lull may not be presenting very much info. But if there is a lull, and this is less likely in one model than another, that should change the model likelihood.
    Is there any length of time for which a flatline would not change your opinion on the likelihood of one model over another?
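
    To spell out the toy framing I am using (all numbers below are assumed, purely for illustration): if a decade’s observed trend were just the true trend plus Gaussian noise of fixed size, a flat decade would become rapidly less likely as the true trend rises. As I understand Duae, the objection is that the noise term itself grows with sensitivity, which can cancel exactly this effect.

    from math import erf, sqrt

    def p_flat_decade(true_trend, noise_sd):
        """P(observed decadal trend <= 0), Gaussian noise around the true trend."""
        return 0.5 * (1.0 + erf(-true_trend / (noise_sd * sqrt(2.0))))

    NOISE_SD = 0.15  # C/decade scatter of decadal trends (assumed fixed)
    for trend in (0.15, 0.25, 0.44):  # C/decade: low, middle, high warming (assumed)
        print(f"true trend {trend:.2f} C/decade -> P(flat decade) = {p_flat_decade(trend, NOISE_SD):.2f}")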

  133. Duae, Tamino has responded on his blog (post title: You Bet!) that in fact the higher warming scenario is less likely to produce a flatline, as I said. He also says that the difference in probabilities for the first decade is very low as warming accelerates in the models.

  134. Mike, the response of Tamino tried to explain what I also have been trying to say. Higher sensitivity is less likely to give a flat line IN THE FUTURE, but there’s basically NO IMPLICATION for the recent decade just past. The reason for this is as I have said … an increasing trend in the future. If the trend increases to be greater than the present, then so also the frequency of lulls will reduce wrt the present. Higher sensitivity is expected to have such a result; but it is not possible to use the most recent decade to say much about how likely this is. Honest.

    I suggest you take it up further with tamino, who has more expertise in this than I do. Somehow, however, I think the fundamental problem remains your own confusion between the projections and the observations. Higher sensitivity alters the projection, because of a larger increase in the trend. It does not alter what is seen, where the trend is known.

    The basic reason that the recently observed trend is so little help for inferring sensitivity is the confounding effect of oceanic warming. A larger sensitivity means more warming in the pipeline, much more than additional warming in the immediate present. I’ve been reading an older paper by Hansen et al. in 1988 which explains this very thing; the models with higher sensitivity don’t actually make much difference at all over the next few decades (which, when he was writing, means right now!), but rather make a difference in the longer duration of the warming with increased oceanic lag.

    Here’s what was said back in 1988 (ref: and see section 6.1):
    Forecast temperature trends for time scales of a few decades or less are not very sensitive to the model’s equilibrium climate sensitivity.

    I have added a comment at Tamino’s blog in order to link back here, so that anyone who cares can see my own attempts to explain in my own words. No offense, but I don’t think you can represent my position in your own words very well as yet. I don’t think you understand tamino either; but I suggest you take that up over there. If you don’t mind, don’t bother trying to sort out what I might or might not have said; that’s not important. If tamino disagrees with me on anything, just go with tamino’s explanations.

    If anyone cares, the response of Tamino to Mike is here, and I endorse it without reservation.

    Best of luck, Mike. You’ve picked a very good person to ask about this, and I really hope it can be of some help.

  135. >Higher sensitivity is less likely to give a flat line IN THE FUTURE, but there’s basically NO IMPLICATION for the recent decade just past.

    You’re right. I didn’t get that you were saying the first part of that when you described the various physical processes.

  136. So is there a ten year period of small trend without a volcano?

  137. Typo: their is not much in the peer-reviewed literature

    Response- thanks– chris

  138. “GHG-induced paradigm provides the best predictive…power”

    How do you know this? You’ve been told by visitors from the future? Or God?

  139. Leonard Weinstein

    If 30 years is a positive indicator of climate, and the global temperature increase has stalled over the last 10 or so years (too short a time to call a positive or negative trend), will AGW supporters agree that if the next 20 or so years show a general drop in global temperature, that they were wrong? If not, why? There are several indicators that this is what is going to happen, and even some pro AGW modelers agree it is likely. The comment is then made by some: just you wait, the heating will start back stronger than ever. Either 30 years is proof of something or it is not; you can’t have it both ways. There is nothing in the ocean pipeline, and no other source can make up the difference, so where is the turnaround supposed to come from?
