An update to Kiehl and Trenberth 1997

Kiehl and Trenberth 1997 is a widely cited paper on the Earth’s global, annual energy budget. It discusses important things like how much solar radiation comes in, how much is reflected away, how much infrared goes out, and how the surface energy budget is partitioned among the radiative, latent, and sensible heat fluxes. The authors (along with J. T. Fasullo) have a new, 2008 paper on the same subject—and a new colorful diagram to go along with it. These values are all globally and annually averaged, with the “net absorbed” term of 0.9 W/m2 due to the enhanced greenhouse effect.

Update— Actually it will be a 2009 paper, coming out in BAMS in March.

[Figure: the new global energy budget diagram from the Trenberth, Fasullo, and Kiehl update]

504 responses to “An update to Kiehl and Trenberth 1997”

  1. Thank you. This is the type of thing that I don’t know is out there till someone points it out to me. JG

  2. First, I have the question: where is this update coming from? From Kiehl and Trenberth, or from other people? If others, please give me a contact for the authors.
    best regards from Berlin
    Ernst Juenger

    Response– Clicking on the link to the paper will work– chris

  3. This is a really useful paper, and the relevant issue of BAMS is now available.

    Trenberth, K.E., J.T. Fasullo, and J. Kiehl, 2009: Earth’s Global Energy Budget. Bulletin of the American Meteorological Society, Vol 90, No 2, pp 311–323.

    At the time of this comment, the article is also being made available as open access by BAMS. Here are links for the abstract, and for the full pdf article.

  4. Oops. The reference above should be Vol 90, No 3; not No 2.

  5. Dan Pangburn

    Are they unaware that most absorption takes place close to the emitting surface? The graphic is misleading. Still.

    Response– Huh??

  6. Patrick 027

    Dan Pangburn – you misunderstood the intent of the labels and flows shown.

    1. clarifications:

    At each wavelength and along each direction, each layer of air emits and absorbs in proportion to (1 – transmissivity) (setting aside scattering, which is a minor issue for longwave radiation under Earthly conditions). Thus the portion of energy from any one layer that reaches any other layer (or surface) decays exponentially with optical path length, and the portion absorbed over a distance is equal to the decay over that distance. When integrating over wavelengths and directions, the decay is not quite exponential, though it tends to be qualitatively similar: the effective decay rate decreases with distance, since the remaining radiation is increasingly at wavelengths where there is greater transparency, and is increasingly concentrated into the range of directions (solid angle) closer to vertical (optical path length per unit vertical distance is inversely proportional to the cosine of the angle from vertical).
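
    (For concreteness, here is a minimal Python sketch of that exponential decay – the Beer–Lambert relation – using a made-up absorption coefficient rather than any real spectrum:)

        import numpy as np

        k = 0.5                         # absorption coefficient per km (illustrative)
        x = np.linspace(0.0, 10.0, 6)   # path lengths in km

        transmissivity = np.exp(-k * x)       # fraction transmitted over path x
        absorptivity = 1.0 - transmissivity   # fraction absorbed = the decay over x

        for xi, t, a in zip(x, transmissivity, absorptivity):
            print(f"x = {xi:4.1f} km   transmitted = {t:.3f}   absorbed = {a:.3f}")
        # absorptivity saturates toward 1 as the optical path k*x grows large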

    Even without clouds and aside from horizontal differences in humidity, optical path length per unit vertical distance also varies with height.

    This is partly because some gases – water vapor and ozone in particular – vary greatly in relative concentration with height; water vapor concentration relative to air tends to decrease roughly exponentially with height within the troposphere, and so has less effect on radiation that is emitted directly to space than it would if it were evenly distributed (the CO2, etc., and any high-level clouds are ‘in front’ of the lower-lying H2O, etc., as far as upward radiation to space is concerned).

    This is also because the spectra of gases are affected by pressure and Doppler broadening of absorption/emission lines, and these vary with height – within the troposphere, both decrease with height, so that the absorptivity and emissivity of an optically thin layer are more concentrated toward line centers higher up. (I think the effect of pressure broadening dominates even into a portion of the stratosphere – I’m not sure where Doppler broadening becomes more important, but it might be where the air is too optically thin for it to make much difference.)

    2. More to the point of your question:

    The fluxes shown are not those to and from each layer of air or clouds, but totals: the total that reaches the surface and the total that reaches space from all layers of air, and the total from the surface that is absorbed over layers of air. The diagram does not explicitly show the radiative energy transfers between different layers of air, and thus does not show the fluxes from layers of air that are absorbed by other layers of air.

  7. Patrick 027

    1. – on line-broadening consequences – lack of broadening increases the transmission in the more transparent gaps between line centers and concentrates the emissivity/absorptivity of a thin layer toward the line centers. When, over a sufficient distance, the absorptivity at and near line centers approaches 1 (when the optical path length gets large), increasing the absorptivity of thinner layers does not contribute much more to the absorptivity over such a distance because of overlap (saturation), so the effect of reduced broadening over a larger interval of wavelengths is to increase transmission.

    2. –

    For example, of the radiation from the atmosphere that reaches the surface, some portion of that is from any given layer of air; the portion that is from a layer of air is only a fraction of the total emitted by that layer of air in the direction of the surface.

    (PS sorry last comment was not located properly relative to the comment to which it was in response.)

  8. Dan Pangburn

    There are at least three glaringly misleading things on this graphic. The first states that 78 is absorbed by the atmosphere. That should read atmosphere and clouds (and my energy balance model calculations show that this is closer to 61). The second is the failure to show thermalization (which my calculations show to be about 55). The third is showing the large amount of radiation going all the way to the clouds and from the clouds to the ground (I calculate this to be about 16) when in fact most absorption and re-radiation and/or thermalization occurs close to the radiating surface.

  9. Patrick 027,
    Our debate over at RealClimate was killed July 07 when Gavin apparently closed the thread. The heart of it was centred on the various “Earth’s energy budget” diagrams from various sources – not so much the numbers, but the nature of the depictions involved.
    Any chance of continuing the discussion here?

  10. Bob_FJ,

    What was your question about the diagram?

  11. Hi Patrick, thanks for your return and your interest. This is a quickie because it is late here in Oz.

    The fundamental problem I have with the generic K & T diagram is that the displayed 396 W/m^2 of EMR up-welling (exclusively vertically!) from the surface is not HEAT, but instead a very crude depiction of the greenhouse effect.
    However, there is a broad misconception that this EMR is HEAT, whereas it might be better described as infrared light that BTW flies around in ALL directions, sometimes with resultant HEAT transport IF there is a sink for it to go to. Remember the fundamental bit of S-B in radiative HEAT transfer: (T1^4 – T2^4)

    But there is more….. Gotta go

    Response– The upwelling radiation itself is not a depiction of the greenhouse effect; rather, the greenhouse effect is the difference an observer would see from space between the emission at the surface and the emission at the TOA (i.e., about 150 W/m2)– chris
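
    (As a quick check on those numbers, here is a minimal sketch using the Stefan–Boltzmann law; the temperatures are rough, commonly quoted values, not figures from the paper:)

        sigma = 5.67e-8       # Stefan-Boltzmann constant, W/(m^2 K^4)
        T_surface = 289.0     # rough global mean surface temperature, K
        T_effective = 255.0   # rough effective emission temperature seen from space, K

        F_surface = sigma * T_surface**4    # ~396 W/m^2, surface emission
        F_toa = sigma * T_effective**4      # ~240 W/m^2, emission to space
        print(F_surface, F_toa, F_surface - F_toa)
        # the difference, ~156 W/m^2, is close to the ~150 W/m^2 quoted above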

  12. Just a small point for now:

    Heat is a funny word. In everyday conversation, we all know what it is. When physicists get involved, nobody understands it.

    According to some, radiation emitted through processes occurring at local thermodynamic equilibrium (so that the entropy flux = energy flux / temperature), such as that emitted by a blackbody, might be a form of heat (for which entropy flux = heat flux / temperature); however, some might say that only the net flux of opposing radiant fluxes can be a heat flux (??).

  13. Heat can flow by conduction, convection, and/or radiation.

    Heat is a flow of energy that also carries entropy (in a specific way).

    What we often call stored heat is actually enthalpy, or in cases where volume is held constant, internal energy.

    * Heat only flows spontaneously down a temperature gradient (not up). (Second Law of Thermodynamics)
    (When heat flows into or out of a body at local thermodynamic equilibrium (impotantly, the point of exit or entry has a defined temperature that is the same temperature as the body), the heat, Q, carries an entropy, S, equal to Q/T, such that heat exiting a body Aremoves entropy S(A) = Q/T(A) and heat absorbed into a body B adds entropy S(B) = Q/T(B); thus, a net flow of heat from higher T body to lower T body increases the total entropy of the two bodies together (The amount of Q from a high temperature that can be converted to work when the remainder flows to a cold temperature is determined by conservation of entropy (ideal heat engine – the same flows of work and heat in reverse are an ideal heat pump or refrigerator). S can also be affected by mixing/compositional gradients, chemical and physical reactions, etc, and so the more general form of the second law of thermodynamics is expressed in terms of entropy S.)

    However, radiation emitted by processes occurring at local thermodynamic equilibrium (LTE) can go back and forth between two bodies; so if the above is strictly true (heat only flows down-gradient), then only the net radiative energy flux (when emission occurs at LTE) is called heat. If it is only the net heat flow that is restricted to flow from hot to cold, then all radiant energy is potentially considered heat, depending on context.

    PS
    As I understand it, the entropy of radiation is conserved except upon scattering or partial reflection or partial absorption; it is determined by the temperature at the point of emission for emission at LTE. In the absence of scattering, partial reflection, and partial absorption, radiant spectral/monochromatic intensity (flux per unit area per unit solid angle per unit frequency interval at a given frequency, or wavelength interval at a given wavelength, provided wavelength is given in terms of the wavelength the same frequency would have in a vacuum or some other standard index of refraction) varies in a particular way along a direction of propagation (specifically, it must be group velocity, if that is different from phase motion): it is constant if the index of refraction is constant, and proportional to the square of the index of refraction if various complexities do not come into the picture.

    If processes are polarization-selective, then this will hold for any particular polarization but not for all polarizations taken together. This assumes phase lags of various photons do not matter; if there are phase-lag-selective processes or if emitted radiation is less than completely incoherent, then this applies to individual groups of photons that are in phase with each other.

    For a given index of refraction, if the same spectral/monochromatic radiant energy flux is concentrated into a smaller solid angle (greater intensity – i.e. direct sunlight through a clear window vs. diffuse sunlight through a frosted window), concentrated into a smaller subset of polarizations, or concentrated into a smaller set of relative phases (less than 100 % incoherent), then the brightness temperature – the temperature of the blackbody that could have produced such radiation, and thus the temperature of matter that such radiation would be in equilibrium with (absorption and emission occurring at equal rates at LTE within an infinite or mirror-surfaced body or within an opaque enclosed cavity, so that radiation does not enter or leave) – will be higher, and the entropy will be lower.

    As I understand it, when radiant energy from a higher-temperature blackbody is absorbed by a lower-temperature blackbody, entropy must be created upon absorption. The reverse flow of radiant energy requires entropy destruction upon absorption at the higher temperature if considered in isolation, but the two flows of radiation cannot occur independently of each other, so there is an increase in entropy when a hot and a cold body exchange radiation, with the net flow being from hot to cold. (There is a potential caveat in that the brightness temperature of radiation emitted (at LTE) from a partly transparent body or a body with nonzero albedo will be less than the actual temperature; however, for emission at LTE, over optical distance, brightness temperature will approach actual temperature if the actual temperature does not fluctuate (else it will lag the actual temperature on the scale of unit optical distance), and if two 100 % opaque, partially reflective infinite surfaces at the same temperature face each other, the combined emitted + reflected radiation from each surface will approach the brightness temperature of the surfaces over time.)

    But more specific to energy fluxes in climate/weather:

    It doesn’t really matter to me whether or not it is called heat – it is still part of the heat budget either way.

    Perfect blackbody radiation is isotropic, meaning the intensity is equal in all directions. Within a medium of some opacity, with emission at LTE, the brightness of radiation in different directions will vary with the temperature of the material; if there is a temperature gradient in one direction that extends over sufficient optical thickness, the direction with the greatest net radiant intensity will be parallel to the temperature gradient (so long as optical properties do not vary over space or with direction) – the radiant intensities in opposite directions will differ most along that direction, and will be the same perpendicular to it.

    What is shown above are fluxes per unit horizontal area. They are shown as vertical fluxes because they are across horizontal areas, and the net horizontal radiation fluxes tend to be zero because temperature gradients are generally small in the horizontal direction relative to optical distances in the layers of the atmosphere that are important in this context; on the global scale, horizontal surfaces are closed surfaces, so there are no globally averaged horizontal fluxes. Locally there will be exceptions, such as around horizontally varying cloud cover and sloping topography, but horizontal radiative fluxes are not climatologically significant so far as I know.

    The flux through a horizontal surface includes all photons that move across a horizontal surface, at any angle from the vertical (except exactly 90 degrees). The fluxes shown in the diagram include radiation in all directions within the top and bottom hemispheric solid angles (contributing to upward and downward fluxes, respectively).

    Note that a radiant intensity along a direction contributes to a flux per unit area through a surface in proportion to the cosine of the angle of the direction from the normal to the surface.
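
    (A minimal sketch of that cosine weighting: integrating an isotropic intensity I over the upward hemisphere gives the standard result that the flux is pi*I; the numerical integration below just confirms it.)

        import numpy as np

        I = 1.0                                       # isotropic radiant intensity (arbitrary units)
        theta = np.linspace(0.0, np.pi / 2, 100001)   # angle from vertical

        # flux through a horizontal surface: integrate I*cos(theta) over solid angle,
        # with dOmega = 2*pi*sin(theta)*dtheta for an axisymmetric distribution
        integrand = I * np.cos(theta) * 2.0 * np.pi * np.sin(theta)
        flux = np.trapz(integrand, theta)

        print(flux, np.pi * I)   # both ~3.14159: isotropic intensity I gives flux pi*I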

  14. Patrick 027,
    The context of discussion on HEAT loss from the surface is that the temperature of matter is directly related to its HEAT content and that EMR in itself, is a different form of energy that is not HEAT contained in matter. It is the temperature of matter that is of import in climate change, not infrared light that is flying around in all directions. HEAT can be described in a complex manner at the quantum level, but it is more easily described in the observable world as you know. (e.g. specific heat of matter)
    I don’t have a lot of time this morning, so I’ll quickly paste some other stuff on this:

    1) Here is part of a related comment I made elsewhere:
    If we consider any layer of air, then the highest intensity of inherent EMR (radiation) is lateral, in all directions. However, within any typical air layer pocket there is no change in temperature as a consequence, or in other words, there is no HEAT transfer. This follows from the second law of thermodynamics: HEAT can only flow into a colder sink. (BTW, there is a common misconception that EMR (e.g. sunlight) is the same form of energy as HEAT, but it is not; it is only converted to HEAT if it is absorbed by matter.)
    In my 580/p12, I quoted an article on radiative HEAT transfer and the Stefan-Boltzmann (S-B) equation, writing in part:

    “… the part of the equation fundamental to the discussion is: (T1^4 – T2^4), which is that classic potential difference found in all energy transfer equations, for instance (T1 – T2) in conductive heat transfer, or (H1 – H2) in hydraulics, where H2 is the lower height…”

    What you appear to be saying is that the HEAT loss from the surface via EMR is crudely K1 x T1^4, and that the back radiation of crudely K2 x T2^4 has no effect.

    2) Here is another comment in part
    Concerning K & T 1997, please note that the claimed upwelling of EMR (Electro-Magnetic Radiation, also known as infrared light, or long-wave radiation) of 396 W/m^2 is opposed by 333 W/m^2 of back radiation, which slows down the rate of escape of HEAT via that transport process of EMR. Furthermore, by definition, EMR is not in itself HEAT.
    Here is a simple analogy, comparing ELECTRICITY to EMR in two of its aspects:
    1) Hold an electrical resistor in your hand, and pass a suitable current through it. What you should feel is HEAT that has been converted from electricity via its “absorption” of electrons in the resistor.
    2) Now, expose some of your skin to adequate sunlight, and you should experience a similar sensation. The sunlight, (short-wave EMR) will be converted to HEAT by a somewhat similar process. In this case it is via dermal molecular absorption of photons of light.
    3) In the analogy 1), if an appropriate voltage for the experiment is say 200 volts across the resistor, then the identical result would be obtained, if there were two opposing EMF’s of 400 volts and 600 volts across that same resistor. (BTW, nothing would happen if the opposing voltages were equal, AOTBE).

    Incidentally, both these responses of mine were deleted in moderation from the RealClimate website, which seemed to be rather unfair

  15. “What you appear to be saying is that the HEAT loss from the surface via EMR is crudely K1 x T1^4, and that the back radiation of crudely K2 x T2^4 has no effect”

    (PS that is a greybody approximation; the reality is more complex because optical properties are wavelength (or frequency) dependent (so that radiative energy fluxes become a more complex function of temperature), but the greybody approximation can be useful for introductory purposes.)

    Where did it appear that I was saying that ? I certainly never meant that.

  16. Your analogy of opposing voltages is clever.

    But the classification of EMR as heat or not is, I think, somewhat beside the point. It accomplishes a flow of heat in some contexts.

    The reason we might not consider sunlight to be heat is that its entropy per unit energy is quite low – approx. 1/(5780 K) – so that it has the potential to do work in a relatively cool environment (specifically, an ideal heat engine could convert almost 95 % of direct solar radiation in space to work if the heat sink is just under 300 K; it will be a bit less than that under the atmosphere, even on a clear day, because scattering of radiation out of the direct beam will make the direct sunlight appear ‘cooler’ (varying over wavelength), and the diffuse sunlight will also have a lower brightness temperature than the sun’s actual brightness temperature at any given wavelength).
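
    (A quick check of that 95 % figure, treating sunlight as heat delivered at the sun’s photospheric temperature:)

        T_sun = 5780.0   # approximate solar photospheric temperature, K
        T_sink = 300.0   # heat sink temperature, K

        carnot_efficiency = 1.0 - T_sink / T_sun   # ideal (Carnot) limit
        print(carnot_efficiency)                   # ~0.948, i.e. almost 95 %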

    However, sunlight is by no means laser light or maser radiation or coherent radio waves, each having extremely low entropy per unit energy – their effective brightness temperatures (if coherence and polarization are taken into account as well as intensity) will be extremely high (how high, I don’t know – a perfectly coherent laser beam that had zero spreading would have a temperature of 1/0, undefined, thus having zero entropy, and perhaps could by itself be considered ‘not heat’ in any context).

    In the environment of the photosphere of the sun, solar radiation is a form of heat if any radiation can be considered heat. Within an opaque isothermal expanse of matter, some of the ‘heat energy’ (enthalpy or internal energy) will be in the form of radiation that is in LTE with the matter or at least tends to approach that condition over time and space.

    I have read that heat is only a flow or transfer of energy and not stored energy of any form, and thus the heat you refer to as being possessed by matter is actually enthalpy, although the term heat capacity does apply to the relationship between enthalpy and temperature; specifically, a given heat Q flowing into a body will cause (in isolation from other processes) an increase in temperature dT, an increase in enthalpy dH and an increase in internal energy dIE, such that:

    If at constant pressure, dH = Q = Cp*dT
    If at constant volume, dIE = Q = Cv*dT

    For an ideal gas, (Cp – Cv)/(amount of matter) = R
    (Cp and Cv are the heat capacities for an amount of matter; per unit mass or per unit mole, the specific heats would be cp and cv (although it’s possible others would use a different way of differentiating the two besides capital vs. small case), and for an ideal gas, cp = cv + R.)

    If pressure and/or volume are not fixed, dH = Cp*dT and dIE = Cv*dT, but the relationship to Q is more complex. At constant pressure, when Q is added, dH = Q but dIE is less than Q – the difference is the work done by expansion at pressure. When the source of pressure is cumulative weight under gravity, this expansion corresponds to an increase in gravitational potential energy somewhere, which is what the work accomplishes. When things stay at constant pressure, the same Q is lost for an equal cooling, as the work is done on the material as it cools.
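
    (A minimal sketch with standard approximate constants for dry air:)

        R = 287.0     # specific gas constant for dry air, J/(kg K)
        cp = 1004.0   # specific heat at constant pressure, J/(kg K)
        cv = cp - R   # ideal gas relation cp = cv + R  ->  cv ~ 717 J/(kg K)

        Q = 1000.0    # heat added per kg of air, J/kg

        dT_const_p = Q / cp   # temperature rise at constant pressure, ~1.0 K
        dT_const_v = Q / cv   # temperature rise at constant volume, ~1.4 K
        print(cv, dT_const_p, dT_const_v)
        # at constant pressure some of Q goes into expansion work, so dT is smaller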

  17. Now at the temperatures being considered, the bulk of, colloquially speaking, heat (including enthalpy in matter) is in matter and the amount in radiation at any one moment is minor.

    But in planetary sciences, the terms latent and sensible heat are allowed; sensible heat flux is a transport of enthalpy associated with Cp*T; latent heat flux is a transport of enthalpy associated with a phase change (this could also extend to chemical reactions where applicable – I’m not sure if that is called latent, though).

    “It is the temperature of matter that is of import in climate change, not infrared light that is flying around in all directions. ”

    But the radiative transport of energy, combined with other forms of energy transport, is extremely important in determining how temperatures change and what equilibrium temperatures would be.

    Specifically, a divergence of heat fluxes out of any grid cell will cause a reduction of enthalpy within that grid cell, and will tend to cause a decrease in temperature (depending on changes in Cp per grid cell) and/or phase changes.

    A net flow of water vapor across a grid cell surface is a net latent heat flux (that can generally be considered perpendicular to the surface – if there is a flux along the surface, it would be a flux perpendicular to another surface, and the two fluxes could be vector components of a total flux). If more water vapor enters than leaves a grid cell, there is a net convergence of the latent heat fluxes into the grid cell, and an increase in latent heat in the grid cell equal to that convergence. (Latent heat can be converted to sensible heat or vice versa in a phase change. If liquid water is the ‘default’ phase, then addition of frozen water would be an addition of negative latent heat.)

    A net flow of sensible heat across a grid cell surface is a sensible heat flux perpendicular to the surface; a convergence of sensible heat fluxes increases the density of the sensible heat.

    A net flow of radiation across a grid cell surface is a radiant energy flux, which in some contexts will be a radiant heat flux. In the absence of changes in the real component of the index of refraction, or for averages over time periods comparable to or greater than the time taken for radiation to traverse the spatial scales involved, convergence of a radiant heat flux implies absorption greater than emission, and a positive net radiative heating rate.

    Generally, a heat flux convergence is equal to a heating rate per unit volume or mass (depending on coordinate system), which by itself will tend to increase the temperature or cause phase changes, etc, in the absence of a comparable heat capacity convergence, etc.
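
    (A minimal sketch of that bookkeeping for a vertical column, with made-up flux values:)

        rho = 1.2     # air density, kg/m^3 (near-surface value)
        cp = 1004.0   # specific heat of air at constant pressure, J/(kg K)

        F_bottom = 100.0   # net upward heat flux at the layer bottom, W/m^2 (illustrative)
        F_top = 90.0       # net upward heat flux at the layer top, W/m^2 (illustrative)
        dz = 1000.0        # layer thickness, m

        convergence = (F_bottom - F_top) / dz   # W/m^3, flux convergence in the layer
        dT_dt = convergence / (rho * cp)        # K/s, resulting heating rate
        print(dT_dt * 86400)                    # ~0.7 K/day warming of the layer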

    Sensible heat fluxes are by conduction or convection; conduction is only significant just above, at and below the surface of the solid/liquid Earth and in a very small fraction of atmospheric mass in the uppermost atmosphere that can often be neglected for some climatological purposes.

    Latent heat fluxes are by convection and molecular diffusion (molecular diffusion being important at and just above the surface) and by precipitation (if liquid water is used as a default (zero) for latent heat content, removal of solid water from one mass of air, moving to another or to the surface, is a flux of negative latent heat).

    As air (or any material) moves to different pressures, it can carry a sensible heat flux, but expansion with decreasing pressure converts some of that sensible heat to work to gravitational potential energy of the whole atmosphere (which can then be converted to kinetic energy in some conditions). Compression with increasing pressure does the reverse. This adiabatic temperature change (caused only by changes in pressure) is infinitesimal across an infinitesimal change in pressure, such as when air just passes through a surface, so it doesn’t factor directly into the sensible heat flux across a surface.

    (PS when air of different temperatures moves adiabatically across a pressure level (isobaric surface), if the cooler air is moving to higher pressure and warmer air is moving to lower pressure, then for two equal masses of air (which is necessary in order for the mass of air above the pressure level to remain constant and maintain the pressure level, aside from variations in gravity, etc.) the reduction in enthalpy of the warmer air is greater than the increase in enthalpy of the cooler air, so there is a net conversion of heat to work. This is often not shown in energy budgets such as that above because this work is ultimately converted back to heat (adiabatically or frictionally), and the net conversion to kinetic energy and then to frictional heating is small.)

    “If we consider any layer of air, then the highest intensity of inherent EMR (radiation) is lateral in all directions. However, within any typical air layer pocket there is no change in temperature as a consequence, or in other words, there is no HEAT transfer. This follows from the second law of thermodynamics; that HEAT can only flow into a colder sink.”

    Yes, if we only consider radiation emitted from an optically thin layer of air, the intensity will be highest for directions within the plane of the layer, which, if horizontally isothermal, will not result in any net heat transport.

    However, multiple layers of air above and/or below, and an underlying surface, contribute to the total radiant intensities that cross the layer of air; some of that radiation may be absorbed by that layer of air, and some of the radiation from that layer of air is absorbed in other layers or at the surface or escapes to space. There are significant temperature (or brightness temperature, for space) differences in the vertical over relatively short optical distances at many wavelengths, so that there are significant net fluxes across horizontal surfaces (and thus in vertical directions), and there is some divergence or convergence of those fluxes that results in net radiative cooling and heating rates.

  18. Patrick, Reur 12:09 pm

    “…[1] that is a greybody approximation; the reality is more complex because optical properties are wavelength (or frequency) dependent (so that radiative energy fluxes become a more complex function of temperature), but the greybody approximation can be useful for introductory purposes.
    [2] Where did it appear that I was saying that? I certainly never meant that.”

    [1] The generic K & T 1997 diagrams show the surface condition, e.g. in 1997, 396 – 333, and it does not matter how or where the 333 originates. I’ve suggested above that this is “a very crude depiction of the greenhouse effect” (though Chris disagrees in his response).
    The intent in K & T appears to be to show gross EMR loss from the surface, and the gross EMR gain at the surface. I agree that things get complicated above the surface, for instance, back radiation diminishes with altitude, and absorption is most intense quite near the surface. However, that is not my issue concerning the net heat loss from the surface.
    This all arose from what I see as important and rarely discussed issues at the surface concerning evapo-transpiration, and its relative importance to climate, and also the absorption of infrared radiation into the top skin of water, just where this major latent heat loss takes place. However, this is only background and let’s not divert from the current topic.
    I come back to the point of potential difference between energy sources; in the case of S-B, it is the fundamental term (T1^4 – T2^4), which I do not see as an approximation at the surface.

    [2] Sorry, I did not mean that you said that, my comment was a cut and paste from something I tried to post over at RC.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    Here is something I wrote that shows that some impeccable sources agree that there is a net heat loss from the surface, along the lines of my main topic here.

    BobFJ writes: If there is more water vapour, might you speculate that there will be increased evaporation? (which according to NOAA and others is already the greatest HEAT loss process from the surface, see links below).

    It isn’t. Radiation is.

    So in other words, [BPL] the standard textbook Stefan-Boltzmann formula for HEAT transfer (loss) from a body at temperature T1, to a cooler sink at T2, is wrong?
    Radiative Power = Area x S.B.constant x emissivity x (T1^4 – T2^4)
    Also, you claim that the following “Earth’s Energy Budget” diagrams are all wrong?
    This is the simplest I’ve seen from NASA Earth Observatory
    http://earthobservatory.nasa.gov/Features/EnergyBalance/images/surface_energy_balance.jpg
    Here are some others, the first from NOAA.



    http://en.wikipedia.org/wiki/File:Breakdown_of_the_incoming_solar_energy.svg
    http://earthobservatory.nasa.gov/Features/EnergyBalance/images/atmosphere_energy_balance.jpg
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Incidentally, this too was deleted in moderation at RC.
    Your following two posts are too big for me to handle at the moment. Wow!… Maybe this evening.

    1. My comment about the greybody approximation (an approximation in which emissivity and absorptivity are constant over wavelength) was in response to your statement about the net radiant flux per unit area being proportional to T1^4 – T2^4. When optical properties vary over wavelength (very important for atmospheric gases in particular), the relationship is more complicated, but it is clearer if taken over one infinitesimal interval of the spectrum at a time. In that case, the net radiative flux between two surfaces (excluding radiation not emitted by either surface, and excluding radiation not absorbed by one of the surfaces) is proportional to F(v,T1) – F(v,T2), where F is the blackbody flux per unit area per unit v at frequency v and temperature T. F always increases with an increase in T – approximately in linear proportion at low v and high T, but much faster than in proportion to T at high v and low T (hence the v with the highest F shifts to higher v at higher T).

    What the net flux actually is depends not only on the emissivity and absorptivity (and reflectivity, since radiation reflected between surfaces has multiple chances of being absorbed by them) of each surface, but also on the transmissivity of the intervening space, which is inevitably a function of angle, since the path from one surface to the next is longer in directions closer to parallel to the surfaces (PS I am assuming the surfaces are parallel to each other). For at least the same reason, if not others, the optical properties of the surfaces can vary over direction (if the surface is a representation of a thin layer of air, emissivity and absorptivity will both approach 1 at grazing angles but will be less than 1, possibly much less than 1, at angles closer to perpendicular to the surface; for actual interfaces between materials, there can be some direction-dependent reflectivity).

    Thus, actual numerical values require taking B(v,T), the blackbody intensity at v and T, multiplied by the emissivity of one surface, the absorptivity of the other, and the transmissivity in between (if there is no reflection or scattering), and integrating the contributions over different directions. In some cases it will be necessary to do this separately for different polarizations. But without too many complexities, the result will be qualitatively similar to the net flux based on transmission in a particular direction. This is in particular true for LW radiation in the atmosphere.
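
    (A minimal sketch of that spectral difference F(v,T1) – F(v,T2) using the Planck function; the frequency and temperatures are arbitrary illustrative values:)

        import numpy as np

        h = 6.626e-34    # Planck constant, J s
        c = 2.998e8      # speed of light, m/s
        kB = 1.381e-23   # Boltzmann constant, J/K

        def planck_B(v, T):
            """Blackbody spectral intensity B(v,T), W m^-2 sr^-1 Hz^-1."""
            return (2.0 * h * v**3 / c**2) / np.expm1(h * v / (kB * T))

        v = 3e13                # frequency, Hz (~10 micron, in the thermal IR)
        T1, T2 = 288.0, 255.0   # illustrative temperatures of two surfaces, K

        # net spectral exchange per unit solid angle between two black surfaces
        # at this one frequency; it is positive, i.e. net transfer from T1 to T2
        print(planck_B(v, T1) - planck_B(v, T2))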

    PS see my comments starting around http://www.skepticalscience.com/argument.php?p=10&t=530&&a=18#3368
    or you could jump to


    or start with this comment to help put prior comments in context:


    (note that I occasionally make errors, which I try to correct in subsequent comments.)
    In some ways it is easier, rather than thinking in terms of the net fluxes (from emission to absorption) between specific layers, to instead think in terms of the net fluxes passing through a particular level; the contribution to the gross flux in either direction is emission that is distributed over some space; before weighting by B(v,T), the density of that distribution, at a given frequency (and polarization), for the contribution to the total flux from one particular direction, matches that of the absorption of radiation coming from the opposite direction. It is easy to visualize if you think about what it would look like to be inside a fog that, for LW radiation, is glowing incandescently, with temperature variations over space.

  20. PS at those websites, I wrote Ibb (as in ‘blackbody intensity’) instead of B here to mean the same thing.

    What I refered to as an emission distribution, I probably should have called an emission-cross-section visibility distribution, which must be weighted by B(v,T) to get the emission distribution.

    PS the convergence of radiant energy fluxes is a radiative heating rate per unit volume or per unit mass (the latter being closer to proportional to the radiative contribution to a temperature change rate, unless phase changes occur, etc.).

    The radiative heating rate assigned to the surface of the Earth is actually the volumetric integration over depth beneath/through the surface of the volumetric heating rate, and thus is a heating rate per unit area. The same can be done to assign heating rates for the whole atmosphere or its subdivisions. Depending on surface characteristics, emission and absorption of radiation from/by the surface may be distributed within a very small distance of the surface (over the depth of vegetation, over a microscopic distance on rock), although for some wavelengths of SW (solar) radiation, absorption by the ocean occurs at significant depths.

    PS In the above comment relating Q to dIE and dH, I should have used a different symbol for Q; what I wrote as Q would actually be Q*dt, which is equal to an amount of energy – that which flows over a time interval dt; Q is a heat flow rate.

  22. No, wait, I think Q is conventionally used for an amount of heat energy; I should have used dQ, not Q, nor Q*dt; if J is a heat flow rate, J*dt = dQ.

    But more to the point of your inquiry:

    Yes, the net LW radiative cooling rate of the surface is less than the total convective cooling of the surface (latent heat flux + sensible heat flux) and is less than just the latent heat flux from the surface, in the global average.

    (PS in this context, the term convection includes conduction of sensible heat from the surface to the thin layer of air immediately above, as well as molecular diffusion of water vapor from the surface to the thin layer of air in contact with the surface.)

    This isn’t the same as ‘being less important’, though (which may have been what whoever was arguing with you was thinking of?). The convective fluxes would be insignificant or nonexistent (for global average conditions) if radiative conditions were sufficiently different. To a first approximation, the atmosphere on average tends to be in radiative-convective equilibrium, in which the convection (latent + sensible) that does occur is whatever amount is required to keep the temperature profile of the atmosphere from being significantly unstable to convection (allowing for stability where radiation makes it such – stratosphere, etc.), while the LW radiative fluxes depend on the temperature distribution and all radiative fluxes depend on optical properties.

  23. Patrick 027, in your 1:41 pm, you quoted me and responded:
    “It is the temperature of matter that is of import in climate change, not infrared light that is flying around in all directions. ”
    “But the radiative transport of energy, combined with other forms of energy transport, is extremely important in determining how temperatures change and what equilibrium temperatures would be.”
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Yes, I totally agree, and sorry, but I did not elaborate my meaning adequately.
    The first principles include that global warming over the past ~150 years is measured in terms of temperature rise of a little less than 1C. (e.g. HADCRUT). This T is measured at or close to the surface, and is the consequence of heat within the matter at those locations. Heat is defined at the quantum level by the complex dynamics of whole molecules and/or their internal quanta. Photons of light (external quanta) can be absorbed by molecules and hence raise their heat energy levels, at which point the photon ceases to exist as a photon. Thus the photon streams in EMR are a different form of energy to heat.

    The bulk of infrared light that is flying around in the atmosphere does nothing in terms of heat transfer, see sketch:

    Heat transfer (transport) via EMR (somewhat like electricity) can only take place when there is a potential difference between the two locations. Of course the back radiation “location” is hard to nail down in origin etc., but the surface that receives it does not need to know!

    Thus global warming (or the T aspect of climate) is not measured in any way by EMR activity.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Patrick, many thanks for your long and interesting posts, but I’m having difficulty keeping pace with you.
    I also feel that there is a tad too much detail that is of side interest at times, and may not want to respond. But again, thanks, it’s good stuff! Gotta go.

    “Of course the back radiation “location” is hard to nail down in origin etc, but the surface that receives it does not need to know!”

    Generally, more of that back radiation comes from the first meter of air than from the second meter of air, more from the first km than from the second km, etc. This changes if there is a layer of clouds beneath clear air, or if there is a temperature inversion at some point.

    For radiaton in one particular direction (assuming no refraction):

    Of the radiant intensity at a point, the portion which directly (without intervening absorption, scattering or reflection) reaches a distance x from that point is exp(-x*ext), where ext is the extinction coefficient, which is equal to the extinction cross section per unit volume. The extinction coefficient is the sum of the absorption coefficient and the scattering coefficient, each of which is equal to its own cross section per unit volume. At LTE, the absorption cross section (for radiation in one direction, at one wavelength and polarization) is equal to the emission cross section (for radiation emitted in the reverse direction, at the same wavelength and, I assume, the same polarization – although whether circularly polarized radiation would have to be switched between left-handed and right-handed I don’t know; polarization is not a big issue for atmospheric gases, thankfully).

    If optical properties vary over distance, then the relationship becomes more complicated, but one can then replace x with a measure of optical thickness. Optical thickness is equal to the integral of ext*dx over x, and optical thickness contributions from absorption and from scattering – and from each individual contribution to absorption and scattering, etc., by different spatial intervals and by different gases or other substances or interfaces – add linearly.

    The radiant intensity emitted along a direction from an interval x that reaches one end of the interval (regardless of how far beyond that interval it may go) is equal to B(v,T) * [1 – exp(-x*emi)], where emi is the emission coefficient, which is the emission cross section per unit volume – assuming optical properties and T do not vary within that interval, setting aside reflections from interval endpoints or anywhere between, and assuming that absorptivity is the only contribution to opacity and that this occurs at LTE, etc., so that absorptivity = emissivity, absorption cross section = emission cross section, etc. Taking the derivative and multiplying by dx:

    dB = B*exp(-x*emi) * emi * dx

    where x is measured in the direction opposite to the radiation, from the point where B is measured, going into the interval from which the radiation comes. dB is the contribution to B from emission within dx, and it is proportional to B*emi*dx, which is the emission in the negative-x direction from dx, and proportional to exp(-x*emi), which is the fraction of radiation from a distance x that is transmitted to x = 0 without absorption.

    If T varies over that distance, then the above differential equation must be integrated while allowing B (a function of T) to vary as a function of x. A coordinate system can be chosen to allow x to be proportional to optical thickness (so that emi would be constant along x, a perfectly opaque surface would be infinitely thick, a 1 m thick cloud would have a greater x distance than a 1 m thick layer of air at many wavelengths, etc.) – but this gets more complicated if there are other contributions to opacity, especially if they do not remain proportional to each other. Numerical integration will be necessary.

  25. Rewrite:

    I = B(v,T) * [1 – exp(-x*emi)]

    dI = B(v,T) * exp(-x*emi) * emi * dx

    This is to differentiate between B, which is the blackbody intensity, and I, which is the actual intensity.
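
    (A minimal numerical sketch of that integration, written as dI = (B – I)*emi*dx so that emission and absorption are handled together; the greybody B, the emission coefficient, and the temperature profile are all made-up illustrative choices, not a real atmosphere:)

        import numpy as np

        sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m^2 K^4)

        def B_grey(T):
            """Greybody stand-in for the (frequency-integrated) blackbody intensity."""
            return sigma * T**4 / np.pi   # W m^-2 sr^-1

        emi = 0.002                      # emission/absorption coefficient per m (illustrative)
        ds = 10.0                        # integration step, m
        z = np.arange(0.0, 5000.0, ds)   # 5 km vertical path above an observer
        T_profile = 288.0 - 0.0065 * z   # T falling at 6.5 K/km (illustrative)

        I = 0.0   # no incoming radiation at the far (top) end of the path
        for T in T_profile[::-1]:             # step from the far end toward the observer
            I += (B_grey(T) - I) * emi * ds   # emission adds, absorption removes

        print(I, B_grey(288.0))
        # I approaches the blackbody intensity of the nearby (warmest) air, lagging
        # the local temperature on the scale of one unit of optical distance (1/emi)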


    cont. from last comment:

    But qualitatively this is actually quite easy to understand. Within a thin isothermal layer of homogeneous material with some nonzero emission (at LTE) coefficient (setting aside other optical processes besides absorption) between two cold empty infinite voids, looking around, one would see variations in I, with the highest I seen within the plane of the layer and the lowest I seen looking through the shortest distances to the voids on either side. At the center of the layer, the distribution of I is symmetric, so that there would be zero net radiative flux. If one is not centered within the layer of material, then one would see a thicker layer to one side than to the other, and there would be greater I seen coming from the thicker side, so there would be a net flux at that point, going away from the center of the layer. The net flux would increase going toward the surface of the layer.

    If the layer is optically very thick, then the situation is similar, except that, away from the surfaces of the layer, there would be a region in which one would see sufficiently thick layers of material to either side so that I would approach the blackbody value B(v,T) in all directions if T does not vary. The only significant net fluxes would be found outside a central region where the net flux is near zero over a region of significant thickness.

    Now, if we go to a thin layer again, with temperature T2, but place on one side an infinite region (with nonzero emission coefficient) of higher temperature T3, and on the other side an infinite region (with nonzero emission coefficient) of lower temperature T1, then from within the layer one would see a dipole in the I pattern, with greater I from the T3 side than from the T1 side. Thus there would be a net flux through the layer from the T3 side to the T1 side. This would not vanish immediately outside the layer – but going away from the layer, it would decrease and eventually approach zero at some distance, because at some distance the opacity of either isothermal region would hide the temperature variations. If the thin layer is made thicker, then the net flux within the layer is reduced because, from the center of the layer, material at T = T2 hides more of the radiation from the T1 and T3 materials, while at either interface, the layer hides more of the radiation from the region on the opposite side. The dipole in the radiant intensity decreases in strength, reducing the net flux.

    Space does not have a high emission cross section, but there is not much radiation coming from space (except in certain directions, such as from the sun), so space acts like a very cold blackbody (approx. 0 K).

    —–

    So the bulk of radiation may not add linearly to net energy transfer, but wherever there is radiation going in some direction, there is a flux, and when the radiation from the whole sphere of directions is asymmetrically distributed, there is a net flux.

    The diagram above shows the fluxes per unit area of upward and downward radiation (including radiation upward at some angle from vertical); the fluxes are vertical because they are from energy passing through horizontal surfaces (in the global average, anyway).

    The diagram shows these fluxes, and convective fluxes, as they would be measured immediately above the surface and at the top of the atmosphere – thus it shows fluxes into the atmosphere and out of the atmosphere, to and from the surface, to space, and from the sun. This diagram does not show how the emission and absorption of those fluxes are distributed within the atmosphere (except for an assignment of some radiation to space specifically from clouds, and except for a specification that a portion of the LW flux from the surface goes to space without absorption in the atmosphere).

    Another diagram could be constructed that would show total upward and downward LW and SW radiative and convective fluxes at all vertical positions. This would show that both the upward and downward LW fluxes generally decrease with height (because of the generally decreasing temperature going from the surface up through the troposphere, and, for downward radiation, the decreasing opacity of what remains above), with the upward flux decreasing to a nonzero value and the downward flux decreasing to near zero as the remaining atmosphere above thins out. The upward and downward SW fluxes would decrease downward from the top of the atmosphere.

    The upward convective flux (in total; latent + sensible) would decrease going up from the surface, reach near zero (in the global average) at the tropopause, and remain near zero (in the global average) upward from the tropopause. This is a convergence of the convective flux within the troposphere that heats the troposphere. It is balanced by a divergence of the total net radiative flux. The net radiative flux (in the global average) is near zero above the tropopause – there is a net LW divergence approximately balanced by SW convergence (absorption – by ozone in particular).

    There are no significant areas of positive SW divergence because there is no significant SW emission (very small amounts from lightning, volcanic eruptions, human activity – not directly climatologically significant). There is SW convergence distributed over the atmosphere; a majority of the SW convergence (solar heating) is concentrated at the surface or below (as in the ocean). There is some net LW divergence at the surface (396 W/m2 – 333 W/m2 = 63 W/m2, from the figure above) and some LW divergence within the troposphere that is greater than the SW convergence there, the difference balancing the convective flux convergence.

    About LW contribution to surface cooling:

    “This all arose from what I see as important and rarely discussed issues at the surface concerning evapo-transpiration, and its relative importance to climate, and also the absorption of infrared radiation into the top skin of water, just where this major latent heat loss takes place.”

    The rate of evaporation from a surface depends on temperature variations, humidity and wetness, and air motion. There are some exceptions, but in the global average there is net evaporative cooling, net sensible cooling, and net LW radiative cooling at the surface, which are balanced by SW heating. Within the ocean, some SW heating takes place beneath the surface, and mixing can also transport some of the heating near the surface to some depth (around 100 m, I think), so there is a net conduction/convection of heat upward within the upper ocean to where convective and radiative cooling occur. An increase in LW heating of the ocean surface by downward radiation will, in the global average, only reduce the net LW cooling at the surface. An increase in convective cooling at the surface can occur, but an increase in gross LW cooling will also occur when the temperature rises. Generally, for the same humidity and surface conditions, the ratio of evaporative surface cooling to total convective cooling increases as the temperature increases, but that can occur even if the total convective cooling is unchanged.

  26. Chris Colose,
    I want to thank you for allowing three of my (trimmed) posts here that were deleted in moderation at RealClimate. The surprising thing to me concerning RC has been that they have previously had a terrible reputation for deleting contrary posts, and thus I long hesitated going there. However, out of curiosity, I’ve recently contributed there for a couple of months and have had a very good run on three threads. It seems that I only touched a raw nerve on a couple of occasions where posts were edited or deleted (no rudeness was involved). However, the more recent trio of posts that you have allowed here were deleted in moderation at RC without comment, but as far as I‘m aware, they contained only factual information and were totally devoid of any insult whatever.
    Thank you again for allowing them here, Chris.

    Response– No problem, though with the caveat that I haven’t really followed your exchange with Patrick. I’ve been busy defending the fact that I didn’t send an e-mail under the name “Hughen Falconer” to a list of quack scientists and then blog about them.– chris

  27. Patrick 027, still responding to your 1:41 pm of Aug 18 (and necessarily skipping through it quickly for lack of time), you quoted me and responded:
    “If we consider any layer of air, then the highest intensity of inherent EMR (radiation) is lateral in all directions. However, within any typical air layer pocket there is no change in temperature as a consequence, or in other words, there is no HEAT transfer. This follows from the second law of thermodynamics; that HEAT can only flow into a colder sink.”

    “Yes, if we only consider radiation emitted from an optically thin layer of air, the intensity will be highest for directions within the plane of the layer, which, if horizontally isothermal, will not result in any net heat transport.
    However, multiple layers of air above and/or below, and an underlying surface, contribute to the total radiant intensities that cross the layer of air; …” [my bold emphasis added]

    Yes, I understand what you went on to say, which is why I inserted the qualifying word ‘inherent’ in my first line. The purpose was to show that there is only HEAT transfer via EMR if there is a potential difference in EMR “pressure” (to use hydraulic energy as an analogy, say).

  28. Patrick 027, I’ve responded in two parts to your 1:41 pm of Aug 18, and now, coincidentally herewith at the very same time of 1:41 pm on Aug 19, you wrote in part:
    “…Yes, the net LW radiative cooling rate of the surface is less than the total convective cooling of the surface (latent heat flux + sensible heat flux) and is less than just the latent heat flux from the surface, in the global average…
    …This isn’t the same as ‘being less important’, though (which may have been what whoever was arguing with you was thinking of?).

    So you agree that in principle the various “Earth’s Energy Budget” diagrams that I linked to are OK in terms of the concept of heat loss from the surface, putting aside that the numbers may not be real?
    BTW, I would qualify that, in that some of the global-averaging aspects of the S-B assumptions etc. in all these diagrams are a bit of a stretch!
    I’m not sure what you mean by “This isn’t the same as ‘being less important’”.
    However, if I take you to a post I made at RC, you can discover the “whoever” (BPL) that I was mostly debating, but that is only at page 11 of 18… if you have the patience:

    http://www.realclimate.org/index.php/archives/2009/07/summer-sea-ice-round-up/comment-page-11/#comment-132829

  29. Re BPL’s comment 516:

    “Whoever claimed this was wrong. The surface is at, let’s say, 288.15 K with an emissivity of 0.95, which means it radiates about 371 watts per square meter. It absorbs 161.4 x 0.85 + 348 x 0.95 = 137 + 331 = 468 watts per square meter from sun and atmosphere, respectively, which means there’s another 97 watts per square meter it has to lose. That’s from 17 W/m^2 due to conduction and convection (”sensible heat”) and 80 W/m^2 due to evapotranspiration (”latent heat”). So of the Earth’s 468 W/m^2 of cooling, 80 or 17% is from evapotranspiration. This is not “up to 50%” except in the loosest sense.”

    Okay, he is wrong in that the net latent heat loss from the surface is about half of the total net convective + net LW cooling of the surface.

    However, he is right in that net latent heat loss or even total net convective heat loss from the surface is much less than surface upward LW flux cooling.
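
    (For reference, the same bookkeeping with the numbers from the 2009 diagram discussed above closes to its ~0.9 W/m2 “net absorbed” term; the net LW cooling, 396 – 333 = 63 W/m2, is indeed less than the latent flux alone:)

        SW_absorbed = 161.0   # solar absorbed at the surface, W/m^2
        LW_up = 396.0         # surface LW emission, W/m^2
        LW_down = 333.0       # back radiation, W/m^2
        latent = 80.0         # evapotranspiration (latent heat), W/m^2
        sensible = 17.0       # thermals (sensible heat), W/m^2

        net_LW_cooling = LW_up - LW_down     # 63 W/m^2
        net_convective = latent + sensible   # 97 W/m^2
        residual = SW_absorbed - net_LW_cooling - net_convective
        print(net_LW_cooling, net_convective, residual)
        # residual ~1 W/m^2, i.e. the ~0.9 W/m^2 'net absorbed' term in the diagram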

    (PS I started specifying net convective fluxes because there are places and times when the surface is heated by latent and/or sensible heat fluxes from the air – for example, on a calm clear night, frost or dew might form and sensible heat may be transferred to the surface from the air as the surface radiates to space and to higher, colder layers of the atmosphere. The two convective fluxes can be opposed, as when dry hot air blows over cold water. The sensible heat flux is on average toward the surface at the highest latitudes (air brings heat from lower latitudes).)

    What is really important is that the numbers are right (or approximately so). Whether x is z % of y depends on what y is as well as what x is, etc.

    There is some concern in placing too much importance on such proportions – they are not generally set in any way – they can change with a change in climate.

    Part of the problem, what BPL might be sensitive to, is that there are some people who argue that convection is so important that it can counteract changes in the greenhouse effect, or something like that.

    This is what I meant about importance. Neither convection nor radiation could be said to be more important than the other, at least within the troposphere. One alone does not determine the temperature distribution. Convection acts to couple the temperatures at different vertical levels, so that the various levels of the troposphere and the surface tend to warm up or cool off by similar amounts in response to tropopause-level radiative forcing – which is a forced net downward radiation flux at the tropopause, and is a heating rate per unit area of the climate system beneath that level. The distribution of that heating within the troposphere/surface may vary, but changes in convection will respond so that the heating effect is spread vertically (if an upper level is warmed, convection from below is reduced, thus warming lower levels; if a lower level is warmed, convection to above is increased, thus warming higher levels).

    Including radiative feedbacks (from changes in temperature and from resulting changes in optical properties – water vapor, clouds, snow, etc.), the equilibrium response tends to be a change in temperature from the surface to the tropopause that follows the radiative forcing, with changes in convection in response to changes in the distribution of radiative heating/cooling. For global warming: at lower latitudes, the mid-to-upper troposphere tends to warm more than the surface because the moist adiabatic lapse rate (what convection tends to sustain against radiatively-forced (surface heating, tropospheric cooling) instability) is reduced at higher temperatures – this is generally aside from how the rate of convection may change. At higher latitudes, particularly during the colder part of the year, warming is concentrated at and near the surface because that is where the snow/ice albedo feedbacks occur and because the atmosphere is more stable to convection there, so warming can occur at lower levels without making the air unstable.

  30. Patrick 027, Reur 10:01 pm, quoting mine and responding, in part:
    “…Of course the back radiation “location” is hard to nail down in origin etc, but at the surface that receives it, it does not need to know…”
    “Generally, more of that back radiation comes from the first meter of air than from the second meter of air, more from the first km than from the second km, etc. This changes if there is a layer of clouds beneath clear air, or if …”

    Yes, I totally agree that the dynamics of the atmosphere are very complicated. For instance, simplistically, if we take it in layers, the “first layer” will absorb infrared from the surface as a factor of its temperature (T) and a great variety of surface conditions, including ground altitude. The initial surface emissions are in all directions hemispherically. The receiving first layer will then re-emit in all directions spherically, so that nominally only half of it returns towards the surface as back-radiation. All layers will also absorb some back radiation from layers above at a reducing rate with altitude, and each will have some infrared pass through them unhindered.
    This process is most intense in the tropics where there is high water vapour content, and of course, the photon free path lengths suffer a general reduction over a much wider spectrum. In relatively dry air such as over deserts, and at high latitudes, the opposite is true. Then of course there are diurnal and seasonal variations added to the considerable spatial complexities. Oh, and lapse rates, and so on…
    Yet another difficulty in trying to sensibly describe average global conditions as in K & T 1997, or to integrate all the variables so far mentioned, is that radiative power is proportional not to T, but to the fourth power of T. (plus a few other things)
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    However, as I’ve said before, global warming is measured in terms of T at the surface, it being something less than 1 degree C over the past ~150 years. The HADCRUT record, for example, consists over this period mostly of T’s measured at about chest height over land, and in bucket dips in the oceans, so it is reasonable to describe this as the long-term record of T’s at the surface (and a measure of global warming as a consequence of a rise in the level of HEAT contained in matter).

    The K & T 1997, and various other Energy Budget Diagrams, describe (in principle?) only their assessment of average conditions at the surface. Whilst I think that declaring such average global conditions is a huge stretch, this is apparently the best/preferred information available, so let’s run with it.

    Sure, it is good to ponder the complexities in the atmosphere above, but the fundamental parameter of import is; HEAT transfer at the surface. It does not matter how much EMR is flying around in all directions because it is a different form of energy, which only matters when there is a potential difference. (PD)

    Oh, BTW, when I wrote:
    “…but at the surface that receives it, [back-radiation] it does not need to know.”
    What I meant was that the Earth’s surface, a grey body, receives back-radiation from no distinct surface in the atmosphere, but responds to it accumulatively as if it were another parallel, non-edge-spilling grey body.

    It should also be remembered that the infrared absorption in the atmosphere originates from the surface, and that back radiation reduces with altitude, and that T and a few other things also reduce with altitude.

  31. “Sure, it is good to ponder the complexities in the atmosphere above, but the fundamental parameter of import is; HEAT transfer at the surface. It does not matter how much EMR is flying around in all directions because it is a different form of energy, which only matters when there is a potential difference. (PD)”

    Okay, but:

    1. Heat transfer at the surface depends on convective and radiant fluxes at the surface; radiant and convective fluxes depend on the temperature distribution, etc., and the temperature distribution depends on radiant and convective fluxes, so the radiant and convective fluxes at any one point depend on conditions over a larger region, which themselves depend on both radiant and convective fluxes all over. Convective and radiant heating of the surface responds to fluxes that occur higher in the atmosphere.

    2. There is not much globally averaged net convective flux at and above the tropopause. A key parameter for climate change is the change in the radiative flux at the tropopause level. For example, starting from equilibrium (which requires that the net average convective + radiant flux across a closed surface (which, globally, the tropopause or any other vertical level is) must be zero, so that energy input = energy output), if there is an optical property-forced (this includes feedbacks like water vapor, snow, etc., so is different from a common usage of the term ‘radiative forcing’) decrease in net upward radiant flux at the tropopause level, then there will be a net energy input to the space enclosed, which is below tropopause level. This energy build-up tends to raise the temperature, and a new equilibrium is approached when the temperature has increased enough to bring the net upward radiant flux back up to zero at the tropopause level (including effects of stratospheric temperature changes).

    This alone does not specify where in the troposphere/surface the temperature must have increased; heating a small volume within the troposphere to a very high temperature would do just as well as heating up the entire troposphere-surface evenly. But physics restricts the possibilities. In particular, if radiative changes alone heated only the upper troposphere, this would increase vertical stability, thus reducing the rate of convection, thus cooling the upper troposphere and warming some layers below; if radiative changes alone heated only the surface, convection would tend to increase, cooling the surface and heating the troposphere. The result tends to be such that convective fluxes may change, but the temperature change tends to be evenly distributed to a first approximation. Now, there are complexities that alter this result, but they are not a complete mystery – they can be understood and calculated – hence model results.

    3. Because of this complexity, it is useful to have an understanding of not just net fluxes but gross fluxes (and their causes), because there are multiple ways to change gross fluxes for the same change in net flux, and changes in net fluxes are not generally in a simple proportion to changes in gross fluxes (different net flux changes could occur for the same change in a subset of gross fluxes).

    4. It is quite true that basing calculations on globally averaged conditions in a simple way will not give the same results for fluxes as finding the fluxes at different places and times and then averaging over area and time (annual averages, daily, etc.), because there are nonlinearities. On the other hand, it is a useful starting point. Based on a greybody surface, I estimated (using some information in “Global Physical Climatology” by Dennis Hartmann) that, given a global average temperature of 288 K and seasonal and spatial surface temperature variations, the actual global LW emission from the surface is equivalent to the LW emission from an isothermal surface that is under 1 K hotter than the actual average surface temperature (so at or less than about 1/3 of 1 % error in temperature, considering such proportionality is appropriate in this context). I estimated that diurnal variations make an even smaller impact – and for most of the mass of the atmosphere, diurnal variations are quite small compared to those that occur at low levels over land. There are other considerations regarding how water vapor and clouds are actually distributed. Kiehl and Trenberth do not ignore these contingencies in their calculations.
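    To see the size of that nonlinearity, here is a minimal Python sketch using an invented +/-15 K sinusoidal temperature spread (illustrative only – not the Hartmann-based estimate above):

    import numpy as np

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)
    T = 288.0 + 15.0 * np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))

    mean_T = T.mean()                          # arithmetic mean: 288.0 K
    mean_emission = (SIGMA * T**4).mean()      # average emitted power
    T_eff = (mean_emission / SIGMA) ** 0.25    # equivalent isothermal temperature

    print(T_eff - mean_T)  # ~0.6 K: an isothermal surface need only be slightly
                           # warmer than the mean to emit the same average power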

    A globally averaged flux across a closed horizontal surface has important meaning. This is not to say that variations are unimportant, and GCM climate models do computations based on conditions at places and times, not global-scale averages. And there are also observationally-based studies of how the climate system really works in space and time.

  32. Patrick 027 Reur 6:39 pm
    “Okay, he [BPL] is wrong in that the net latent heat loss from the surface is about half of the total net convective + net LW cooling of the surface.
    However, he is right in that net latent heat loss or even total net convective heat loss from the surface is much less than surface upward LW flux cooling…”
    [emphasis added]

    Nevertheless, the surface upward LW flux is a gross value that cannot be considered unaffected by the gross downward LW flux. It is not HEAT in itself but EMR, also known as infrared light, which can be converted to HEAT only IF absorbed by matter. (molecular absorption).
    There can only be cooling at the surface if there is a loss of HEAT, which is a function of temperature. The second law of thermodynamics gives that HEAT can only flow from a higher-temperature source to a lower-temperature sink. In a radiative situation, for example, this allows that if two bodies are facing each other and one is colder, then it will absorb net radiative power amounting to the difference in the opposing radiative power between them (which will be converted to HEAT). Although downward LW flux is not from a body, the surface that receives it responds to it in the same way as if it were a body emitting at the same power.

    Despite the complex dynamics of the atmosphere, and their ultimate effect on surface temperatures, the fact remains that global warming is measured by surface temperatures. Various “Earth’s Energy Budget” diagrams purport to show average global surface conditions, and indicate that the average global heat loss from the surface via evapo-transpiration is of the order of 50% of the total HEAT loss from the surface.

    Do you dispute these diagrams?

  33. “Nevertheless, the surface upward LW flux is a gross value that cannot be considered unaffected by the gross downward LW flux.”

    1. a flux will be affected over time by other fluxes due to maintenance or changes in temperature, etc.

    2. instantaneously, the surface upward LW flux is unaffected by the downward flux, though they do necessarily correlate in a way shaped by temperature distribution (assuming only LTE emissions), as the ‘I can see you as much as you can see me’ rule for radiative energy exchanges along lines of sight (for given frequency/wavelength, polarization) allows the second law of thermodynamics to apply (from emission to absorption, the net flux has to be from warmer to cooler). However, it is still helpful to keep track of gross fluxes, because the gross fluxes relate more directly to the temperature at any one place than the net fluxes. It just helps complete the picture.
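    A toy numerical illustration of that last point (a Python sketch; blackbody surfaces, with temperatures picked arbitrarily):

    SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W/(m^2 K^4)

    def gross(T):
        """Gross blackbody emission at temperature T (K), in W/m^2."""
        return SIGMA * T**4

    up, down = gross(288.0), gross(263.0)
    print(up, down, up - down)          # ~390, ~271, ~119 W/m^2

    # Very different gross fluxes can produce nearly the same net flux,
    # which is why tracking only net fluxes loses information:
    print(gross(350.0) - gross(337.0))  # also ~119 W/m^2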

    We can disagree about the precise definition of ‘heat’ (in some conditions, radiation energy density can be a significant part of the total internal energy and enthalpy of a volume), and still agree on the results of radiant energy transfer. I don’t really care that we disagree on whether EMR can ever be called heat – a rose by any other name is still a part of the reproductive organ of a multicellular organism. Whatever we call it, I think we do agree on what it does.

    “There can only be cooling at the surface if there is a loss of HEAT.”

    Yes, but there can be multiple contributors to heating or cooling. There may be solar heating, LW cooling at some wavelengths, LW heating at other wavelengths, convective cooling, and these may each individually result in heating rates that can be linearly superimposed to get a net total heating rate. But it is useful to note the separate contributions (by direction, by wavelength, etc.) because separate contributions may change differently in general.

    “Do you dispute these diagrams?”

    Not really. Do you? (Kiehl and Trenberth’s diagram is an approximation and they state as much – for example, they approximate the surface as having a zero LW albedo (so it is a perfect blackbody for wavelengths longer than about 4 microns). In actuality it may have a LW albedo of 5 %, give or take (I’m not clear on exactly what it is, but 5 % is close to what is implied by a diagram in “Global Physical Climatology”, as I recall). This means that the surface LW emission would be 95 % of what they state. However, 5 % of the backradiation would also be reflected from the surface (assuming albedo is not directionally dependent – it probably is, but my guess is that this would generally increase the reflectance of back radiation, since it tends to be more intense coming from angles closer to the horizon). This means the total gross upward LW flux just above the surface will still be close to what it would be above a perfect blackbody: instead of 396 W/m2, it could be (0.95 * 396 + 0.05 * 333) W/m2 = (396 – 19.8 + 16.65) W/m2 = (396 – 3.15) W/m2 ~= 393 W/m2. However, the part of that upward LW flux that is a reflection from the surface may be absorbed over a shorter distance on average (with a smaller fraction escaping to space) than the LW radiation emitted from the surface, since it might generally be partly concentrated at angles away from vertical – in particular, I would expect this for specular reflection off of calm water. (PS when there is a low level inversion, depending on wavelength and water vapor and clouds, etc., the concentration of backradiation toward directions nearer horizontal than vertical could be reversed.)
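    In code form (a one-off arithmetic sketch; the 5 % LW albedo is the guess stated above, not a measured value):

    emitted = 0.95 * 396.0          # surface emission drops to ~376 W/m^2
    reflected = 0.05 * 333.0        # ~17 W/m^2 of back radiation reflected upward
    total_up = emitted + reflected
    print(emitted, reflected, total_up)  # 376.2  16.65  392.85 ~= 393 W/m^2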

  34. Patrick 027, you have described some complex atmospheric dynamics, for instance that advection from the warmer regions is important in warming the higher latitudes. (yes, very importantly so!). To this could also be added consideration of coriolis, and reduced insolation per unit area at higher latitudes coupled to seasonal and spatial variations and so on. Thus the many complexities result in a wide distribution of surface temperatures that are fairly constant 24/7 at around 30 C in the tropics, and varying from very hot to very cold both spatially and temporally elsewhere.
    However, that is not what the K & T diagram is about. It shows what purports to be the global average conditions at the surface (as a consequence of all the above complexities), and its fundamental component is based on an S-B calculation using an average global T. This gives 396 W/m^2 gross upward EMR, but a net heat loss of only 63 W/m^2 via EMR.

    I think that we have agreed that this diagram is an approximation of average conditions at the surface, centred fundamentally around the average global T. There is perhaps a question about its numerical accuracy, particularly with so many non-linearities to integrate, and for example, BPL has suggested 371 W/m^2 upward EMR, not 396. However, heat loss from the surface from evapo-transpiration seems to be a very significant value of about half the total heat loss.

    This raises what I see as an important question. Just as increasing water vapour levels should be expected in a warming world, creating a positive feedback, where does that larger water vapour cycle come from? If it is from increased evapo-transpiration, should there not be a pro rata increase in what is already the greatest cooling effect from the surface?

  35. “To this could also be added consideration of coriolis, and reduced insolation per unit area at higher latitudes coupled to seasonal and spatial variations and so on.”

    That is all implicit in the observed climate system, and is understood within theory and is used in modelling, etc.

    Remember my point that variations in temperature at the surface give it an effective brightness temperature that is very close to the actual average temperature (that is, an isothermal surface need only be about 1 K warmer than the global average to emit the same power as is emitted on average).

    Note that an area-weighted average multiplied by area gives a total amount; though approximate, Kiehl and Trenberth’s diagram thus implicitly shows fluxes into and out of spherical shells (multiply fluxes per unit area by the surface area of the Earth), which is important.

    BPL’s number probably accounts for the small LW albedo of the surface.

    ——–

    “This raises what I see as an important question. Just as increasing water vapour levels should be expected in a warming world, creating a positive feedback, where does that larger water vapour cycle come from? If it is from increased evapo-transpiration, should there not be a pro rata increase in what is already the greatest cooling effect from the surface?”

    1.

    In order to change the water vapor content of the air, there must be some temporary imbalance between evaporation and precipitation. However, water vapor could either be increased by increased evaporation or decreased precipitation. Anyway, a very tiny imbalance in proportion to the water cycle flux would be plenty to keep the water distribution close to equilibrium for such changes as are being considered.

    The residence time of a water molecule in the atmosphere is between 1 and 2 weeks, whereas AGW may increase H2O in the atmosphere by some tens of percent (?) over decades, so the rate of change would require an imbalance that would be essentially imperceptible in terms of changes in the water cycle fluxes – perhaps on the order of a fraction of a percent of the global evaporation and/or precipitation rate:

    For example, if the total H2O vapor content increased 20 % over 50 years: since the total H2O vapor is the liquid equivalent of about a 25 mm global layer (“Global Physical Climatology”, Hartmann, p.12), this increase would require the unbalanced evaporation of 5 mm of liquid water over the globe (that’s 5 kg/m2). This would be an evaporative cooling of nearly 2500 kJ/kg * 5 kg/m2 = 12.5 MJ/m2, which, over 50 years, would be a cooling of about 0.0079 W/m2. Note that this cooling stops when climate reaches equilibrium, at which point precipitation and evaporation will be on average balanced.

    The 12.5 MJ/m2 per x K of warming is really part of the heat capacity of the climate system. If 3 K of warming were sufficient for a 20 % H2O vapor increase (not sure offhand, but I think it’s close), this would be a heat capacity of about 4 MJ/(m2 K). That is in addition to: roughly 8.8 MJ/(m2 K) from the troposphere air heat capacity; perhaps something on the order of 6 MJ/(K m2) from the land surface (~ 30 % of Earth’s surface, and assuming heat penetration of 10 m over the time considered, heat capacity ~ 2 MJ/(m3 K)); maybe 100 MJ/(K m2) from melting ice, if 3 K warming contributed to a 1 m sea level rise from melting ice; roughly 300 MJ/(m2 K) from the upper ~ 100 m of ocean (over approx. 70 % of the globe); and about ~ 10800 MJ/(m2 K) from the rest of the ocean over time.

    The effect of this heat capacity is to slow the rate of climate change, although some of that heat capacity is not accessed over shorter time periods (the deep ocean in particular). Except for the time-dependent heat capacities (the ongoing effects of continuing temperature rise deeper into the ocean (which will decrease on the scale of the turnover time of the deep ocean), and the ongoing penetration of additional heat deeper into soil, rocks, and groundwater (which just gets slower and slower over time)), the flux into these heat sinks would go to zero as the climate reaches equilibrium.
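    Tallying those ballpark contributions (a Python sketch using only the round numbers above):

    # Effective heat-capacity contributions, MJ/(m^2 K), globally averaged
    contributions = {
        "water vapor increase (12.5 MJ/m^2 per ~3 K)": 12.5 / 3,
        "tropospheric air": 8.8,
        "land surface (30 % area, 10 m, 2 MJ/(m^3 K))": 0.3 * 10 * 2.0,
        "melting ice (~1 m sea level rise per ~3 K)": 100.0,
        "upper ~100 m of ocean (70 % area)": 0.7 * 100 * 4.2,
        "deep ocean (accessed only slowly)": 10800.0,
    }
    for name, c in contributions.items():
        print(f"{name}: ~{c:.1f} MJ/(m^2 K)")

    # The tiny flux implied by the unbalanced evaporation itself:
    seconds = 50 * 3.156e7                 # 50 years, in seconds
    print(12.5e6 / seconds)                # ~0.0079 W/m^2, as stated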

    2.

    Contributions to the effective heat capacity of the climate system from changing total amounts of water vapor and ice, etc., which result from small imbalances accumulating over time, are quite a bit different from changes in the balanced portion of the water cycle and the heat fluxes associated with this, such as might be in place when the climate reaches a different equilibrium state.

    Once the total water vapor has reached equilibrium, the rates of evaporation and precipitation could go back to what they are now or have been before any AGW, and this would simply maintain the amount of water vapor. Or both could increase, or both could decrease. What actually does happen depends on changes in radiative fluxes.

    Both the LW emission from the surface and the backradiation from the atmosphere to the surface will increase with increasing temperature. The difference – the net LW cooling of the surface – will get bigger due to the nonlinear relationship between emission and temperature. However, increasing opacity, from greenhouse gas forcing, but also from the water vapor feedback for any warming, will tend to decrease the net LW cooling by decreasing the height from which back radiation reaching the surface is emitted, so that the brightness temperature of the backradiation will increase even more. This is an especially strong effect at lower latitudes, where water vapor feedback in particular decreases the transparency in the atmospheric window – the band of wavelengths between 8 and 12 microns. At lower latitudes, there will be a further decrease in the LW cooling due to the decrease in the lapse rate in the lower and maybe(?) middle troposphere. At higher latitudes, with some seasonal variation, there may be some increased net LW cooling at the surface because there will be greater warming at the surface than at higher levels (with some seasonal dependence).

    However this changes, the tendency will be for convective cooling of the surface to change to keep convective cooling + net LW cooling – SW (solar) heating nearly equal to zero in the global average. If convective cooling increases more, the surface temperature will fall and temperature at some higher level will increase, decreasing net LW and convective cooling, causing convective cooling to increase. And so on if convective cooling decreases more than radiation allows.

  36. Correction to second to last sentence:

    If convective cooling increases more, the surface temperature will fall and temperature at some higher level will increase, decreasing net LW and convective cooling, causing convective cooling to DECREASE.

  37. Patrick 027 Reur 1:18 pm:
    I’ll not respond to your first 14 lines, since it is unnecessary to go over old ground.

    Concerning the possibility of increased evapo-transpiration (E-T) in a progressively warming world, these points appear to be the fundamentals:

    a) There is no dispute that there is a related progressive increase in the amount of water vapour in the atmosphere. (and a consequent positive feedback)
    b) Consequently, if there is increased water vapour content in the atmosphere, a logical, prime cause of this would seem to be increased E-T
    c) A speculative possible offset may be reduced net global precipitation. (reducing the need for increased E-T to sustain the increased water vapour content.)
    d) However, the speculation in c) seems to be unlikely because if there is increased water vapour content, then it would appear to be logical that cloud volumes would also increase. Logically this should result in increased net global precipitation. (accelerating the need for increased E-T to sustain the increased water vapour content.)
    e) Increased global net rainfall on land can logically be expected to result in some increase in E-T
    f) All of these factors are subject to substantial temporal and spatial variations, and for instance regional airflow pattern changes are very important. To assess the net effects is an extremely complex matter, which in my view is being neglected.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

    In consideration of c), it is important not to be distracted by reports of drought in some parts of the world, because such regional variations are not unusual. For instance, whilst there has been a prolonged drought of some six years in S.E. Australia, those of the late 1880’s and early 1900’s were very severe indeed, and would have been disastrous had there been today’s much greater population and world trade back then. About 500 years ago, the great Khmer city-civilization of Angkor was abandoned, apparently because of changes in monsoonal patterns. And, of course, some areas have had more than their fair share of rain recently.

    Patrick, I think you would do better to embrace the broader picture, and not get bogged down in hypothetical detail. You should perhaps include a few more ‘maybe’s’ in your hypotheses, and better take into account item f) above, which critically affects some of the logic in your detail.

  38. Bob_FJ – with regards to your discussion of increased water vapor suggesting increased ET, you are still confusing amounts present with fluxes. Evapotranspiration and precipitation are fluxes of water into and out of the atmosphere. Condensation is a flux of water vapor from one form to another. Etc. The water vapor feedback generally discussed refers to an increase in the amount of water vapor in the atmosphere, to approach a new equilibrium value – as opposed to an ongoing change in one direction for ever. It is not at all automatically the case that more water vapor leads to more clouds and more precipitation; increasing temperature while holding total atmospheric water constant would tend to decrease atmospheric condensed water, with a net evaporation of some of the clouds. Having said all this, though, my understanding is that a warmer world will generally have higher evapotranspiration and precipitation rates globally. However, there is an upper limit to convective cooling of the surface – aside from the complexities of spatial/temporal variations, in the limit of surface LW cooling going to zero, convective cooling cannot exceed surface solar heating, and with increasing water vapor, that would actually decrease if warming progresses beyond some point (setting aside cloud cover and other possible feedbacks, assuming there is no snow and ice left to melt before such a point were reached). Including spatial and temporal variations, it is easier to place hard limits on average net fluxes…

    And of course there are spatial and temporal variations: in the climate at any one period, in internal variability patterns, and in forced changes in climate. That I have not gone into more detail about these things with you does not mean I am not aware of them, and certainly not that climatologists are not aware of them. None of the complexities you have raised are missing in climate models – not to say they are perfect (grid scale limitations related to computing power are a constraint on performance), but nothing so obvious has been forgotten. You might be amazed at how much has been studied.

  39. Patrick 027, Reur 2:14 pm
    “Bob_FJ – with regards to your discussion of increased water vapor suggesting increased ET, you are still confusing amounts present with fluxes…”

    I’m puzzled why you wrote that! If you look back above somewhere, I described it all as a cycle. Furthermore, we have discussed E-T and precipitation as part of that, with the simple understanding that the difference between what goes up, and what comes down is what stays up there. Unfortunately, it is more complicated because what stays up there is not just water vapour, but also clouds, which depend on water vapour and particulates etc, and it is not only a matter of cloud volume, but also species mix. Clouds remain an area of major uncertainty.

    “…increasing temperature while holding total atmospheric water constant would tend to decrease atmospheric condensed water, with a net evaporation of some of the clouds…”

    Well yes, but so what? Are we not discussing a regime where it is agreed that global warming results in an increase in water vapour? (a positive feedback)

    “…Having said all this, though, my understanding is that a warmer world will generally have higher evapotranspiration and precipitation rates globally. However, there is an upper limit to convective cooling of the surface – aside from the complexities of spatial/temporal variations, in the limit of surface LW cooling going to zero, convective cooling cannot exceed surface solar heating, and with increasing water vapor, that would actually decrease if warming progresses beyond some point…”

    There is an upper limit…. progresses beyond some point….?
    Have you really considered that the decadal rates of temperature increase have been, in absolute terms, rather gradual?

    “…None of the complexities you have raised are missing in climate models – not to say they are perfect (grid scale limitations related to computing power is a constraint on peformance), but nothing so obvious has been forgotten…”

    So you think that the estimates (assumptions) that are fed into computer models are perfect?

    “…You might be amazed at how much has been studied…”

    If there is anything available in the public domain, I would like your advice.
    Perhaps something of quality like the following?

    Dessler, Zhang and Yang: (Water vapour positive feedback)
    GEOPHYSICAL RESEARCH LETTERS, VOL. 35, L20704, Received 13 July 2008; revised 16 September 2008; accepted 19 September 2008; published 23 October 2008.

    Spencer, Braswell, Christy, and Hnilo: (Clouds negative feedback)
    GEOPHYSICAL RESEARCH LETTERS, VOL. 34, L15707,
    Received 15 February 2007; revised 30 March 2007; accepted 16 July 2007; published 9 August 2007.

  40. Bob FJ

    “I’m puzzled why you wrote that!”

    Because you’ve implied a number of times that the water vapor feedback – an increase between old and new climatic equilibrium water vapor quantities in the atmosphere – automatically requires an increase in the climatic equilibrium net surface cooling rate by evapotranspiration.

    “Unfortunately, it is more complicated because what stays up there is not just water vapour, but also clouds, which depend on water vapour and particulates etc, and it is not only a matter of cloud volume, but also species mix. Clouds remain an area of major uncertainty.”

    Quite true. However, my understanding is that the actual mass of condensed water in clouds is quite small compared to the total atmospheric water content, so that the total can be approximated well by the water vapor content.

    (in response to my “increasing temperature while holding total atmospheric water constant would tend to decrease atmospheric condensed water, with a net evaporation of some of the clouds”):
    “Well yes, but so what? Are we not discussing a regime where it is agreed that global warming results in an increase in water vapour? (a positive feedback)”

    Yes, but the point is that this could easily occur while total condensed atmospheric water stays constant. An increase in temperature without an increase in atmospheric water would reduce relative humidity, generally tending to reduce cloud formation and longevity and precipitation, and tending to increase evaporation. The opposite changes in precipitation and evaporation would then tend to change atmospheric water content to boost relative humidity; a net increase in condensed water content does not automatically follow from bringing relative humidity back to where it previously was. A good first guess (at least when condensed water is a small fraction of the total?) would be to assume atmospheric condensed water mass per unit air mass goes back to where it was in a new climatic equilibrium; an increased tropospheric mass due to a higher tropopause may increase condensed water mass by increasing the volume in which clouds tend to occur, and other complexities could cause some other adjustments – it is not obvious in which direction without careful consideration of complexities (such as by using models). I have read that a warmer climate may make precipitation easier so as to reduce total cloud cover in some cases.

    As the climate changes and in the new climatic equilibrium there will be additional globally and temporally balanced changes in evaporation and precipitation shaped by changes in surface radiative heating and cooling.

    “There is an upper limit…. progresses beyond some point….?
    Have you really considered that the decadal rates of temperature increase have been, in absolute terms, rather gradual?”

    The rate of change has nothing to do with what I was just describing. I was also referring to climate in general and don’t particularly expect AGW to run up against this upper limit.

    For equilibrium climates, warmer climates will tend to have increased net LW surface cooling because of the nonlinear relationship between blackbody radiation and temperature. But if warming is caused by an increase in the greenhouse effect, increased LW opacity will tend to decrease net LW cooling of the surface, and the water vapor feedback will do the same for any warming. Water vapor also absorbs some solar radiation, reducing SW heating of the surface (the water vapor SW effect will not be counteracted by snow and ice albedo SW feedback once the temperature is too high for snow and ice). (Other greenhouse gases may also absorb some SW radiation in the air and contribute to this effect, though this is generally quite a bit smaller than the LW effect so far as I know, except for ozone.)

    Thus warming forced by increasing the greenhouse effect could reduce net radiative cooling of the surface (especially at night, when the SW effects are zero), and will particularly tend to do so within a range of temperatures (which includes tropical conditions when humidity is not too low) in which the water vapor feedback is quite strong within the range of 8 to 12 microns. At higher temperatures, with sufficient relative humidity, net LW surface cooling approaches zero, so that net convective cooling can only on average approach surface SW heating, which will actually be declining due to increasing water vapor (for the same incident solar radiation at the top of the atmosphere, a cooler climate may tend to have a maximum in convective surface cooling in the afternoon, while a sufficiently warm climate might have convective surface cooling evenly distributed over the day, or ?…). Of course, this is all before cloud feedbacks and ecological feedbacks are accounted for.

    “So you think that the estimates (assumptions) that are fed into computer models are perfect?”

    Of course not. But there is a big difference between having some % error and just plain forgetting to include something entirely.

    There is also a big difference between forgetting something and knowingly not including something, because the latter can be kept in mind when interpreting model results.

    “If there is anything available in the public domain, I would like your advice.
    Perhaps something of quality like the following?”

    Try looking up any combination of these terms:

    climate, global warming, anthropogenic global warming, paleoclimate, …
    Milankovitch
    orbital forcing
    lakes in the Sahara
    carbon cycle, biological pump
    silicate weathering, chemical weathering
    DMS feedback
    isoprene
    methane hydrates or clathrates
    wetlands
    polar stratospheric clouds
    ENSO
    NAO
    AO, NAM
    SAM
    QBO
    PDO
    AMO
    MJO
    monsoon
    seabreeze, landbreeze
    mountain/valley breeze
    regional
    desert
    mountain
    convection
    coriolis
    ekman
    quasigeostrophic
    circulation patterns
    boundary layer
    upper ocean mixed layer
    Hadley cell
    Ferrel cell
    tropical cyclones
    tropical waves
    baroclinic waves
    baroclinic instability
    barotropic governor
    storm tracks
    anticyclones
    planetary waves
    quasi-stationary waves
    topographically forced quasi-stationary waves
    thermal forcing of quasi-stationary waves
    multiple equilibria
    Charney
    Rossby waves
    gravity waves
    equatorial waves
    Rossby-gravity waves
    Kelvin waves
    inertial oscillations
    blocking patterns
    jet stream index cycle
    polar front jet
    subtropical jet
    equatorial jet
    western boundary current
    potential vorticity (PV)
    conservation of angular momentum
    isentropic coordinates
    mean meridional circulation
    sudden stratospheric warmings
    Brewer-Dobson circulation
    circumpolar vortex
    troposphere-stratosphere interaction
    wave-mean interaction
    nonlinear
    linearized
    EP flux
    geostrophic turbulence
    conditional symmetric instability
    air parcel trajectory
    frontogenesis
    frontolysis
    cyclogenesis
    cyclolysis
    anticyclogenesis
    anticyclolysis
    drought
    flood
    cloud top height
    mesoscale convective system/complex
    squall line
    cut-off low
    thermal low
    thermohaline circulation
    potential temperature
    potential density

  41. Some more to consider (obviously note those terms you haven’t heard of before)

    precipitation
    collision-coalescence
    haze particle
    Kohler curve
    ice nuclei
    ice nucleation
    mesocyclone
    rear flank downdraft
    gust front
    equilibrium Bowen ratio
    Sverdrup
    bulk Richardson number
    Kelvin-Helmholtz
    Kelvin cat’s eye
    negative viscosity
    group velocity
    phase speed
    absolute vorticity
    orbital/curvature vorticity
    shear vorticity
    divergence or convergence
    Dansgaard-Oeschger event
    Heinrich event
    Younger Dryas

  42. And:

    Reynold’s (or is it Reynolds’ ?) averaging:

    a = average of a + perturbation of a = a_ + a'

    a*b = (a_ + a') * (b_ + b') = a_*b_ + a_*b' + b_*a' + a'*b'

    average of a*b = (a*b)_ = a_*b_ + 0 + 0 + (a'*b')_ = a_*b_ + (a'*b')_
    (the cross terms average to zero because the average of a perturbation is zero)

    perturbation of a*b = (a*b)' = a*b – (a*b)_ = a_*b' + b_*a' + a'*b' – (a'*b')_

    for example, the average vertical motion (w) away from the surface of the Earth across a horizontal surface must be 0 (or more precisely, it must be zero over sufficient time), but the average sensible heat flux is proportional to (w*T)_, which is the average of vertical motion * temperature;

    (w*T)_ = w_*T_ + (w'*T')_

    but w_ = 0, so: (w*T)_ = (w'*T')_

    where T_ is the average temperature over a horizontal surface and T' is, at any one point in space, T – T_.
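    A quick numerical check of the identity (a Python sketch with synthetic, correlated w and T):

    import numpy as np

    rng = np.random.default_rng(0)
    n = 100_000
    w = rng.normal(0.0, 1.0, n)                     # vertical velocity, zero mean
    T = 288.0 + 0.5 * w + rng.normal(0.0, 1.0, n)   # T correlated with w

    w_mean, T_mean = w.mean(), T.mean()
    wp, Tp = w - w_mean, T - T_mean                 # perturbations w', T'

    lhs = (w * T).mean()                            # (w*T)_
    rhs = w_mean * T_mean + (wp * Tp).mean()        # w_*T_ + (w'*T')_
    print(np.isclose(lhs, rhs))                     # True: the identity holds
    print((wp * Tp).mean())                         # ~0.5: the eddy flux term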

    ——-
    also:

    geostrophic adjustment
    Rossby radius of deformation

  43. Patrick 027; responding only to what is the greatest collective verbiage in your recent three (3) comments:
    I’m repeating a simple exchange from my last comment above, but this time, since you appear to have had difficulty in understanding it, I’ve emphasized one line in bold font:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    “…You [Bob_FJ] might be amazed at how much has been studied…”

    [my reply] If there is anything available in the public domain, I would like your advice.
    Perhaps something of quality like the following?

    Dessler, Zhang and Yang: (Water vapour positive feedback)
    GEOPHYSICAL RESEARCH LETTERS, VOL. 35, L20704, Received 13 July 2008; revised 16 September 2008; accepted 19 September 2008; published 23 October 2008.
    Spencer, Braswell, Christy, and Hnilo: (Clouds negative feedback)
    GEOPHYSICAL RESEARCH LETTERS, VOL. 34, L15707,
    Received 15 February 2007; revised 30 March 2007; accepted 16 July 2007; published 9 August 2007.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    I puzzle why you laboured so long with all those elaborate listings of far-flung topics for me to research, regardless of whether they had any connection to the topic of E-T. Oh, and BTW, what happened to:
    *Saharan dust
    *Harley Davidson Event
    *Dust (other)

    Congratulations with your great illumination on ‘Anthropogenic Global Warming’….. that was so clever!
    However, couldn’t you find anything starting with ‘Z’ to end the list? (You know; AGW to Z)

    Ah! I’ve got one: zooplankton! ….. You like?

  44. “I puzzle why you laboured so long with all those elaborate listings of far flung topics for me to research”

    Based on your implication that basic flux relationships were hypotheses and that climate scientists had left out something rather basic, I had some concern that you might not appreciate just how much is known or being studied.

    “Ah! I’ve got one: zooplankton! ….. You like?”

    Yes.

    “Perhaps something of quality like the following?”

    Here are some authors you could search for (somewhat off the top of my head):

    Kiehl and Trenberth
    (in particular, the original paper behind the update discussed above:

    [link: RadiationBudget.pdf]

    it would be helpful for you to go through it, I think.)

    Syukuro Manabe
    James Hansen
    Isaac Held
    Dennis Hartmann
    Gavin Schmidt
    Drew Shindell
    Joseph Pedlosky
    Peter Rhines
    Michael Mann
    William Ruddiman
    James Kasting
    Daniel Schrag
    Paul F. Hoffman
    Carl Sagan
    Ed Lorenz

    See also
    Spencer Weart’s website: http://www.aip.org/history/climate/index.html
    The IPCC report: http://ipcc-wg1.ucar.edu/wg1/wg1-report.html
    any other links from Real Climate

    ————-
    http://www.realclimate.org/index.php/archives/2008/01/our-books/
    for example:
    — “Principles of Planetary Climate, Ray Pierrehumbert. (Cambridge University Press, 2010) (Online draft version)”
    http://geosci.uchicago.edu/~rtp1/ClimateBook/ClimateBook.html
    (this includes a very in depth description of how radiation works)

    Some textbooks:
    —————
    General (these books include atmospheric radiation among other topics):

    John M. Wallace and Peter V. Hobbs
    “Atmospheric Science – An Introductory Survey”

    John Houghton
    “The Physics of Atmospheres”

    Dennis L. Hartmann
    “Global Physical Climatology”
    (PS A VERY GOOD PLACE TO START for gaining understanding of flux diagrams such as shown above, among other things).

    ——————–
    In depth radiation:

    Grant W. Petty
    “A First Course in Atmospheric Radiation”

    ———————-
    Atmospheric (and in some cases, oceanic) physics with emphasis on fluid dynamics:

    Benoit Cushman-Roisin
    “Introduction to Geophysical Fluid Dynamics”
    (see important point about the section on global warming:
    http://www.realclimate.org/index.php/archives/2008/08/are-geologists-different/comment-page-5/#comment-97315 )

    James R. Holton
    “An Introduction to Dynamic Meteorology”
    (PS in the 3rd edition (at least the one I have) there was an error somewhere in chapter 8 in the mathematical derivation of the equation for baroclinic instability in a two layer model; however, so far as I know, the result is correct, suggesting there was a typo in copying the intermediate steps.)

    Jonathan E. Martin
    “Mid-Latitude Atmospheric Dynamics – A First Course”

    Howard B. Bluestein
    “Synoptic-Dynamic Meteorology in Midlatitudes – Volume II”
    (I suspect you don’t need Volume I if you have any of the other books that are focussed on fluid dynamics).

    Joseph Pedlosky
    “Geophysical Fluid Dynamics”
    (I don’t have this one, but I would like to have it)

    ——————-
    Climate, Climate change, and Paleoclimate

    William F. Ruddiman
    “Earth’s Climate Past and Future”

    There is also some discussion of climate in some geology books, such as:

    Dott and Prothero
    “Evolution of the Earth”

    Nick Eyles
    “Ontario Rocks”

    and “Supercontinent”, etc.

    ———————-
    A couple of online books on the ocean that look pretty good:
    http://oceanworld.tamu.edu/home/course_book.htm
    http://oceanworld.tamu.edu/resources/oceanography-book/contents.htm

    ———————

    Encyclopedia Britannica
    – see Macropaedia article “Climate and Weather”
    see also atmosphere, earth, …

  45. “Climate Models Confirm More Moisture In Atmosphere Attributed To Humans”
    http://www.sciencedaily.com/releases/2009/08/090811091832.htm

    …”In new research appearing in the Aug. 10 online issue of the Proceedings of the U.S. National Academy of Sciences, Lawrence Livermore National Laboratory scientists and a group of international researchers “… (from the rest of the article, this includes:

    Benjamin Santer
    Karl Taylor
    Peter Gleckler
    Celine Bonfils
    Steve Klein
    Tim Barnett
    David Pierce
    Tom Wigley (name rings a bell)
    Carl Mears
    Frank Wentz
    Wolfgang Brüggemann
    Nathan Gillett (have I seen his name before?)
    Susan Solomon (another familiar name)
    Peter Stott (I think I’ve seen that name before, too)
    Mike Wehner

    I haven’t looked up the actual study yet – it would probably be good to find it because from this article:

    “The atmosphere’s water vapor content has increased by about 0.4 kilograms per cubic meter (kg/m3) per decade since 1988, and natural variability alone can’t explain this moisture change, according to Santer. “The most plausible explanation is that it’s due to human-caused increases in greenhouse gases,” he said.”

    That number must be wrong – perhaps it is grams per cubic meter or kilograms per square meter of atmospheric column (?) … well, present-day atmospheric water is equivalent to about a 25 mm layer of liquid water, which would be 25 kg per m2, or about 2.5 g of water per kg of air averaged over the atmosphere, which would be a bit over 2.5 g per cubic meter of air (maybe between 3 and 3.5 g / cubic m, roughly… won’t bother to do the calculation right now) if all air were held near sea level pressure. Clearly, a 0.4 kg per cubic m increase per decade is way off. Even 0.4 kg per m2 of atmospheric column seems high for a per-decade increase. Perhaps a typo?
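    A quick unit check (a Python sketch; the sea-level air density of 1.2 kg/m^3 is an assumed round number):

    column_water = 25.0           # kg/m^2: a 25 mm liquid-equivalent layer
    column_air = 1.013e5 / 9.81   # ~10300 kg/m^2 of air per unit area

    print(1000 * column_water / column_air)        # ~2.4 g of vapor per kg of air
    print(1000 * column_water / column_air * 1.2)  # ~2.9 g/m^3 near sea level

    # Either way the natural volumetric unit is grams per cubic meter, so a
    # trend of "0.4 kg/m3 per decade" cannot be right as a volumetric density;
    # kg per m^2 of column (or g/m^3) are more plausible readings.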

    ———–

    Regarding evaporation at the surface, you would be interested in learning about terms such as ‘Bowen ratio’. Chapter 4 of Hartmann, “Global Physical Climatology”, has some good information about what shapes surface fluxes of heat and moisture.

    Surface fluxes will be affected by mixing into the rest of the boundary layer – the layer of air that is significantly affected mechanically and thermally (besides radiation) by the surface. Surface heating can drive mixing via thermally direct (hot rising, cold sinking – as opposed to thermally indirect) convection within the boundary layer. Friction at the surface causes vertical wind shear, and that shear can also contribute to mixing. Even when surface cooling relative to air tends to make the air stable, sufficient vertical shear can cause mixing (greater shear is required to drive mixing when the stability is greater) – via a process similar to barotropic instability. Mixing can reduce stability and make mixing easier, although mixing in one layer can make the boundaries of that layer more stable to mixing across the boundaries (something similar applies to quasihorizontal potential vorticity mixing, as I understand it).

    The atmospheric boundary layer can be capped by an inversion. Even if mixing is driven in part by surface heating and convection, that convection may overshoot (warm air, which cools adiabatically or moist adiabatically on ascent, tends to keep moving by inertia beyond the level at which it has cooled to be at the same temperature as its surroundings, and may rise and cool and lose kinetic energy to work done on thermally indirect motion as it becomes cooler than its surroundings. Without mixing, diffusion, and radiation of mechanical waves, and radiative heating/cooling, air would oscillate (at the Brunt-Väisälä frequency) about an equilibrium level within a stable layer of air – higher frequency oscillations in more stable air) and entrain air from above, deepening the boundary layer and creating an inversion above it.

    If you want to learn more about cloud processes, you might start with ‘Kohler curve’ or ‘Kohler equation’ for clouds without ice. Wallace and Hobbs has a chapter on cloud and precipitation processes, including ice nucleation. It’s very interesting. It is very hard to get condensation from vapor without a sufficient condensation nucleus.

    Relative humidity is generally measured as the vapor partial pressure divided by the equilibrium vapor pressure (when the same number of molecules are going from liquid to vapor and vapor to liquid) at that temperature for a flat surface of pure water; solutes dissolved in the water will lower the actual equilibrium vapor pressure, while a convex surface will, through surface tension, pressurize the liquid, effectively squeezing water out of the liquid phase, raising the actual equilibrium vapor pressure. For a given condensation nucleus that can dissolve in water, for very small water droplets, there is an equilibrium droplet size that is nonzero for relative humidity (RH) below 100 %, and gets bigger with increasing relative humidity, because higher relative humidity is necessary to sustain a larger droplet because the solute will be more diluted. If I remember this right, the vapor pressure lowering effect of the solute is proportional to the solute concentration, which is inversely proportional to the cube of the droplet radius (assuming the droplet does not collect another soluble particle or merge with another droplet), while the curvature effect is inversely proportional to the first power of the radius, so the curvature effect declines less rapidly with increasing droplet size, hence the higher RH required for larger equilibrium droplet size.

    If the droplet gets larger or smaller, the changes in vapor pressure will cause compensating evaporation or condensation, restoring the droplet to its equilibrium size for that RH. In this condition, where the equilibrium size is stable, the droplet is a haze particle. The transition to a cloud droplet occurs when the solute concentration effect is reduced enough that the curvature effect starts to dominate, so that further increases in size reduce the necessary RH (which at that point will be over 100 %). At that point, the equilibrium droplet size is unstable – growth reduces the necessary RH to sustain the droplet, so the droplet will grow so long as the RH is greater than the equilibrium RH, which declines toward 100 % as the droplet grows, as the actual RH will also decline (without further cooling, etc.) as vapor is lost to the liquid phase. However, as cloud droplets grow and take vapor from the air, remaining haze particles will evaporate (different condensation nuclei will produce haze particles of different equilibrium sizes at a given RH, not all of which will become cloud droplets). Water molecules have to diffuse through the air to get to growing droplets – fast growth requires a larger gradient in vapor concentration, so that the RH between droplets may be larger than the RH just outside the droplet surfaces – and I would guess that more rapid cooling would allow a greater number of haze particles to transition to cloud droplets, by partially spatially isolating particles from the effects of growing droplets on the RH in their vicinities. As cloud droplets grow, the diffusion of water vapor becomes a rate-limiting step.

    There is an implication that, by condensation alone, it takes much longer to produce particles large enough to have large enough terminal velocities to precipitate than it actually takes to produce precipitation. Other processes are at work. One is collision-coalescence. Droplets with different sizes will have different terminal velocities (an effect that might be magnified by eddies that act as centrifuges), so droplets can move by each other and bump into each other, sometimes merging (or breaking into smaller droplets?) (also applicable to some frozen precipitation processes – graupel and hail). Different size ratios will have different collection efficiencies. Some droplets may move around a droplet but get caught by the air circulation around it and merge from behind.

    (It’s conceivable that insoluble aerosols with hydrophilic surfaces might also help counteract the curvature effect, further (along with solute concentration) reducing the RH required to form a droplet – but the insoluble aerosol would have to be larger than the droplet for this to be significant.)
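    To make the curvature-versus-solute competition concrete, here is a minimal Python sketch of a Kohler-type equilibrium curve; the coefficients a and b are illustrative placeholders (their true values depend on temperature and on the nucleus mass and chemistry):

    import numpy as np

    a = 1.2e-9   # m: curvature (Kelvin) term, roughly right for water near 0 C
    b = 1.0e-23  # m^3: solute (Raoult) term; depends on the nucleus assumed

    r = np.logspace(-8, -5, 400)   # droplet radii from 10 nm to 10 microns
    S = 1.0 + a / r - b / r**3     # equilibrium saturation ratio (RH/100)

    r_crit = (3.0 * b / a) ** 0.5  # where dS/dr = 0: haze -> cloud droplet
    S_crit = 1.0 + a / r_crit - b / r_crit**3
    print(r_crit, S_crit)          # ~0.16 microns, ~1.005 (0.5 % supersaturation)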

    Water droplets can become and remain supercooled (liquid below freezing) for some time for lack of good ice nuclei; homogeneous nucleation will occur at some frequency per unit time per unit volume, so aside from ice nuclei distribution, larger droplets are more likely to freeze at any given temperature than smaller droplets; in general, homogeneous nucleation is not significant for cloud droplets until the temperature declines to near -40 deg (C or F – they happen to be the same at -40). Any existing ice can nucleate new ice growth – this can occur just by touching, as when a droplet bounces against an ice particle without merging. One way of producing ice nuclei is the breaking up of ice particles or crystals – this can happen when a droplet freezes from the outside in (because the inner portion will expand). Other non-H2O particles can serve as ice nuclei with varying effectiveness. When ice particles and supercooled droplets are in the same volume, a difference in equilibrium vapor pressure between the ice and liquid surfaces drives net evaporation of droplets and depositional growth of ice crystals – the smaller the ice crystal number, the more ice growth per particle. This is one way to get particles large enough to precipitate. Clouds can be seeded by ice particles falling from higher clouds.

  46. “As cloud droplets grow, the diffusion of water vapor becomes a rate limiting step. ”

    Well, rapid ascent would raise RH to some level of supersaturation away from the cloud droplets due to the need for a vapor concentration gradient to drive a flux into the droplets. Conceivably, rapid ascent could turn would-be haze particles into cloud droplets or keep them from reverting to haze particles.

    Rate of condensational growth aside, depending on size distribution, with condensation alone, the sizes achievable are limited by the number of droplets and the total amount of water vapor that was present before condensation started (except for addition of vapor from evaporating precipitation, although precipitation shouldn’t be evaporating when it’s falling through a cloud, unless it’s liquid precipitation and the cloud has ice particles, etc.).

  47. … So as air is radiatively heated or cooled, mixed, or adiabatically cools or warms with ascent or descent, changes in RH would occur in the absence of phase changes; phase changes occur that tend to reduce the change in RH, which can cause opposing phase changes on some other particles.

    Evaporating particles are sources of water vapor and particles growing by condensation or deposition (gas to solid) are sinks, and this sets up vapor concentration gradients, which are necessary in order for vapor fluxes to occur so that particles grow or shrink. A proportional RH gradient results.

    However, the phase changes, including melting and freezing of particles aside from growth or shrinking, make the particles heat sources and sinks, which sets up temperature gradients that allow heat fluxes. The temperature variation also affects RH – for the phase changes to and from vapor, the resulting total variation in RH is proportionately larger than that of the vapor concentration, and this slows the rates of particle growth or shrinkage. The temperature variation also slows the other phase changes. But the phase changes can still proceed as heat fluxes occur that tend to homogenize the temperature. Also, the temperature variations might drive microscopic convection, which could enhance the fluxes of water through the air away from evaporating particles and toward growing particles. The temperature and vapor concentrations will be more heterogeneous on the microscopic scale when the macroscopic average temperature is forced to change more rapidly or when mixing with air of different specific humidity, etc., is more rapid.

    These microscopic temperature variations mean that macroscopic volumes of cloud may only be in approximate local thermodynamic equilibrium (presumably it is still a good approximation). Heterogeneity in optical properties will also have this effect – a cloud top at night will radiatively cool, with the particles cooling more intensely and perhaps becoming colder than the air in between.

    Microscale homogenizing of temperature by radiation should be more effective in thicker clouds, because on a small scale, much of the radiation emitted by a particle will be absorbed far away in the air, but if a cloud is thick, then within its interior, colder particles and air will be heated by radiation from a large volume of warmer particles and air; warmer particles and warmer air will cool radiatively to a large volume filled with cooler particles and air. But radiation might not be the major mechanism involved because the distances between heat sources and sinks are so tiny.

    If air ascended fast enough to condense 2 g of water per kg of air per minute (PS ballpark figure off the top of my head to be used as a reference point), this would be a latent heat source of around 5000 J per kg of air per minute, about 83 J/(kg s)…

  48. ~ 80 W/kg, on the order of 80 W per m3;

    PS about “Microscale homogenizing of temperature by radiation should be more effective in thicker clouds” – up to a point, but if the cloud droplets individually have rather small LW albedos, aside from diffraction, then radiation transfer within a cloud might be on the order of (?) 3 meters (?) (it’s hard to say because cloud droplets are around the same size as the wavelengths of LW radiation, so optical properties are not as straightforward as using geometric ray tracing), so most radiative transfer to air from droplets would be at wavelengths where the air has significant opacity over just (?) 3 m (?).

    Well, a 3 m thickness of air would have latent heating of ~ 240 W/m2, so … it looks like radiation alone would have a hard time keeping microscale temperature variations from becoming large.
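
    For anyone wanting to check that chain of ballpark numbers, here is a minimal Python sketch; every input is one of the rough assumptions above (2500 J/g latent heat, 2 g condensed per kg of air per minute, air density ~1 kg/m3, ~3 m exchange distance), not measured data:

        # Ballpark latent heating rates implied by the assumed condensation rate.
        L = 2500.0        # J/g, latent heat of condensation (rough value)
        cond_rate = 2.0   # g of water condensed per kg of air per minute
        rho_air = 1.0     # kg/m3, rough near-surface air density
        thickness = 3.0   # m, assumed radiative exchange distance

        heating_per_kg = L * cond_rate / 60.0         # ~83 W per kg of air
        heating_per_m3 = heating_per_kg * rho_air     # ~83 W/m3 (order of 80)
        flux_over_layer = heating_per_m3 * thickness  # ~250 W/m2 (~240 above, using 80 W/m3)
        print(heating_per_kg, heating_per_m3, flux_over_layer)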

    Spacing between cloud droplets may be around 1 mm and typical cloud droplets may have a radius on the order of 10 microns (Wallace and Hobbs, p. 173) – a cross sectional area of ~ 3 * 10^-10 m2, a volume of ~ 4 * 10^-15 m3, a mass of ~ 4 * 10^-9 g, or on the order of 4 g per cubic meter of air. Each cloud droplet of such size would have released about 10^-5 J. A cubic mm has a cross section of 10^-6 m2.
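
    The same droplet numbers in a few lines of Python (again just the rough values above – 10 micron radius, one droplet per cubic mm, 2500 J/g latent heat):

        import math

        r = 10e-6                              # m, droplet radius
        cross_section = math.pi * r**2         # ~3e-10 m2
        volume = (4.0 / 3.0) * math.pi * r**3  # ~4e-15 m3
        mass_g = volume * 1e6                  # g (liquid water: 1e6 g per m3)
        heat_J = mass_g * 2500.0               # ~1e-5 J released per droplet
        n_per_m3 = 1e9                         # one droplet per cubic mm
        lwc = mass_g * n_per_m3                # ~4 g liquid water per m3 of air
        print(cross_section, volume, mass_g, heat_J, lwc)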

    From http://www.engineeringtoolbox.com/thermal-conductivity-d_429.html,
    the thermal conductivity is around 0.024 W/(m*K).

    So 1 K over 1 mm distance over 1 square mm area would sustain a heat flux of 0.024 * 1000 * 1e-6 W = 24e-6 W

    Surface area of droplet about 1.2 e-9 m2, over a distance of 1e-5 m, 1 K difference sustains heat flux ~ … gotta go.
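
    Picking up that interrupted estimate in Python (same rough numbers; the 1e-5 m gradient distance is just the droplet radius already quoted):

        import math

        K = 0.024                    # W/(m K), thermal conductivity of air
        r = 10e-6                    # m, droplet radius
        area = 4.0 * math.pi * r**2  # droplet surface area, ~1.2e-9 m2
        dT, dx = 1.0, 1e-5           # 1 K difference over 10 microns
        flux = K * (dT / dx) * area  # ~3e-6 W per droplet
        print(flux)  # well above the ~1e-7 W latent heat flux estimated below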

  49. Well, after looking at a skew-T chart (a visual reference for atmospheric thermodynamics information), it looks like one could condense typically 4 g of water per kg of air over an ascent on the order of 2 km in the lower troposphere for moderate to warm conditions.

    A 20 m/s updraft (rather strong) would take 100 s to go 2 km.

    A latent heat release of about 10^-5 J per cloud droplet is required for formation and growth to 10 micron radius; 1 such cloud droplet per cubic millimeter gives 4 g of liquid water per kg air, assuming a density of 1 kg per cubic meter of air.

    This is a heat flux of 10^-7 W per cloud droplet for the above rate of ascent.

    I derived a formula for the temperature variation assuming a steady state heat flow from the surface of a droplet of radius R0, decreasing outward so that there is a constant heating rate per unit volume out to a radius R1, which defines a sphere of the volume per cloud droplet, given thermal conductivity K:

    T'(r) = T at radius r, relative to T at R0, where Q0 is the heat flux outward at R0.

    T'(r)
    = Q0/(4*pi*K) * [ (1 + R0^3/(R1^3 - R0^3))*[1/r - 1/R0] + [r^2 - R0^2]/(R1^3 - R0^3) ]

    T'(R1) = Q0/(4*pi*K) * [ (1 + R0^3/(R1^3 - R0^3))*[1/R1 - 1/R0] + [R1^2 - R0^2]/(R1^3 - R0^3) ]

    Which is, for R0 << R1, approximately

    T'(R1) ~= Q0/(4*pi*K) * [ -1/R0 + 1/R1 ]
    or
    T'(R1) ~= Q0/(4*pi*K) * [ -1/R0 ]

    Using K = 0.024 W/(m K), Q0 = 10^-7 W, R0 = 10^-5 m (10 microns), and R1 ~= 6.2035 * 10^-4 m (for a spherical volume of 1 cubic mm), I got

    T'(R1) ~= -0.032 K

    So the droplet surface would have to be about 0.032 K warmer than the coldest air between the droplets (setting aside any evaporating haze particles, etc.). Most of that temperature variation is close to the droplet, so the droplet would be approximately 0.03 K warmer than the average temperature of the air. This value is not very sensitive to changes in droplet number per cubic meter – it doesn't change much even if the air density is only 0.25 kg per cubic meter or if it is 1.5 kg per cubic m, so the approximation will work for a wide range of vertical levels for the same droplet number per mass of air.

    A temperature difference of 0.03 K is small enough that it could be ignored for many purposes. Of course, in actuality, the heat flux would not be steady state and the size of the droplet would be changing – with the same latent heat release rate, starting at 1/100 the radius (a typical cloud condensation nucleus, from the same page of Wallace and Hobbs referenced above), the temperature elevation of the growing droplet would be about 3.3 K. However, such a heat flux would involve growth to the formerly used 10 micron radius in 100 s, and the temperature elevation would not get quite that high and would rapidly fall back toward 0.03 K as the droplet grows and the heat is conducted away. And this is for a rather strong updraft.

    PS from Wallace and Hobbs, the 10 micron radius cloud droplet would have a terminal velocity of around 1 cm per second. This motion through the air would reduce the temperature elevation.

  50. Patrick 027, Whoa!
    I still have not finished my homework that you set for me in your August 29, 2009 @ 1:36 pm, together with more at 3:01 pm, and again at 9:48 pm the same day.
    I followed your suggestion but am still floundering at line #1, after doing a Google on your recommended word combinations and finding hit-count stats as follows:

    Climate + evapo: 158,000
    Global + warming + evapo: 43,200
    Anthropogenic + global + warming + evapo: 17,400

    I have not yet done ‘paleoclimate + evapo’, the fourth item on your first line, because I hesitated about its relevance for today. (AD 2009+)
    Incidentally, I notice your line #2 was ‘Milankovitch’. Is that likely to have an effect on evapo-transpiration in the next hundred years or so?

    Hey look, there are yet 116.25 lines of homework to go in just those three older posts…. It’s going to take time!

  51. factor of 0.5 dropped from an equation; correction will be made…

  52. Bob_FJ – “Hey look, there are yet 116.25 lines of homework to go in just those three older posts…. It’s going to take time!”

    Oh, sorry, that was not my intention. Suggestion: look for those terms unfamiliar to you and look them up in combination with a familiar term like ‘climate change’, ‘climate sensitivity’ (that’s a warming per unit forcing), ‘AGW’, etc.

    If you are specifically interested in learning more about ET and clouds and the water (hydrologic) cycle, try those terms, along with Bowen ratio, boundary layer, Kohler curve/equation, aerosol – and maybe one or more scientists’ names.

    Milankovitch (orbital) forcing would not have much effect in the next 100 or 1000 years.

  53. PS the correction to the droplet temperature elevation formula did not have a significant numerical effect for the two cases I computed (still 0.032 K and 3.3 K for the 10 micron and 0.1 micron radius droplets growing by 4 g per billion droplets in 100 s).

    Corrected formulas:

    elevation of T at the heat source (the droplet surface, with outward heat flux Q0) relative to T at radius r:

    T’ = -Q0/(4*pi*K) * [ (1 + R0^3/(R1^3 - R0^3))*[1/r - 1/R0] + 0.5*[r^2 - R0^2]/(R1^3 - R0^3) ]

    for r = R1:

    T’ = -Q0/(4*pi*K) * [ (1 + R0^3/(R1^3 - R0^3))*[1/R1 - 1/R0] + 0.5*[R1^2 - R0^2]/(R1^3 - R0^3) ]

    Which is, for R0 << R1, approximately

    T' ~= Q0/(4*pi*K) * 1/R0

    The formula for water vapor concentration variation will have the same structure.
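
    As a numerical check, the corrected formula in a short piece of Python reproduces both quoted values:

        import math

        def dT_source(Q0, K, R0, R1):
            # Corrected steady-state elevation of T at the droplet surface
            # (radius R0, heat source Q0) relative to T at r = R1, with a
            # uniform volumetric heat sink filling the sphere of radius R1.
            a = 1.0 + R0**3 / (R1**3 - R0**3)
            return -Q0 / (4.0 * math.pi * K) * (
                a * (1.0 / R1 - 1.0 / R0)
                + 0.5 * (R1**2 - R0**2) / (R1**3 - R0**3))

        K = 0.024                                   # W/(m K)
        Q0 = 1e-7                                   # W per droplet
        R1 = (3e-9 / (4.0 * math.pi))**(1.0 / 3.0)  # m, sphere of 1 mm3 volume
        print(dT_source(Q0, K, 1e-5, R1))           # ~0.032 K (10 micron radius)
        print(dT_source(Q0, K, 1e-7, R1))           # ~3.3 K (0.1 micron radius)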

  54. Patrick 027, I repeat my question concerning a logical increase in evapo-transpiration with global warming yet again:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    If there is anything available in the public domain, I would like your advice.
    Perhaps something of quality like the following?
    Dessler, Zhang and Yang: (Water vapour positive feedback)
    GEOPHYSICAL RESEARCH LETTERS, VOL. 35, L20704, Received 13 July 2008; revised 16 September 2008; accepted 19 September 2008; published 23 October 2008.
    Spencer, Braswell, Christy, and Hnilo: (Clouds negative feedback)
    GEOPHYSICAL RESEARCH LETTERS, VOL. 34, L15707,
    Received 15 February 2007; revised 30 March 2007; accepted 16 July 2007; published 9 August 2007.
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    As far as I’m aware, there is nothing of quality on this topic on a significantly global scale. Regionally yes, where e.g. empirical data has shown that water run-off has reduced despite increased rainfall, but that is specific to that particular eco-system or region.

    You have not answered my question, but instead have drowned the issue with endless minutiae including the irrelevant. (e.g. line 2 of 117: Milankovitch). I have flicked through some of it, and your exhibit ‘A’ seems to be this:

    Climate Models Confirm More Moisture In Atmosphere Attributed To Humans

    Well, I did not bother to read the article, because:
    1) The title alone is internally wrong: A model may support a hypothesis, but it cannot confirm it.
    2) We are not discussing increased water vapour in the atmosphere, but whether increased E-T is the prime cause.
    3) Substantially global empirical data are required to make a quality assessment.

    Oh BTW, my reference to you setting 117 lines of homework for me, in which the first line alone registered over 200,000 Google hits, was intended to point out the absurdity and irrelevance of your fob-offs.

    As an academic, you may have the luxury of being able to hypothesise freely, but me, as a retired engineer, I have had to be more careful with my science. A mistake or wrong assumption on my part could have got people killed or maimed.

  55. “A mistake or wrong assumption on my part could have got people killed or maimed.”

    In a less direct way, the same is true (along with economic injury) of AGW-related policy. However, the luxury that policy for AGW has is that more accurate is better than less accurate; it is not all or nothing. And there is an insurance-like aspect – we can’t know everything with precision, but we have to make decisions based on likely outcomes and risk factors, etc.

    “Oh BTW, my reference to you setting 117 lines of homework ”

    I didn’t tell you to look every one up; I suggested looking up any combination of them. I thought you might find something interesting. You seemed to demonstrate a lack of awareness of how much is known or is being studied and so I made a list of interesting topics.

    “Well, I did not bother to read the article, because:”

    … I thought the title seemed a bit odd. But I think they were comparing model output to observations – though it’s been awhile since I’ve read it.

    “Patrick 027, I repeat my question concerning a logical increase in evapo-transpiration with global warming yet again:” … I’ll get back to you on that; I have to take care of some other things. I did find something about clouds.

    But it might help if you stated more clearly exactly what you are looking for – are you disagreeing with Kiehl and Trenberth, or with me or with somebody else, or are you looking for clarification, or what?

  56. … because your initial issue was that you seemed to think that more water vapor was automatically correlated with larger ET surface cooling, and also your statement that radiation is not ‘heat’ (a differentiation which is irrelevant to the results), and now you’re asking me about cloud feedbacks when I never set out to go into detail in that matter (I need to learn more about it myself).

  57. Now I’m really confused. According to Nordell, thermal pollution is enough to explain 55-74% of observed warming based on energy analysis. http://www.ltu.se/shb/2.1492/1.5035?l=en

    Climate scientists don’t seem to agree with Nordell, based on the fact that the estimates of radiative forcing are much larger than Nordell’s estimate of net heat radiation.

    But climate scientists don’t seem to have any robust method of converting energy to temperature change except via the formula to convert radiative flux (RF) to surface temperature change (ΔTs): ΔTs = λRF, where λ is the climate sensitivity parameter. This formula appears to have no basis in theoretical physics. Estimates of climate sensitivity parameter vary enormously (0.3 to 2.0) depending on source.
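
    To spell out how that formula gets used (illustrative numbers only: a λ of 0.8, from the middle of that range, and the canonical 3.7 W/m^2 forcing for doubled CO2):

        lam = 0.8        # K per (W/m2), illustrative sensitivity parameter
        RF = 3.7         # W/m2, canonical radiative forcing for doubled CO2
        dTs = lam * RF   # ~3 K of eventual surface warming
        print(dTs)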

    Climate sensitivity parameter is derived by observation or by computer model and therefore predicts a linear relationship based on observed trend. If the observed trend was negative, then climate sensitivity parameter would be negative.

    I can’t see that Trenberth et al have even accounted for the thermal pollution effect and therefore its effect on global temperature must be removed from the effect currently attributed entirely to greenhouse gases. There are so many woolly estimates in the Trenberth paper which are larger than the net global warming effect, which highlight how little we actually know.

    Response– The effect proposed by Nordell is negligible on a global scale. The inability to constrain climate sensitivity better than about 2 to 5 degrees C per 2xCO2 has no bearing on this, you just have to get used to it. By the way, it’s the RF’s that you want to compare, since sensitivity is largely independent of what is causing warming– chris.

  58. Re myself above:

    “As cloud droplets grow, the diffusion of water vapor becomes a rate limiting step.”

    No, it’s more complex than that, I shouldn’t have stated this.

    What is the case is that as a droplet (or ice crystal) grows from vapor, there must be both a temperature gradient and a vapor pressure gradient in order to keep the vapor pressure high enough and temperature low enough at the droplet or crystal surface to sustain net growth and prevent net evaporation. This means the bulk air will have greater supersaturation than is required to maintain the droplet or ice particle at equilibrium.

    But the difference is small for slow growth that would occur for slow cooling (slow ascent or slow radiative cooling). It is also small when particles are larger. So the effect is that there is some lag in time for particle/droplet sizes to catch up to equilibrium sizes for the macroscopic conditions, which would be important for more rapid cooling.

    It is still the case, though, that condensation alone doesn’t generally account for observed precipitation.

  59. The whole concentration of climate scientists on a global energy budget and W/m2 is a load of non-science. Kiehl/Trenberth/etc estimates of line items for the global energy budget don’t balance, so they are adjusted using the global climate computer model. On that basis one can compute future temperature changes based on the past computer model. Fairyland nonsense.

    The effect proposed by Nordell is justifiable based on more robust physics than the climate computer model.

    Try finding where his logic is faulty. Climate predictions are full of holes for all to see.

  60. It is interesting to examine the following Climategate Email exchange. (my emphasis)
    From: Tom Wigley
    To: Kevin Trenberth
    Subject: Re: BBC U-turn on climate
    Date: Wed, 14 Oct 2009 16:09:35 -0600
    Cc: Michael Mann, Stephen H Schneider, Myles Allen, peter stott, “Philip D. Jones”, Benjamin Santer, Thomas R Karl, Gavin Schmidt, James Hansen, Michael Oppenheimer

    Kevin,

    I didn’t mean to offend you. But what you said was “we can’t account
    for the lack of warming at the moment”. Now you say “we are no where
    close to knowing where energy is going“.
    In my eyes these are two
    different things — the second relates to our level of understanding,
    and I agree that this is still lacking.

    Tom.

    ++++++++++++++++++

    Kevin Trenberth wrote:
    > Hi Tom
    > How come you do not agree with a statement that says we are no where
    > close to knowing where energy is going or whether clouds are changing to
    > make the planet brighter. We are not close to balancing the energy
    > budget.
    The fact that we can not account for what is happening in the
    > climate system makes any consideration of geoengineering quite hopeless
    > as we will never be able to tell if it is successful or not! It is a
    > travesty!
    > Kevin

    End of extract, for lengthy continuation, See:

    http://www.eastangliaemails.com/emails.php?eid=1056&filename=1255550975.txt

  61. Great work Bob_FJ.

    So Trenberth is a true scientist, whose work has been corrupted by the IPCC. I understand he may be part of a majority of IPCC scientists, who voice the usual skeptical scientific questions.

    See: The IPCC report: what the lead authors really think:
    http://environmentalresearchweb.org/cws/article/opinion/35820

  62. Bob_FJ and Blous79

    Bear in mind there are many aspects of many variables. Don’t assume Trenberth is discussing what you think he is.

  63. Patrick 027, Reur 10:58 pm
    Sorry Patrick, but I can see absolutely nothing ambiguous in the words and their context in what Trenberth wrote!
    Could you please elaborate on what you have tried to imply?

    Response– See his statement at http://www.cgd.ucar.edu/cas/Trenberth/statement.html and the associated publication.– chris

  64. No matter what Trenberth tries to spin after Climategate, the evidence of unbalanced/fudged global energy budget is in the published papers.

    Climate scientists have not yet established a credible physics-based relationship between radiative forcing and temperature prediction.

    In fact he has only clarified the “lack of warming” comment as being out of context.

    Trenberth: “It is quite clear from the paper that I was not questioning the link between anthropogenic greenhouse gas emissions and warming, or even suggesting that recent temperatures are unusual in the context of short-term natural variability.”

    Those words make it *very* clear that he was not in that particular email or paper questioning the link between greenhouse gas emissions and warming.

    What he truly believes he is yet to decide.

    I really can’t see anything ambiguous in words or context of the emails either.

    The answer to how did they figure out the net absorbed energy at 0.9W/m2 amongst the forest of numbers 1-2 orders of magnitude higher – because that’s what the computer model said was required to produce the observed warming. Circular nonscience.

  65. “Climate scientists have not yet established a credible physics-based relationship between radiative forcing and temperature prediction.”

    That’s c–p.

    “The answer to how did they figure out the net absorbed energy at 0.9W/m2 amongst the forest of numbers 1-2 orders of magnitude higher – because that’s what the computer model said was required to produce the observed warming. Circular nonscience.”

    Actually, I don’t know offhand how they figured it out, but

    1. A lot of computer models, and much else, boils down to math. Do you believe that 1 + 1 = 2, or do you think it might be 1 + 1 = 2.01334523 ?

    2. If you see two people standing next to each other, you might be able to accurately gauge how much taller one is than the other even if you don’t know how tall either one is.

  66. Chris, Reur response on my 11:35 pm above; several things caught my eye:

    1) Kevin Trenberth’s indignation at “theft” of the Emails is predictable, as it is with the UEA and the IPCC etc. However, the charge that they may have been “stolen” should not detract from the evidence that they contain. (as for example with the recent British MP’s scandalous expense claims). Furthermore there have been several rather more forensic studies that suggest that they were not hacked, but leaked by a whistleblower. Here is one such article:
    http://homelandsecuritynewswire.com/climategate-leak-not-hack

    2) The two Emails in my 2:53 pm above; Wigley to Trenberth, and Trenberth to Wigley, are, in the words of both authors, clear in context and unambiguous. These would be much more difficult for Trenberth to wordsmith-away, and he does not mention them, arguably for that reason.

    3) Kevin Trenberth’s on-line article discusses possible mechanisms for the current cooling plateau, and laments the current inability to properly measure the processes involved. Here is an extract which is not contradicted within that document:
    “…Perhaps all of these things are going on? But surely we have an adequate system to track whether this is the case or not, do we not?
    Well, it seems that the answer is no, we do not…”

    Although more specific in nature than in point 2) above, it is a further admission that the level of understanding of the complex energy balances is poor. (as is the current ability to measure them)

    • Bob: care to explain why any whistleblower would try to hack realclimate? Why any whistleblower would post a message linking to the file uploaded to realclimate on climateaudit? Can’t? Gee, how surprising. The deniosphere WANTS and NEEDS it to be a whistleblower, since it would otherwise have to admit to using criminally obtained material. Its content is already so weak as material for crying fraud (although that does not stop deniers from doing so anyway) that you can’t afford yet another problematic factor shifting focus away from the attempts to question climate science.

  67. Patrick 027, Reur 1:25 pm above;
    I don’t know if Blous79 can sensibly follow what you wrote to him, but me; no!
    Is that what you intended to say?
    Could you please run that by us again?

  68. I found a paper attempting to account for human thermal pollution in the global energy budget. Sadly, it doesn’t even bother to quote Nordell in the powerpoint version I saw, so it therefore perpetuates the junk science that is ΔTs = λRF, where λ is the climate sensitivity parameter.

    For an example of how physics might deal with temperature and energy, see

    The complicated version of ΔQ = cp * rho * ΔT accounts for heat diffusion in substances of different density and thermal conductivity integrated over time.

    Given a rotating earth, it makes not much sense at all to even consider atmospheric effects until one has assumed:
    a. atmospheric effects are at equilibrium
    b. total solar radiation per annum is constant
    c. cosmic rays per annum are constant
    d. total of thermal pollution plus geothermal heat per annum is constant

    One should then apply the computer model to predicting day/night temperature variability and seasonal temperature variability on land and oceans to prove that the various formulae explaining heat diffusion and ocean currents and so on are valid.

    Having done that, one should then apply the computer model to predicting expected changes when:
    a. atmospheric effects are at equilibrium and have no effect
    b. total solar radiation per annum varies
    c. cosmic rays vary
    d. total thermal pollution varies and geothermal heat generation varies

    Having done that, one should consider the possibility of other effects needed to explain temperature variability.

    The ruling climate nonscience assumes the exact opposite of a sensible scientific approach:
    a. atmospheric effects cause everything
    b. everything else doesn’t count, because we believe in a.

    It’s no more than childish egocentrism or a flat earth mentality, which will never prevail over science.

  69. [the missing link above]
    For an example of how physics might deal with temperature and energy, see
    http://en.wikipedia.org/wiki/Heat_equation

  70. “The ruling climate nonscience assumes the exact opposite of a sensible scientific approach:
    a. atmospheric effects cause everything
    b. everything else doesn’t count, because we believe in a.”

    NO. We have a theory of the atmosphere causing and affecting things to the extent and in the way that basic physics applied to atmospheric and other conditions predicts, combined with observations that are in agreement.

    This whole bit of writing off the climate sensitivity parameter as junk is goofy. It is not the case that a particular value for this parameter is simply assumed – rather, it is estimated based on models (causal physical relationships), paleodata, etc. And it might not be constant; to assume it is constant is an approximation, but often a good understanding is gained by starting with ‘first approximations’, then moving to ‘second approximations’, etc.

    Would you try to argue that there is no radiative forcing, no radiative blackbody (Planck) response to temperature, no radiative feedback response to temperature? We know these things exist. Ergo, there is such a thing as climate sensitivity.

  71. The physics of radiative greenhouse theory is disproven by Gerlich and Tscheuschner. There is nothing wrong with a concept of “climate sensitivity”, excepting that its value could be positive or negative depending on the positive or negative feedback mechanisms in operation, which we don’t know enough about yet.

    The current estimation of climate sensitivity parameter by observation and computer model is founded in the global energy budget which is founded in the computer model and observation. QED.

    To quote Trenberth’s latest attempt at the Global Radiative Energy Budget:

    Click to access 10.1175_2008BAMS2634.1.pdf

    “There is a TOA imbalance of 6.4 W m-2 from CERES data and this is outside of the realm of
    current estimates of global imbalances (Willis et al. 2004; Hansen et al. 2005; Huang 2006) that are expected from observed increases in carbon dioxide and other greenhouse gases in the atmosphere. The TOA energy imbalance can probably be most accurately determined from climate models and is estimated to be 0.85±0.15 W m-2 by Hansen et al. (2005) and is supported by estimated recent changes in ocean heat content (Willis et al. 2004; Hansen et al. 2005). A comprehensive error analysis of the CERES mean budget (Wielicki et al. 2006) is used in Fasullo and Trenberth (2008a) to guide adjustments of the CERES TOA fluxes so as to match the estimated global imbalance. CERES data are from the Surface Radiation Budget (Edition 2D rev 1) (SRBAVG) data product. An upper error bound on the longwave adjustment is 1.5 W m-2 and OLR was therefore increased uniformly by this amount in constructing a “best-estimate”. We also apply a uniform scaling to albedo such that the global mean increases from 0.286 to 0.298 rather than scaling ASR directly, as per Trenberth (1997), to address the remaining error. Thus the net TOA imbalance is reduced to an acceptable but imposed 0.9 W m-2 (about 0.5 PW). Even with this increase, the global mean albedo is significantly smaller than for KT97 based on ERBE(0.298 vs 0.313).”

    Response– G&T is utter garbage. See Arthur Smith’s rebuttal paper for instance, and I am helping put another one in the works for their IJMP publication. It’s pseudo-science to the extreme, and your support of it is clearly revealing about your intent and/or understanding of the issues. Furthermore, the value for the sensitivity parameter “lambda” cannot be negative unless a positive RF causes global cooling, which is unphysical. You might be thinking that the value is small or large depending on whether the feedbacks are net positive or negative, and I hate to crash your parade, but we know plenty enough to say that they are positive– chris

  72. The net positive RF is a number cooked up in the computer model which ignores increasing thermal pollution.

    Clearly, negative feedback control systems don’t exist for some people.

    Arthur Smiths rebuttal of Gerlich and Tscheuschner is refuted here:
    “Comments on the “Proof of the atmospheric greenhouse effect” by
    Arthur P. Smith
    Gerhard Kramm, Ralph Dlugi, and Michael Zelger

    Click to access 0904.2767.pdf

  73. Marco, Reur reply post to me above:
    You wrote in part:
    Bob: care to explain why any whistleblower would try to hack realclimate? Why any whistleblower would post a message linking to the file uploaded to realclimate on climateaudit?

    First of all, let me remind you of part of what I wrote above. Please read it carefully:

    Kevin Trenberth’s indignation at “theft” of the Emails is predictable, as it is with the UEA and the IPCC etc. However, the charge that they may have been “stolen” should not detract from the evidence that they contain. (as for example with the recent British MP’s scandalous expense claims). [wherein some MP’s resigned]

    Well, I, like you, can only speculate, (and, BTW, I’m not sure of your alleged facts), but IF it was a whistle blower, I can conceive that if s/he was pissed-off with stuff at CRU, then s/he would probably also be pissed-off with RC, particularly with the top guys there like Mann and Schmidt, who feature strongly in the Email exchanges and in CC’s. Thus, such a suggested whistleblower may have wanted to tighten the loop revealed in that correspondence.

    It should also be noted that if a hacker broke into CRU/UEA, without leaving any trace, then that would involve considerable skill. So why would such a clever individual fail in what you say was an attempted hack of RC, a site that I suspect had much lesser security etc?

    The rest of your post was a rant of the nature that I don’t respond to

    • What a load of crap (sorry, there’s no better word for it). Gavin Schmidt, a computer expert, is the one who takes care of the safety of realclimate. Still, it WAS hacked, but rapidly discovered.

      And Schmidt is in exactly 6 of the e-mails, all innocuous (in fact, in one he even calls for handing over data).

      The rest is not a rant, but a simple observation: the content of the mails can only be made into something by deliberate misinterpretation. Look, for example, at McIntyre deliberately leaving out a few sentences so his pre-determined conclusion (i.e., a claim he made previously) could be upheld. Enter those sentences, and suddenly his claim went out of the window.

  74. Marco wrote (31 December, 3:41 am)

    The deniosphere WANTS and NEEDS it to be a whistleblower.

    This is no more true than:

    The (pro-AGW) deniosphere WANTS and NEEDS it to be a hacker.

    But, as Bob_FJ has pointed out, it really does not matter HOW the e-mails were leaked; more important is their CONTENT.

    Max

    • And those contents are by-and-large completely innocuous, and require deliberate misinterpretation to make it into something that could be considered somewhat questionable.

  75. The updated Kiehl + Trenberth “cartoon” shows global energy flows, with a “net absorbed” energy of 0.9 W/m^2 (or 0.26% of the total energy budget), with all values averaged globally and annually.

    Presumably this theoretical “net absorbed” energy would result in an increase in the globally and annually averaged atmospheric, land and sea temperature.

    According to a recent study by Lindzen and Choi, the total flow of outgoing LW and SW radiation, as observed by ERBE satellites, has increased with higher surface temperature, indicating that the system attempts to restore equilibrium.

    Click to access Lindzen-and-Choi-GRL-2009.pdf

    This seems to imply that the K+T “cartoon” is really a static depiction of a dynamic process: as the surface temperature rises, total outgoing radiation increases, so the system would evolve toward a new static snapshot with a net absorbed energy of zero.

    If this is the case, as it appears, it might be best to show this dynamic behavior, rather than simply showing a static snapshot.

    Max

  76. Bob_FJ

    I have noticed that you have commented on the K+T “cartoon” frequently here and on other sites.

    Do you have any comments to my post on the dynamic rather than static nature of the earth energy balance, as pointed out by the recent Lindzen and Choi observations?

    Max

  77. Marco

    Thanks for your personal opinion on the meaning and impact of the leaked emails. Others appear to have different opinions on this, some even suggesting that they have demonstrated collusion among a relatively small group of very influential climate scientists to fudge the data, block the publication of any dissenting opinion and destroy or withhold information to FOI requests.

    Max

  78. Bob_FJ

    Another question on the K+T cartoon.

    Let’s assume that this is a dynamic process, with temperature rising as GHG concentrations rise (with all other factors being equal).

    Let’s ignore for now the findings of Lindzen and Choi that total outgoing radiation increases with higher temperature, leading to a dampening effect (or total net negative feedback).

    Let’s ignore all other anthropogenic forcing factors other than CO2, for simplicity’s sake (IPCC has told us that these have essentially cancelled one another out over the period from 1750 to 2005).

    Let’s ignore all natural forcing factors (IPCC have told us that these were essentially insignificant over the same long-term period).

    Let’s say today’s atmospheric CO2 concentration is 380 ppmv and that this is increasing annually by 2.5 ppmv.

    We then have an annual change of 382.5/380 = 1.0066

    The logarithm (ln) of this ratio is 0.00656

    GH theory (IPCC: Myhre et al.) tells us that the CO2 climate forcing from this increase equals 5.35 times the ln(ratio) = 0.035 W/m^2

    Even if we escalate this by a factor of 3.2 to account for net positive feedbacks, as estimated by the IPCC model simulations, we arrive at 0.112 W/m^2
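
    For anyone who wants to check that arithmetic, a short Python version using the Myhre et al. simplified expression RF = 5.35 * ln(C/C0):

        import math

        C0, C = 380.0, 382.5          # ppmv CO2, before and after one year
        RF = 5.35 * math.log(C / C0)  # W/m^2, Myhre et al. expression
        print(RF)                     # ~0.035 W/m^2
        print(3.2 * RF)               # ~0.112 W/m^2 with the assumed x3.2 feedback factor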

    How do K+T arrive at 0.9 W/m^2 or eight times this value?

    I know that this is the very small difference between some very large numbers, so that I would expect it to be an extremely rough number, but this seems like too much of a discrepancy.

    Any thoughts?

    Max

  79. manacker
    “Even if we escalate this by a factor of 3.2 to account for net positive feedbacks, as estimated by the IPCC model simulations, we arrive at 0.112 W/m^2

    How do K+T arrive at 0.9 W/m^2 or eight times this value?”

    That is in the same ballpark as Nordell’s figures for heat generated by thermal pollution which he says explains 55-75% of warming.

    I believe the answer to your question lies in Trenberth’s latest Global Radiative Energy Budget:

    Click to access 10.1175_2008BAMS2634.1.pdf

    I believe the number comes from the Hansen computer model. The magnitude of the number depends on what exact value the model has for the “climate sensitivity parameter”. Wikipedia has some estimates ranging from 0.3 to 2.0. Some people think lower but William Connolley didn’t like those sources.

    The formula to convert radiative flux (RF) to surface temperature change (ΔTs) is: ΔTs = λRF, where λ is the climate sensitivity parameter.

    So if one knows the surface temperature change and climate sensitivity parameter, the value for radiative flux can be solved. This is used to balance the out of balance global energy budget.

    • Blous79

      Thanks for reply and link to Trenberth et al. paper (which will be published soon).

      This paper tells us:

      “The Clouds and the Earth’s Radiant Energy System (CERES) measurements from March 2000 to May 2004 are used at TOA but adjusted to an estimated imbalance from the enhanced greenhouse effect of 0.9 W m-2.”

      This tells me that the 0.9 W/m^2 figure is an “estimated” number, which has then been used to “adjust” the CERES measurements.

      The 0.9 figure is apparently based on climate model simulations by Hansen et al. (as you indicate), with an “upward error adjustment” of 1.5 W/m^2 made to the LW radiation.

      The ERBE observations of Lindzen and Choi, which show a net increase in outgoing LW + SW radiative flux with warming, are not mentioned.

      Reading this report in more detail simply pinpoints how fraught with spurious errors, rather arbitrary adjustments and uncertainties the entire energy balance estimates are.

      The 0.9 W/m^2 estimate for net absorbed portion remains an unsubstantiated “plug number” as far as I am concerned, until someone can substantiate it based on actual physical observations, rather than simply computer model simulations.

      Max

  80. manacker

    “How do K+T arrive at 0.9 W/m^2 or eight times this value?”

    The climate doesn’t fully equilibrate over one year. The 0.9 W/m^2 is the difference between the change in forcing and the radiative feedback (including the Planck response, see below); it takes time for 0.9 W/m2 to increase the surface+tropospheric temperatures because of the heat capacity (mainly the ocean’s – and over a short time period only a portion of that heat capacity participates in the response). Continually adding radiative forcing continually adds to the imbalance, while continual temperature increase tends to reduce the imbalance over many years until it approaches zero.
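
    To make the time-scale point concrete, here is a minimal one-box energy balance sketch in Python; the parameters are illustrative only (a heat capacity roughly like an ocean mixed layer, and a net radiative response of 1.2 W/m2 per K of warming):

        # One-box energy balance model: C * dT/dt = F - b * T
        C = 4e8       # J/(m2 K), effective heat capacity (illustrative)
        b = 1.2       # W/(m2 K), net radiative response per K of warming
        F = 3.7       # W/m2, step forcing (2xCO2-like)
        dt = 86400.0  # s, one-day time step

        T = 0.0
        for day in range(50 * 365):  # integrate 50 years
            imbalance = F - b * T    # W/m2 still unbalanced at TOA
            T += imbalance * dt / C
        print(T, imbalance)  # T approaches F/b ~ 3 K; imbalance decays toward 0

    The imbalance at any moment reflects the accumulated forcing the temperature has not yet caught up with, not just the forcing added in the last year.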

    “According to a recent study by Lindzen and Choi, the total flow of outgoing LW and SW radiation, as observed by ERBE satellites, has increased with higher surface temperature, indicating that the system attempts to restore equilibrium.
    http://www.drroyspencer.com/Lindzen-and-Choi-GRL-2009.pdf

    Have yet to look at the reference, and I might never get to it for time constraints, but my first reaction is:

    Well duh.

    Of course the system attempts to restore equilibrium. This is why we get warming in response to positive radiative forcing.

    Re Blous79

    “The net positive RF is a number cooked up in the computer model which ignores increasing thermal pollution.”

    If by ‘cooked up’ you mean accurately calculated, then yes.

    If by ignores thermal pollution, you mean sets aside something too small to bother with on the regional to global scale, then yes. (Total human primary energy consumption ~ 12 TW or so (roughly 3 times the energy dissipated in the tides, roughly 1/3 the geothermal energy seeping out of the planet); divided by the Earth’s surface area of ~ 5.1 * 10^14 m2, this is roughly 0.024 W/m2. Some forcing that is!)
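
    The arithmetic behind that 0.024 W/m2, for the record:

        primary_power = 12e12  # W, rough human primary energy consumption
        earth_area = 5.1e14    # m2, Earth's surface area
        print(primary_power / earth_area)  # ~0.024 W/m2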

    “Clearly, negative feedback control systems don’t exist for some people.”

    Oh, here we go.

    The thing is, in the language of control systems (in so far as I can tell), most climatologists would agree that the net feedback is negative. If the net feedback were not negative, then the climate sensitivity would not be 3 K/(doubling CO2 or equivalent radiative forcing); instead, it would be INFINITY.

    What ‘we’ mean when we say that (the evidence indicates) a net positive feedback, is that all the feedbacks BESIDES the ‘Planck response’ – that is, the increase in outgoing LW radiative flux due to the way blackbody radiant intensity depends on temperature – amount to a net positive feedback. For a stable climate, that net positive feedback still has to be smaller in magnitude than the Planck response negative feedback.

    “Arthur Smiths rebuttal of Gerlich and Tscheuschner is refuted here:
    “Comments on the “Proof of the atmospheric greenhouse effect” by
    Arthur P. Smith
    Gerhard Kramm, Ralph Dlugi, and Michael Zelger
    http://arxiv.org/ftp/arxiv/papers/0904/0904.2767.pdf

    I’m not even going to bother. (G&T is so obviously junk, you’d have to pull physics out by the roots (including the second law of thermodynamics) and smash it apart to corroborate it.) Don’t have the time. (Shouldn’t I have an open mind? Yes, but, as the saying goes, not so open as to let my brain fall out and go splat all over the floor.)

    The maximum temperature the Earth could have without a greenhouse effect (LW opacity in the atmosphere) but with albedo kept the same, and with a perfect blackbody (in the LW spectrum) surface, is about 255 K. Now, it is true the surface does have some small nonzero LW albedo – I think it might be around 5 % – well, it depends a bit on the wavelength distribution, but assuming a greybody within the LW portion of the spectrum, that amounts to a temperature increase of about 1.25 %, or about 3 K. Meanwhile, I’ve estimated the variation of temperature over the surface of the Earth and over the year and over the diurnal cycle would result in a cooling of about 1 K due to the nonlinear dependence of blackbody radiant flux on temperature (okay, actually, the same absolute variation about an average 255 K instead of 288 K would have an effect a bit larger, but I don’t think by much…). So you could take the estimated 33 K total greenhouse effect and take maybe 2 K off the top; you’ve still got about 31 K, give or take, for which the greenhouse effect is required.
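
    For reference, the 255 K figure and the small greybody correction follow directly from the Stefan-Boltzmann law; a quick Python check (assumed inputs: solar constant 1361 W/m2, albedo 0.3, LW emissivity 0.95):

        sigma = 5.67e-8          # W/(m2 K^4), Stefan-Boltzmann constant
        S, albedo = 1361.0, 0.3  # W/m2 solar constant; planetary albedo
        absorbed = S * (1.0 - albedo) / 4.0         # ~238 W/m2
        T_eff = (absorbed / sigma)**0.25            # ~255 K, blackbody surface
        T_grey = (absorbed / (0.95 * sigma))**0.25  # with ~5% LW albedo
        print(T_eff, T_grey - T_eff)                # greybody correction ~3 K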

    Besides which, the physics of the greenhouse effect makes sense.

  81. Patrick 027 said “The climate doesn’t fully equilibrate over one year.”

    From a radiation perspective, I find a one year delay in response a bit long. Every day, land and water on earth heat up and cool down. Heating and cooling by radiation/conduction/convection effects have a time lag of hours. The magnitude of effect is shown in the Trenberth graphic. Beyond that one could argue “equilibration” happens every day or never at all.

    The fact that there is a misnamed “atmospheric greenhouse” that makes the earth warm has nothing to do with the falsified radiative effects of trace amounts of CO2. Water vapour has the major effect, and this is well known, as are its mechanisms, even though the complexity of the response cannot be modelled yet.

    Feedback is a secondary phenomenon which controls the effect. In the case of warming in the sun, the clearing of clouds noted by Lindzen and Choi permits warming to be reduced. The end result of any negative feedback control system is a stable state with a degree of tolerance outside of which the system returns towards the mean state as determined by the set point. Any positive feedback system produces an inherently unstable response.

    Response– How do you support the Lindzen and Choi results and simultaneously accept G&T’s “falsified greenhouse”? Do you have a coherent thought beyond “human-induced climate change is bogus”? — chris

  82. Patrick 027, Reur Dec. 30, 09 @ 1:25 pm
    I’m trying to catch-up during my summer holiday season in Oz:
    You wrote in part :

    Actually, I don’t know offhand how they figured it out, but
    1. A lot of computer models, and much else, boils down to math. Do you believe that 1 + 1 = 2, or do you think it might be 1 + 1 = 2.01334523 ?
    2. If you see two people standing next to each other, you might be able to accurately gauge how much taller one is than the other even if you don’t know how tall either one is.

    You [Patrick] did not answer my request that you clarify what you were trying to convey, but anyway, I agree with your 1) that computers (etc) do accurately predict that 1 + 1 = 2, but so what?
    Computer models depend on many parameter inputs, and when those inputs are of poor understanding, as has been openly declared by the IPCC etc, then the outputs cannot be sensibly accepted as a substitute for ABSENT empirical data.
    As for your 2), I have no idea how to respond….. What do you mean?

    BTW; Gavin Schmidt wrote this over at RC; in part, not long ago, my emphasis added:

    “Alert readers will have noticed the fewer-than-normal postings over the last couple of weeks. This is related mostly to pressures associated with real work (remember that we do have day jobs). In my case, it is because of the preparations for the next IPCC assessment and the need for our group to have a functioning and reasonably realistic climate model with which to start the new round of simulations. These all need to be up and running very quickly if we are going to make the early 2010 deadlines…”

    May I also refer you to part 1 and 2 of a commentary on a talkfest on improved climate modelling in Reading (England)
    http://ccgi.newbery1.plus.com/blog/?p=85


    So, if the earlier models are sufficient, in the absence of supporting empirical data, to make massive policy and financial decisions etc, why is it recently necessary to “improve” those models?
    Is that a complicated question?
    Feel free to Google on what I’ve written!

  83. G&T falsified the radiative greenhouse effect. The effect of an atmosphere is more complex than trace gases capturing radiated IR, which is why the “greenhouse” name is confusing. A real-life “greenhouse” on earth is a block to convection, not IR radiation. Not a lot different from heating a hot tin shed really.

    Clouds are actually massive enough to block or reflect light waves, which has nothing to do with absorption and re-radiation.

  84. Marco, Reur January 2, 2010 @ 2:07 pm
    You wrote in part:

    Gavin Schmidt, a computer expert, is the one who takes care of the safety of realclimate.

    Oh really? I’m aware that he is a qualified mathematician and that he does climate computer modelling stuff, (you know; complex iterations based on various assumptions annat), but what has that to do with computer skills and RC site security?
    Here is an extract from the RC website description:

    We use WordPress blogging software and are hosted at webfaction.com. Site design is by Gavin, with some elements inspired by the setup at Cosmic Variance.

    I guess that ‘Wordpress’ may incorporate some site security measures, although neither it nor any of the other references mention such in their blurbs.
    What makes you think that the UEA/CRU site has inferior security to that of RC?

    You can speculate as much as you like that the Emails were hacked rather than leaked, but it is what SOME of them contain that it is important to analyse, regardless of how they were “stolen”.

    I’ve gotta go now, will say more later.

  85. Patrick

    To my question:

    “How do K+T arrive at 0.9 W/m^2 or eight times this value?”

    You replied:

    The climate doesn’t fully equilibrate over one year. The 0.9 W/m^2 is the difference between the change in forcing and the radiative feedback (including Planck response, see below); it takes time for 0.9 W/m2 to increase the surface+tropospheric temperatures because of the heat capacity (mainly of the ocean – due to the short time period, only a portion of which is acting on the climate response). Continually adding radiative forcing continually adds to the imbalance, while continual temperature increase tends to reduce the imbalance over many years until it approaches zero.

    Sorry, you missed the point here, Patrick.

    I did not ask about reaching equilibrium.

    I simply stated that the radiative forcing from the annual increase in atmospheric CO2 concentration equates to 0.11 W/m^2, not 0.9 W/m^2, as shown on the cartoon, even if one assumes that the 2xCO2 climate sensitivity equals 3.2°C, or roughly 3 times the value for CO2 alone, as a result of assumed strongly positive net feedbacks.

    Reaching equilibrium is a second question.

    Lindzen and Choi show that the total outgoing LW + SW radiation increases with surface temperature, resulting in a net negative feedback, rather than the net positive feedback, as assumed above.

    The 2xCO2 climate sensitivity with no feedbacks = around 1°C; this equates with an equilibrium radiative forcing of 5.35 times ln(2) = 0.035 W/m^2.

    The L+C observations tell us that the 2xCO2 climate sensitivity including the net negative feedback will be around 0.5° to 0.7°C, which would equate with an equilibrium radiative forcing of 0.018 to 0.028 W/m^2.

    This is even further from the 0.9 W/m^2 shown on the diagram.

    This is the difference I questioned, not the time it takes to reach equilibrium.

    To repeat the question again:

    (a) Where does the 0.9 W/m^2 figure originate, (b) how was it calculated and (c) how does it compare with the theoretical equilibrium radiative forcing resulting from the increase in atmospheric CO2 concentration (in this case simplified to exclude other anthropogenic forcings, which IPCC tells us cancel one another out)?

    I realize that the 0.9 W/m^2 figure is by definition a very small difference between some very large, fairly roughly estimated figures and, as such, is only an extremely rough approximation.

    Maybe you or Bob_FJ can answer this question.

    Max

    PS My earlier point was that the cartoon shows a static snapshot of a dynamic process, and would be more meaningful if it showed how the process really works dynamically, taking into account all we know about feedbacks, etc., but that is another topic.

    Response– K&T aren’t showing a RF from CO2 for one year’s period of time– chris

  86. Sorry, Patrick, part of the equation was left out.

    The 2xCO2 radiative forcing equals 5.35 times ln(2) = 3.7 W/m^2
    The equilibrium radiative forcing of adding 2.5 ppmv CO2 equals:
    ln (382.5/380) = 0.00656

    5.35 * .00656 = 0.035 W/m^2

    Rest is OK

  87. Patrick027 and Bob_FJ

    The “annual energy budget” cartoon shows a

    “net absorbed” part of 0.9 W/m2 due to the enhanced greenhouse effect.

    As I pointed out, the annual increase in CO2 does not support such a high value, but rather a value of somewhere between 0.025 and 0.11 W/m^2, depending on how one estimates the net total feedbacks. With no feedback the value for CO2 increase alone equals 0.035 W/m^2.

    Since IPCC tells us (a) that all other anthropogenic forcings except CO2 essentially cancel one another out and (b) that natural forcing factors are insignificant, we can use changes in atmospheric CO2 alone to make a rough approximation.

    To arrive at 0.9 W/m^2, one would need to have a calculated increase in atmospheric CO2 of 1.132, which means from 336 ppmv (1979 value) to 380 ppmv (today’s value).

    So the 0.9 W/m^2 could be construed to be a 30-year value rather than an “annual energy budget” value.

    But in fact, it looks more like an unsubstantiated “plug number” to me, unless someone can explain its rationale and how it was calculated.

    Max

  88. Chris

    You wrote:

    Response– K&T aren’t showing a RF from CO2 for one year’s period of time– chris

    What are they showing then?

    Max

    Response– It is the net TOA imbalance (note the consistency with Hansen et al 2005 and measures of ocean heat content anomalies). I’m not sure why you choose to compare this to the radiative forcing for a one year incremental change in CO2, it makes no sense– chris

  89. For a moment, folks, let’s forget G+T and the whole basis for the GH theory; let’s also forget the time required to reach “equilibrium”.

    Let’s just concentrate on the revised K+T annual energy balance cartoon at the top.

    This figure shows a net absorbed portion of 0.9 W/m^2.

    This is the most significant number on the cartoon, since it tells us how much GH warming we can expect, yet it is the most suspect number, as well.

    It bears no relationship with the theoretical GH warming to be expected at equilibrium from annual changes in atmospheric CO2 (~ all GHGs); in fact it is 8 to 30 times this value, depending on how one assumes the net overall feedback from water (as vapor, liquid in low altitude clouds or ice crystals in high altitude clouds), lapse rate, surface albedo, etc. will impact the theoretical GH warming alone.

    K&T say it is based on model simulations by Hansen et al., and that this estimate has been used in their study to adjust CERES values, but this gives me no confidence at all that it is a realistic number. It rather just confirms my suspicion that it is a “plug number”.

    If anyone has an answer to my earlier questions on this number, I would be very interested.

    Otherwise I will be left with the suspicion that it is a meaningless number, and that the entire “annual energy balance” cartoon is also meaningless.

    Thanks in advance to anyone who can provide the answer.

    Max

  90. Chris

    Thanks for your comment.

    You wrote

    Response– It is the net TOA imbalance (note the consistency with Hansen et al 2005 and measures of ocean heat content anomalies). I’m not sure why you choose to compare this to the radiative forcing for a one year incremental change in CO2, it makes no sense– chris

    You are not answering my question, Chris.

    The “net absorbed” imbalance of 0.9 W/m^2 in the annual energy balance has to come from somewhere, right?

    Presumably it comes from the net GH forcing of GHGs; IPCC tells us that this is roughly equal to the forcing from CO2 alone, since the other anthropogenic forcings cancel one another out, and the natural forcings are negligible.

    So how can it be 8 to 30 times higher than the forcing one would expect from the annual increase in CO2?

    Simply saying it is the TOA imbalance and that it is consistent with Hansen et al. (2005) does not answer the question. Ocean heat content anomalies appear to be a moving target, but have nothing to do with my question in any case, as they have to be caused by something.

    If you do not know the answer to the question, that’s OK. I don’t either, and I’m just looking for someone who might know it.

    Max

    Response– Well I did. The imbalance does not occur over one year, and you also need to incorporate feedbacks into the picture (water vapor, clouds, etc). Like I said, the RF from CO2 over one year is not a useful number– chris

  91. Chris

    Not to belabor a point, but you did not answer my question regarding the 0.9 W/m^2 net absorbed portion in the K+T annual energy balance cartoon.

    Even if all feedbacks are considered (as the model simulations cited by IPCC estimate), we still have an annual RF from the increased GHE of one eighth of this number, or around 0.11 W/m^2.

    Where is the rest of the imbalance coming from, if not from the GHE?

    Are natural variabilities (or natural forcings not considered by IPCC) factored in?

    If you don’t know, that is OK. Maybe someone else here can answer the question.

    Max

  92. Chris and Blous79

    I checked the Hansen et al. reference cited in the K+T report
    http://www.sciencemag.org/cgi/content/abstract/308/5727/1431

    Our climate model, driven mainly by increasing human-made greenhouse gases and aerosols, among other forcings, calculates that Earth is now absorbing 0.85 ± 0.15 watts per square meter more energy from the Sun than it is emitting to space. This imbalance is confirmed by precise measurements of increasing ocean heat content over the past 10 years.

    This does not tell me much.

    The “precise measurements of increasing ocean heat content over the past 10 years”, which allegedly “confirms” the “imbalance” is a bit of a joke. The ocean has been cooling since (more precise but still very spotty) Argo measurements were put into service in 2003, leading team leader, Josh Willis to express “surprise” and then “adjust” the observations to fit what he expected them to be.

    But more importantly, the GH theory does not support a net imbalance of 0.85 (or 0.9) W/m^2 in the annual energy balance, but rather an imbalance of between 0.03 and 0.11 W/m^2, depending on feedback assumptions.

    Is energy being “created” out of nothing (besides model simulations)?

    Where is the postulated 0.9 W/m^2 coming from?

    Still looking for answers.

    Max

    Response– No, you’re not even bothering to read what you are criticizing, much like the “cycles” you are looking for which Tamino and gavin had to discuss over and over. Please re-read the K&T document (a full PDF version is online at Kevin Trenberth’s page). Actually read the literature on ocean heat content and adjustments instead of just assuming (incorrectly) that it is all adjusted to fit with what’s expected. Otherwise, you’re just asking people to do homework for you and then telling them it’s all wrong when they give it to you– chris

  93. Chris

    Sorry.

    I have thoroughly read both the K+T paper and the cited Hansen et al. 2005 paper, from where K+T got the 0.9 W/m^2 imbalance, which is what we are discussing here. K+T accept this number at face value, i.e. it is a “plug number”.

    In going through Hansen’s arithmetic, I can see that (depending on assumptions made) one can arrive at any “imbalance” number between around 0.2 and 0.6 W/m^2. It all depends on how much of the observed warming one assumes is “still hidden in the pipeline” and whether or not one assumes that the many solar studies are right, which show that around half of the observed warming can be attributed to the unusually high level of 20th century solar activity. If one assumes nothing still hidden in the pipeline, one arrives at a very low “imbalance”, as I indicated earlier.

    But, Chris, it is all very dicey, with a lot of assumptions piled on top of one another.

    I have read the studies on ocean heat content, which showed that the Argo readings did not correlate well with the crude earlier readings or with the theory that the ocean should be warming. I read (as I am sure you also did) that team leader, Josh Willis, was surprised by these Argo results, and that they had been “adjusted” later to fit the theory a bit better.

    The 0.9 W/m^2 “net imbalance” has not been confirmed by any empirical data derived from actual physical observations, Chris. It is a virtual number derived from computer model simulations.

    Max

  94. Manacker –

    just to clarify my response and then Chris’s response –

    The reason the time to reach equilibrium is important is that, if it were ~ 1 year, then the net imbalance would go to zero in that time, and the net imbalance seen would then be due to the change in forcing over the last ~ 1 year.

    Instead, the climate takes time to reach equilibrium, and so the net imbalance is due to a forcing change over a longer period of time.

    This is why using the RF change over 1 year doesn’t give you the answer.

  95. Max and Blous79
    I need to read through some of the papers mentioned above before I can comment much. I thought the rebuttal of A. P. Smith’s paper (unread by me) by the German Atmospheric Process Working Group and the University of Alaska Fairbanks was interesting at a quick read, though I have not worked through all the time-consuming maths.

    However, in brief, concerning only the 0.9 W/m^2 in the 2009 revised Energy Balance thingy, I have strong doubts about the accuracy of some of the related numbers.

    1) 341.3 W/m^2 total incoming shortwave should be OK, but;
    2) 101.9 W/m^2 reflected shortwave, during dawn to dusk, expressed as a year global average…. How accurate is that?
    3) 238.5 W/m^2 outgoing long-wave, day and night, expressed as a year global average…. How accurate is that?
    4) 0.9 W/m^2 net absorbed = 1) minus 2) minus 3) and it represents 0.26% of the total budget. Consequently any small error in the numbers for 2) and 3) would throw 0.9 W/m^2 awry.
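    A minimal Python sketch of point 4) (the ±1% uncertainties are illustrative assumptions for the sake of the argument, not published error bars):

    ```python
    import math

    # The 2009 budget terms quoted above (W/m^2).
    incoming, reflected, outgoing = 341.3, 101.9, 238.5

    residual = incoming - reflected - outgoing
    print(f"residual: {residual:.1f} W/m^2 "
          f"= {100 * residual / incoming:.2f}% of the incoming flux")

    # Assumed +/-1% uncertainties on the two large outgoing terms,
    # combined in quadrature; the result swamps the 0.9 W/m^2 residual.
    sigma = math.hypot(0.01 * reflected, 0.01 * outgoing)
    print(f"residual uncertainty: ~ +/- {sigma:.1f} W/m^2")
    ```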

    I also consider, stating it very briefly, that global year averages are a fanciful concept. For instance, the albedo of clouds of different species is certain to vary both spatially and temporally. Albedos and thermal properties at the surface and surface temperatures vary considerably both spatially and temporally, and radiation rates vary with the fourth power of T, and so on.

  96. The climate scientists are ignoring the Nordell/Gervet paper because the W/m2 heat calculation ends up small in comparison with the radiative budget numbers. If the Nordell/Gervet calculations are firmly founded in energy/heat/mass/specific heat/etc based on verifiable empirical and experimental data, then it may be that a much smaller W/m2 figure is required to explain the observed global temperature changes. In that case the climate sensitivity parameter may be much smaller than thought.

    I had trouble figuring out where exactly all the Nordell/Gervet numbers came from – more math in the appendix. I have some issues with the Nordell/Gervet assumption that all energy ends up converted to heat, and no idea what proportion of it is stored within manufactured materials/goods (e.g. conversion of aluminum oxide to aluminum, iron ore to steel, etc.). IPCC estimates thermal pollution at 0.03 W/m2. Mackay estimates thermal pollution at around 0.1 W/m2.
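    As a rough cross-check of those two figures, a short Python estimate, assuming (my assumption) world primary energy use of about 15 TW, essentially all of it ultimately degraded to heat:

    ```python
    # Direct anthropogenic heat, spread over the Earth's whole surface.
    primary_energy_W = 15e12    # ~15 TW world primary energy use (assumed)
    earth_area_m2 = 5.1e14      # Earth's total surface area

    print(f"{primary_energy_W / earth_area_m2:.3f} W/m^2")
    # ~0.029 W/m^2, consistent with the IPCC 0.03 W/m^2 figure quoted above
    ```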

    Now if Hansen plugged a much smaller value for climate sensitivity parameter into the computer model, anything between +0.9 and -0.1 W/m2 net forcing from radiation could be real and still be perfectly consistent with Trenberth’s out of balance data.

  97. Max and Blous 79,
    Further to my 2:05 am above concerning the accuracy of the 0.9 W/m^2 = (341.3 – 101.9 – 238.5) W/m^2; Kevin Trenberth himself and Tom Wigley both confirm that the summation at 0.26% of the budget is a fantasy, see:

    An update to Kiehl and Trenberth 1997

  98. Guys, I am gone for a week, so posts will not be moderated over that time. So don’t get scared when they don’t appear.

  99. Blous79, Bob_FJ and Patrick027

    Thanks for your posts on the 0.9 W/m^2 “net imbalance” in the K+T annual energy balance cartoon.

    Chris has told us he will be gone for a week, so my response to all this will be delayed, but I will post it anyway.

    K+T have not calculated the 0.9 W/m^2 figure they show in their annual energy balance cartoon as “net absorbed radiation” or “imbalance”. They have simply accepted Hansen’s figure of 0.85 W/m^2 and rounded it up to 0.9 W/m^2. Then they used this “plug number” to adjust all the other numbers they had from CERES, etc., to make it all balance out. So let’s not “blame” them for this figure, but let’s check out Hansen’s assumptions and calculations in arriving at this number.

    Summarizing Hansen’s determination of the 0.85 (or 0.9) W/m^2 figure: total GH forcing is assumed to be 1.8 W/m^2 (1880-2003) and observed warming was 0.6-0.7 degC. The assumed climate response is 2/3 degC per W/m^2 (equivalent to an assumed 2xCO2 climate sensitivity of 3 degC), so 0.65 degC of warming is the response to ~1 W/m^2. But since the theoretical GH forcing was 1.8 W/m^2, this leaves 0.8 W/m^2 still hidden “in the pipeline”.

    Checking Hansen’s logic, it is “circular”. He starts out with an assumed CO2 climate sensitivity, then calculates how much warming we should have seen 1880-2003 if all warming had been caused by AGW (ignoring all other factors). This calculates out at 1.2 degC. He then ascertains that the actual observed warming was only 0.65 degC. From this he does not conclude that his assumed climate sensitivity is exaggerated, but deduces that the difference of 0.55 degC is still hidden somewhere “in the pipeline”. Using his 2/3 degC per W/m^2, he calculates a net “hidden” forcing of 0.82 W/m^2, which he then rounds up to 0.85 W/m^2 (and K+T round up again to 0.9 W/m^2).

    Checking Hansen’s arithmetic: the theoretical GH forcing from 1880-2003 is 5.35 * ln(378/285) = 1.51 W/m^2 (not 1.8). Using Hansen’s figure of 2/3 degC per W/m^2 puts theoretical warming at 1.0 degC. Observed warming was 0.65 degC, leaving 0.35 degC hidden “in the pipeline”. This equates to an “energy imbalance” of 0.35/0.6667 = 0.53 W/m^2 (not 0.85 or 0.9), all things being equal.
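    The arithmetic of the two preceding paragraphs, as a minimal Python sketch (the CO2 endpoints and the 2/3 degC per W/m^2 response are the assumptions stated in this comment, not independently verified values):

    ```python
    import math

    co2_1880, co2_2003 = 285.0, 378.0                # ppmv, assumed endpoints
    forcing = 5.35 * math.log(co2_2003 / co2_1880)   # simplified CO2 forcing expression
    print(f"GH forcing 1880-2003: {forcing:.2f} W/m^2")   # ~1.51

    sensitivity = 2.0 / 3.0                # degC per W/m^2 (Hansen's value)
    warming_eq = sensitivity * forcing     # ~1.0 degC at equilibrium
    observed = 0.65                        # degC
    pipeline = warming_eq - observed       # ~0.35 degC "in the pipeline"
    print(f"implied imbalance: {pipeline / sensitivity:.2f} W/m^2")  # ~0.5
    ```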

    But all things are not equal. Many solar studies show that 0.35 degC of warming can be attributed to the unusually high level of solar activity over the 20th century (the highest in several thousand years), although the exact mechanism for this empirically observed warming has not yet been determined. Let us assume that this covers the same 1880-2003 period cited by Hansen. Much of it occurred during the early 20th century warming period from around 1910 to around 1944, which cannot be explained by AGW alone. This leaves 0.3 degC of observed non-solar warming (1880-2003). If we assume that one third of the theoretical GH warming over this long period is still hidden “in the pipeline”, we have 0.3 + 0.15 = 0.45 degC equilibrium GH warming 1880-2003, with an “imbalance hidden in the pipeline” of 0.15/0.6667 = 0.22 W/m^2 (instead of 0.85-0.9).

    In addition to the solar studies, there are many observed natural factors that have caused warming. Notable among these are swings in the ENSO, which were partially responsible for many high temperatures in the 1990s, including most notably the all-time record high in 1998. The current cooling after 2000 is being attributed to these natural factors (called “natural variability” by Met Office), despite the fact that all models predicted record warming as a result of record increases in atmospheric CO2 concentration. So it is wrong to simply ignore these natural factors, as Hansen has done, and assume that all warming 1880-2003 was caused by AGW.

    Of course, if we assume that Hansen’s “hidden in the pipeline” hypothesis is wrong, we arrive at an imbalance equal to the GH forcing of the annual change in CO2 concentration or 0.03 to 0.11 W/m^2 (instead of 0.85-0.9), as I pointed out earlier.

    Patrick027 believes in a time delay to reach equilibrium, but cannot provide support for the 0.9 W/m^2 “imbalance” (8 to 30 times the forcing from annual change in GHG).

    Blous79 writes:

    anything between +0.9 and -0.1 W/m2 net forcing from radiation could be real and still be perfectly consistent with Trenberth’s out of balance data.

    Bob_FJ writes:

    0.9 W/m^2 net absorbed = 1) minus 2) minus 3) and it represents 0.26% of the total budget. Consequently any small error in the numbers for 2) and 3) would throw 0.9 W/m^2 awry.

    Kevin Trenberth himself and Tom Wigley both confirm that the summation at 0.26% of the budget is a fantasy

    Looks like my question is answered: the 0.9 W/m^2 shown on the K+T annual energy balance cartoon is a “plug number”, which is poorly substantiated based on “circular logic” and, as such, can be ignored. I would personally believe a more realistic estimate would be between 0.1 and 0.3 W/m^2, but it could well be even lower.

    Thanks for your input.

    Max

  100. Here is an alternative description of the greenhouse falsification which makes no reference to G&T.
    http://greenhouse.geologist-1011.net/

    The most robust estimate of the climate sensitivity parameter appears to me to be Shaviv’s, at 0.35 degC per W/m2, based on historical data.
    http://www.sciencebits.com/OnClimateSensitivity

    Presently, I think the notion that all potential effects on Earth’s temperature can be reduced to a single “climate sensitivity parameter”, equivalent for ALL greenhouse gases, is a somewhat fanciful assumption on the part of the IPCC.

    It seems perfectly reasonable that CO2 absorbs infrared and transmits it via conduction + convection to the atmosphere, causing by itself some warming. (Net backradiation has been debunked as non-physical.)

    On the other hand, water vapour and clouds appear to cause a cooling negative feedback as one would expect for a relatively stable climate system. Lindzen and Choi’s paper describes the scientific basis of this based on observation.

    Click to access Lindzen-and-Choi-GRL-2009.pdf

    I note the correlation of the data analysed by Lindzen and Choi in scatterplots is worse than the correlation of global temperature and US aviation fuel use.

    The satellite temperature data supports cooling. The airport-biased land weather station data, with large numbers of rural weather stations eliminated since 1990, supports warming. It seems fairly likely the land-based temperature records have been manipulated to concoct warming, even without the Climategate evidence.

    What would the estimates of climate sensitivity be if correct temperature records are used? I expect net sensitivity over the long term of any complex but stable homeostatic system with negative feedbacks to be zero.

    • Blous79:
      Did you seriously just use Roy Spencer’s website to refer to Lindzen & Choi?

      Funny.

      Did you also see that Spencer already noted a few problematic issues?
      And TWO papers to be published soon show Lindzen&Choi to be wrong:
      http://www.realclimate.org/index.php/archives/2010/01/first-published-response-to-lindzen-and-choi/
      (The first is the rebuttal to LC09, the second an independent paper looking at satellite data, and showing the models with high climate sensitivity to work well).

      Of course, Shaviv’s analysis is based on a flawed correlation of GCRs and temperature. I’ve got a challenge if you believe him to be right:
      Ask him why the Laschamp excursion saw a MAJOR amount of GCRs hitting the earth, and the earth’s climate responding by…doing essentially nothing.
      With his claimed GCR sensitivity there should have been a huge change in global temperature.

  102. It is to be expected that there will be argument from Trenberth about the Lindzen and Choi paper.

    There is still no doubt the effect of water vapour is much larger than the effect of CO2. Any person on earth can tell that clouds have a substantial effect on surface temperature. There is also no doubt that the climate models are unable to model cloud effects accurately.

    NASA’s TOA data show cooling and we know the surface temperature data is seriously biased.
    http://isccp.giss.nasa.gov/projects/browse_fc.html

    Earthshine data analysis suggests cloud changes in response to warming.
    http://www.nasa.gov/centers/goddard/news/topstory/2004/0528earthshine.html

  103. Marco

    You wrote to Blous of the flaw in the GCR/cloud theory (Shaviv, Svensmark)

    Of course, Shaviv’s analysis is based on a flawed correlation of GCRs and temperature. I’ve got a challenge if you believe him to be right:
    Ask him why the Laschamp excursion saw a MAJOR amount of GCRs hitting the earth, and the earth’s climate responding by…doing essentially nothing.
    With his claimed GCR sensitivity there should have been a huge change in global temperature.

    I’d say we’d all be better off waiting for the results of the CLOUD experiment at CERN before we write off the GCR/cloud theory.

    Citing the Laschamp geomagnetic excursion 40,000 years ago as proof that the theory is false is weak, Marco. Let’s wait for some actual experimental results from today.

    It would be just as wrong to cite the past decade’s cooling despite all-time record CO2 increase as proof that the GH theory is false.

    Both incidents may simply show us that there are other, as yet unknown forcing factors, which may have overshadowed the GCR effect in one case and the GH effect in the other.

    Max

    • Max,

      First of all, the CLOUD experiment won’t prove much. It will, at best, show that GCRs can(!) result in the formation of nuclei that may(!) ultimately, if growth continues, lead to nucleus sizes that result in clouds. It does not and cannot show in any way how important the mechanism is for cloud formation. I’d say that an event 40,000 years ago, with proven high levels of GCRs and a non-responding climate, indicates that the mechanism is likely of very little importance. We’re not talking about the variations that Svensmark needs to invoke to explain the 0.7 degree increase over the last century; we’re talking about variations MANY times larger. You’d have to have a HUGE feedback that suddenly kicked in around the same time. For the current stasis (it’s NOT cooling this last decade, and you know this well, you already tried that lie elsewhere) there need only be minor feedbacks.

      Note also that the Shaviv correlation (and Svensmark correlation) falls completely apart when you look at slightly different data. Shaviv’s correlation is based on a flawed model of our galaxy, Svensmark’s correlation is based on mixing data sets that are known not to be equivalent.

      • Marco

        You say that the CLOUD experiment “will not prove much”.

        Let’s wait and see what it does prove before we write it off.

        It could prove Shaviv and Svensmark (plus the others who agree with the GCR/cloud theory) right.

        It could demonstrate that a significant portion of the observed 20th century warming, which can be empirically linked to the unusually high level of solar activity, but cannot be explained by direct solar irradiance alone, came from the GCR/cloud effect.

        That would be a major breakthrough, giving us important new insight into what causes our planet’s climate to behave as it does.

        So let’s wait for the results. That’s what science is all about, Marco.

        Max

      • Max, you may want to read up on the CLOUD experiment. It cannot show anything you so desperately want it to show. The experiment is explicitly meant to show that GCRs can affect the formation of nuclei that ultimately may(!) grow to the required size to act as nuclei for clouds.

        The experiment cannot show whether the nuclei WILL grow to larger size, it cannot show how much of an effect the GCRs have compared to many other processes that cause cloud formation, and it won’t show whether the clouds provide positive or negative feedback.

        Oh, and solar activity has been slightly declining since the 1980s. And yet, most of the warming we observed was right at that time…

  104. manacker –

    1.a
    “if all warming had been caused by AGW (ignoring all other factors).”

    I’m not in a position to keep up with a lot of the literature, but I do suspect that “all other factors” were not ignored, at least not without justification.

    1b.
    “Checking Hansen’s logic, it is “circular”. He starts out with an assumed CO2 climate sensitivity, then calculates how much warming we should have seen 1880-2003,” … “This calculates out at 1.2 degC. He then ascertains that the actual observed warming was only 0.65 degC. From this he does not conclude that his assumed climate sensitivity is exaggerated, but deduces that the difference of 0.55 degC is still hidden somewhere “in the pipeline”. Using his 2/3 degC per W/m^2, he calculates a net “hidden” forcing of 0.82 W/m^2,”

    That’s not circular logic, because he is not then proving that the climate sensitivity is what he assumed it to be (according to your depiction).

    However, studies of how much heat is accumulating in the oceans, etc., if they agree with the radiative imbalance, would then indirectly support the assumed climate sensitivity. (PS: sharp fluctuations would not be a likely signal for ‘imbalance filled, done warming’, because the approach to equilibrium tends to be an exponential decay of an imbalance. Because additions are continually being made to the imbalance, a linear approximation of the overall behavior suggests a sum of decaying exponentials; thus, a sharp turn without a sharp turn in forcing suggests internal variability and is not the signal that indicates equilibrium has been attained.)

    2.
    “But all things are not equal. Many solar studies show that 0.35 degC warming can be attributed to the unusually high level of solar activity over the 20th century (highest in several thousand years), although the exact mechanism for this empirically observed warming has not yet been determined. Let us assume that this covers the same 1880-2003 period cited by Hansen. ”

    Again, not circular logic, but at least some study (one of the infamous ones – either Scafetta and West or something related to their work, I think) seemed to me to be based on the assumption that all variability with frequencies in a range centered on solar forcing oscillations was in fact due to solar forcing oscillations. Thus it can be seen as placing an upper limit, not a best estimate, on the influence of solar forcing, at least those aspects of solar forcing that correlate with the relevant variables involved in those oscillations.

    The TSI changes just have not been very big, and the theory of non-TSI solar forcing is much less well substantiated than that of TSI forcing, greenhouse gas forcing, etc. (A third possibility is that the efficacy of solar forcing is rather large, perhaps via some effect on the circulation of the upper atmosphere that rearranges things in the troposphere with some cloud feedback … or whatever, but … any math or data to back that up?)

    3.
    “Patrick027 believes in a time delay to reach equilibrium, but cannot provide support for the 0.9 W/m^2 “imbalance” (8 to 30 times the forcing from annual change in GHG).”

    “cannot provide support” – Well, I hadn’t tasked myself with that job. But what I was pointing out is that the 0.9 W/m2 (or whatever value it may be) imbalance is not the change in radiative forcing over a year or other such arbitrary time period. Using a linear approximation, it would be the sum of exponentially decaying terms, each due to a change in radiative forcing at some time and decaying since that time with an e-folding time that is proportional to the heat capacity per unit area of the system and is also proportional to the climate sensitivity (it is proportional to the climate sensitivity because positive radiative feedbacks add to the imbalance even while the Planck response reduces it – see http://www.realclimate.org/index.php/archives/2009/12/unforced-variations/comment-page-24/#comment-152435 ) – note that this is for feedbacks and heat capacity that work smoothly and on timescales that are short compared to the climate response time, and thus would exclude portions of the ocean’s heat capacity, some biogeochemical feedbacks, and potentially some other things…

    (quick summary of heat capacities – numbers might follow soon):
    small:

    air

    land surface (for short to intermediate time periods – over geologic time, diffusion of heat through the crust is important, but that’s too slow to have much effect even on ice age-interglacial variations)

    net change in water vapor (net latent heat per unit temperature change)

    medium?:
    net latent heat of melting ice or reduced snow

    large:
    upper ocean (maybe 70 m average depth (thus about 50 m averaged over globe) for annual time scale, I think)

    larger:
    rest of ocean (full impact of heat capacity realized over 1000+ years)

    with a little rounding:
    50 m * 1000 kg/m3 * 4 kJ/(kg*K) / (30 Ms/year)
    = 200 MJ/(m2 *K) / (30 Ms/year)
    = ~ 6.7 (W/m2)*years/K
    Who doesn’t “believe” in a time delay?
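    A one-box Python sketch of this point (C and lam as above; the 0.04 W/m^2 per year forcing ramp is an illustrative assumption): the imbalance settles at roughly tau times the annual forcing increment, not at the increment itself.

    ```python
    import numpy as np

    C = 6.7           # heat capacity, (W/m^2)*yr/K (upper-ocean figure above)
    lam = 2.0 / 3.0   # climate sensitivity, K per (W/m^2)
    tau = C * lam     # e-folding time of an imbalance
    print(f"e-folding time: {tau:.1f} yr")

    dt = 0.01                       # yr
    t = np.arange(0.0, 120.0, dt)
    F = 0.04 * t                    # assumed linear forcing ramp, W/m^2
    T = np.zeros_like(t)
    for i in range(1, len(t)):      # C dT/dt = F - T/lam
        T[i] = T[i-1] + dt * (F[i-1] - T[i-1] / lam) / C

    print(f"imbalance now: {F[-1] - T[-1] / lam:.2f} W/m^2")   # ~ tau * 0.04
    ```

    With only the fast upper-ocean heat capacity, the settled imbalance is ~4-5 times the forcing added per year; folding in the deep ocean stretches tau to decades and raises that ratio accordingly, which is the sense in which a ~0.9 W/m^2 imbalance can exceed one year’s forcing increment without any energy being “created”.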

  105. Sorry, Patrick.

    The 0.9 W/m^2 “imbalance” is a “plug number”, as I have shown.

    You have not brought any empirical support for this number (nor has Hansen et al., as I pointed out).

    I believe it is time to move on to something else, since no one here has brought any hard data in support of this “plug number”.

    Max

  106. Blous79

    “It seems perfectly reasonable that CO2 absorbs infrared and transmits it via conduction + convection to the atmosphere, causing by itself some warming. (Net backradiation has been debunked as non-physical.)”

    1.
    CO2, along with all the other mass of the atmosphere, transports heat convectively (where allowed by the physics) and through conduction – in this respect CO2 is not particularly special. Different specific heat values and molecular speeds would alter the thermal conductivity, and different molecular masses and specific heats would alter aspects of convection (the convective lapse rate in particular – which is also affected by gravitational acceleration, something that must be kept in mind when studying the atmospheres, or any other convecting layers, including mantles and cores, of other planets). But that would be in proportion to atmospheric composition, so that a few hundred ppm of ___ tends not to be important in that regard. The most important compositional effect in the atmosphere is variations in water vapor, which can change the density as if the temperature were higher than otherwise; for the purposes of atmospheric dynamics a ‘virtual temperature’ can be assigned to deal with that issue, so long as it is noted that this is not the thermodynamic temperature that changes with heating or cooling, or adiabatically with changes in pressure.

    It should be noted that conduction – aka heat transport by molecular diffusion and collision – as well as mass transports by molecular diffusion (which is involved in latent heat transport via water vapor) – are of very minor influence over the vast majority of mass of the atmosphere, and have significant roles only in the heat and mass fluxes between the surface and the lowest ~ 1 mm or so of air (Wallace and Hobbs p.334-335) and in an extremely thin (in terms of mass) portion of air at the ‘top’ of the atmosphere.

    Although it is also true that small scale convective motions can act as a form of diffusion (eddy diffusion) within larger-scale processes.

    2.
    More important point here is:

    what is ‘net backradiation’? Please do not confuse net radiative fluxes of any sort with gross radiative fluxes, as this will cause a HUGE amount of trouble. The backradiation shown in the diagram above is gross, not net. Gross radiative fluxes go back and forth, and this is allowed by the second law of thermodynamics – if it were not, we would have one branch of physics at war with another, which would be a huge surprise to the scientists who have been working with both subsets of physics and have never had any problem with logical inconsistency, or inconsistency with observations, in so far as modern understanding goes.
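    To make the gross-versus-net distinction concrete, a minimal Python sketch using the diagram’s surface values (treating the surface as a blackbody near 289 K is my simplification):

    ```python
    SIGMA = 5.67e-8            # Stefan-Boltzmann constant, W/(m^2 K^4)

    up = SIGMA * 289.0**4      # gross upward surface emission, ~396 W/m^2
    back = 333.0               # gross downward backradiation (diagram value)
    print(f"gross up: {up:.0f} W/m^2, gross down: {back:.0f} W/m^2")
    print(f"net LW: {up - back:.0f} W/m^2 upward")
    # Only the NET flux (~63 W/m^2, warm surface to cooler atmosphere) is
    # constrained by the second law; both gross fluxes coexist.
    ```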

    The second law of thermodynamics more generally refers to entropy –

  107. Patrick027

    Apparently in defense of the 0.9W/m^2 “plug number” for net imbalance used by K+T and originating with Hansen et al., you present a theoretical treatise and then ask:

    Who doesn’t believe in a time delay?

    You apparently feel that this is a question of “belief”, rather than scientific evidence, based on empirical data derived from physical observations or experimentation.

    Lindzen has explained the circular logic at work here fairly succinctly in:

    Click to access 230_TakingGr.pdf

    The weakness in the logic goes as follows:

    GCMs cannot explain the late 20th century warming without invoking external anthropogenic forcing.

    Several natural factors are not considered adequately by the GCMs. This includes PDO, AMO, ENSO plus solar forcings for which the mechanism is as yet unknown.

    Lindzen points out that anthropogenic warming can be limited to around one-third of the observed surface warming using basic theory, modeling results and observations.

    The observed warming is not alarming; for the projections to be alarming, the anthropogenic greenhouse warming must be larger than what has actually been observed.

    The GCMs can simulate the recent trend in surface temperature, but only by invoking largely unknown properties of aerosols and ocean delay in order to cancel most of the greenhouse warming.

    Alarm, we see, …requires that greenhouse warming actually be larger than what has been observed, that about half of it be cancelled by essentially unknown aerosols, and that the aerosols soon disappear.

    This obviously requires several “leaps of faith”, which are not substantiated in any way by empirical data from physical observations, but rather by pure hypothesis (and theoretical calculation, as you have presented).

    As Lindzen points out:

    Ocean delay is itself proportional to climate sensitivity.

    This is the “circular logic” used by Hansen et al. in arriving at the 0.9 W/m^2 “imbalance”.

    Finally, Lindzen points out that latest results since the more accurate Argo system has been in place do not show warming of the upper ocean, pointing to the suggestion that we do not know much about the heat exchange between the upper and much larger and colder lower ocean, and how the various ocean current oscillations play a role.

    Compounding all this uncertainty is that GCMs have not been able to adequately explain the early 20th century warming (1910-1944), which was as significant as the late 20th century warming, but with a much smaller increase in human GHGs.

    So the logic goes:

    1. Our models cannot explain the early 20th century warming
    2. Our models know that AGW was a principal cause of the late 20th century warming
    3. How do our models know this?
    4. Because they cannot explain it any other way.

    Max

  108. – that entropy cannot decrease in a closed, isolated system (closed often means isolated, but I think in engineering the term closed refers to mass fluxes and thus still allows energy inputs or outputs through system boundaries) – it can remain constant, and may increase.

    For compositional variation, mixing to reduce heterogeneity increases entropy. For a group of molecules in one part of space, random motions characteristic of local thermodynamic equilibrium will result in a tendency to spread out – thus, diffusion and effusion. Warm matter tends to emit photons as allowed by emission cross sections; photons tend to be absorbed by (non-photon) matter as allowed by absorption cross sections; when the non-photon matter is in local thermodynamic equilibrium, the two cross sections are equal (for cross sections facing the same direction, at any one frequency, and any one polarization, etc, when those things are relevant) and thermodynamic equilibrium occurs when the emission and absorption of the same frequencies of photons occurs at the same rate, etc. When there is more of something in one place or state than in another, the tendency for that something to flow outward is increased, in so far as is allowed by kinetics and energetics; the energetics and conditions can shift the equilibrium point, but, when allowed by kinetics, equilibrium is reached when the forward and reverse reactions, fluxes, or changes, etc, occur at the same rate.

    (For water and oil, spontaneous unmixing followed by gravitational separation into layers will occur because of the energetics of the different states – there is more to the entropy of the system than compositional homogeneity. Likewise, unevenness in mass distributions can spontaneously amplify via gravitational collapse, provided that the energy originally present as gravitational potential energy is able to ‘spread out’ into different forms of energy, such as mechanical vibrations, enthalpy, and photons; if there is too much energy present in other forms, the flow into gravitational potential energy will prevent collapse or reverse the process – H escape into space, for example, provided that space is sufficiently devoid of matter, which is ensured by the expansion of the universe. Equilibrium occurs when equal and opposite fluxes are balanced. And so on – atoms arrange themselves in crystals and particular macroscopic forms (snow crystals), etc., as conditions allow.)

    The second law of thermodynamics does not have its own ‘force’ with which it acts on matter; rather, it operates essentially by default. It is a consequence of statistics.

    In fact, the spontaneous non-emission of any photons from emitting and absorbing material in the direction of other emitting/absorbing material with greater temperature would probably require violation of the second law of thermodynamics, as would the spontaneous non-flow of molecules in some particular direction. Photons are not emitted as a consequence of the conditions under which they will be absorbed. Molecules do not rebound from collisions as a result of future collisions.

    ———

    Regarding:
    http://greenhouse.geologist-1011.net/

    1. Do a lot of people not understand or misunderstand the greenhouse effect? Yes. But at least some people, such as myself, and certainly including climatologists who study radiative fluxes, do understand the greenhouse effect.

    2. (blank lines omitted for clarity of quotation boundaries).

    2a.

    “Wishart (2009, p.24) explains:
    The Moon is another excellent example of what happens with no greenhouse effect. During the lunar day, average surface temperatures reach 107ºC, while the lunar night sees temperatures drop from boiling point to 153 degrees below zero. No greenhouse gases mean there’s no way to smooth out temperatures on the moon. On Earth, greenhouse gases filter some of the sunlight hitting the surface and reflect some of the heat back out into space, meaning the days are cooler, but conversely the gases insulate the planet at night, preventing a lot of the heat from escaping.”

    It is true that clouds both reduce solar heating overall and absorb some heat in the air as well as contributing to the greenhouse effect. It is true that water vapor absorbs some solar (mainly SW) radiation as well as terrestrial (mostly LW) radiation. Even CO2 absorbs a little SW radiation. But the greenhouse effect is distinct from the effects on solar radiation. It is the reduced solar heating, combined with backradiation from the atmosphere and the relative lack of atmospheric diurnal temperature changes, that reduces the diurnal temperature range. (Maybe more on that, when I get to it, here: https://chriscolose.wordpress.com/2009/12/08/interactive-carbon-cycle-model/ – for now, the explanation is that, while the mass of the atmosphere is not large compared to the ocean or crust, etc., convection and diffusion of heat up and down beneath the surface is limited, so that, for land surfaces in particular, there is not a lot of heat capacity available to smooth out diurnal temperature variations, while a majority of solar heating occurs at the surface (or within the surface material – an important distinction for the ocean). The diurnal solar heating cycle tends to drive a larger temperature variation on land surfaces than in the air above, but the range is still small compared to absolute temperature, so that the radiative heating of the air by the surface doesn’t drive much of a diurnal temperature cycle in most of the atmosphere – the diurnal cycle is filtered out by successive steps in the transfer of energy.)

    2b1.
    “Plimer (2009, p. 365) really describes this situation very well when he writes:
    Everyone knows what the greenhouse effect is. Well … do they? Ask someone to explain how the greenhouse effect works. There is an extremely high probability that they have no idea.”

    Agreed, see above.

    2b2.
    “What really is the greenhouse effect? The use of the term “greenhouse effect” is a complete misnomer. Greenhouses or glasshouses are used for increasing plant growth, especially in colder climates. A greenhouse eliminates convective cooling, the major process of heat transfer in the atmosphere, and protects the plants from frost.”

    A rose by any other name…
    (While the roles of conduction, convection, and radiation differ, there is still a general analogy among greenhouses, the atmosphere’s significant opacity to LW radiation, building insulation, and winter coats – they all slow the rate of heat flux, requiring greater temperature variation to sustain the same flux for a given heat influx via a different pathway (SW radiation, combustion of fuel, metabolism, etc.).)

    2c.
    “Plimer (2009, p. 366-375) goes on to explain the dynamics predicted by Kirchhoff’s Law, stating, “All the CO2 does is slows down heat loss. Atmospheric CO2 does not trap heat, as insulation does.”. Archer (2009, p. 15-29) uses the kitchen sink analogy to describe similar dynamics in that a partially blocked drain will not prevent a sink from emptying but slow drainage so that for a given inflow (eg. the tap) a given water level is maintained – much the way a given temperature is maintained for a given thermal radiation level, depending on the emissivity of the material.”

    Okay, but:

    “”All the CO2 does is slows down heat loss. Atmospheric CO2 does not trap heat, as insulation does.”.”

    No, insulation ‘traps’ heat BY slowing down heat loss. The similarity is greater than the quote implies.

    2d.
    “To his credit, Plimer is the only recent author to acknowledge the role of convection. Plimer is also the only author to acknowledge the respective roles of both kinetic heat (eg. convective transfer) and electromagnetic heat (eg. “radiation balance”).”

    In the popular literature? I wouldn’t know, but in as far as the science is concerned, atmospheric convection is most definitely included in the theory of atmospheric and oceanic processes and climate.

    2e.
    “Kirchhoff’s Law is unavoidable, as it is based on the conservation of energy in that a body cannot not emit more energy than it receives or absorbs.”

    It can emit more if it is warmer than the sources of the received photons, but I’m guessing that’s what was meant (that for a given polarization (not really important for atmospheric gases, at least), frequency, and line of sight, emissivity = absorptivity).

    2f.
    “Emissivity describes the proportion of absorbed energy that is initially emitted. As this proportion remains relatively constant, energy accumulates raising the temperature until the emitted radiation is equal to the absorbed radiation courtesy of stored energy congested within the body by lack of emissivity.”

    Wording is a bit confusing. Emissivity is the emitted intensity as a fraction of that of a perfect blackbody at the same temperature. At local thermodynamic equilibrium (as opposed to the electronic conditions that allow fluorescence and phosphorescence), emissivity = absorptivity. Thus NET fluxes between emission and absorption are from warmer to cooler material.

    2g.
    “Beware of wheels within energy diagrams as these usually constitute the energy creation mechanism of perpetual motion machines. One such gem of clarity, used uncited by Plimer (2009, p. 370), was offered by Kiehl and Trenberth (1997, Fig. 7):”

    HERE IS THE BIG PROBLEM:

    There is no perpetual motion machine or violation of the second law of thermodynamics or any creation of energy in the Kiehl and Trenberth diagram. The author has fundamentally misunderstood what s/he is looking at.

    The ‘wheels’ that the author perceives are not perpetual motion machines – they are the back and forth fluxes of photons along the same channels. Along any given channel (frequency, line of sight, and, when it matters, polarization, etc.) at any given time, photons can travel in both directions, but the NET flux is from higher temperature to lower temperature if all non-photon parts are individually at local thermodynamic equilibrium.

    Entropy is not being destroyed; it is being created (and then lost to space, which is allowed).

    Energy is not being created anywhere. (except for E=mc2 in the sun, of course).

  109. Patrick 027, Reur massive missive to Max at 5:08 pm:

    Towards the bottom you wrote

    “…The ‘wheels’ that the author percieves are not perpetual motion machines – they are the back and forth fluxes of photons along the same channels. Along any given channel (frequency, line of sight, and when it matters, polarization, etc) at any given time, photons can travel in both directions, but the NET flux is from higher temperature to lower temperature if all non-photon parts are individually at local thermodynamic equilibrium…”

    I don’t think Max will mind if I refer you to some of our earlier exchanges. (before Max joined in here).

    I recall that somewhere above you agreed that EMR from the Earth’s surface is of even hemispherical distribution; that most of it is lateral rather than up; and thus most initial absorption per quantum theory is probably within the GHGs rather close to the ground. Furthermore, back radiation diminishes with altitude and is spherical in distribution, and for that latter reason alone the downward components are immediately halved in whatever may be re-emitted. Then there are also lapse rate and increasing photon path length effects. Also, BTW, interestingly, in a typically isothermal pocket, the most intense horizontal radiation has no effect on the temperature (= zero HEAT transfer).

    Therefore, at best, the K & T illustration of 396 W/m^2 up and 333 W/m^2 down is conceptually the condition only at the surface, whereas, wrongly, it shows conditions as uniform between the clouds and the surface. Furthermore, the 396 W/m^2 up was, I understand, a rather fanciful average based on an S & B (Stefan–Boltzmann) calculation. However, this is just background to the following:
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Q:
    Let us consider a typical hot & dry sandy desert, and we will find that the world-predominant and most powerful wide-spectrum GHG, water vapour, is virtually absent, as are clouds. Consequently, long-wave EMR escapes very rapidly, and as soon as insolation stops, the desert very rapidly cools compared with other typical landscapes. The predominant GHG is CO2, which has narrow absorption spectrum bands and comprises under 0.04% of the atmosphere. It is this ~0.04% that absorbs the EMR, and yet it is the total air, some 2,500 times greater, that is measured to have become very hot. The albedo of desert sand, at ~0.4, is higher than that of most surfaces.

    There are some useful secondary conclusions that can be drawn from this; for instance, at the quantum theory level: what happens with the molecular kinetic energy (HEAT) that is created in that initial absorption?
    Can you elaborate on this and some of the other processes?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    Frankly, I think that some of your stuff is not pertinent and is so excessively long and meandering that any inclination to respond is diminished. I thought I’d just pick out one topic related to K & T.

  110. Blous79/Manacker (Max)/Patrick 027
    Sorry, but further to my 8:04 pm: by the time I got to the end of Patrick’s unaddressed 5:08 pm, which is apparently a continuation of his 2:28 pm, I must have forgotten that the first 2:28 pm was addressed to Blous79.

    Thus, where I wrote Max, please read Blous79.

    Chris,
    Any chance you could advise what html tags other than:
    b = bold
    i = italics
    blockquote = Blockquote
    are available to enhance text?
    Patrick’s posts would be easier to follow if he would use italics or blockquote, when he is quoting.
    Strikethrough and image can be handy
    Patrick
    You may be interested in the following link, but there is heaps out there on html tags.
    http://werbach.com/barebones/barebones.html#general

  111. “Energy is not being created anywhere. (except for E=mc2 in the sun, of course).”

    – by the way, that’s an example of where a little bit of accuracy is sacrificed for brevity. If I had really been precise, I would have commented that energy is approximately conserved, allowing for very small amounts of energy creation/destruction from C-14 production and decay, etc, and chemical reactions, etc. (E = mc2 is not only for nuclear reactions, it’s just that it tends to not be significant in chemical or physical changes outside of relativistic situations, etc…) – and then of course, point out that treating mass as a form of energy, we can still say that energy is conserved…

    “The second law of thermodynamics does not have its own ‘force’ with which it acts on matter; rather, it operates essentially by default. It is a consequence of statistics.”

    Note that small population sizes allow for ‘violation’ (or at least apparent violation – conceivably, if all contributors to entropy are accounted for, this disappears??) of the second law of thermodynamics – for example, a single molecule can be entirely in one side of a chamber or another; 2 molecules might find themselves on the same side of a chamber perhaps 1/2 of the time. Even at thermodynamic equilibrium within an otherwise isothermal chamber, tiny spatially-uneven temperature fluctuations could occur as individual molecules move back and forth, etc. But you couldn’t on average use this behavior to produce work, because it’s random.
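    A quick Monte Carlo sketch of that point in Python (the population sizes are illustrative):

    ```python
    import random

    # How often are all n molecules on the same side of a chamber?
    # Exact probability is 2**(1-n): ~1/2 of the time for 2 molecules
    # (as noted above), vanishingly rare for macroscopic populations.
    random.seed(0)
    trials = 100_000
    for n in (2, 5, 10, 20):
        hits = 0
        for _ in range(trials):
            lefts = sum(random.random() < 0.5 for _ in range(n))
            hits += lefts in (0, n)     # all left or all right
        print(f"n={n:2d}: observed {hits / trials:.4f}, "
              f"exact {2.0**(1 - n):.1e}")
    ```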

    —-

    “Along any given channel (frequency, line of sight, and when it matters, polarization, etc) at any given time, photons can travel in both directions, but the NET flux is from higher temperature to lower temperature if all non-photon parts are individually at local thermodynamic equilibrium.”

    (Note that for this purpose, a ‘line of sight’ can bend at points of reflection, and can branch at partial reflection or become diffuse through scattering. But in those cases, each branch or subset of multiple branches has some weight less than 1; all the weights add to 1. Going further, there is a weighting function that applies to all photons arriving at a particular point from a particular direction at a particular frequency and polarization, etc., which is the spatial distribution of all absorption cross sections that are ‘visible’ looking back in that direction (following the ‘line of sight’, however it may be branched, bent, diffused, etc.) – this is the distribution of absorption of photons going in that direction passing through that point, and, when multiplied at each location by the blackbody intensity as a function of temperature and then integrated over space, is equal to the radiant intensity that arrives at that point from that direction (via processes at local thermodynamic equilibrium – thus not including fluorescence, phosphorescence, etc.). The weighting function for each direction can be multiplied by the cosine of the zenith angle (angle from vertical) and then integrated over a hemisphere of solid angle to find the weighting function of the flux per unit horizontal area from all directions on one side of that unit area.)

    The concept of net fluxes through a channel is important. When all **small unit volumes are at local thermodynamic equilibrium, the net fluxes along each channel will tend to create entropy, or at least not destroy it. For example, a net heat flux from a higher-temperature source to a lower-temperature sink creates entropy; note that, for example, the radiative flux from a colder emitter to a warmer absorber destroys entropy, but an equal flux in the opposite direction creates entropy at the same rate, so the net entropy production can be determined by the net flux, and the two fluxes in opposite directions along the same channel cannot occur in isolation from each other – a channel can be opened up (increasing transmission between objects, increasing absorptivity of objects, etc.) or partially or completely blocked to both fluxes at once, but not opened to a flux in one direction and closed to the flux in the opposite direction (Maxwell’s Demon). In contrast, different channels can be opened or closed as independently as allowed by the physics.

    —————

    **local thermodynamic equilibrium is a good approximation for the vast majority of the mass of the atmosphere. What this means is that, on a scale large enough to contain a statistically significant population of molecules, ions, electrons, etc, the statistics of the velocities of molecules, etc, and the distribution of energy among translational, rotational, vibrational, and electronic energy is at thermodynamic equilibrium –

    (?? maybe chemical energy too, except wherein kinetic barriers make reactions slow – i.e. CH4 is more abundant than it would be at chemical equilibrium for the temperature, amount of oxygen, etc., but the chemical reaction rate is not driving other aspects of each unit volume away from thermodynamic equilibrium in a significant way. Generally, radiatively-relevant aspects of local thermodynamic equilibrium can be maintained if the rate of thermalization of energy from absorption of photons is rapid relative to the absorption of those photons; I’d expect similar logic to apply to chemical reactions.)

    – This is NOT the same as the whole system being at thermodynamic equilibrium. The climate system as a whole and even many parts thereof are in disequilibrium, but sufficiently small unit volumes that still encompass the processes of emission and absorption of most radiation are approximately in local thermodynamic equilibrium (among other things, they can be approximated as isothermal).

    Note that when a unit volume is not in local thermodynamic equilibrium, it might still be the case that different populations of particles may be in thermodynamic equilibrium within themselves – in this case, multiple substances may occupy the same space with different temperatures. But it may also be the case that the distribution of energy within a subpopulation may not fit thermodynamic equilibrium for any one temperature.

    —————————

    Regarding heat fluxes, the flux of entropy S that accompanies a heat flux Q is equal to Q/T, where T is temperature. Thus, for a heat flux Q from T1 to T2, the entropy leaving T1 is S1 = Q/T1, while the entropy arriving at T2 is S2 = Q/T2. If T2 is smaller than T1, then S2 is larger than S1, so that the total entropy has increased. If entropy is conserved, then Q must be proportional to T – the heat flux from T1 is Q1 and the heat flux reaching T2 is Q2, so that S = Q1/T1 = Q2/T2, and W = Q1-Q2 = the work that can be produced; this describes an ideal heat engine; in reverse, this describes an ideal heat pump or refrigerator.
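    The bookkeeping in that paragraph as a minimal Python sketch (Q and the temperatures are illustrative values, chosen to echo the ones used below):

    ```python
    Q, T1, T2 = 100.0, 288.0, 255.0    # heat flux W/m^2; warm and cool K
    S1, S2 = Q / T1, Q / T2
    print(f"entropy leaving T1: {S1:.4f}, arriving at T2: {S2:.4f} W/(m^2 K)")
    print(f"entropy created:    {S2 - S1:.4f} W/(m^2 K)")

    # Reversible limit: entropy conserved, Q2 = Q1*T2/T1, work W = Q1 - Q2.
    Q1 = 100.0
    Q2 = Q1 * T2 / T1
    print(f"ideal engine work: {Q1 - Q2:.1f} W/m^2 "
          f"(Carnot efficiency {1 - T2 / T1:.3f})")
    ```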

    The entropy of the photons from the sun, not having been much altered before reaching the atmosphere, is approximately 341.3 W/m2 / 5780 K ~=
    0.0590 W/(m2 K). This is an approximation; in actuality, the entropy per unit energy will vary a bit with wavelength because different parts of the spectrum are dominated by different parts of the photosphere, which is not isothermal over its visible portion (temperature decreasing outward from below; also granulation, sunspots, etc.), and there are some absorption lines, and some emission from other parts of the sun. (So far as I have been able to figure out, the entropy per unit energy of photons at a particular frequency, direction, polarization, and phase (applicable to laser light) is a function of the brightness temperature at that part of the spectrum for that direction, etc.)

    But the entropy is conserved as the photons spread outward from the sun because the intensity is conserved (locally the rays become more parallel while the flux per unit area declines; the flux per unit area per unit solid angle is conserved***), which means that a good parabolic lens can concentrate the radiation on an object that will approach ~5780 K temperature if it only loses heat via radiation in the same directions as it receives it.

    The portion of solar radiation reflected from the Earth is scattered – the intensity is much reduced, thus increasing the entropy per unit energy and decreasing the brightness temperature (differently for different wavelengths).

    For the portion absorbed, the entropy before entering the atmosphere is ~ 239 W/m2 / 5780 K ~= 0.0413 W/(m2 K). The entropy after absorption is about 78 W/m2 / 255 K + 161 W/m2 / 288 K ~= 0.865 W/(m2 K); thus about 0.824 W/(m2 K) of entropy is created by that process. This is a rough approximation, though, since the absorption in the atmosphere is distributed over layers with different temperatures (and I’m not sure the average temperature is actually 255 K – that’s somewhat of a ballpark figure), and the absorption at the surface is somewhat concentrated in regions with temperatures greater than 288 K (which implies the creation of entropy by solar heating of the surface is overestimated in the above calculation). Note that uneven heating and cooling, both horizontally and vertically, allow the production of work from heat; essentially all of that work is converted back to heat within the atmosphere and surface material, though, which is why it is not really an error to leave it out of such diagrams as shown above (which are somewhat of an approximation anyway); also, the actual production of kinetic energy is small in comparison to the heat fluxes shown above. So far as I know, the non-heat energy flux across the tropopause in particular is small compared to radiative fluxes, so that convective heating of the stratosphere as a whole can be approximated as zero; however, the work done on the stratosphere and mesosphere by the tropospheric heat engine does have great significance in driving motions there, which are generally thermally indirect – cold air rising, warm air sinking, work converted to heat, as in a heat pump/refrigerator (the coldest part of the atmosphere is actually in the summer polar mesopause region).

    The entropy flux from the surface via the heat flux from the surface to the atmosphere is roughly (17+80+356-333 = 120 W/m2)/288 K ~= 0.417 W/(m2 K), or actually a bit less than that since this flux is concentrated within the warmer parts of the surface. As that heat is conducted/diffused to the lowest ~ 1 mm of air and then convected around to cooler heights and horizontally to cooler regions, converted to kinetic energy and converted to heat again by viscous processes, mixing of stratified layers, and driving of heat pumps, and as radiation is absorbed, and emitted and absorbed again within the atmosphere, entropy is created.

    A rather rough estimate of the entropy flux leaving the Earth is about 40 W/m2 / 288 K + 199 W/m2 / 255 K ~= 0.919 W/(m2 K). It may be a bit more than that, since, if the whole-spectrum brightness temperature of the Earth as seen from space is ~255 K and yet a portion of the LW flux to space is directly from the surface, mainly warmer than 255 K, then the portion from the atmosphere should have an effective brightness temperature a bit less than 255 K – although it will vary over the spectrum. But using 0.919 W/(m2 K), this implies creation of roughly 0.054 W/(m2 K) of entropy within the climate system after solar heating, or a bit more. Combined with the perhaps somewhat less than 0.824 W/(m2 K) of entropy created in the process of receiving and absorbing solar radiation, there is roughly a total creation of 0.88 W/(m2 K) of entropy within the system, or maybe a bit more, depending…
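    Collecting the rough arithmetic of the last few paragraphs in one Python sketch (all the caveats above apply; 255 K and 288 K are the ballpark temperatures used in the text):

    ```python
    T_sun, T_atm, T_sfc = 5780.0, 255.0, 288.0    # K, ballpark values

    S_in = 239.0 / T_sun                          # absorbed sunlight, ~0.041
    S_absorbed = 78.0 / T_atm + 161.0 / T_sfc     # after absorption, ~0.865
    S_out = 40.0 / T_sfc + 199.0 / T_atm          # LW to space, ~0.919

    print(f"created absorbing sunlight:  {S_absorbed - S_in:.3f} W/(m^2 K)")
    print(f"created after solar heating: {S_out - S_absorbed:.3f} W/(m^2 K)")
    print(f"total:                       {S_out - S_in:.3f} W/(m^2 K)")  # ~0.88
    ```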

    ———–

    ***– if there were refraction, the rays would be bent so that, in the absence of scattering, absorption, or emission, intensity would vary, but it would vary in a particular way, and would remain a function of the index of refraction – in fact, a perfect blackbody embedded in a transparent refracting material emits as a function of temperature and refractive index (the same will be true for radiation from a blackbody with some index of refraction different than 1) – but if the transition to different refractive indices is gradual enough to avoid reflection, the intensity that reaches a refractive index of 1 along a line of sight coming from the blackbody is the same as that emitted by a blackbody directly into a material of refractive index 1, or into space. The loss of outward flux is accounted for by total internal reflection (some of the photons from a blackbody coated with a refractive index greater than that of the next layer of material must be reflected or bent back to the blackbody). Total internal reflection does not violate the second law of thermodynamics, but adds an interesting complexity to its radiative application.

  112. Another important point –

    Climatic equilibrium, and for that matter, ecological equilibrium and biological homeostasis – these are NOT examples of thermodynamic equilibrium.

    Thermodynamic equilibrium is a state wherein the net fluxes along channels have gone to zero, and corresponds to a state of maximum entropy (for the system being considered, not including additional possible increases in entropy upon further interaction with other systems, etc.).

    Climatic equilibrium and other such equilibriums are states in which, in a time-average, there can be and generally are nonzero net fluxes along some channels (there have to be in order for the systems to have ‘interesting’ behavior – life in particular goes to zero at thermodynamic equilibrium). In climatic equilibrium, the global-time average vertical heat fluxes balance, or any imbalance is accounted for by conversions to and from kinetic energy, etc. Locally, vertical flux imbalances are balanced by horizontal fluxes. Daily and seasonally and with other fluctuations, flux imbalances are balanced by storage terms, which in the time average goes to zero. Momentum fluxes balance in the time average, etc. And there is a long-term steady state (encompassing shorter term variability and cycles, which will be self-similar across longer time intervals) in the organization of all of that.

    By analogy, a tendency to approach climatic equilibrium is like the tendency of the water level in a reservoir to shift until the outflow to a lower reservoir matches the inflow from a higher reservoir. Thermodynamic equilibrium occurs when all reservoirs have the same level.
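    The analogy lends itself to a minimal numeric sketch in Python (all numbers illustrative):

    ```python
    # Level h rises until outflow (taken proportional to h) matches inflow.
    inflow, k = 2.0, 0.5     # inflow rate; outflow coefficient
    h, dt = 0.0, 0.01
    for _ in range(3000):    # ~15 e-folding times
        h += dt * (inflow - k * h)
    print(f"level: {h:.3f} (steady state {inflow / k:.1f}); "
          f"remaining imbalance: {inflow - k * h:.5f}")
    ```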

  113. BobFJ – I must thank you for that manual; I always figured there must be such a manual out there but never found it myself – once upon a time there was a blog that came with some such instructions but I didn’t pay much attention to it at the time.

  114. Patrick 027, Reur 10:33 pm
    OK, fine; just to help you along a bit, the html tags have a start and end point, but they don’t work the same everywhere, or they may have different command values. They use angle brackets or if you like greater-than and less-than symbols, but if I show such here, they will activate a command causing confusion. Thus I’ll give you an example substituting [ and ] brackets instead.
    If you want to express ‘knickers’ in bold you can embrace it HERE in html with the figurative brackets thus:
    [b] ‘knickers’[/b]. The end slash denotes the end of that command. However, elsewhere ‘b’ may have to be substituted with ‘strong’

  115. Re BobFJ –

    recall that above somewhere you have agreed that EMR

    … I understand your point but wish to reword for clarity. Perfect blackbody radiation is isotropic (and unpolarized, and incoherent (as opposed to laser light)). I think surface emission may be approximated as such.

    Isotropic radiation is not ‘immediately halved’; rather, whereas the intensity along one direction decays exponentially over optical thickness (not including scattering and emission into the path along the way – i.e. this applies to the photons present at one point that reach another point), the flux per unit area including all directions over a hemisphere decays as a sum of exponentials, with the fastest-decaying components corresponding to rays near horizontal (for flux through a horizontal area).
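    A short Python sketch of that decay behavior (the optical depths are arbitrary illustrative values):

    ```python
    import numpy as np

    # Transmission of an isotropic flux through a layer of vertical optical
    # depth tau: T(tau) = 2 * integral_0^1 exp(-tau/mu) mu dmu, a weighted
    # sum of exponentials in which near-horizontal rays (small mu) decay
    # fastest, so the flux falls off faster than the vertical beam alone.
    mu = np.linspace(1e-4, 1.0, 20_000)    # cos(zenith angle)
    dmu = mu[1] - mu[0]
    for tau in (0.1, 0.5, 1.0, 2.0):
        t_flux = 2.0 * np.sum(np.exp(-tau / mu) * mu) * dmu
        print(f"tau={tau:3.1f}: flux transmitted {t_flux:.3f}, "
              f"vertical beam {np.exp(-tau):.3f}")
    ```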

    The intensity emitted by a horizontal layer of air can be anisotropic if it is optically thin in the vertical direction. In that case, the emissivity along paths at angles closer to horizontal approaches 1, while a minimum in emissivity is found along the vertical direction.

    As the strongest variations in temperature per unit optical thickness are generally in the vertical direction, the net fluxes per unit areas are largest generally for horizontal areas. It really isn’t necessary to consider the anisotropy of emission of individual layers to see this – the net intensity in any direction is larger for larger temperature gradients relative to optical thickness, provided that the temperature extremes are themselves not limited to optically thin layers as measured over the path (or with scattering, paths), or more generally, over the weighting function contributing to emission from a direction at a point. (ie the downward emission from the thermosphere is no big deal, it’s too thin to matter to radiative fluxes in most of the rest of the atmosphere). The intensity along a path contributes to the flux through a unit area in proportion to the cosine of the angle from the perpendicular to the area. The largest fluxes per unit areas will tend to be found through nearly-horizontal areas if the largest intensities are along near-vertical directions, and that also goes for net fluxes and net intensities.

    To sum up, radiative fluxes and intensities found at a location correspond to the emission cross sections that can be seen from that location (following diffused or bent or branched lines of sight where scattering or reflection occur, though these are of minor importance to LW radiation under Earthly conditions). Thus net fluxes and net intensities are due to temperature variations that occupy optically significant regions (noting that the surface is optically thick, and space can generally be treated as an opaque blackbody near zero K) and are visible from a location (significant temperature variation per unit optical thickness), so that they can affect the fluxes and intensities that pass through that location. Within a more transparent region, fluxes and intensities depend more on conditions some distance away; when opacity gets very large, the weighting function is compressed closer to the point of view – photons are mostly from nearby, and this tends to reduce any anisotropy (in so far as optical properties are isotropic – ie some materials have different indices of refraction in different directions, etc, but anyway…), since everything that can be ‘seen’ is nearly isothermal. Thus the radiation tends toward an isotropic distribution with zero net intensities and fluxes, and the gross intensities and fluxes tend toward perfect blackbody values for the temperature at the point of view.

    Where the atmosphere as a whole is not very opaque (some wavelengths in the absence of clouds, etc.), the backradiation increases downward from space, as there is more and more atmosphere to contribute as one descends beneath more and more air. This is the general tendency even when the atmosphere is very opaque, except within or near inversions, which occur at some places and times within the troposphere, and generally within the stratosphere, though the lower stratosphere can be nearly isothermal at some latitudes.

    The backradiation of the atmosphere to the surface is that flux which actually reaches the surface; it does not include photons emitted by one part of the atmosphere and absorbed by another (just as the flux from the surface that is absorbed by the atmosphere is just that – it doesn’t include upward fluxes emitted by the air). Except for the effects of inversions at sufficient opacity, the gross downward LW flux at any level tends to be concentrated away from vertical, due to 1. longer path lengths through warmer regions away from vertical (except for inversions) and 2. shorter paths near the vertical direction that expose the darkness of space (except at higher opacities). Similarly, the upward LW radiation from below – however much may be directly from the surface – tends to be concentrated near vertical (again excepting sufficient opacity with inversions), as this is the shortest optical path to the surface and, higher up, the shortest path to the warmer lower troposphere. To the extent that the surface has a nonzero LW albedo, there is a reduction in emission from the surface relative to a perfect blackbody, but the reduction in the gross upward LW flux is smaller, because some of the backradiation at the surface is scattered back up by that nonzero albedo. I don’t know all the details, but aside from the effects of the spectral distribution of LW albedo relative to atmospheric opacity, it’s likely that this scattered radiation is more strongly absorbed by the atmosphere than the emission from the surface, because it will tend to be concentrated at wavelengths where the atmosphere is more opaque (except in relatively opaque inversions); if the scattering is not perfectly diffuse, then aside from the effects of directional dependence of optical properties, it will also be concentrated away from the vertical (again excepting relatively opaque inversions).

    “Therefore, at best, the K & T illustration of 396 W/m^2 up, and 333 W/m^2 down, is conceptually the condition only at the surface,”

    yes, although if all emissions (from surface and from air) are included, the gross and net fluxes should not generally change rapidly over very short distances (except in foggy inversions, etc.).

    “whereas wrongly, it is showing conditions as uniform between the clouds and the surface. “

    – the diagram is schematic, so I wouldn’t say it is wrong. It isn’t meant to be taken literally in such details (this would be obvious to other scientists studying the issue; perhaps it is problematic to present such diagrams to the public?? – but I enjoy diagrams!)

    “Furthermore, the 396 W/m^2 up, was I understand, a rather fanciful average based on an S & B calculation.”.

    It is based on approximations, but I wouldn’t call them fanciful.
    ” Q:
    Let us consider a typical hot & dry sandy desert and we will find that the world-predominant and most powerful wide spectrum GHG; water vapour; is virtually absent, as are clouds. Consequently, long-wave EMR escapes very rapidly, and as soon as insolation stops, the desert very rapidly cools, compared with other typical landscapes. “

    True, smaller regional LW opacity (and reduced absorption and reflection of solar radiation) tends to increase the diurnal temperature range. This applies to deserts as well as regions of higher elevation, and on smaller scales, mountains and valleys (in sufficiently steep terrain, mountains occupy a portion of the lines of sight from valleys to much of the atmosphere and space, and are also in the shade during parts of the day – etc.) The role of backradiation in the surface energy budget also suggests that colder conditions would allow a larger diurnal range, such as in elevated regions, but more generally, this may be compensated in some way by the lack of solar heating and its associated diurnal cycle that allow for colder conditions.

    However, don’t forget that hot deserts with low relative humidity can still have significant water vapor present (not as much as in the rainforests, but comparable to ‘humid’ conditions at sufficiently cold temperatures).

    Also, a dry surface can be a contributor to larger diurnal temperature range and warmer surface temperatures in general, as the reduced evapotranspiration means that, for the same conditions in the atmosphere, the surface must get warmer to get the same total convective + radiant cooling, and also, dew and frost formation at night releases latent heat.

    ” CO2, which has narrow absorption spectrum bands and comprises under 0.04% of the atmosphere. It is this ~0.04% that absorbs the EMR, and yet it is the total air some 2,500 times greater that is measured to have become very hot. The albedo of desert sand at ~0.4 is higher than most surfaces. “

    Those ‘narrow’ bands occupy roughly 19 % to 28 % of the flux-weighted LW spectrum (going by the 13 to 17 and then 12 to 18 micron intervals – see https://chriscolose.wordpress.com/2009/10/08/re-visiting-cff/#comment-1500 – a more precise calculation yielded similar results for the 12-18 micron interval).

    It shouldn’t be problematic that a small amount of radiatively-important material mediates the radiant heating and cooling of a large mass – the CO2 (and H2O, etc.) molecules are mixed with the rest and exchange energy via collisions, and a thin layer of ink casts a long shadow.
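
    To put an (admittedly rough) number on the ink analogy – the cross-section below is only an illustrative order of magnitude for the center of the CO2 15-micron band, not a line-by-line value:

        import numpy as np

        # Why a ~0.04% constituent can be opaque: optical depth is column
        # number density times absorption cross-section.
        N_air = 2.1e29     # molecules per m^2 in the full air column (approx.)
        x_co2 = 390e-6     # CO2 mole fraction, ~390 ppm
        sigma = 1e-22      # m^2/molecule -- ILLUSTRATIVE band-center magnitude
        tau = N_air * x_co2 * sigma
        print(tau)              # ~8000: extremely opaque at band center
        print(np.exp(-tau))     # vertical transmission: effectively zero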

    Yes, deserts tend to have high albedos compared to other non-snow/ice surfaces.

    ” There are some useful secondary conclusions that can be drawn from this, for instance, at quantum theory level; what happens with the molecular kinetic energy (HEAT) that is created in that initial absorption?
    Can you elaborate on this and some of the other processes? “

    When a population of molecules absorbs and emits photons considerably less frequently than its members collide with each other (as is the case in the vast majority of the mass and optical thickness of the atmosphere), then when that population gains or loses energy via photon absorption or emission: 1. the change in energy propagates to the other molecules, so that the whole mass of air changes temperature according to the change in energy divided by heat capacity; 2. the gain in energy tends to be thermalized before any subsequent emission, so that the energy distribution tends toward thermodynamic equilibrium at some temperature, and subsequent emission occurs as a function of that temperature.
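
    Rough orders of magnitude for why thermalization wins (my illustrative values, not measured ones):

        # All values rough, for illustration:
        collision_rate = 1e9       # per second per molecule near the surface
        radiative_lifetime = 1.0   # s; CO2 15-micron bending mode is of
                                   # this order
        print(collision_rate * radiative_lifetime)  # ~1e9 collisions
                                                    # per emission
        # An excited molecule almost always hands its energy to neighbors
        # before it can radiate, so emission reflects the local temperature
        # (local thermodynamic equilibrium), not which molecule happened to
        # absorb a photon.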

  116. Re manacker –

    ” you present a theoretical treatise and then ask: “
    Who doesn’t believe in a time delay?
    ” You apparently feel that this is a question of “belief”, “

    You didn’t notice that I put quotes around “belief”?

    You were the one who projected that word onto me.


    1. Our models cannot explain the early 20th century warming
    2. Our models know that AGW was a principal cause of the late 20th century warming
    3. How do our models know this?
    4. Because they cannot explain it any other way.

    1. – they explain some of it. The earlier 1900s ‘peak’ is not that far above model ranges.
    2,3. – we know GHGs have calculable radiative forcing. Positive water vapor and surface albedo feedbacks are observable.
    4. – that, and because a lack of an effect from CO2, etc., would require an additional explanation we don’t yet have.


    This obviously requires several “leaps of faith”, which are not substantiated in any way by empirical data from physical observations, but rather by pure hypothesis (and theoretical calculation, as you have presented).

    1. I have not yet begun to present theoretical calculations! (to paraphrase a Revolutionary War captain whose name doesn’t stick in my memory for some odd reason).

    2. Where would we be without theory and hypothesis? We’d have a bunch of data points with no story behind them and nowhere to go with them.

    3. Not substantiated? If you think Lindzen’s work is substantiated, you’ll be completely bowled-over (or is it bulled-over – I’ve never used that phrase before in conversation) several times over by the evidence that supports AGW.

    Lindzen has purported to find evidence for strong negative feedbacks, generally in the tropics. Even if these existed, they don’t eliminate the positive water vapor feedback and surface albedo feedback, and potential positive cloud feedbacks that may be found in other regions. Furthermore, the results of a single study or two are not guaranteed to be robust against the test of time; the overall body of science has produced results that have withstood it. Without significant positive feedbacks, how do you explain warming with solar forcing or internal variability or anything? GCR? – where’s the trend, where’s the theory, where’s the data and ROBUST correlation? Oceans? – where’s the part of the ocean where more and more cold water is being hidden (you can’t just say it’s cold down there – it has to be getting coldER if the change is just the removal of cooler water from the surface)? Seems to me that it is the contrarians who repeatedly and enthusiastically make several leaps of faith, and then make the same leaps again and again and again, and rely on belief.

  117. Patrick027

    Now that we have laid to rest the “plug number” for the “net imbalance” shown in the annual energy balance cartoon of K+T, we appear to be going around in circles here.

    You wrote:

    Lindzen has purported to find evidence for strong negative feedbacks generally in the tropics. Even if these existed, they don’t eliminate the positive water vapor feedback and surface albedo feedback, and potential positive cloud feedbacks that may be found in other regions.

    Yes. Lindzen has found net overall negative feedbacks in the tropics, based on net outgoing LW + SW radiation. This is in contradiction to what all models cited by IPCC have simulated. Oops!

    The “tropics” account for a bit more than one-third of our planet’s surface.

    This is also the region where Spencer et al. found strong negative feedback from clouds with warming.

    As far as the “other regions” are concerned, a study by Norris shows:
    ftp://eos.atmos.washington.edu/pub/breth/CPT/norris_jcl04.pdf

    A recent study documented large changes in tropical mean outgoing longwave radiation
    (OLR) and reflected shortwave radiation (RSW) reported by the Earth Radiation Budget Satellite (ERBS) during 1985-99, attributing them to cloud changes.

    Results show that upper-level cloud cover over low and midlatitude oceans decreased between 1952 and 1997, causing a corresponding increase in reconstructed OLR. At middle latitudes, low-level cloud cover increased more than upper-level cloud cover decreased, producing an overall rise in reconstructed RSW and net upward radiation since 1952. At low latitudes, the decline in RSW associated with the decrease in upper-level cloud cover compensates for the rise in OLR, yielding zero change in net upward radiation if the increasing low-level cloud cover trend is not considered. RSW reconstructed from increasing low-level cloud cover, if reliable, leads to an increase in net upward radiation at low latitudes. The decrease in reconstructed OLR since 1952 indicates that changes in upper-level cloud cover have acted to reduce the rate of tropospheric warming relative to the rate of surface warming. The increase in reconstructed net upward radiation since 1952, at least at middle latitudes, indicates that changes in cloud cover have acted to reduce the rate of tropospheric and surface warming.

    So the Lindzen and Spencer observations of an observed net negative cloud feedback with surface warming in the tropics appear to hold for the middle latitudes, as well.

    A positive water vapor feedback may well exist, as you have suggested (although NOAA observations show us that actually measured water vapor content has decreased as temperatures have increased since the late 1940s), and the surface albedo feedback depends on global snow and ice cover, which has not changed substantially.

    But, more importantly, the Lindzen and Choi observations show us that the net overall feedback for ALL these factors with warming is negative.

    You ask for ROBUST correlations on GCR and clouds.

    I could ask for the same on CO2 and temperature.

    They do not exist.

    Max

  118. Marco

    You made a comment about solar activity.

    I’m sure you are aware of the solar studies, which have shown that 20th century solar activity was unusually high relative to at least the preceding 8,000 years. These studies attribute around 0.35degC of the total 0.65degC 20th century warming to this unusually high level of activity.

    If you check the more recent record you will find that the average peak Wolf number for solar cycles 10-14 (1858-1902) was around 88, while the average for cycles 19-23 (1955-2008) was around 148 or 68% higher.

    The linear rate of increase in solar activity over the period 1858-2008 (14 solar cycles) was around 8.5% per cycle.

    Cycles 21 and 22 (1975-1996) were quite strong, while cycle 23 (1996-2008) was a bit weaker, but still significantly stronger than those in the late 19th and early 20th century.

    Cycle 23 has gone out and cycle 24 appears to be starting off very weak (almost no sunspots for past two years), so that is where the real drop in solar activity has occurred.

    But I believe we have drifted off the topic of the CLOUD experiment, where we will just have to wait to see what this shows us about how our climate responds to changes in GCR.

    Max

  119. “The “tropics” account for a bit more than one-third of our planet’s surface.”

    Very true – if defined by the 30 deg latitude circles, the tropics are half the planet. (One half which I prefer not to live in by the way.)
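
    The ‘half’ follows from the sphere’s area weighting – the fraction of surface area between latitudes ±φ is sin(φ); a one-line check:

        import numpy as np
        # area fraction of a sphere between latitudes -phi and +phi is sin(phi)
        print(np.sin(np.radians(30.0)))  # 0.5: the '30 degree' tropics
                                         # are half the planet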

    Did Lindzen study the whole of that region adequately? Random samples would tend to follow (on average) the tendency for the whole, but … are these random samples?

    Suppose there are negative cloud and water vapor feedbacks. The latter in particular cannot possibly exist through all temperatures, because that would imply that as the atmosphere gets colder and colder, relative humidity would exceed 100 % – okay, that does actually happen, especially in parts of the tropopause region where good ice nuclei are hard to come by – but even more amazingly, more water would have to be found in masses of air than was ever present in air leaving the ground – which would require much more evaporation of precipitation from above, which would require reduced relative humidity … or warmer rain … well, to sum up, it’s just not realistic – there must be a range of climates for which water vapor feedback is positive or else we wouldn’t have the water vapor that we have.

    Let’s suppose that at sufficiently high temperatures, upper air water vapor and cirrus cloud cover decline. Where might that happen first? I’d guess the western tropical Pacific. Hey, isn’t that Lindzen’s favorite place on Earth?

    But what might the mechanism be?

    Well – and I’m not even sure if this has ever been part of Lindzen’s argument – but even with increasing surface and near-surface air temperature, and a decreasing moist adiabatic lapse rate (it’s smaller at higher temperatures; the moist adiabatic lapse rate is not much different from the dry adiabatic lapse rate in the uppermost troposphere), the increasing height of the tropopause in response to greenhouse-forced warming is enough that the tropopause level will actually tend to be colder (I think the same might also be true for solar-forced warming, but maybe not as much because of the stratospheric warming in that case (??)).

    So air going up, with condensation and precipitation, will end up with a smaller water vapor pressure and thus, with some adjustment for air pressure, a smaller water vapor mixing ratio (a measure of specific humidity) – if it gets near enough to the tropopause level. If the amount of unprecipitated condensed water stays the same, then the air’s total water is less, and remains so as it then sinks and warms back through the troposphere.

    But the level of non-divergence doesn’t kiss the tropopause level. A significant amount of air will be exiting updrafts at higher temperature and more water vapor than before.

    Of course, the level of non-divergence would presumably shift upward (?), but would the average water vapor mixing ratio upon exiting saturated updrafts actually decrease?

    The other mechanism which might (??) be what Lindzen was proposing is that there would be less unprecipitated condensed water coming out of the tops of cloud updrafts (and thus there would be less evaporation to boost the water vapor of sinking clear air, and the cirrus cloud cover itself might be reduced) – for the reason that under warmer conditions, the growth of cloud droplets to larger sizes would occur more rapidly, which would increase the precipitation. Well, if this mechanism were understood to work as such, then presumably it would be part of the models in some way. I think it hasn’t been accepted – either it’s not so simple or something doesn’t work out… (???)

    ——

    More later, but for now:

    outflow from tropical cloud updrafts is not the only source of dry air in the subtropics.

    Lindzen’s results on these negative feedbacks don’t tend to hold up well.

  120. 1. It gets hot in summer and in the sun. (Large positive radiative forcing.)
    2. It gets cold in winter and at night. (Large negative radiative forcing.)
    3. The earth moves around the sun every 365.25 days and rotates every 24 hours. (Reliable cycles and seasons)
    4. Less sun reaches the ground when it is cloudy. (Variable negative feedback)
    5. Nobody can model cloud behavior reliably. (Unpredictable)
    6. Earth is not a closed system from energy perspective and solar flux is not constant. (External perturbations)
    7. Earth’s atmosphere is different from all known planets in the universe. (70% surface is water.)
    8. Water has a high specific heat. (Can absorb large amounts of heat energy without large increase in temperature)
    9. The relationship between changes in energy and temperature is robust. (ΔQ = c * m * ΔT)
    10. The radiative forcing temperature model (ΔTs = λ·RF) has no theoretical proof based in the laws of physics. (λ is unreliable)

    • Blous79
      generally (tendencies):
      1 – correct
      2 – correct
      3 – yes, approximately. (Not really important here, but it’s just fun to note that the Earth rotates slightly more than once every 24 hours in an inertial reference frame – this rotation is what matters for the coriolis effect. The Earth rotates ~366.25 times in a year, but since it also goes around the sun once in that time in the same direction, the number of diurnal cycles is reduced by one – in a manner of speaking, one of those rotations is ‘used up’ to catch up with the motion around the sun; hence there are 365.25 solar days (mean 24 hours) to 366.25 sidereal days within a year.)
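
      A quick worked version of that rotation bookkeeping:

          # 366.25 rotations per year, one 'used up' tracking the orbit,
          # leaving 365.25 solar days; so one true rotation takes:
          sidereal_day = 365.25 * 24.0 / 366.25     # hours
          h = int(sidereal_day)
          m = int((sidereal_day - h) * 60)
          s = ((sidereal_day - h) * 60 - m) * 60
          print(sidereal_day)        # ~23.9345 hours
          print(h, m, round(s))      # 23 h 56 min ~4 s -- the sidereal day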

      4a. “Less sun reaches the ground when it is cloudy.” – during the daytime, yes. Also, less LW radiation from below/within the cloud levels is transmitted across the cloud levels. Higher, colder cloud tops have a stronger greenhouse effect; the SW albedo effect dominates over the greenhouse effect for clouds with low tops, at low latitudes, or nearer the summer solstice and midday.

      4b. if warming increased cloud cover, it could be either a negative or positive feedback depending on the spatial and temporal distribution of the cloud cover; if warming decreased cloud cover, it could likewise be either, depending on the distribution. It isn’t so obvious that warmer conditions will increase areal cloud coverage in the global average so far as I know – with a thicker troposphere, and assuming the same fraction of each level has cloud coverage and the same amount (or lack) of correlation among levels, there could be greater areal extent of clouds just due to a higher troposphere, with a tendency to approach 100 % coverage at infinity – except the troposphere would never actually get infinitely thick. One way to visualize this would be a lifting of all existing cloud coverage following the tropopause level with an insertion of ‘new’ cloud coverage below – note much of that new cloud volume would be covered by higher clouds and would have both less SW and LW effects; however, the lifting of the existing clouds would generally (exceptions with inversions) increase the difference between cloud top temperature and surface temperature, which would tend to increase the cloud greenhouse effect. That is conjecture on my part; all of it could change depending on how clouds are arranged relative to themselves, temperature, humidity, surface albedo, and incident solar radiation. A general tendency for midlatitude storm tracks to shift poleward would by itself tend to reduce the SW albedo effect of those clouds. The net effect of tropical clouds on top-of-atmosphere radiative balance is generally a relatively small residual of competing SW albedo and LW greenhouse effects (Hartmann p.78 map). Don’t forget also that clouds do absorb some SW energy within the troposphere, though that is a smaller effect relative to other cloud effects (Hartmann, p.68).

      5. it is a major source of uncertainty, but that’s not the same as saying we know nothing or that the models are useless. It also leaves open the possibility that modeled climate sensitivity could be either greater or lesser due to cloud uncertainty.

      6. True. (did anyone say otherwise?)

      7. unique within the solar system, perhaps; to say it is different from all known planets in the universe may be incorrect.

      8. yes. (for latent heat of vaporization, this affects the moist adiabatic lapse rate; for specific heat of the liquid and for latent heat of fusion (melting), that affects the time lag for climate response to forcing changes; a longer response time due to heat capacity doesn’t itself say anything about the climate sensitivity.)

      9. Yes.

      10. “The radiative forcing temperature model” – it’s a sort of global-scale, simplified climate parameterization; the numerical values of the coefficients may be inferred from data and/or model output, the latter being based on physics. To the extent that it is a good approximation, it is almost like treating a collection of molecules/atoms/ions/etc. as a macroscopic object (solid or fluid or ___ ). Do you have a problem with modelling a spring as a solid with some elasticity – as opposed to all the quantum mechanical underpinnings of the macroscopic behavior?
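
      For concreteness, a minimal sketch of that parameterization – the 5.35·ln(C/C0) expression is the standard simplified forcing fit for CO2, while the λ value below is purely illustrative (its true value is exactly what the feedback debate is about):

          import numpy as np

          def co2_forcing(c, c0=280.0):
              # standard simplified radiative forcing fit for CO2, W/m^2
              return 5.35 * np.log(c / c0)

          lam = 0.8                  # K per (W/m^2) -- illustrative, not settled
          rf = co2_forcing(560.0)    # doubling CO2 from 280 to 560 ppm
          print(rf)                  # ~3.7 W/m^2
          print(lam * rf)            # ~3.0 K equilibrium response for this lambda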

  121. Chris

    Now that we have laid to rest the putative “net imbalance” figure of 0.9 W/m^2 in the K+T annual energy balance cartoon, I’d like to move on to an unrelated comment you made to me:

    Response– No, you’re not even bothering to read what you are criticizing, much like the “cycles” you are looking for which Tamino and gavin had to discuss over and over.

    I have demonstrated to you that I have read very well indeed what I “am criticizing” (as I am sure you have now been able to observe), so we can lay that part to rest, as well.

    As far as the remainder is concerned, I believe you are referring to the observed multi-decadal warming and cooling cycles apparent in the long-term HadCRUT temperature record.

    I have plotted the long-term record and shown these observed cycles to make our discussion easier.

    Tamino apparently has trouble “seeing” these, even though they are quite apparent to most people.

    Gavin has not denied their existence; he has simply given his personal opinion that they are statistically insignificant.

    IPCC, on the other hand, has recognized and reported them in its most recent AR4 WG1 report, concentrating most of its efforts on the late 20th century warming trend, which it tells us started around 1976. The HadCRUT record shows that this cycle showed a linear increase in observed temperature of around 0.4 degC.

    IPCC mentions briefly, but gives much less attention to, an equivalent multi-decadal period of warming in the early 20th century, where it points out that there is “more uncertainty regarding the causes of early 20th-century warming than recent warming”. The early 20th century warming cycle has also been studied by Delworth and Knutson, who observe, “over the period 1910-1944, the linear trend in observed temperature is 0.53K”.

    IPCC also mentions the multi-decadal mid-century slight cooling trend (in between the two warming cycles).

    So the cycles are known, recognized and mentioned by IPCC and pretty apparent to the naked eye in the plotted long-term record.

    Earlier cooling and warming cycles are also evident in the 19th century and turn of the century, but these are not given any attention by IPCC.

    Do you personally deny, like Tamino, that these cycles have even occurred?

    Or do you simply believe, like Gavin, that they have occurred but are statistically insignificant?

    If so, are you of the opinion that only the overall long-term trend of the entire record is statistically significant (roughly 0.04 degC warming per decade, with an overall warming of 0.65 degC since the record started)?

    Or do you have some other opinion? If so, what is this?

    I think it’s fair to ask you, since you brought up this topic in the first place.

    Thanks in advance for your reply.

    Max

    • “Tamino apparently has trouble “seeing” these, even though they are quite apparent to most people.”

      I think I’ve seen that post and … as I recall, part of the problem may be a confusion over what the term ‘cycle’ actually means.

      Most strictly, a cycle is a repeating pattern (generally over time) with a specific frequency and thus a specific period.

      More colloquially, ‘cycles’ may refer to any fluctuation about some baseline, the latter being either constant or not.

      However, your reference to ‘warming cycles’ – that’s a bit confused. IF there is a cycle, or cycles, or any quasicyclical or episodic or irregular fluctuation with no long term trend – these are not ‘warming’ or ‘cooling’ in total. There may be warming and cooling phases. If the amplitude of the variation is small and the period long and an underlying longer-term trend (of warming) is large enough, the cooling phase of a fluctuation might not appear to have cooling at all, just a reduced rate of warming.

      • Patrick027

        Back to the observed warming/cooling cycles. You seem to dislike the word “cycle”, for some reason or another:

        IF there is a cycle, or cycles, or any quasicyclical or episodic or irregular fluctuation with no long term trend – these are not ‘warming’ or ‘cooling’ in total. There may be warming and cooling phases. If the amplitude of the variation is small and the period long and an underlying longer-term trend (of warming) is large enough, the cooling phase of a fluctuation might not appear to have cooling at all, just a reduced rate of warming.

        The modern HadCRUT record started in 1850. Over the entire period, it shows a linear rate of warming of around 0.04degC per decade, or a total warming of about 0.65degC.

        This warming occurred in several warming and cooling cycles of around 55 to 65 years for a complete cycle.

        There was a late 19th century warming cycle followed by a turn of the century cooling. Let’s ignore these two for now, since the early temperature records may not have been that good.

        The warming cycle starting around 1910 and ending around 1944 has been reported and studied, as I mentioned earlier, as has the ensuing cycle of slight cooling from 1944 to 1976.

        The late 20th century warming started around 1976 (as reported by IPCC). It appears to have ended recently. The jury is still out on whether or not the 21st century cooling “blip” we are seeing will become part of a new longer cooling cycle or not.

        Studies by solar scientists and others point in that direction while most AGW-supporters do not believe we are seeing the beginning of a multi-decadal cooling phase. The Met Office has conceded, however, that we may see a leveling off of warming for the next decade or two due to natural variability.

        But who knows what the future will bring?

        Professor Akasofu has referred to these cycles, as have others, so they are “nothing new”.

        In all cases, the warming (during the warming cycles) exceeded the observed overall linear warming rate of 0.04degC per decade by a factor of 3 to 4, while the cooling cycles showed a slighter rate of cooling. Even the current “blip” only shows a cooling rate of around 0.1degC.

        This would confirm that the overall underlying linear trend will most likely continue to be one of warming.

        My question to you: Why do you have a hard time accepting these periodic oscillations as “cycles”?

        Is there something “bad” about this word?

        Just curious.

        Max

  122. Yes. Lindzen has found net overall negative feedbacks in the tropics, based on net outgoing LW + SW radiation. This is in contradiction to what all models cited by IPCC have simulated. Oops!

    So one must be incorrect. Which one?

    This is also the region where Spencer et al. found strong negative feedback from clouds with warming.

    Did they?

    (although NOAA observations show us that actually measured water vapor content has decreased as temperatures have increased since the late 1940s)

    Where is that from? My understanding is that water vapor has in fact increased with positive radiative feedback.

    and the surface albedo feedback depends on global snow and ice cover, which has not changed substantially.

    Okay, here’s a point where I tend to forget specifics – Obviously the Arctic sea ice has decreased; has the Antarctic sea ice increased …
    —-
    (which can happen even with global warming because of issues regarding the stability of the Antarctic ice sheet (for now) and salinity feedbacks and the upwelling of cold water in a ring around Antarctica that is driven by winds that are expected to increase with global warming due to the upwelling water’s contrast with warming at lower latitudes, etc…)
    —-
    … to the same extent (with the same albedo effect)? I’ve heard that, but as best I can recall the global sea ice has actually decreased. Snow cover … well, it depends; I don’t know the measurements offhand, but it’s certainly a reasonable expectation that time-integrated coverage will shrink with warming.

    —- clouds:
    I’m a little confused:
    this looks interesting and I’ll look at it some more,
    ftp://eos.atmos.washington.edu/pub/breth/CPT/norris_jcl04.pdf
    , but a table near the end gives a net cloud radiative change (TOA, I believe) of 2.5 +/- 1.9 W/m2 (near global, ocean only), which, taking the 2.5 W/m2 value, is just huge. But from p. 25:
    The sum of the low-latitude and midlatitude feedbacks results in a near-zero cloud cover feedback for the global ocean. It is important to note, however, that trends in surface radiation flux are not the same as trends in TOA radiation flux. The decrease in upper-level cloud cover over the low-latitude ocean results in more OLR but less RSW, thus acting to cool the atmosphere and warm the surface. These diabatic tendencies could be balanced by an increase in surface evaporation and latent heating by precipitation.

    —-
    So what is the conclusion?

    But, more importantly, the Lindzen and Choi observations show us that the net overall feedback for ALL these factors with warming is negative.

    ALL these factors? Anyway, it appears it was an unsuccessful attempt
    http://www.realclimate.org/index.php/archives/2010/01/lindzen-and-choi-unraveled/
    http://www.realclimate.org/index.php/archives/2010/01/first-published-response-to-lindzen-and-choi/
    http://www.realclimate.org/index.php/archives/2010/01/lc-grl-comments-on-peer-review-and-peer-reviewed-comments/

    You ask for ROBUST correlations on GCR and clouds.

    By the way, aside from the problem of a lack of recent trend, some solar-climate correlation studies have involved an assumed relationship with a phase alignment that shifts over time – in other words, the simple correlation breaks down. I will add that, of course, it is possible to imagine scenarios in which changes in other factors or nonlinearities modulate the response, so that a very real cause-effect linkage may exist and yet not show a fixed phase relationship through many cycles. However, Occam’s razor – it is one thing when the mechanisms for such complexity have been established, another to assume they must exist; the latter approach leads to concluding that anything and everything explains anything and everything.
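
    A synthetic illustration of that breakdown (made-up series, no real data): a ‘response’ tied to a ‘driver’ but with a slowly wandering phase shows almost no simple correlation, even though the linkage is built in by construction:

        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(1500.0)
        driver = np.sin(2 * np.pi * t / 11.0)      # an '11-year cycle'
        drift = 2 * np.pi * t / 750.0              # slow phase wander
        response = np.sin(2 * np.pi * t / 11.0 + drift) \
                   + 0.1 * rng.standard_normal(t.size)
        print(np.corrcoef(driver, response)[0, 1])  # ~0: correlation gone
        # With drift = 0 the correlation would be ~1.  Allowing the phase to
        # wander lets nearly any two series be 'matched' after the fact.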

    I could ask for the same on CO2 and temperature. They do not exist.

    Are You @#*ing Kidding ME?
    Do the last 100+ years ring a bell?
    The last several hundred thousand years? (ice sheet feedback alone is not sufficient with a Charney sensitivity of only ~ 0.7 K/(W/m2), and counting all non-Charney feedbacks (CO2, CH4, ice sheets, vegetation, maybe aerosols?), a Charney sensitivity in the same range as model output is supported)
    The last 500 million years? (noting that paleodata for CO2 is sparse around the Ordovician time and there is a reasonable expectation of a drawdown associated with (caused by, in fact) the Appalachian orogeny)
    Venus and Mars, etc.? (remembering to correct for gravity, atmospheric mass, composition, distance from sun, albedo)

  123. Patrick 027, thank you for your 1,719 word missive of 12:06 am. Whilst I found it rather heavy going, I agree with most of it, to the extent that if we pick out the bits pertinent to my ask, as quoted next, we can proceed:

    “…There are some useful secondary conclusions that can be drawn from this, [Bob_FJ preamble] for instance, at quantum theory level; what happens with the molecular kinetic energy (HEAT) that is created in that initial absorption?
    Can you elaborate on this [1] and some of the other processes [2]?”

    Firstly, to elaborate on your pertinent comments on my preamble, I don’t think that you would disagree that hot dry deserts are the hottest places on Earth, despite the GHGs being at a minimum and their albedo being roughly double that of (or even greater than) more familiar terrain such as verdant grasslands or forest etc, not to mention the oceans at modest latitudes.

    Concerning [1], you confirm my quantum theory understanding that without other inputs, the roughly 0.04% of the air that can absorb long-wave EMR, (mostly close to the surface), then heats the other 99.96% (the non-absorbing air) via molecular collisions. Thus instead of predominantly instantly re-emitting the energy gained by absorption of photons, those little guys are instead very busy losing their kinetic energy, (HEAT) in collisions.

    However, you did not address [2] which makes things rather more complicated.
    Air temperatures can be over 50C, and sand temperatures around 20C hotter still. Thus it is apparent that the air is also being heated via boundary conduction from the very hot sand. The big difference between surface and air temperature also implies significant convection, but, also rather importantly, the big boundary temperature sink of some 20C implies a major conductive heat transfer. (proportional to T1 – T2)

    One question to ask is; what is the relative importance of HEAT transfer to all of the air as in [1] versus that in [2]?
    Another Q is; how much does re-emission diminish whilst feeding collisional KE transfer?
    And…..
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    BTW Patrick, my text above has a word-count of 331.
    If you reply, could you please keep your response to about the same order of word-count? Please!

  124. Bob_FJ said
    January 11, 2010 @ 9:58 pm

    Sounds all sensible to me. Then heat on the ground makes water evaporate from the surface, esp. soil. The CO2 transfers heat to water vapour in the atmosphere, which warms, rises, cools, condenses, and makes clouds that block the sun.

    I still haven’t seen a credible critique of Nordell and Gervet’s work on thermal pollution that explains the fundamental flaw in their logic – only that, since their answer disagrees with the consensus, it must be wrong.

  125. Patrick 027
    I’ve noticed that you are quite fond of quoting realclimate as a source of wisdom.
    You are not embarrassed by the fact that Mann is somewhat compromised by the CRU emails then? Or that even PSU has announced an investigation into his activities, with calls elsewhere for a truly independent review?
    A search of eastangliaemails.com for the name Schmidt also gives 127 hits, so he too is implicated. (oh and Tamino etc of course)
    You might find this detailed analysis of some important emails an interesting read:
    http://www.assassinationscience.com/climategate/
    Or, if you prefer a perhaps less detailed pictorial flowchart, and you have a large TV monitor, there is:

    The Climategate Timeline: 30 years visualized

    I have not contributed at realclimate for a long time after having a significant number of important posts deleted without explanation, perhaps giving the impression that I didn’t want to continue the debate. Also, critical snips were made in other posts; for instance, I had a reference to Akasofu’s recent paper removed every time; thrice. Must have hit a raw nerve I guess. Not the way to conduct scientific debate, eh?

    That is why I like this site….. Sceptical debate is allowed…. Thanks Chris!

    Response– I post all comments insofar as they are kept civil, but with the disclaimer that some (such as this, and most of the recent posts that haven’t been from Patrick) are often pure non-sense and don’t contribute at all to the “debate”– chris

    • Sorry, but the analysis by “Dr.” Costella is not an interesting read, unless you firmly believe in conspiracies. The first four are already arguing towards the desired outcome (that of there being a conspiracy). I looked at random through a few more, and it’s all the same type of reasoning: the desired outcome is to find something fishy; now, how can I interpret what is written so that it sounds fishy?

      And yes, I know John Costella has a PhD, but anyone who falls for conspiracy theories and so clearly argues towards the outcome doesn’t deserve one. The whole site is one huge orgy of conspiracy theories…

  126. Chris,
    In response to my January 13, 2010 @ 4:29 pm you wrote in a footnote:

    “Response– I post all comments insofar as they are kept civil, but with the disclaimer that some (such as this, and most of the recent posts that haven’t been from Patrick) are often pure non-sense and don’t contribute at all to the “debate”– chris“

    I get the impression that you don’t currently have enough time to participate as well as you might like to here, and that yours was just a quickie response. I’m puzzled as to what parts of my post might be interpreted as pure nonsense. Is there any chance that you could find time to elaborate?
    Also, did you intend to imply that Marco’s various posts, (or could that possibly be Mark, the regular, from RC?), are also pure nonsense? I have no big disagreement with that, but I’m just curious as to your thoughts on his contributions.

  127. Blous 79, nice one from you (January 13, 2010 @ 5:11 pm) concerning Gavin Schmidt. I rather like this quote:

    “…When asked if she thought the Climategate documents were a big deal at first sight, Lucia [PhD, re her blog] responded, “Yes. In fact, I was even more sure after Gavin [Schmidt] sent me his [threatening] note.”…”

    I don’t think Gavin really thought it through before taking such a silly step!

  128. Bob_FJ, you are a lying slime ball. The only person using the term “threatening” is you. It was nowhere else on what you linked to and supposedly quoted.

    Is this all you slimy deniers have, lies and more distortions? You should be ashamed of yourselves, I hope that your family members do not read your contributions on climate science, they would be very disappointed by your behaviour.

  129. Before there was ever Climategate, there was the quiet scientific protest by IPCC scientists.

    This article seems to have slid by most people’s attention, but seems to me to be a rather important statement of science by the people on the inside of the IPCC. Note that while the author is Professor Ann Henderson-Sellers, she is actually reporting a workshop of IPCC lead authors:
    http://environmentalresearchweb.org/cws/article/opinion/35820

    This article by John Christy, Professor of Atmospheric Science, University of Alabama is compulsory reading – every last word.
    http://news.bbc.co.uk/2/hi/science/nature/7081331.stm

  130. Chris

    You commented:

    Response– I post all comments insofar as they are kept civil…

    Ian Forrester just wrote to Bob_FJ:

    Bob_FJ, you are a lying slime ball [etc.]

    Are we redefining “civil”?

    Tell Ian to clean up his act, Chris.

    Max

  131. Ian Forrester, Reur;

    “Bob_FJ, you are a lying slime ball. The only person using the term “threatening” is you. It was nowhere else on what you linked to and supposedly quoted
    Is this all you slimy deniers have, lies and more distortions?…”

    Responding in part to my quote:

    “…When asked if she thought the Climategate documents were a big deal at first sight, Lucia [PhD, re her blog] responded, “Yes. In fact, I was even more sure after Gavin [Schmidt] sent me his [threatening] note.”…”

    In case you are unaware of the convention, may I advise you that where some text in a quotation is bounded thus [Text], it is an indication that it is NOT actually part of the original text, but added by the quoter to clarify context or whatever.

    Here is a copy of Gavin’s Email to Lucia in full; note the date in the Climategate timeline:

    Date: Thu, 19 Nov 2009 15:48:21 -0500
    From: Gavin Schmidt
    To: lucia liljegren
    Subject: a word to the wise
    Lucia, As I am certain you are aware, hacking into private emails is very illegal. If legitimate, your scoop was therefore almost certainly obtained illegally (since how would you get 1000 emails otherwise). I don’t see any link on Jeff-id’s site, and so I’m not sure where mosher got this from, but you and he might end up being questioned as part of any investigation that might end up happening. I don’t think that bloggers are shielded under any press shield laws and so, if I were you, I would not post any content, nor allow anyone else to do so. Just my twopenny’s worth
    Gavin

    Ian, you can play with semantics on the use of my commentary of “threatening” if you wish, and I’d be happy to substitute say: [pressuring] or [ominous], if you prefer. However, the fact is that Lucia reacted in a way that shows that she did not think it was “friendly” and wrote: “…“Yes. In fact, I was even more sure after Gavin sent me his note.”…”

    As more background, Lucia has also written this:

    “As you know from my comment of Part I of Courrielche’s Climategate series, when I received that email, I was convinced they were important. Reasons:
    1) Gavin, who rarely emails me, bothered to send the email.
    2) The headers says “Received: from sphinx.giss.nasa.gov (sphinx.giss.nasa.gov [169.154.204.2])
    by cubmail.cc.columbia.edu (Horde MIME library) with HTTP; Thu, 19 Nov 2009
    15:48:21 -0500″. So, I knew it was really from Gavin and his spidey-sense was sufficiently activated to motivate him to email promptly and from his NASA office.
    3) I’d looked at enough emails to know the contents sure looked legitimate.”

    I don’t think that you would disagree that hot dry deserts are the hottest places on Earth, despite the GHGs being at a minimum and their albedo being roughly double that of (or even greater than) more familiar terrain such as verdant grasslands or forest etc, not to mention the oceans at modest latitudes.

    During the daytime, so far as I know offhand, yes, they are the hottest places on Earth (of large-scale regions at the surface as opposed to volcanic craters, lightning, etc.). In terms of the daily average, maybe still true for parts, would have to check a map…

    Note that under a dry airmass with dry air at the surface in particular, with large-scale subsidence within the troposphere, there is little chance of forcing an updraft to the point of condensation and to the point that it can sustain itself even if the lapse rate is significantly more than the moist adiabatic lapse rate. Thus,

    1. a local radiative-‘convective’ equilibrium could allow significantly warmer surface and lower-troposphere temperatures with a colder upper troposphere for the same optical properties. (PS if the deserts were warmer than surrounding regions for a great thickness of the troposphere, you might reasonably expect that, although with modification from the coriolis effect, the deserts would become areas of large-scale ascent, convergence near the surface (which tends to reduce the near-surface lapse rate and make initiation of localized cumulus convection easier), and a tendency for moist convection with precipitation – ie they wouldn’t be dry.)

    2. there is not such a local equilibrium even in the daily average in general; heat is being transported horizontally in and out of regions. The subtropical dry regions are generally being heated by outflow from updrafts in cloudier regions with more precipitation (which are in turn being cooled (accounting for latent heat loss at the surface) by inflow from other places).

  133. Ian Forrester

    manacker and Bob_FJ are a couple of tag team deniers who pollute many decent blogs with their nonsense. Some times the tag team partner is “Black Wallaby.”

    These creatures have spoiled many blogs. An example is Andrew Dessler’s blog which he discontinued in part because of these (one) idiot(s).

    They should be banned from any decent discussion on climate science till they become honest and informed.

    Response– I don’t think they are actually fooling anyone here 🙂 — chris

  134. Patrick027

    You wrote:

    (although NOAA observations show us that actually measured water vapor content has decreased as temperatures have increased since the late 1940s)
    Where is that from? My understanding is that water vapor has in fact increased with positive radiative feedback.

    NOAA data on specific humidity trend since 1948

    Annual Average (graph)
    http://www.cdc.noaa.gov/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Specific+Humidity+(up+to+300mb+only)&level=300&lat1=90&lat2=-90&lon1=180&lon2=-180&iseas=1&mon1=0&mon2=11&iarea=1&typeout=2&Submit=Create+Timeseries

    Annual Average (raw data)
    http://www.cdc.noaa.gov/cgi-bin/data/timeseries/timeseries.pl?ntype=1&var=Specific+Humidity+(up+to+300mb+only)&level=300&lat1=90&lat2=-90&lon1=180&lon2=-180&iseas=1&mon1=0&mon2=11&iarea=1&typeout=1&Submit=Create+Timeseries

    Global Temperature versus Atmospheric Water Vapor Content (graph)

    and the surface albedo feedback depends on global snow and ice cover, which has not changed substantially.
    Okay, here’s a point where I tend to forget specifics – Obviously the Arctic sea ice has decreased; has the Antarctic sea ice increased …

    Yes, albeit at not quite the same rate as the decline in Arctic sea ice, so the global total has decreased slightly. Depending on the month, the most recent global sea ice extent was between –3.5% and +0.4% off its long-term value since the record started, with an average decline of around 1.5%.
    ftp://sidads.colorado.edu/DATASETS/NOAA/G02135

    The global sea ice extent varies annually between 20 and 27 million square km (msk), with an annual average of around 24 msk.

    The other factor influencing surface albedo is the Northern Hemisphere snow cover as reported by Rutgers University.
    http://climate.rutgers.edu/snowcover/table_rankings.php?ui_set=1

    This varies monthly between around 3 and 47 msk, with an annual average of around 25 msk. Latest figures (2009) show that this has decreased by a total average of 1.5%, since the record started.

    So we have a total net loss over the entire measurement period averaging 1.5% of 49 msk or 0.73 msk.

    So you can see that the minor global ice and snow changes we have seen since the record started have a negligible impact on our planet’s surface albedo (which was my point).

    Max

  135. Patrick027

    As far as the correlation between atmospheric CO2 and “global temperature” goes, let’s first stick to the “last 100 years”.

    The observed multi-decadal warming and cooling cycles do not correspond ROBUSTLY with the rather steady increase in CO2. The early 20th century warming occurred when there was hardly any increase in CO2, while the mid-century cooling occurred when there was an accelerated increase in CO2 during the post WWII boom. Only the most recent late 20th century warming correlates ROBUSTLY.

    I’d say that one out of three is not ROBUST.

    As far as the long range (proxy) data go, the curve used by Al Gore to show this ROBUST correlation actually shows that atmospheric CO2 lags temperature by several centuries. It then begins to cool when CO2 is very high and begins to warm when CO2 is much lower. Also not ROBUST.

    Sorry, Patrick.

    You’d be best off to concentrate your discussion on the late 20th century warming (as IPCC has done), where there really is an apparent ROBUST correlation.

    Max

  136. PS if the deserts were warmer than surrounding regions for a great thickness of the troposphere, you might reasonably expect that, although with modification from the coriolis effect, the deserts would become areas of large-scale ascent,

    But it is a bit more complicated than that – on the poleward side of the subtropics, hot air sinks and cold air rises because kinetic energy is being supplied to do work on the air; this kinetic energy comes from warm air rising and cold air sinking within baroclinic waves. Actually, this energy cycling – offhand I think it’s called the Lorenz energy cycle – is seen from a specific point of view: the zonal average (mean) and variations from that (eddies). Differential heating/cooling in the mean produces available potential energy (APE), which, in the absence of the coriolis effect, would simply be converted to kinetic energy (KE in this comment) and then converted back to APE upon overshooting (like a spring bouncing back and forth about its equilibrium position), and so on. If in a confined cell this would be a standing wave, but ‘in the open’ the process at one location would radiate energy as waves of kinetic energy and APE; either way, the fate of the energy is …

    You know what, that’s more than I have time to get into right now.

  137. But as long as I brought it up, the Lorenz energy cycle looks like this:

    zonal average diabatic heating/cooling distribution -> APE of the zonal mean

    APE of the zonal mean -> APE of the eddies (via rearrangement caused by KE of the eddies)

    eddy-correlated diabatic heating/cooling -> APE of the eddies

    APE of the eddies -> KE of the eddies

    KE of the eddies -> viscosity

    KE of the eddies -> KE of the zonal mean

    KE of the zonal mean -> viscosity

    KE of the zonal mean -> APE of the zonal mean (thermally indirect; the Ferrel Cell)

    frictional heating -> not much APE there

    Note it’s not a closed loop – ultimately there is input of lower-entropy energy via horizontal + vertical heating/cooling distributions, and an output of higher entropy energy when KE is converted to heat by viscosity/mixing, most of which does not go back into producing APE.
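
    To make the bookkeeping explicit, here is the same cycle as a flow table with purely hypothetical magnitudes (chosen only so the books balance; the structure is the point, not the numbers): in a steady state each reservoir’s inflows match its outflows, while the chain as a whole takes in low-entropy heating and ends in frictional dissipation:

        # Purely hypothetical flow magnitudes (W/m^2):
        flows = {
            ("heating",  "APE_mean"): 2.0,   # zonal-mean diabatic generation
            ("APE_mean", "APE_eddy"): 2.3,   # eddies stir down the mean gradient
            ("heating",  "APE_eddy"): 0.2,   # eddy-correlated heating/cooling
            ("APE_eddy", "KE_eddy"):  2.5,   # warm air up, cold air down
            ("KE_eddy",  "KE_mean"):  0.7,   # eddies feed the mean jets
            ("KE_mean",  "APE_mean"): 0.3,   # Ferrel cell: thermally indirect
            ("KE_eddy",  "friction"): 1.8,   # dissipation
            ("KE_mean",  "friction"): 0.4,   # dissipation
        }
        for r in ["APE_mean", "APE_eddy", "KE_eddy", "KE_mean"]:
            inflow  = sum(v for (src, dst), v in flows.items() if dst == r)
            outflow = sum(v for (src, dst), v in flows.items() if src == r)
            print(r, round(inflow - outflow, 3))   # all 0.0 in steady state
        # Not a closed loop: 2.2 W/m^2 enters from differential heating and
        # 2.2 W/m^2 leaves through friction.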

  138. (Lucia) “Gavin and his spidey-sense ” …

    Wow! Does this mean that Rachel Carson is Batman? I shouldn’t be surprised, what with all these Jokers running around…

  139. BobFJ – Concerning [1], you confirm my quantum theory understanding that without other inputs, the roughly 0.04% of the air that can absorb long-wave EMR, (mostly close to the surface), then heats the other 99.96% (the non-absorbing air) via molecular collisions. Thus instead of predominantly instantly re-emitting the energy gained by absorption of photons, those little guys are instead very busy losing their kinetic energy, (HEAT) in collisions.

    Yes, and note also that these ‘little guys’ gain energy in collisions, which they can emit as photons, thus cooling the air. (PS this could still be true if all the molecules in the air were absorbing and emitting photons; the point is that a relatively higher collisional frequency allows thermalization of the energy so that emission occurs as a function of temperature.)

    However, you did not address [2] which makes things rather more complicated.
    Air temperatures can be over 50C, and sand temperatures around 20C hotter still. Thus it is apparent that the air is also being heated via boundary conduction from the very hot sand. The big difference between surface and air temperature also implies significant convection, but, also rather importantly, the big boundary temperature sink of some 20C implies a major conductive heat transfer. (proportional to T1 – T2)

    Most convection from the surface goes through an initial step of conduction/molecular diffusion (including diffusion of water vapor) – this is important within about 1 mm of the surface. The convective fluxes shown in Kiehl et al. and similar diagrams include this initial step (it’s implied).

    Some radiation from the surface may be absorbed in the lowest part of the atmosphere and then convected higher, but it is easier to treat radiation as separate: for a given temperature distribution, clouds, etc., the radiative heating and cooling rates are distributed in various ways, and then the convective heating and cooling rates are distributed in various ways, and so on.

    One question to ask is: what is the relative importance of HEAT transfer to all of the air as in [1] versus that in [2]?

    Remember that for the same near-surface lapse rate and/or temperature difference between the surface and the air just above, a wet surface, with some dependence on the relative humidity of the air, will convectively (including the conduction and diffusion steps) lose heat to the air faster than a dry surface. This is because the sensible heat flux depends on the temperature difference over distance (and on turbulent motions) and will tend to be the same whether a surface is wet or dry, while the latent heat loss depends on other factors.

    Interestingly:

    Cold dry air over a warm wet surface: both latent and sensible heat fluxes from the surface.

    Warm humid air over a cold dry surface: both fluxes from air to the surface.

    Hot dry air over a warm wet surface: the latent flux may be from the surface while the sensible heat flux can be down to the surface.

    However

    warm humid air over a cold wet surface: both fluxes could be down to the surface (warm air is cooled and humidity may condense; some of the latent heat might be released at the surface).

    cold humid air over a warm dry surface: the sensible heat flux is from the surface, and the latent heat flux – well, it depends; assuming relative humidity is not more than 100% in the cold air, the warm surface could, if porous, simply absorb some water vapor if the relative humidity is low enough that the water vapor mixing ratio is lower than in the air above, but there wouldn’t be any condensation at the surface.

    Because the water vapor mixing ratio at 100 % relative humidity increases roughly exponentially with temperature, the same relative humidity and surface wetness and same temperature difference and wind will tend to allow more effective evaporative cooling of the surface at higher temperatures. Sensible heat dominates the convective heat flux in cold regions.
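
    These cases can be made concrete with the standard bulk aerodynamic formulas for the sensible and latent fluxes. A minimal sketch – the transfer coefficient, wind speed, and temperatures below are illustrative round values, not taken from any dataset:

```python
import math

# Bulk aerodynamic sketch of sensible (SH) and latent (LH) heat fluxes.
# Coefficient and input values are illustrative, textbook-magnitude numbers.
rho, cp, Lv = 1.2, 1004.0, 2.5e6   # air density (kg/m3), heat capacity, latent heat (J/kg)
C, U = 1.3e-3, 5.0                 # bulk transfer coefficient, wind speed (m/s)

def q_sat(T_c, p=1013.25):
    """Saturation mixing ratio (kg/kg) from the Tetens formula; T_c in deg C."""
    e_s = 6.112 * math.exp(17.67 * T_c / (T_c + 243.5))  # saturation vapor pressure, hPa
    return 0.622 * e_s / (p - e_s)

def fluxes(Ts, Ta, RH, wet=1.0):
    """Fluxes in W/m2, positive upward (surface to air); wet=0 for a dry surface."""
    SH = rho * cp * C * U * (Ts - Ta)
    LH = wet * rho * Lv * C * U * (q_sat(Ts) - RH * q_sat(Ta))
    return round(SH), round(LH)

print(fluxes(Ts=30, Ta=25, RH=0.5))           # wet surface: latent flux dominates
print(fluxes(Ts=30, Ta=25, RH=0.5, wet=0.0))  # dry surface: sensible only
print(fluxes(Ts=28, Ta=35, RH=0.2))           # hot dry air over warm wet surface:
                                              # SH downward, LH still upward
```

    The roughly exponential growth of q_sat with temperature is what makes evaporative cooling so much more effective at higher temperatures, as noted above.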

    Another Q is: how much does re-emission diminish whilst feeding collisional KE transfer?
    And…..

  140. I left an undifferentiated quote at the end of my last comment:
    Another Q is: how much does re-emission diminish whilst feeding collisional KE transfer?
    And…..

    For a sufficient rate of thermalization of the energy, emission of photons is a function of temperature and optical properties. Emission from excited states occurs as excited states are found in a collection of substance at some nonzero temperature; if thermalization is rapid enough, most of those excited states are not the direct result of photon absorption, hence ‘re-emission’, in that sense, would be relatively infrequent.
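
    A rough way to see the magnitude: collisions keep the populations of excited states near their Boltzmann values, so the thermally excited fraction can be estimated directly from exp(-E/kT). A minimal sketch, using the CO2 bending mode near 667 cm^-1 (the constants are standard; degeneracy is ignored, so this is order-of-magnitude only):

```python
import math

# Thermally excited fraction of a vibrational mode from the Boltzmann factor
# exp(-E/kT), using the second radiation constant c2 = h*c/k = 1.4388 cm*K,
# so that E/kT = c2 * wavenumber / T.  Degeneracy ignored (order of magnitude).
c2 = 1.4388   # cm*K
nu = 667.0    # cm^-1, CO2 bending mode

for T in (220.0, 288.0):
    frac = math.exp(-c2 * nu / T)
    print(f"T = {T:.0f} K: excited fraction ~ {frac:.3f}")
```

    At surface temperatures a few percent of the CO2 molecules are in the excited bend state at any instant, maintained overwhelmingly by collisions rather than by prior photon absorption – which is why emission depends on temperature, not on the history of absorbed photons.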

  141. manacker – The observed multi-decadal warming and cooling cycles do not correspond ROBUSTLY with the rather steady increase in CO2. The early 20th century warming occurred when there was hardly any increase in CO2, while the mid-century cooling occurred when there was an accelerated increase in CO2 during the post WWII boom. Only the most recent late 20th century warming correlates ROBUSTLY.

    That’s not how to identify a correlation. Think about it.
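
    One way to see the problem with that reasoning numerically: superposing a multi-decadal oscillation on a steady trend does not destroy the correlation with a steadily rising forcing, even though sub-periods can show weak or negative correlation. A toy calculation – the series below are synthetic, not fitted to any real data:

```python
import math

# Toy series: a steady temperature rise plus a 60-year oscillation, correlated
# against a steadily rising "CO2". All numbers are synthetic and illustrative.
def corr(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

years = list(range(1900, 2001))
co2 = [300 + 0.8 * (y - 1900) for y in years]
temp = [0.006 * (y - 1900) + 0.1 * math.sin(2 * math.pi * (y - 1900) / 60)
        for y in years]

print("full century :", round(corr(co2, temp), 2))                 # strong
print("1915 to 1945 :", round(corr(co2[15:46], temp[15:46]), 2))   # weak/negative
```

    A sub-period dominated by the oscillation says little about the century-scale relationship.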

  142. As far as the long range (proxy) data go,

    You’ve been around enough that I’d expect you to know the hole in your argument there. Plus, I hinted at the logic in my comment about it.

    • Patrick 027

      You still seem to believe that the Vostok long-term temperature/CO2 reconstructions show the GH impact from swings in CO2 concentration, despite the fact that the temperature swings preceded the CO2 swings by several centuries, i.e. “effect” preceding “cause”.

      For the calculated theoretical GH impact of CO2 as compared to the reconstructed temperature, see:

      This shows, even at a high estimate of 3K for 2xCO2 climate sensitivity, the CO2 impact is very small compared to the actual temperature swings, which occurred prior to the changes in CO2, in any case.

      At a more reasonable climate sensitivity of 1K, the theoretical CO2 contribution is hard to find at all.

      Sorry, Patrick, the case for causation is just not there in the observed (reconstructed) data.

      And that was the point I made earlier.

      Max

  143. Marco, catching up; concerning your inserted response in my January 13, 2010 @ 4:29 pm , you wrote in full:

    Sorry, but the analysis by “Dr.” Costella is not an interesting read, unless you firmly believe in conspiracies. The first four are already arguing towards the desired outcome (that of there being a conspiracy). I looked at random through a few more, and it’s all the same type of reasoning: the desired outcome is to find something fishy; now, how can I interpret what is written so that it sounds fishy?
    And yes, I know John Costella has a PhD, but anyone who falls for conspiracy theories and so clearly argues towards the outcome doesn’t deserve one. The whole site is one huge orgy of conspiracy theories…

    It seems to me that you had decided beforehand that you would not seriously read that analysis, or the alternative one that I offered to you. Certainly, you have admitted that you skimmed through Costella‘s. What you should do, if you were unbiased, is read it, (and/or the other timeline chart), through in full, including any links. Incidentally, it is not only about your term “conspiracy”, but is multi-faceted.

    For instance, let me refer you to my post way above:

    An update to Kiehl and Trenberth 1997


    The subject of this here thread, the K & T Earth’s Energy Budget diagram, was published in IPCC reports of 2001 & 2007, (WG1; 3AR & AR4) without any hesitations, and yet the two referenced recent Emails from Tom Wigley and from Kevin Trenberth stated boldly things such as:

    We are not close to balancing the energy budget. The fact that we can not account for what is happening in the climate system makes any consideration of geoengineering quite hopeless as we will never be able to tell if it is successful or not! It is a travesty!

    Do you seriously suggest that there is anything ambiguous in this simple example; but one of many?

    • Bob_FJ:
      It is obvious you are willing to trust Costella *solely* on the basis that his analysis fits your desired outcome. A true skeptic would not trust someone who aligns himself with a host of conspiracy theories.

      Of course, Trenberth has on multiple occasions explained his remark, see his latest here:
      http://www.dailycamera.com/ci_14167354?source=most_viewed
      I am very curious to see if you dare admit to making a mistake, but I’m not holding my breath…

    • Oh, and let me add that your Trenberth quote is woefully inaccurate.

  144. Ian Forrester, I notice that you have not responded to the correcting facts and logic in my January 14, 2010 @ 4:48 pm , but instead, true to form, you have embarked on a new tack of attempted character assassination. It does not reflect well on you in the eyes of any rational readers here, because it is not part of any sensible discussion.

  145. Ian Forrester wrote:

    They should be banned from any decent discussion on climate science till they become honest and informed.

    Heil Hitler!

    • I suppose that in your idea of a democracy, the value of x in 2 + 2 = x is decided by polling? The adherents of a religion have a right to pull their kids out of math class. Well…

  146. Doesn’t anybody have any comments on what the IPCC scientists said – refer (Blous79 said
    January 14, 2010 @ 2:45 pm )

    • What is your issue with Ann Henderson-Sellars? There is NOTHING in her essay which indicates any major uncertainty of the IPCC lead authors regarding AGW.

      John Christy is a different story. He’s got his impressions, likely based on his extreme minority position. That is, he can’t fit his own data in the bigger picture, thus considers all the others wrong, and doesn’t like being called out on his position. It’s completely laughable that he attacks models, considering his own UAH MSU dataset is…based on modeling. It’s even more laughable that he quotes his physics teacher. Why? Because at our present level of ignorance we have every indication that temperatures will increase MUCH more than Christy thinks. Taking our current knowledge, even with uncertainty we have every reason to take action.

      Note that if there is even a 1% chance that a dam breaks, you WILL see immediate action. In climate science we’ve passed the 50% chance level long ago. But the Christys of this world suggest waiting until we know…how certain? 99%? 99.9%?

      • Marco said
        January 15, 2010 @ 1:29 pm
        “What is your issue with Ann Henderson-Sellars? There is NOTHING in her essay which indicates any major uncertainty of the IPCC lead authors regarding AGW.”

        Nothing here, just move on…..

        Henderson-Sellars:
        “I believe it is essential for the climate change research community to be transparent and honest about what it can and cannot deliver and how, if ever, current inadequacies can be resolved.”

        Workshop IPCC participants:
        “Serious inadequacies in climate change prediction that are of real concern
        • The rush to emphasize regional climate does not have a scientifically sound basis.
        • Prioritize the models so that weaker ones do not confuse/dilute the signals.
        • Until and unless major oscillations in the Earth System (El Nino-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO) and Atlantic Multidecadal Oscillation (AMO) etc.) can be predicted to the extent that they are predictable, regional climate is not a well defined problem. It may never be. If that is the case then we should say so. It is not just the forecast but the confidence and uncertainty that are just as much a key.
        • Climate models need to be exercised for weather prediction; there are necessary but not sufficient things that can best be tested in this framework, which is just beginning to be exploited.
        • Energy budget is really worrisome; we should have had 20 years of ERBE [Earth Radiation Budget Experiment] type data by now- this would have told us about cloud feedback and climate sensitivity. I’m worried that we’ll never have a reliable long-term measurement. This combined with accurate ocean heat uptake data would really help constrain the big-picture climate change outcome, and then we can work on the details.
        • [Analyse] the response of models to a single transient 20th century forcing construction. The factors leading to the spread in the responses of models over the 20th century can then be better ascertained, with forcing separated out thus from the mix of the uncertainty factors. The Fourth Assessment Report missed doing this owing essentially to the timelines that were arranged.
        • Adding complexity to models, when some basic elements are not working right (e.g. the hydrological cycle) is not sound science. A hierarchy of models can help in this regard.”

        All of that is “science-speak” for: there are lots of problems and our degree of certainty is clouded.

      • The science speak is all referring to regional and short-term details!

        We have loads of uncertainties about the exact increases in temperature during interglacials, but we know with 100% certainty that the temperatures increased dramatically. We have problems getting the exact timeline for the climate changes on a global-regional basis (that is, we know where it started first, but cannot know exactly whether e.g. the Arctic started to show significant warming 500 or 1000 years later). I could make long lists in many fields of science where certain details are missing. Regardless, we know the bigger picture for certain. We have no idea where gravity really comes from, but we know it’s there, and for just about all practical purposes we know what to expect. Until you go to the sub-atomic level, where we suddenly have some more problems getting gravity to work properly. People like you would see that as “uncertainty” about gravity, so let’s forget about it and no longer send any rockets into space. After all, there is uncertainty about gravity…

  147. Ian Forrester

    Bob_F and manacker, you are wasting everyone’s time here. You are so full of nonsense and misinformation.

    Just letting any newcomers know about your deceitful tactics. How many blogs still allow you to post your nonsense? It seems to me you are banned from quite a number.

  148. Chris

    To Ian’s rather emotional outburst on Bob_FJ and myself you commented:

    Response– I don’t think they are actually fooling anyone here — chris

    Correct. Nor are we trying to, Chris.

    Max

  149. Blous79

    I read the attachments you cited and found them both interesting. The write-up on “what the IPCC authors really think” was a bit of an eye-opener, showing a side one rarely sees in what I would call the myopic “AGW for Dummies” world of the IPCC.

    Some author comments that caught my eye (as a rational skeptic of the premise that AGW is a serious threat):

    Until and unless major oscillations in the Earth System (El Nino-Southern Oscillation (ENSO), Pacific Decadal Oscillation (PDO), North Atlantic Oscillation (NAO) and Atlantic Multidecadal Oscillation (AMO) etc.) can be predicted to the extent that they are predictable, regional climate is not a well defined problem. It may never be. If that is the case then we should say so. It is not just the forecast but the confidence and uncertainty that are just as much a key.

    The Fourth Assessment Report is rather weak at including the latest research and thereby is losing credibility in the science community. During the whole process it loses actuality [timeliness].

    WGII is easily the weakest of the three reports. The reasons seem to be two-fold: (i) poor downscaling and (ii) the lack of a coherent methodology for impact study.

    Progress requires more attention to addressing basic model flaws. Without alleviating these, future IPCC assessments will look very similar each time. What a waste of resources…climate science will get what it deserves if it does not apply itself more to basics rather than what it is doing currently.

    There are many more constructive suggestions how IPCC could improve its processes and results, but the above are just a few.

    Thanks for an interesting link.

    Max

  150. Patrick027

    Sorry, Patrick, you are waffling.

    The CO2/temperature correlation is not apparent in a good part of the 20th century record.

    The long-term (proxy) record also did not show a CO2 causation for temperature change, but rather the opposite, with a delay of several centuries.

    You have addressed neither point.

    Max

    • “The CO2/temperature correlation is not apparent in a good part of the 20th century record.”

      – You are missing the forest for the trees. See my response about cycles.

      ” The long-term (proxy) record also did not show a CO2 causation for temperature change, but rather the opposite”

      Combining the record with known physics and logic, what is shown is that there is a biogeochemical feedback to orbitally-forced glaciations and deglaciations. If only the other factors were responsible for the temperature difference, the climate sensitivity would have to be higher than thought, and it would be very odd if albedo changes, or solar heating, could cause global climate change and yet CO2 can’t do much. Very, VERY odd.

      • Sorry Patrick. You’re still waffling, without providing any evidence.

        The recent 150-year temperature/CO2 record shows poor correlation due to the multi-decadal warming/cooling cycles that do not correlate with the steady increase in CO2 and the longer-term record shows clearly that CO2 lags temperature by several centuries.

        Max

      • Max, the longer-term record shows that the START of the warming comes before CO2 increases. Gee, this isn’t anything new. In fact, it was predicted (by Jim Hansen, amongst others) before it was observed. However, for over 5000 years the two (CO2 and temperature) rise in unison.

        You simply can NOT explain the total increase in temperature during the interglacials (or the PETM) without including greenhouse gases (CO2 and CH4 in particular). You can’t explain the temperature of earth during its early phase (first 2 billion years) without including CO2.

    • “You have addressed neither point.”

      Why should I? Good information on those points is readily available.

      • Good information?

        Please specify.

        Your responses are usually quite long (several hundred words). Why so short this time?

        Max

      • Why so short? Because, while I enjoy explaining things, I don’t want to get sucked into wasting my time.

        In the interest of explanations, I will just add for clarification:

        To compare Charney sensitivity of past climatic changes to other changes, it is necessary to include the feedbacks which are not included in Charney sensitivity as forcings.

        Feedbacks in Charney sensitivity include fast-response feedbacks to forcings from water vapor, clouds, and I believe snow cover and sea ice (?), but NOT from biogeochemical feedbacks (CO2 and CH4) NOR from ice sheets, NOR from ecological succession (or biological evolution), so far as I know. From the comparison of glacial-interglacial forcings I’ve seen (which listed aerosols as a forcing), I’d assume that aerosols are not included in Charney sensitivity. Question: short-term vegetation albedo changes, such as yellowing of grass in a dry spell or, for deciduous trees, shorter periods without leaves – counted as feedback or not in Charney sensitivity? Also, forest fires…?

        Of course, ice sheets and CO2 and some other things are really feedbacks on the apparently orbitally-forced climate changes. But how do you determine climate sensitivity to CO2 changes if you include the CO2 as a feedback? Hence the perspective-dependent classification. This doesn’t mean that scientists don’t agree on the distinctions, it just means that for different purposes, different classifications make sense. For example, a modeler might wonder what the effect would be of removing water vapor – and in a modelling experiment, water vapor change could become a forcing (this would either require removal of surface water or a ‘silencing’ of some of the physics, and if the goal is just to remove the radiative effect of water vapor, then it is the physics that must be changed). Also, such studies as the Kiehl and Trenberth 1997 paper, which look not at changes in radiative forcings but at a particular climatic state, tend to describe the radiative forcing of water vapor, clouds, and CO2, etc, even though one couldn’t actually remove water vapor from the atmosphere and, given the heat capacity of the climate system, expect any persistent change, as the temperature change would be slow and the water vapor level would bounce back quickly (even without the heat capacity, such that temperature went to equilibrium for smaller water vapor, the water vapor would still be in disequilibrium and bounce back, etc.).

        So when the ice sheet albedo, CO2, and some other ‘forcings’ are weighed against the global average temperature change between glacial and interglacial conditions (the specific comparison I’ve seen is from at or near the peak of the last ice age to preindustrial interglacial times, as I recall), the inferred climate sensitivity is in the same range as model results (noting that the time frame being considered for model results doesn’t extend through the disintegration of ice sheets. Does it include possible poleward expansion of boreal forests (CO2 sink, but albedo positive feedback)? Changes in DMS emission from plankton? Negative surface albedo feedback from drying regions? I don’t actually know, but I’m pretty sure they don’t include CH4 feedback emissions, etc.)

        Now, it is true that 1. not all types of forcings will produce the same climate sensitivity, and that 2. the climate sensitivity is not necessarily the same over all climates.

        1. – one way of quantifying this is to assign an efficacy of forcings relative to some standard (CO2 is generally used as the standard – though I think it might be interesting to use a hypothetical partially-opaque non-reflecting surface at the tropopause level as the standard), so that climate response (change in global average surface temperature) = climate sensitivity to CO2 radiative forcing * forcing * efficacy of forcing relative to CO2.

        The efficacies of greenhouse and solar forcing should be more similar when the forcing is tropopause level forcing with equilibrated stratosphere. It’s possible solar forcing effects on ozone and upper atmospheric circulation might change things, but I don’t know of any strong argument that the efficacy of solar forcing should be several times larger than CO2 (which is what would be required for solar to come even close to dominating the overall 20th century changes).

        The efficacy of dark aerosols emitted near snow/ice boundaries can be larger because the direct heating effect can be concentrated near a positive feedback mechanism.

        Related to that, the ice sheet ‘forcing’ efficacy might be larger because of the proximity to snow cover and sea ice (?).

        2. – Climate sensitivity probably gets very large for sufficiently cold conditions (beyond typical ice ages) when the sea ice boundary approaches the subtropics.

        In so far as the interglacials with ice sheets as forcings, the snow cover and sea ice would tend to have larger albedo effects per unit area due to being at lower latitudes – However, that may be the more important snow albedo feedback, since the greater equatorward extent is at least partly canceled by areal displacement by the ice sheets themselves.

        It is helpful to study the PETM and other climate changes in the past, both warmer and colder than now…
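
        The bookkeeping in point 1, and the glacial-interglacial inference above, can be sketched in a few lines. All of the forcing values, efficacies, and the assumed cooling below are illustrative round numbers in the general range discussed in the literature, not the specific comparison cited:

```python
# Sketch of: climate response = sensitivity-to-CO2-forcing * forcing * efficacy.
# Every number below is an illustrative round value, not from any cited study.
S_co2 = 0.8   # K per (W/m2), i.e. ~3 K per 3.7 W/m2 CO2-doubling forcing

# Glacial-minus-interglacial 'forcings' (W/m2) and assumed efficacies rel. to CO2
forcings = {
    "ice sheets + surface albedo": (-3.2, 1.1),  # efficacy maybe >1 near snow/ice
    "greenhouse gases (CO2, CH4)": (-2.8, 1.0),
    "dust/aerosols":               (-1.0, 1.0),
}

dT = sum(S_co2 * F * eff for F, eff in forcings.values())
print(f"implied glacial cooling ~ {dT:.1f} K")

# Or invert: infer sensitivity from an assumed reconstructed cooling of ~5 K
S = -5.0 / sum(F * eff for F, eff in forcings.values())
print(f"inferred sensitivity ~ {3.7 * S:.1f} K per CO2 doubling")
```

        The inversion is the sense in which the ice-age record constrains sensitivity: given the feedbacks reclassified as forcings, the reconstructed cooling implies a sensitivity in the model range.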

      • 2.
        “since the greater equatorward extent is at least partly canceled by areal displacement by the ice sheets themselves.”

        Actually, no, not if the forcing by ice sheets is the difference between those conditions and just seasonal snow cover along with whatever else is present instead of ice sheets.

        Greater latitudinal shifts in the snow/ice boundary could be expected per unit global average temperature change when that boundary is retracted poleward because the larger temperature gradients are found in midlatitudes (this wouldn’t be true when snow/ice extends through the midlatitudes, with some adjustment for season). Of course, area per unit latitude change is smaller at higher latitudes, and the distribution of the temperature gradient can/will change, but also, the surface temperature gradient over some latitudes decreases with global warming…

        1.
        Orbital radiative forcing was not included for the comparison. Why? Because the global annual average radiative forcing is generally quite small. In terms of incident solar radiation TOA, the only change is from eccentricity, and that’s a minor amount. The major effect of orbital forcing is through spatial and temporal rearrangement of solar heating; there could be some global average forcing due to shifting more or less solar energy onto higher or lower albedo regions, but the climate response could potentially, due to the larger regional effects, be of the opposite direction – in other words, the efficacy of orbital forcing as measured by global annual average can be very large and possibly negative. Although this might be different if only Charney sensitivity is considered (?).

  151. Note that the IPCC scientists comments included that “climate models should be exercised on weather prediction”. This is cited as irrelevant by less-scientific commentators.

    The fact that clouds have an important influence on both weather and climate is known and is plain to see for anyone living on earth. We don’t know enough about clouds to predict their response to CO2 and aerosols.

  152. Note another key point from the IPCC workshop:
    “Climate change research topics identified for immediate action
    […]
    • Reducing climate sensitivity.”

    Climate sensitivity is the single key model parameter which is critically used for prediction. High sensitivity implies less stable temperature; lower climate sensitivity implies more stable temperature. The IPCC scientists think the IPCC’s assumptions on climate sensitivity are too high – ergo the predictions of temperature rise are too extreme.

    • Ian Forrester

      Blouis79, what the IPCC authors are referring to is reducing the uncertainty of climate sensitivity, not reducing actual climate sensitivity.

      If you actually read what is going on in the world of climate science rather than cutting and pasting of nonsense from denier sites you might have actually understood what you were reading in that article which you cited.

      • Reducing uncertainty would be nice too, but the words “reducing climate sensitivity” are quite plain and clear in meaning to any student of the English language.

      • Blouis79:
        Any student of the English language would actually be taught that words and expressions may have different meanings, depending on the person using it. Culture being one major factor. Say “theory” to a scientist or a layman, and the two will have different views on the meaning of the word.

        When climate scientists really want to “reduce climate sensitivity”, you would see that reflected in the literature. However, what you see in the literature are attempts to reduce the UNCERTAINTY in climate sensitivity. The literature shows you wrong. Learn from it.

    • Despite what Ian Forrester hypothesizes, it is clear that the scientists have stated that the IPCC estimates for 2xCO2 climate sensitivity (based on model simulations) are too high.

      Recent studies by Spencer et al. and Lindzen and Choi based on empirical data derived from actual physical observations, rather than simply model simulations, have confirmed this.

      A value of 0.5 to 0.7degC makes more sense than a value of 3.2degC, as IPCC states.

      Max

      • Lindzen&Choi has been thoroughly debunked.
        http://www.realclimate.org/index.php/archives/2010/01/lindzen-and-choi-unraveled/

        Roy Spencer didn’t ‘believe’ it either (albeit put in nice words), and even Lubos Motl showed severe reservation (to put it mildly).

        Oh, and I have a real scientific challenge for you. Let’s see if you finally will come with something substantial:
        If climate sensitivity to 2*CO2 is only 0.5-0.7 degrees, what then explains the SIX degrees increase during interglacials and that same temperature increase during the PETM?

        Failure to provide an explanation backed by proper calculations will constitute a failure of your claim of low climate sensitivity.
        Note that Lindzen has been asked the same question and never answered it.

  153. Marco, Reur responses inserted on my January 15, 2010 @ 12:34 am

    “Bob_FJ: It is obvious you are willing to trust Costella *solely* on the basis that his analysis fits your desired outcome. A true skeptic would not trust someone who aligns himself with a host of conspiracy theories…”

    Well, if you were to actually READ the analysis, (you admitted that you only flicked through it), you might discover that it is largely a question of arranging some of the more important Emails, (out of the 1079 total), in a way which clarifies their sequence and context etc. Various links and publications are also referenced, which adds to the interest. If you don’t like the commentaries made by Costella, and IF you would like to know more about Climategate, then perhaps you would prefer the colour-keyed flow chart alternative that I linked to. (sequencing the Emails in fuller detail, less commentary, and different groupings) It is also possible to search the Emails, for instance, for Gavin =131 hits, and for Schmidt = 127 hits at:
    http://www.eastangliaemails.com/search.php (somewhat different to your single digit claim above somewhere!)

    BTW, when you say that Costella aligns himself with a host of conspiracy theories…. would you care to elaborate on that?

    “…Of course, Trenberth has on multiple occasions explained his remark, see his latest here:
    http://www.dailycamera.com/ci_14167354?source=most_viewed
    I am very curious to see if you dare admit to making a mistake, but I’m not holding my breath.”

    We have been through this before; see:

    An update to Kiehl and Trenberth 1997

    “Oh, and let me add that your Trenberth quote is woefully inaccurate.”

    I guess you mean this:
    “…We are not close to balancing the energy budget. The fact that we can not account for what is happening in the climate system makes any consideration of geoengineering quite hopeless as we will never be able to tell if it is successful or not! It is a travesty!”
    But this is a cut-and-paste from one of the Emails, Trenberth to Wigley. I’ve checked, and there is no mousing error. Why do you say that it is woefully inaccurate?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    BTW, I guess your position is that YOU are not interested in Climategate. You may be puzzled to learn that there are various studies and investigations that imply that it will not go away as you might like. Incidentally, depth of interest as indicated by Google currently gives 8,210,000 web listings that include the word Climategate.

    • 1. Check assassinationscience, Bob, and then get back to me as to how you still consider anything on that website credible. It would be interesting to see your beliefs on the murder of John Kennedy, the death of senator Wellstone, 9/11, and the moon landings.
      The sad thing is that the main author of that site (not Costella) is actually capable of seeing the errors of Intelligent Design. If only he would apply his reasoning ability to the other areas.

      2. Costella did not just arrange the e-mails to provide his desired context, he also selectively quoted parts and added his interpretation (working continuously toward a desired outcome). I already noted that I have not found a single instance, while ‘flicking’ through the examples, where he did not distort the message in the e-mail, or used the most negative possible interpretation. I HAVE seen the e-mails, read many of them, to see if there was anything there. And the more I read, the angrier I have become with the deniosphere for deliberate distortion. McIntyre, who I gave credibility in the past, but whose approach I have criticised, has now also fallen into deliberate distortion to get a desired outcome.

      3. Gavin Schmidt may have been on the cc list many times, been mentioned in the e-mails many times, but as an author he is there a ‘whopping’ 5-6 times.

      4. Your quote of Trenberth is woefully inaccurate, exactly because you forget about the context. As Trenberth also notes in his Daily Camera opinion, the energy budget from 2003-2008 is a problem, NOT the energy budget for the prior period. And it is the latter you attacked as something that should not have been reported in the IPCC report. Too bad for you that there WAS a pretty good energy budget there. We lack that for the later period. The warming is going somewhere at this moment, and it isn’t outward into space or warming the surface.

      5. I don’t care that there are many people with “an interest” in climategate. Moon landing hoax gives me 2.7 million hits, AIDS hoax over a million.
      I actually do have an interest in climategate, because it is characteristic of the attack on science I have seen many times in various other fields. This time, however, the attacking cabal includes quite a few people who should (and in several cases do) know better.
      I know it will not go away, because there will always be people like you who WANT the e-mails to show something. You will soon be able to enjoy yourself with e-mails released (by NASA) about GISTEMP. I’m 100% sure people (and I would not be surprised if that includes you) will try to twist and turn those e-mails into something they don’t show.

  154. Patrick 027 Reur January 14, 2010 @ 8:24 pm , you wrote:

    “(Lucia) “Gavin and his spidey-sense ” …
    Wow! Does this mean that Rachel Carlson is Batman? I shouldn’t be surprised, what with all these Jokers running around…”

    Given that MOST of what you, Patrick, have written to me (here and at RC) in a scientific vein I have generally had little dispute with – though with the qualification that much of it is far too impertinent and excessively lengthy in response to what I wrote – I have this to say:
    Despite that we seem to have agreed on some pertinent things in your last two posts, I’m very disappointed to see that you have now sunk to the tactics of personal attack that are typical of Marco and Ian Forrester and others at RC and Gristmill etc. Surely you can do better than that? You have otherwise seemed a very intelligent person!
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    BTW, I don’t know what “spidey-sense” actually means but can guess broadly what it implies from the context, and it very clearly does not describe the identity of any person. How do you leap from there to suggesting that Rachel Carlson, a female, might be Batman, a fictional male character?
    ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
    I guess that Patrick also does not know the meaning of “spidey-sense”. Can anyone else help?

    • Despite that we seem to have agreed on some pertinent things in your last two posts, I’m very disappointed to see that you have now sunk

      Sometimes in life, things happen that shouldn’t be left to slide.

      How do you leap from there to suggesting that Rachel Carlson, a female, might be Batman, a fictional male character?

      It was a joke. Though not without a point. But a joke nonetheless. (Not being particularly involved in the comic book world myself – I’ve seen a few movies, and sometimes you just pick things up from the broader media – I am aware of a distinction between Marvel and DC. I was ‘concerned’ (not really) that someone might call me out on mixing Spiderman with Batman, since they are effectively in two different ‘parallel universes’ … but it doesn’t really matter in this context, now does it?) Anyway…

    • And sure, it might have been more artful to suggest that Rachel Carlson was Wonder Woman or the Invisible Woman or whatever – or Robin (since robins do impart sound to the spring) but … it’s fiction anyway; I just wasn’t thinking of gender at the time (and how do we know that Batman is really a man named Bruce – he’s supposed to have a secret identity, so …). (Not quite the same thing, but … Does Gavin Schmidt have 8 appendages? I don’t think so.)
      I certainly meant no offense to Rachel Carlson.

    • … and for any possible offense (with regard to Rachel Carlson or any connections from there), I do apologize.

  155. Folks, it is interesting to observe how the wheels are coming off the AGW bandwagon as a result of recent Climategate exposes, the Copenhagen fiasco and an awakening of the general public.

    The recent UK “science museum” poll points this out, as do the recent polls taken in the USA.

    The fact that it has cooled over the past nine years despite an all-time record increase in CO2, and that this winter has been extremely harsh so far, is not helping the AGW cause.

    The many failed predictions of the UK Met Office for rising global temperatures, barbecue summers, record warm years, milder than normal winters, etc. (which never really happen) have not helped.

    The recovery of Arctic sea ice since 2007 has also raised doubts about the dire predictions of imminent “ice free summers” made two years ago (39% of the cumulative loss by end September 2007 has been recovered in the two years to end September 2009).

    ENSO and Solar Cycle 24 are not cooperating and solar scientists are now predicting 20 to 30 years of global cooling, despite projected all-time increases in atmospheric CO2.

    Will the wheels come off of the bandwagon and the AGW gravy train end up in the ditch?

    Who knows?

    But it is beginning to look that way.

    I would appreciate any comments that any of you might have on this.

    Max

  156. Ian Forrester Reur January 15, 2010 @ 9:35 am you wrote

    Bob_F and manacker, you are wasting everyone’s time here. You are so full of nonsense and misinformation.
    Just letting any newcomers know about your deceitful tactics. How many blogs still allow you to post your nonsense? It seems to me you are banned from quite a number.

    Speaking for myself, I’m not sure, but I may be nominally excommunicated from realclimate only (unless of course I change my IP and ID, which is not difficult, if I could be bothered), given that a bunch of my posts were deleted in lengthy delayed moderation. Thus, I no longer have any interest in going there, if posts are deleted or snipped, and partly because the non-contributory responses from Mark et al there are a great yawn.

    Here follows just one post of mine that was deleted without explanation from RC after lengthy pause in moderation:

    Concerning K & T 1987, [should be 1997] please note that the claimed upwelling of EMR, (Electro Magnetic Radiation…. also known as Infrared Light, or long-wave radiation), of 396 w/m^2 is opposed by 333 w/m^2 back radiation, which slows down the rate of escape of HEAT via that transport process of EMR. Furthermore, by definition, EMR is not in itself HEAT.
    Here is a simple analogy, comparing ELECTRICITY to EMR in two of its aspects:
    1) Hold an electrical resistor in your hand, and pass a suitable current through it. What you should feel is HEAT that has been converted from electricity via its “absorption” of electrons in the resistor.
    2) Now, expose some of your skin to adequate sunlight, and you should experience a similar sensation. The sunlight, (short-wave EMR) will be converted to HEAT by a somewhat similar process. In this case it is via dermal molecular absorption of photons of light.
    3) In the analogy 1), if an appropriate voltage for the experiment is say 200 volts across the resistor, then the identical result would be obtained, if there were two opposing EMF’s of 400 volts and 600 volts across that same resistor. (BTW, nothing would happen if the opposing voltages were equal, AOTBE).

    As far as I’m aware, there is nothing violating of science in the above comment, so would you care to provide your wisdom to comment why it was deleted?

    Do you have any evidence for claiming: It seems to me you are banned from quite a number. [of blogs]

    • I recall seeing something like that by you before (perhaps several months ago in the above comments?)

      The opposing voltages is an interesting analogy, I’ll give you that. But it could lead to confusion, because unlike the heating by electrical current, the opposing electromagnetic energy fluxes consist of many short-wavelength, high-frequency photons; even though they may cancel in passing via linear superposition of waves, they nonetheless emerge from such overlaps (as is the case with linear superposition) just the same. Except for the small LW albedo of the surface, the backradiation is absorbed at the surface (the portion not absorbed is scattered upwards and thus contributes to the upward LW flux from the surface, which should significantly offset the reduction in upward LW flux caused by the LW albedo), and the entirety of the upward flux from the surface is actually emitted from the surface. In other words, approximately 333 W/m2 of photons are emitted, via energy transitions, by the air and survive unabsorbed until reaching the surface, and, except for the small LW albedo, approximately 396 W/m2 of photons are emitted by the surface.
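
      The numbers themselves can be checked against the Stefan-Boltzmann law. A minimal sketch using the round figures from the diagram discussed in this thread (the emissivity of 1 is a simplification):

```python
# Net longwave exchange at the surface, using the round numbers from the
# energy-budget diagram discussed in this thread. eps = 1 is a simplification.
sigma = 5.67e-8   # Stefan-Boltzmann constant, W/(m2 K4)
eps = 1.0         # assumed surface LW emissivity

Ts = (396.0 / (eps * sigma)) ** 0.25   # temperature implied by 396 W/m2 emission
net_up = 396.0 - 333.0                 # upward emission minus absorbed backradiation

print(f"implied surface temperature ~ {Ts:.0f} K")   # ~289 K, a plausible mean
print(f"net upward LW flux = {net_up:.0f} W/m2")
```

      Both streams are real photon fluxes; only their difference, 63 W/m2, is a net transfer of heat – which is the sense in which the opposing-voltages analogy holds.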

      Possibly some of your comments have been deleted because of repetition or lack of constructive contribution? (I say this because I think I’ve answered some of your questions 2 or maybe 3 times now).

  157. Ian Forrester

    You wrote on January 15, 2010 @ 9:35 am

    Bob_F and manacker, you are wasting everyone’s time here. You are so full of nonsense and misinformation.
    Just letting any newcomers know about your deceitful tactics. How many blogs still allow you to post your nonsense? It seems to me you are banned from quite a number.

    “It seems to me” you are talking through your hat here, Ian.

    I have been active on RC as well as other sites.

    Open dialogue is never a “waste of time”. In fact, it is exactly what we need on major issues such as AGW, where there are still so many unknowns and differing opinions.

    Simply sticking the head in the sand and saying “the science is settled” is silly.

    That’s why we need sites like this one and others on both sides of the debate to keep the discussion alive.

    Believe me, Ian, you can learn from these discussions as I have.

    Max

  158. Marco

    You asked:

    If climate sensitivity to 2*CO2 is only 0.5-0.7 degrees, what then explains the SIX degrees increase during interglacials and that same temperature increase during the PETM?

    First, I put far more trust in today’s physical observations (Spencer et al., Lindzen and Choi, Norris) than in reconstructions of events from the far distant past.

    These tell me that the 2xCO2 climate sensitivity is likely to be below 1K, due to the strongly negative feedback from clouds (increasing reflected incoming SW radiation with surface warming while not substantially increasing the absorption of outgoing LW radiation).

    There is no compelling evidence that interglacial temperatures were driven by CO2.

    But let’s look at the PETM, just for the hell of it.

    Atmospheric CO2 rose by a factor of around 9 (by 2,400 ppmv), assuming all of the carbon released was CO2, while temperature rose by 6°C. This would translate into a 2xCO2 climate sensitivity of below 2°C, all other things being equal. Since a portion of the carbon release is estimated to have occurred in the form of methane (from clathrates), which has a much higher GH impact than CO2, the calculated 2xCO2 climate sensitivity is probably around 1°C.
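
    Spelling out the arithmetic in the preceding paragraph, taking its own numbers at face value (the factor-of-9 rise is disputed in the replies below):

```python
import math

# Arithmetic of the claim above, at face value: a ninefold CO2 rise and 6 C
# of warming. The factor of 9 itself is disputed in the replies below.
doublings = math.log2(9.0)   # ~3.17 doublings
print(f"{doublings:.2f} doublings -> ~{6.0 / doublings:.1f} C per doubling")
```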

    Another question raised by the PETM: why did it suddenly begin to cool again despite these very high CO2 levels? Very shortly thereafter (in geological terms) there was an extreme cold period resulting in mass extinctions.

    But, hey, there are so many unknowns about what happened in the PETM, that it is unwise to try to calculate a 2xCO2 climate sensitivity from the little bit we think we know.

    So let’s stick with today’s empirical data based on physical observations instead. It makes more sense.

    Max

    • Another question raised by the PETM: why did it suddenly begin to cool again despite these very high CO2 levels? Very shortly thereafter (in geological terms) there was an extreme cold period resulting in mass extinctions.

      It could cool again because after many 1000s of years, the CO2 would fall back toward previous levels.

      My understanding was that the extinction occurred with the warming.

      I don’t know what the uncertainty range is in CH4 vs CO2, but there are actually other warm time periods to look at (Cretaceous).

      ———

      First, I put far more trust in today’s physical observations (Spencer et al., Lindzen and Choi, Norris) than in reconstructions of events from the far distant past.

      Skipping Spencer and Lindzen for reasons of prior experience, here’s a more recent paper coauthored by Norris:

      Clement 2009 – (note this (or some aspect thereof – it depends on …) is critiqued on another site referenced below)
      Observational and Model Evidence for Positive Low-Level Cloud Feedback
      http://www.sciencemag.org/cgi/content/abstract/sci;325/5939/460

      Amy C. Clement,1,* Robert Burgman,1 Joel R. Norris2

      Feedbacks involving low-level clouds remain a primary cause of uncertainty in global climate model projections. This issue was addressed by examining changes in low-level clouds over the Northeast Pacific in observations and climate models. Decadal fluctuations were identified in multiple, independent cloud data sets, and changes in cloud cover appeared to be linked to changes in both local temperature structure and large-scale circulation. This observational analysis further indicated that clouds act as a positive feedback in this region on decadal time scales. The observed relationships between cloud cover and regional meteorological conditions provide a more complete way of testing the realism of the cloud simulation in current-generation climate models. The only model that passed this test simulated a reduction in cloud cover over much of the Pacific when greenhouse gases were increased, providing modeling evidence for a positive low-level cloud feedback.

      other Norris paper again for reference:
      ftp://eos.atmos.washington.edu/pub/breth/CPT/norris_jcl04.pdf

      Other websites for Norris:
      http://scripps.ucsd.edu/Profile/jnorris

      ———-

      Others:

      ———-

      (PS never been to iamericas.org before, can’t guarantee it’s safe)

      Click to access Norris-Scripps_July10.pdf

      p.16/27 – interesting map with suspicious-looking pattern, suggestive of satellite data interpretation error.
      last page:

      Conclusions
      • There are decadal fluctuations in low-level cloud cover over the NE
      Pacific in multiple, independent cloud datasets.
      • Cloud changes are physically consistent with local meteorological
      changes: Cloud cover decreases when sea surface temperature is
      warm, sea level pressure is low, and trade winds are weak.
      • Only one current global climate model realistically simulates
      observed cloud-meteorological relationships
      • This model predicts decreased cloud cover over the NE Pacific,
      warmer sea surface temperature, and weaker trade winds during the
      coming century
      ⇒ Observed decadal variability in low-level clouds may be an
      analogue to how these clouds will change with global warming
      ⇒ Positive feedback, enhanced warming

      ———-

      “Cloud feedback
      Chris Bretherton
      University of Washington
      with Rob Wood, Peter Blossey, Matt Wyant, Dennis
      Hartmann, Mark Zelinka”

      Click to access bretherton.pdf

      p.5/34: “Hence the spread in GCM-simulated cloud feedbacks is not surprising.”
      (But note that, so far as I know, no GCM produces the net negative feedback suggested by Lindzen, etc.)
      p.9/34 explanation that because of other feedbacks (H2O vapor in particular, apparently), the same clouds have reduced LW greenhouse forcing. (Low clouds should also have a reduced SW cooling effect for the same reason). Thus, without any cloud feedback, there is a change in cloud forcing. (for changes larger than infinitesimal, it can matter, not to the total climate response, but to the apportionment of feedbacks and forcings, in what order forcings and feedbacks are counted.)

      p.10/34

      FAT and positive high cloud feedback
      • Fixed Anvil Temperature (FAT) hypothesis (Hartmann
      and Larson 2002): Tropical ice clouds move upward in a
      warmer climate following isotherms, at least in models.

      Perhaps this is an average effect over various cloud cover patterns, wherein, of high clouds, the highest cloud top varies within some range of the upper troposphere (?). Because I had expected the highest clouds actually get colder (?). Interesting…

      p.28/34 very interesting critique of the Clement 2009 paper.

      NE Pacific cloud variability: a cloud feedback analogue?
      (Clement et al. 2009 Science)
      • NE Pacific interannual low cloud variability responds to
      Pacific Decadal Oscillation, increasing when regional SST
      decreases (noted earlier by Klein&Hartmann, Norris, etc.)
      • Clement et al. treat this as evidence of positive cloud
      response to a warmer climate (warmer SST⇒less clouds).
      • This ignores crucial role of free troposphere:
      Global warming: free trop warms more than subtropical
      SST, increasing LTS.
      NE Pac variability: free trop changes much less than
      subtropical SST, decreasing LTS.
      • One MUST look for observational analogues with vertical
      stratification changes similar to global warming
      and/or
      convincingly test individual cloud feedback mechanisms
      (Clement et al. present a flawed approach to this, too)

      ———————–
      nice summary of issues on first page – top of second:

      Click to access CPT_description.pdf

    • Max, first of all, you seem to rely SOLELY on those people who calculate low climate sensitivity. Lindzen&Choi is already debunked, not even a few months after it was published, with a notable reference in that debunking to others who have done very similar analysis and found a much higher climate sensitivity.

      Second, there IS compelling evidence that the interglacials were in large part driven by CO2. One simply cannot come even close to a 6 degrees rise in temperatures without taking CO2 into account.

      Third, why do you completely misrepresent the PETM? The CO2 levels did NOT increase by a factor of 9! There are some problems in calculating the exact values, but most indicate a doubling of CO2 at the PETM. This actually was part of Zeebe et al trying to find an explanation for the rest of the warming. 2*CO2 isn’t nearly enough to explain the total increase.

      And, no, there was no “extremely cold period resulting in mass extinctions” shortly thereafter. Shortly after the PETM came the Eocene Optimum!

      But let’s indeed look at the physical evidence: so far all those that calculate a low climate sensitivity have been proven wrong. Why do you so desperately cling to falsified science?
      Let me answer that question for you: because it fits your predetermined result.

      • Marco

        Thanks for your response to my earlier post. Let us go through it.

        You say Lindzen and Choi has been “debunked”. And I say that the “debunk” will then get “debunked”, etc. (ad nauseam). L+C showed (based on ERBE observations) that the net total outgoing SW + LW radiation over the tropics increased with higher surface temperature, contrary to what was estimated by the model simulations. No one has shown these observations to be false. They do sort of confirm the earlier study by Spencer et al., which showed that the net cloud feedback over the tropics is strongly negative with warming.

        Your logic on the interglacials goes:

        Second, there IS compelling evidence that the interglacials were in large part driven by CO2. One simply cannot come even close to a 6 degrees rise in temperatures without taking CO2 into account.

        This is poor logic. Just because we cannot explain it any other way does not provide “compelling evidence” of anything. It may just mean that we do not understand all there is to know about what drove past climate change.

        Then you opined on the PETM:

        There are some problems in calculating the exact values, but most indicate a doubling of CO2 at the PETM.

        There are theories that the temperature spike during the PETM was not even caused by a massive release of carbon.
        http://www.theresilientearth.com/?q=content/could-human-co2-emissions-cause-another-petm
        But leaving that aside, the estimates for how much total carbon (as CH4 and CO2) was released vary from 1,200 to 10,000 Gt.

        Archer discusses two sources of carbon, “totaling maybe 10,000 Gt”.

        Click to access bg-4-521-2007.pdf

        Chris Colose mentions a study by Panchuk, Ridgwell, and Kump, which puts it at 6,800 GtC.

        A busy week for paleoclimate

        Using this figure, this is 4 times the total amount contained in all the optimistically estimated fossil fuel reserves on our planet. It would have caused much more than “a doubling of CO2”.

        And, at an equilibrium temperature increase of 6K, this calculates out to a 2xCO2 climate sensitivity of 2K, if all of the released C was CO2. If a significant portion over the period was as “more potent” methane (which did not oxidize immediately to CO2), then the calculated 2xCO2 CS would be significantly lower.

        If the total C release was only 1,200 Gt, then this would have caused an increase of around 560 ppmv and you would be correct in saying “2*CO2 isn’t nearly enough to explain the total increase”.

        So it all depends on whose estimate you take on the carbon release.

        But my point was that the PETM is a poor example to “prove” a 2xCO2 CS of 3K, and we both appear to agree on that.

        As far as the cooling after PETM goes, it obviously did cool, despite very high CO2 levels, but you are right; there was no subsequent glaciation. My error.

        As for your conclusion that I refer to studies that demonstrate a low climate sensitivity because it fits my predetermined result, that’s your error.

        Max

      • Max –
        This is poor logic. Just because we cannot explain it any other way does not provide “compelling evidence” of anything.

        Very true, and entirely applicable if we had simply blurted out ‘the butler did it’.

        But we KNOW CO2 has an effect. Without any other feedbacks besides the Planck response (emission as a function of temperature), there has to be x (~ 1 deg) warming from y CO2 (~ a doubling) – with some error bars of course, but not large.
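
        That no-feedback number can be checked in a few lines from the Stefan-Boltzmann law, using the standard ~3.7 W/m2 doubling forcing and the ~255 K effective emission temperature (both standard textbook values):

```python
# No-feedback (Planck-response) warming per CO2 doubling, from standard values.
sigma = 5.67e-8   # W/(m2 K4)
Te = 255.0        # K, effective emission temperature of Earth
F_2x = 3.7        # W/m2, canonical radiative forcing for doubled CO2

planck = 4 * sigma * Te**3      # d(sigma*T^4)/dT, ~3.8 W/m2 per K
print(f"no-feedback warming ~ {F_2x / planck:.1f} K per doubling")
```

        The ~1 K figure is what all the other feedbacks then act upon.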

        Similar logic applies to other forcing agents.

        It is climate sensitivity that is primarily the question (for the global average response).

        And the orbital forcing cannot by itself cause much global average temperature change without feedbacks, in particular the ice sheets.

        Why would ice sheets cause cooling but a drawdown of CO2 not? Some wiggle room for efficacy, etc, but still.

      • There are other periods of time besides PETM, but back to PETM:

        1. CH4 longevity:

        CH4 is currently (or recently*) oxidized or otherwise removed from the atmosphere at a rate of about 0.15 to 0.20 ppm/year (derived from Hartmann, p. 26; *table references Watson et al 1992).

        This implies a residence time of about 10 years.

        If this were just a simple chemical reaction with oxygen in the air, we could suppose that any amount of CH4 would, without replenishment, simply decay exponentially with a time scale (e-folding time) of around 10 years.

        However, the reaction is mediated (in part or fully? – I wouldn’t claim to know) by hydroxyl radicals, produced by UV photons hitting water vapor. I suppose, depending on how much UV is available, that the reaction rate should be accelerated in a warmer climate with more water vapor.

        There may be other complexities involved, but assuming a constant ppm drawdown rate, 2400 ppm of CH4 would be ‘removed’ (mostly converted to CO2) over about 12,000 to 16,000 years.

        But it could be different than that.
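
        The two limiting cases can be put side by side; a minimal sketch, assuming a hypothetical 2,400 ppm CH4 pulse and the removal rates quoted above (real, OH-limited kinetics would lie somewhere between):

```python
import math

# Two limiting cases for drawing down a hypothetical 2400 ppm CH4 pulse.
pulse = 2400.0   # ppm, assumed release

# Case 1: first-order decay with an e-folding time of ~10 years
tau = 10.0
print(f"exponential decay: 99.9% removed in ~{tau * math.log(1000):.0f} years")

# Case 2: removal pinned at a constant rate (OH production as the bottleneck)
for rate in (0.15, 0.20):   # ppm/year, the present-day-like rates quoted above
    print(f"constant {rate} ppm/yr: ~{pulse / rate:,.0f} years")
```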

        2.
        CO2 accumulation longevity

        As CH4 is converted to CO2, if the C-cycle feedback is similar to present, a significant fraction of the first accumulation of CO2 might go into the upper ocean. However, that reservoir would shortly ‘fill up’, etc. CO2 could be taken up by the deeper ocean more slowly. Over 1000s of years, weathering and dissolution of carbonate minerals would buffer the pH and allow the oceans to hold more CO2. The ultimate sink would be through chemical weathering of various silicate minerals, providing Ca ions (and Mg? others? or is Ca the only one that will do it?) which will further buffer the pH (actually I think a variety of ions would do that), allow net uptake of CO2 from the air and allow precipitation of carbonate minerals (Mg does form carbonate minerals but not in the same presently-ubiquitous manner that Ca does – significant Mg-bearing carbonate minerals require certain conditions to form, the details of which I don’t know offhand).

        3.
        Radiative forcing.

        Note that the ~ 20 times radiative forcing per (additional ? – or average?) CH4 molecule relative to (additional ? – or average?) CO2 molecule is contingent on the amounts present. If there were as much CH4 as CO2 even now, I don’t offhand know which would have the greater effect in total. Maybe I’ll look it up and get back to you on that.

        4.
        How long did the forcing perturbation in total last (or how long was it above x, y, z, and so forth)?

        If most of the forcing perturbation were gone by 100,000 years (I really don’t know how long it was), it would make sense to expect that the temperature perturbation would diminish over the same time frame. If you want to find evidence of a disconnect, you’ll need to find a mismatch in timing – and if there are multiple estimates for each, you can’t just pick and choose whatever combination you want and hold it up in victory.

        5.
        Sensitivity.

        A 2 K per CO2 doubling sensitivity is actually within model results, granted at the low end of the expected range – but still well above some values given by Lindzen and other such people.

      • 3. TWO VERY IMPORTANT POINTS –

        A.

        Yes, CH4 is a WEAKER GHG than CO2 when both have the same head start.

        See Chapter 4 of Ray Pierrehumbert’s climate book.

        The maximum effect for the same amount of C is found with a mix of CH4 and CO2.

        B.

        6800 Gt or 10000 Gt C ? would have been added to an amount of CO2+CH4 in the atmosphere already greater than present. The amount of CO2 prior to the PETM may have been ~ 1200 ppm, give or take.

        (Roughly, if 600 Gt C = 300 ppm CO2+CH4, then we might have up to 5000 ppm CO2 added to 1200 ppm, or approx 2 doublings, not 3. Removing a little of that and giving it to methane, a little more effect.)

        Adjust number of doublings accordingly.
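
        Spelling out the arithmetic behind “approx 2 doublings” – a minimal sketch using only the rough numbers assumed above:

            import math

            baseline_ppm = 1200.0                # assumed pre-PETM CO2, give or take
            added_ppm = 10000.0 / 600.0 * 300.0  # 10,000 Gt C at ~600 Gt C per 300 ppm -> ~5000 ppm
            doublings = math.log2((baseline_ppm + added_ppm) / baseline_ppm)
            print(round(doublings, 1))           # ~2.4 doublings: approx 2, not 3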

      • – although any significant CH4 would still do some heavy lifting.

        but then there is the issue of the organic haze that would develop with enough CH4 (I think that happens when CH4 ppm > CO2 ppm ??) – that would limit further warming by CH4 and also – destroy the ozone layer ? – but form a carbon-based replacement UV shield…

        CH4 oxidation produces H2O in the stratosphere. But even though H2O residence time in the stratosphere is long enough to ascribe it a forcing, I don’t think it’s long enough to be of proportionate significance for 1000s of years.

      • A correction/clarification –

        CH4 drawdown via oxidation would only be limited to some ppm/year value independent of CH4 concentration if (in the approximation that all CH4 oxidation is mediated by OH radicals, which I’m not really sure about) the entire production rate of OH radicals was consumed by that process.

        OH radicals are very short-lived. I don’t think they ‘wait around’ for particular types of molecules to reach them, and the abundance of other substances relative to CH4 suggests to me that the residence time of CH4 in the atmosphere could be kept near 10 years for much higher concentrations – but I could be wrong – it depends on the kinetics and energetics of OH interactions with the alternative choices (H2O, CO2, O2, N2, etc.)

        If a lot of CH4 were released within a few years, then there might be a whole lot of H2O loading of the stratosphere (what is the residence time of H2O in the stratosphere?). Interesting… —

        Of course, the net reaction does use up O2, and so O2 places a constraint, but my impression is that the CH4 oxidation rate is well within such limits for the present O2 amount – right?

        —-

        And I was wrong about the organic haze – I think it is actually the C/O ratio (all atoms, thus including O2, CO2) that determines whether methane could form a significant haze. So probably no haze for the PETM.

        —-

        The PETM was a 200,000 year event according to Pierrehumbert’s “Principles of Planetary Climate”. If the entirety of the C emission was ever acting (as radiative forcing via CH4 and CO2) at the same time, it would not have been for most of that time period, and a series of episodic emissions or something more gradual would not have raised the total atmospheric level so high (and would have kept CH4 relatively low).

      • … Any more on PETM I’ll post at the bottom of the comments since the order is getting a bit tangled up.

        Except:

        I may have been wrong about the source of OH radicals; one source says that they are produced – by UV and water vapor – but indirectly; the process is UV photodissociation of ozone to kick off an oxygen atom, which then combines with a water molecule to produce two OH radicals.

  159. Ian Forrester

    Manacker, all I learn from your (and others’) verbal diarrhea is that you are always willing to waste anyone’s time who tries to show how wrong you are.
    Patrick has it right when he says you repeat the same thing over and over again, even when you are shown that you are wrong (over and over again). All you are really showing us is that a) you don’t understand the science you are discussing and b) you are unable or unwilling to learn.

    Why do you not write up some of your nonsense and try and get it published in a reputable science journal? (Note: that is a rhetorical question; we all know why.)

    You have a history throughout the blogosphere of wasting everyone’s time far more than you are worth.

  160. The opposing voltages analogy is no different from the simple calorimetric heat equation:
    ΔQ = specific heat * mass * ΔT

    If two bodies in a closed system are of equal temperature, nothing will change. They can both be white hot radiating lots of infrared but net heat transfer between the bodies will be zero and never anything different.

    Radiative effects have nothing to do with the intensity of radiation in a given space. Radiation does not block radiation. Radiation does not modify the flow of radiation. Net heat transfer by any mechanism is still determined by the laws of thermodynamics. Warmer masses transfer heat to cooler masses by conduction/convection/radiation.

    Onsager reciprocal relations are probably important at top of atmosphere where the tendency of the warmer atmosphere and higher atmospheric pressure tending to lose mass to space is countered by gravitational effects.
    http://en.wikipedia.org/wiki/Onsager_reciprocal_relations

    • Radiation does not block radiation. Radiation does not modify the flow of radiation.

      Whoa – hold on. Are you arguing against anyone here, because nobody ever actually said any such thing.

      Unnecessary complexity alert: (Although in general, when waves (sound, gravity waves, Rossby waves, etc.) propagate through a medium that is itself altered by the wave, there can be some nonlinearities, and this applies to photons as well; although, so far as I know, linear superposition is a good approximation for photon propagation in most materials at most ordinary intensities. Nonlinearities tend to become more apparent and important for large wave amplitudes.)

  161. Open dialogue is never a “waste of time”. In fact, it is exactly what we need on major issues such as AGW, where there are still so many unknowns and differing opinions.

    Simply sticking the head in the sand and saying “the science is settled” is silly.

    Open dialogue certainly can be a waste of time if one or more parties are not actually open to the dialogue.

    The abundance of differing opinions doesn’t necessarily reflect the actual state of knowledge.

    Parts of the science are settled. For a given action, there is an actionable level of intelligence. Some policies are already justified even with the remaining uncertainty.

    That’s why we need sites like this one and others on both sides of the debate to keep the discussion alive.

    Believe me, Ian, you can learn from these discussions as I have.

    There is a potential to learn, and the initial posts are good. But with the content coming up lately, you might get more by getting into a textbook.

    • Patrick027

      Open dialogue certainly can be a waste of time if one or more parties are not actually open to the dialogue.

      Speak for yourself, Patrick. I have observed both you and Bob_FJ in an extended ongoing exchange here. You obviously both have your own opinions and separate knowledge, but I have not seen that either of you is not “open to the dialogue”, as I have with some others, who resort to name-calling instead of addressing the issues involved.

      The abundance of differing opinions doesn’t necessarily reflect the actual state of knowledge.

      The “actual state of knowledge” is a moving target. New information is available almost every day. And there are times when old paradigms are replaced by new knowledge, and other times when the defenders of the old paradigms try to defend them by attempting to rebut or refute the new knowledge. And these papers then get rebutted again, etc.

      Parts of the science are settled.

      In a topic as complicated as our planet’s climate and what drives it, the science is far from settled, Patrick. An example: The Clausius-Clapeyron equation is settled (i.e. water vapor pressure increases as temperature rises and, thus, atmospheric water vapor content should increase with temperature), but how this really works in our planet’s climate system is not settled at all. The models are still much too primitive to really figure out what happens to water (as vapor, liquid droplets or ice crystals).
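
      The settled part is indeed easy to quantify – a minimal sketch of the Clausius-Clapeyron scaling (standard constants, constant latent heat assumed), which gives roughly 6–7 % more saturation vapor pressure per kelvin near surface temperatures; the unsettled part is what the full climate system does with that:

          import math

          L_V = 2.5e6  # latent heat of vaporization, J/kg
          R_V = 461.5  # specific gas constant for water vapor, J kg^-1 K^-1

          def es_ratio(t1, t2):
              """Clausius-Clapeyron ratio e_s(t2)/e_s(t1) for constant latent heat."""
              return math.exp(L_V / R_V * (1.0 / t1 - 1.0 / t2))

          print(round(100 * (es_ratio(288.0, 289.0) - 1.0), 1))  # ~6.7 % per +1 K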

      For a given action, there is an actionable level of intelligence. Some policies are already justified even with the remaining uncertainty.

      Here you are moving away from the science to politics and policies. Carbon taxes (direct or indirect) will have no impact on our planet’s climate whatsoever. No tax ever has. I have seen no actionable proposals to cause a significant change in our planet’s climate, have you? If so, what are they and what will be their impact?

      The “remaining uncertainty” is enormous. The first decade of the 21st century is showing us that natural variability (a.k.a. natural forcing) can more than offset the theoretical GH warming from record increases in CO2. Are we headed for an extended period of cooling caused by these natural forcing factors (solar activity, ENSO, etc.)? Several scientists believe so. What “policy” should we enact to cope with this eventuality? Or should we better wait until we see what is really happening to our climate long term, and then adapt to whatever really happens?

      You see, Patrick, it is not all so simple. Our knowledge is still far too limited and our computer models far too primitive.

      We (science) may know a lot about our climate already. But what we do not know about our climate far surpasses what we do know, and is therefore more important. It is arrogant to think otherwise.

      And remember what Einstein said about arrogance:

      “the only thing more dangerous than ignorance is arrogance”

      Max

      • You mentioned a need for dialogue:
        on major issues such as AGW,
        this could be taken to reference both the scientific issue and the political/economic/societal issue.

        And remember what Einstein said about arrogance:

        But isn’t it arrogant for you and others to keep saying that one or a few studies have shown you that climate sensitivity must be so small, when so much leads away from that conclusion and when those studies have either errors or else have not withstood verification well, or at the least yield, in honesty, quite uncertain conclusions?

        Don’t forget the immense uncertainty in the studies whose conclusions you like.

        Several scientists believe so. But do they have good reasons for those beliefs? (Note YOU used the term ‘belief’ – I thought you didn’t like that in the context of good science.) If they had good reasons they would have an easier time convincing others.

        Of course the state of scientific knowledge can change. That’s why we still have scientists doing science. But there’s a difference between refining a theory and completely overturning it. Newtonian physics is still an acceptable approximation for many circumstances in spite of the success of Relativity. It might also be argued that Galileo was not overturning previous science perhaps so much as he was overturning assumptions. And Quantum Mechanics has not shown much of prior macroscopic-scale physics to be incorrect.

        People may resist change, but I am just trying to be correct. If a new study is ‘attacked’ or really attacked, this is not necessarily just because the ‘old school’ doesn’t like the results; it may actually be because the study has flaws or errors or overreaches in conclusion, etc. You have to consider what the criticism actually is.

        Think about it – for every Darwin (and Wallace, right?), Einstein, Newton, Copernicus, and Galileo, how many other would-be revolutionaries might propose some idea that ends up falling flat? Just because their new ideas were successful doesn’t mean that any new idea will be successful. Otherwise, I could just say that Relativity is actually completely wrong, and you’d have to accept that I must be right, since I am expressing an idea that challenges the status quo, and that I am an underdog, and so forth.

  162. Patrick 027, Reur response inserted in my; January 16, 2010 @ 3:22 am

    “I recall seeing something like that by you before (perhaps several months ago in the above comments?)
    The opposing voltages is an interesting analogy, I’ll give you that. But it could lead to confusion, because unlike the heating by electrical current, opposing electromagnetic energy fluxes from many small wavelength, high frequency photons…”

    Yes, you are correct, you have waffled on this once before above, when it was addressed to YOU. However, this time, for different reasons, it was shown to Ian Forrester as an example, in response to a different non-scientific accusation/question he aimed at me and Max. Thus, there was no value in you commenting again, causing distraction, especially as it was NOT addressed to you.

    Your waffle is and was also impertinent. For instance, the net HEAT transfer rate between two sources of EMR is related to the difference in their opposing radiating power, regardless of their frequency spectra. Another way of looking at it is via S & B and the pertinent part of the famous equation for radiative power; (T1^4 – T2^4)…. And of course there is no reference to frequency or wavelength.
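
    For what it’s worth, the (T1^4 – T2^4) relation takes only a few lines of Python – a minimal sketch of the textbook black-body form (idealized black bodies assumed, no spectral detail):

        SIGMA = 5.670e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

        def net_flux(t1, t2):
            """Net radiative heat flux (W/m^2) from black body 1 to black body 2."""
            return SIGMA * (t1**4 - t2**4)

        print(net_flux(288.0, 288.0))  # 0.0 -- equal temperatures, zero net transfer
        print(net_flux(288.0, 255.0))  # ~150 W/m^2, from the warmer body to the cooler
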
    Furthermore you should know that two black bodies at T1 and T2 each have their own unique emission spectra as a bell curve, the centroid of which correlates the mean T at the quantum level. I’m more and more thinking that you introduce your complicated and often extremely lengthy irrelevances here to try and awe some of the impressionable that may not even understand what you write.

    To summarize, the opposing voltages analogy does NOT lead to any confusion, and you are wrong to imply so.
    In response to your final speculation, its presentation here and at RC was relevant to the discussion in hand, and was both scientifically accurate and NEW to the discussion…. Of course you did not see it at RC, because it disappeared without trace by deletion in moderation.

    • I’m more and more thinking that you introduce your complicated and often extremely lengthy irrelevances here to try and awe some of the impressionable that may not even understand what you write.

      No, when people keep asking questions, then more questions, then more questions, or bringing up real or perceived issues, it is almost inevitable that any satisfactory response goes into the complexities of what could otherwise be described more briefly.

      Furthermore you should know that two black bodies at T1 and T2 each have their own unique emission spectra as a bell curve, the centroid of which correlates the mean T at the quantum level.

      Yes. (But it is also worth noting, very importantly, that the black body radiant intensity at any portion of the spectrum increases with increasing temperature. Towards long wavelengths, the relationship is almost linear. At shorter wavelengths, a small percentage change in T results in a large percentage increase in radiant intensity. At the peak per unit wavelength, a 1 % increase in T results in about a 5 % increase in intensity.)
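
      That ~5 % figure can be checked directly from the Planck function – a minimal sketch (Planck’s law only, nothing atmospheric):

          import math

          H, C, KB = 6.626e-34, 2.998e8, 1.381e-23  # Planck, light speed, Boltzmann (SI)

          def planck(lam, temp):
              """Black-body spectral radiance per unit wavelength (SI units)."""
              return (2 * H * C**2 / lam**5) / math.expm1(H * C / (lam * KB * temp))

          temp = 288.0
          lam_peak = 2.898e-3 / temp             # Wien displacement law
          for lam in (lam_peak, 10 * lam_peak):  # at the peak, then far into the long-wave tail
              ratio = planck(lam, 1.01 * temp) / planck(lam, temp)
              print(round(100 * (ratio - 1), 1))  # ~5.1 % at the peak, ~1.3 % at 10x the peak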

      …. Of course you did not see it at RC, because it disappeared without trace by deletion in moderation.

      Fair point; I don’t remember the context.

      Your waffle is and was also impertinent. For instance,

      First manacker and now you have said I have waffled. Pardon me but I thought waffling implied some indecisiveness. Certainly I have expressed uncertainty on some points, but that’s just being honest, and the points on which I have been said to waffle, I have actually done something quite different.

      Most analogies, even good ones, fail at some point, but so long as that point is beyond the intended purpose, it’s okay. A potential problem with the voltage analogy is that any nonzero voltage through a resistor (with finite resistance that allows some current) would cause heating, whereas a nonzero net LW flux of opposite direction causes cooling instead of heating or heating instead of cooling, etc.

  163. Marco, Reur response inserted in my; January 15, 2010 @ 5:19 pm

    Taking your most important point first:

    “ 4. Your [Bob-FJ] quote of Trenberth is woefully inaccurate, exactly because you forget about the context. As Trenberth also notes in his Daily Camera opinion, the energy budget from 2003-2008 is a problem, NOT the energy budget for the prior period. And it is the latter you attacked as something that should not have been reported in the IPCC report. Too bad for you that there WAS a pretty good energy budget there. We lack that for the later period. The warming is going somewhere at this moment, and it isn’t outward into space or warming the surface.”

    You are wrong again because (a) it was a cut-and-paste, and (b) I did not forget the context. The context is perfectly apparent by virtue of dates and content in two Emails above, which I repeat below in full, with bold emphasis appropriately added. (resisting the temptation to make it all bold!)

    From: Tom Wigley To: Kevin Trenberth; Subject: Re: BBC U-turn on climate
    Date: Wed, 14 Oct 2009 16:09:35 -0600
    Cc: Gavin Schmidt et al
    Kevin,

    I didn’t mean to offend you. But what you said was “we can’t account
    for the lack of warming at the moment”.
    Now you say “we are no where
    close to knowing where energy is going”. In my eyes these are two
    different things — the second relates to our level of understanding,
    and I agree that this is still lacking.

    Tom.
    ++++++++++++++++++
    Kevin Trenberth wrote:
    > Hi Tom
    > How come you do not agree with a statement that says we are no where
    > close to knowing where energy is going or whether clouds are changing to
    > make the planet brighter. We are not close to balancing the energy
    > budget.
    The fact that we can not account for what is happening in the
    > climate system makes
    any consideration of geoengineering quite hopeless
    > as we will never be able to tell if it is successful or not! It is a
    > travesty!

    > Kevin

    Clearly they are discussing the current “bad news” of the global average warming plateau, which does not agree with the models, etc. Also, they agree here (and elsewhere) that the level of understanding is poor. Furthermore, Trenberth bemoans that it is “…quite hopeless as we will never be able to tell…”

    I also requote something else from above, but with bold added:

    “3) Kevin Trenberth’s on-line article discusses possible mechanisms for the current cooling plateau, and laments the current inability to properly measure the processes involved. Here is an extract which is not contradicted within that document:
    “…Perhaps all of these things are going on? But surely we have an adequate system to track whether this is the case or not, do we not?
    Well, it seems that the answer is no, we do not…”

    Although more specific in nature than in point 2) above, it is a further admission that the level of understanding of the complex energy balances is poor. (as is the current ability to measure them)

    If they do not understand the current plateau, (or BTW, the similar one, followed by prolonged cooling after 1940), it is absolute nonsense to assert that they had a better understanding in the 1990‘s, or that there were no unknowns within probably inferior data collections. (unknowns perhaps opposite in sign back then relative to today.)

    BTW, there is no uncertainty shown in the K & T 2009 update image at the head of this thread, despite the authors being aware of the “bad news”, and presumably ditto for the 2007 version.

    • Bob: you made the comment from Trenberth a stumbling block for the energy budgets indicated in the IPCC reports. Energy budgets that *did not deal with the current plateau*. It is nonsense that the inability to get a good insight into the current plateau automatically means that you can’t know anything about the period prior to that. It is very well possible that events occur which suddenly make an analysis break apart. It does not mean the prior analysis is wrong. And THAT is the context of the discussion.

      Regarding the “models”: are you really this dishonest? Most climate models are specifically aimed at long-term forecasts. Short-term variability is thus smoothed out, for the simple reason that we cannot (yet) predict the short-term variations and that these would thus provide a useless mathematical burden on the models (they already have to deal with many other factors). There are, however, several models that DO have short-term forecasts, and which DO show the current stasis. Take the frequently misinterpreted paper by Keenlyside et al (Mojib Latif is one of the co-authors).

      Oh, and try to put uncertainty in a figure like that, and everyone complains it is unreadable. Damned if you do, damned if you don’t.

    • Clearly they are discussing the current “bad news” of the global average warming plateau, which does not agree with the models etc.

      But it does fall within model behavior. Any individual run generates internal variability.

      BTW, there is no uncertainty shown in the K & T 2009 update image at the head of this thread, despite the authors being aware of the “bad news”, and presumably ditto for the 2007 version.

      Well, for one thing, there is no specification in the diagram of where the 0.9 W/m2 heat storage rate is actually going. In so far as that goes, this agrees with “…Perhaps all of these things are going on? But surely we have an adequate system to track whether this is the case or not, do we not?
      Well, it seems that the answer is no, we do not…”

      • Patrick027 and Bob_FJ

        Pardon me for cutting in to your very interesting exchange on the K+T “cartoon” of the Earth’s global annual energy budget, etc., but a statement by Patrick caught my eye:

        there is no specification in the diagram of where the 0.9 W/m2 heat storage rate is actually going

        If you look at the diagram more closely, Patrick, you will see that the 0.9 W/m^2 is identified as “net absorbed”. This translates to net energy absorbed by the system resulting in a change of equilibrium, i.e. theoretical warming of the system (atmosphere, surface, upper ocean).

        Now, admittedly, we have established that the 0.9 W/m^2 is a “plug number”, which K+T have simply taken over from a paper by Hansen et al.

        We have also seen that this number is poorly substantiated, based on circular logic, so can be ignored.

        According to AGW theory, the real theoretical “net imbalance” in the Earth’s global annual energy budget is the anthropogenic forcing from annual changes in atmospheric CO2 concentrations (IPCC tells us that all other anthropogenic forcing factors essentially cancel one another out and that natural forcing factors are negligible).

        Mauna Loa tells us that this is around 2.5 ppmv increase per year, on a 2008 baseline of around 380 ppmv.

        The theoretical GH forcing is based on the logarithm of the ratio: 382.5/380

        This equals 0.00656.

        According to IPCC (Myhre et al.) the theoretical radiative forcing is 5.35 times this number or 0.0351 W/m^2

        This is the net imbalance if we assume only the theoretical GH effect of CO2 alone, without any assumed feedbacks, i.e. a 2xCO2 impact of around 1K and RF of around 3.71 W/m^2

        Even if we assume the exaggerated 2xCO2 CS of 3.2K, the net imbalance in the Earth’s annual energy budget is only 0.1123 W/m^2.
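
        For reference, the forcing formula behind these numbers is easy to reproduce – a minimal sketch (the 5.35 coefficient is the Myhre et al. fit cited above; whether a single year’s forcing increment can be equated with the current imbalance is a separate question, taken up in the response below):

            import math

            def co2_forcing(c_new, c_old):
                """Myhre et al. (1998) fit: radiative forcing in W/m^2 for a CO2 change."""
                return 5.35 * math.log(c_new / c_old)

            print(round(co2_forcing(382.5, 380.0), 4))  # ~0.0351 W/m^2 for one year's +2.5 ppmv
            print(round(co2_forcing(760.0, 380.0), 2))  # ~3.71 W/m^2 for a doubling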

        Please explain why the above is incorrect.

        If your answer is “hidden in the pipeline”, please specify where “in the pipeline” it is “hidden” and how this can be empirically verified.

        Thanks.

        Max

      • Assuming BobFJ’s provided definition of ‘waffle’ is correct, I can now say that you are waffling, manacker.

        ——

        1. If we have no idea what the climate sensitivity is, then we cannot know the lag time of the response to a forcing, because the lag time is proportional to the sensitivity * the effective heat capacity.

        You compute a 0.0351 W/m2 net imbalance based on the assumption that the lag time is approximately 1 year. Why assume that if you don’t know? Maybe the lag time is 1 s, in which case the net imbalance would be just over 0.000 000 001 W/m2, or it could be 100 years, in which case it would be 3.5 W/m2, except in that case, the analysis is likely to be quite inaccurate because it depends more on the past history of forcing changes.

        The point, though, is that in light of your implicit arbitrary assumption, it is ironic that you ask me for empirical evidence.

        A little math would show that there is always something ‘hidden in the pipeline’ when there is recent or ongoing change in forcing. This is just applied physics. For the general existence of such, it would be like me insisting you prove that gravitational acceleration near the Earth’s surface is ~ 9.8 m/s2. What requires evidence is a specific number or range of values. Well, there are measures of oceanic heat content, etc.
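
        To make the ‘pipeline’ point concrete, here is a minimal one-box energy-balance sketch (every parameter value is an illustrative assumption, not a fitted number): with a lag time of a couple of decades, a steadily ramping forcing always leaves a sizeable unrealized imbalance.

            SECONDS_PER_YEAR = 3.15e7
            S = 0.8    # assumed sensitivity, K per (W/m^2), i.e. ~3 K per doubling
            C = 8.0e8  # assumed effective heat capacity, J m^-2 K^-1 (~200 m of ocean)
            tau = C * S / SECONDS_PER_YEAR  # lag (e-folding) time = C * S, here ~20 years

            dt = 0.01       # time step, years
            temp = 0.0      # surface temperature anomaly, K
            forcing = 0.0
            for step in range(int(100 / dt)):  # 100 years of forcing ramping at 0.04 W/m^2/yr
                forcing = 0.04 * step * dt
                temp += dt * (forcing - temp / S) * S / tau

            imbalance = forcing - temp / S     # what is still 'in the pipeline', W/m^2
            print(round(tau, 1), round(forcing, 2), round(imbalance, 2))  # ~20.3, 4.0, ~0.8

        With these (assumed) numbers the unrealized imbalance settles near 0.8 W/m^2 – the same order as the 0.9 W/m^2 ‘net absorbed’ in the diagram at the head of this thread.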

        2. Will return with lag times for various assumed climate sensitivities.

  164. Patrick027

    We have drifted into a philosophical discussion.

    When you write:

    Don’t forget the immense uncertainty in the studies whose conclusions you like

    I could say the same applies for you.

    The main point (which you have not addressed, but the above sentence underscores) is that what we (science) know about what makes our climate act the way it does is far less than what we do not know, and to claim otherwise is arrogant. To make predictions for the future based on our limited knowledge is not only arrogant, it is foolish, for it is not what we know that will get us into trouble but rather what we do not know.

    I suggest you read The Black Swan, by Nassim Taleb, for an explanation why this is so.

    But let’s break off this discussion. It has become repetitive.

    Max

    • It has become repetitive, but I must say this:

      In as far as science is concerned, of course we don’t know everything and are trying to learn more.

      However, you don’t really learn more by insisting that a few studies trump everything else when those studies are flawed.

      In as far as policy/action is concerned, we can’t wait for 100 % certainty to do most of what we do.

      To make predictions for the future based on our limited knowledge is not only arrogant, it is foolish, for it is not what we know that will get us into trouble but rather what we do not know.

      It is not foolish to make predictions with some level of confidence when some level of confidence has been gained; and the IPCC is honest about the presence of uncertainties (whereas I get the impression that folks like Lindzen, or at least many of their fans, are very, perhaps arrogantly, certain of a low climate sensitivity). Far more foolish is to ignore such predictions. There may be and generally is some risk in acting on predictions, but what about the risk of NOT acting on predictions?

      A ‘prediction’ that all swans are white (in adult plumage, not feet, beaks, etc.) turns out to be incorrect, but while the categorical error is large, the quantitative error … well, what fraction of the world’s swans are black, and to what extent are white swans a good approximation for some regions?

      Do you think 100 % of swans are all black? Even 90 % ?

      If the weather forecast predicts rain tomorrow, do you plan for a dry day?

      ——-

      And by the way, yes, CLIMATE can in principle be predicted many years in advance – depending on the details of the prediction, even millennia or millions or even billions of years (e.g. we know the sun is getting brighter over 100s of millions of years). The amount of detail that can be reliably forecast shrinks with time, as the changing arrangements of continents and mountains and ocean geometry obviously affect the existence of such things as ENSO, and CO2 is significantly influenced by geological processes over long times – so we’d need to predict mantle convection, among other things. Given the slowness of that process, I’d guess that the time scale of the butterfly effect for mantle ‘weather’ forecasts is on the order of many millions – maybe 100s of millions (?) – of years.

      But note that while the specifics of physical geography would become hard to predict beyond some time horizon, the general character – the ‘climate’ of mantle convection and its crustal manifestations – can be predicted farther out: for example, by noting the trends in the very predictable forcing of radioactive decay, and considering the latent and sensible heat loss from the core, the effects of changing temperature on viscosity, etc, and the modulation of the effects on flow by phase transition regions, such as the ~ 660 km phase transition, the thermodynamics of which tends to favor layered convection as it is an impediment to convection across that level, but which nonetheless can be overcome depending on various factors.

      (PS There may be an interesting coupling of climate and plate tectonics in the case of the Andes and the Atacama desert, wherein, if I remember correctly, erosion or the lack thereof affects the sea floor sediments near the subduction zone, which affects the behavior of the subduction zone, perhaps causing a higher mountain range to develop, which will reinforce or intensify the desert.)

      Anyway, there are also biotic factors in determining erosion, so biological evolution would need to be predicted in order to predict topography, chemical weathering rates, etc, but … how likely is a major change in a given time frame?

      The time scale of the butterfly effect for the variables of most day-to-day weather predictions is something like 2 weeks. However, there is a sort of short-term climatic state that can change over somewhat longer time scales but may have some predictability beyond 2 weeks if it is due to SST anomalies (for example, ENSO); the butterfly effect will catch up eventually, though, and on that time scale, the short-term climatic state becomes a sort of intermediate-term weather.

      Conceivably there could be a stochastic nature to certain feedbacks/responses such that the specific timings in response to a 100-year scale climate change might not be easily predictable (ice sheets?, permafrost feedback?, methane clathrate/hydrate eruptions? changes in thermohaline circulation? – if there are larger events associated with these things, as opposed to gradual responses). For such phenomena, the climate is analogous to the tendency of a sand pile to form a conical shape with some particular slope as a narrower flow of sand falls onto the center; the ‘weather’ is the many small slides and fewer larger slides that occur as the slope becomes unstable and then stabilizes again. The calving of an iceberg is a small scale example – many individual events can’t be predicted very well but the overall rate of events over time can be predicted based on larger-scale factors. And so on for the individual updrafts in cumulus clouds, etc.

      If you move to Florida because of the warm winters, you’d be relying on a climate prediction. If you start a cacao tree plantation in the Arctic, you’d be ignoring a climate prediction (unless you have an actual greenhouse – and there might be lighting issues, but anyway…).

      Consider two winters with similar or the same climatic characteristics. The storm track activity, with number of windy days, cold snaps, etc, was the same, and the amount of snowfall and melt in each 30-day period was similar. If a prediction were made for such winter conditions and it was borne out, you’d say the prediction was correct, for either winter.

      BUT you would not expect the same blizzard, with the same snow fall, winds, etc, to occur on the same date. That would be an unlikely coincidence.

  165. Patrick 027, Reur response inserted in my; January 16, 2010 @ 5:29 pm

    “…First manacker and now you have said I have waffled. Pardon me but I thought waffling implied some indecisiveness. Certainly I have expressed uncertainty on some points, but that’s just being honest, and the points on which I have been said to waffle, I have actually done something quite different.

    A classic if minor example of your waffle is immediately demonstrated in our following exchange. (Your quote of me is in italics)

    Furthermore you [Patrick] should know that two black bodies at T1 and T2 each have their own unique emission spectra as a bell curve, the centroid of which correlates the mean T at the quantum level.
    Yes. (But it is also worth noting, very importantly, that the black body radiant intensity at any portion of the spectrum increases with increasing temperature. Towards long wavelengths, the relationship is almost linear. At shorter wavelengths, a small percentage change in T results in a large percentage increase in radiant intensity. At the peak per unit wavelength, a 1 % increase in T results in about a 5 % increase in intensity.)

    Yes, (or agreed) is all you needed to say, because the topic is about long-wave EMR at the surface and in the lower atmosphere, and the excursions you make are impertinent. Furthermore, I discussed (T1^4 – T2^4) in the same post, which makes it clear, for example, that the sun (having a high temperature / largely “short-wave” EMR) emits at a much higher rate than the surface of the earth. But so what? What has that got to do with the price of cheese? BTW, my italics above were PART of a response to some earlier waffle of yours.

    In the following exchange, mine in italics:

    I’m more and more thinking that you introduce your complicated and often extremely lengthy irrelevances here to try and awe some of the impressionable that may not even understand what you write.
    No, when people keep asking questions, then more questions, then more questions, or bringing up real or perceived issues, it is almost inevitable that any satisfactory response goes into the complexities of what could otherwise be described more briefly.

    My opinion is unchanged, and your relatively minor irrelevances above only add to my opinion. For instance in the second quote, if you are implying (in its bulk) that you were responding to a question, the fact is that you are talking to an issue or question that has NOT been asked and which is impertinent.

    Most analogies, even good ones, fail at some point, but so long as that point is beyond the intended purpose, it’s okay. A potential problem with the voltage analogy is that any nonzero voltage through a resistor (with finite resistance that allows some current) would cause heating, whereas a nonzero net LW flux of opposite direction causes cooling instead of heating or heating instead of cooling, etc.

    More waffle! ALL analogies are imperfect by definition. Your wording, A potential problem… [blah blah blah] …etc, is also rather confusing. Are you expressing uncertainty, but trying to imply, perhaps, that (j1 – j2) = net EMR flux power is compromised in some way?

    OH, and here is the MS Works dictionary definition of waffle, pasted:
    speak irrelevantly at length: to speak or write at length without saying anything important or interesting

    • Then please don’t ask any more questions about EMR or heat flux. Don’t mention anything about (T1^4 – T2^4). (Really, isn’t the exact value of the exponent quite unnecessary? All we need to know is that each term is a function of T and increases with increasing T. The importance of discussing the effect at individual wavelengths is: 1. the actual emission may not vary in proportion to the fourth power because of spectral variation of optical properties, and 2. there are no wavelengths at which the net flux is from cold to hot emitters/absorbers.) Don’t bother speaking about how the horizontal EMR does _________, etc – don’t waffle, BobFJ.

  166. The models are clearly too sensitive. The fact that any given model run with slightly different input parameters generates completely different results is proof. Models with positive feedbacks will invariably be unstable. Models with negative feedbacks will invariably be more stable. All models with positive feedback will generate a long-term linear trend upwards, which has a statistical 50% probability of being wrong in any system that has negative feedbacks.

    http://www.americanthinker.com/2009/11/the_mathematics_of_global_warm.html

      Models with positive feedbacks will invariably be unstable. Models with negative feedbacks will invariably be more stable. All models with positive feedback will generate a long-term linear trend upwards, which has a statistical 50% probability of being wrong in any system that has negative feedbacks.

      That analysis doesn’t make sense.

      You have to remember, please, that a net negative feedback in engineering jargon is not the same as a net negative feedback in climate jargon. Climatologists take the Planck response (increased LW emission as a function of temperature) as a given, and refer to all the other feedbacks as being net positive or negative. Labelling the LW emission as a function of temperature as a feedback and counting it with the rest, the climate system is thought to be stable, with net negative feedbacks – at least in general within a significant range of temperatures (between Snowball and runaway water vapor), at least in so far as Charney sensitivity goes. It is only a labelling issue, that’s all; the math is the same either way. If the climate system were unstable, the expected climate sensitivity would NOT be 3 K/doubling CO2, or 1 K, or 6 K, or even 10 K, – it would be INFINITY.
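
      The labelling point can be made explicit with the standard gain relation – a minimal sketch (illustrative numbers; lambda_0 ~ 3.2 W m^-2 K^-1 is the Planck response described above):

          LAMBDA_0 = 3.2  # Planck response, W m^-2 K^-1 (~1.2 K per doubling, no feedbacks)
          F_2X = 3.7      # radiative forcing per CO2 doubling, W/m^2

          def sensitivity(f):
              """Equilibrium warming per doubling for net (non-Planck) feedback factor f."""
              if f >= 1.0:
                  return float("inf")  # feedbacks overwhelm the Planck response: unstable
              return F_2X / (LAMBDA_0 * (1.0 - f))

          for f in (-0.5, 0.0, 0.5, 0.65, 1.0):
              print(f, round(sensitivity(f), 2))  # 0.77, 1.16, 2.31, 3.3, inf K per doubling

      Net positive feedbacks with 0 < f < 1 amplify the response without any instability; instability corresponds to f >= 1 and infinite sensitivity, which is exactly the point.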

      The models go up because of forcing. Run the model with more CO2, or more solar radiation, etc, and the temperature warms up towards a new equilibrium. Take such forcing change away at any point in time, and the temperature will fall back down.

  167. Patrick027

    Most studies I have seen put the “before PETM” CO2 level at 400 to 500 ppmv, not 1200.

    But this is all very dicey info, Patrick.

    Suffice it to say that PETM does not provide compelling evidence for a 2xCO2 climate sensitivity of 3K, but rather somewhere between 1 and 2K, if at all.

    That was actually my point to Marco.

    Max

    • Most studies you have seen? In that case you have deliberately been looking only at studies that give a low level. Most are above 500 ppmv. Which means a factor 9 increase, as you claimed, gives 4500 ppmv during the PETM. Of course, Max has a reference to that level AND has a mechanism to explain this enormous increase in CO2 levels. Right? Yes?

      No? How surprising…

      • Marco

        You are truly beating a dead horse.

        Whether it was 400 or 600 ppmv CO2 prior to PETM is totally irrelevant.

        Whether there were 2,000, 6,800 or 10,000 GtC released is also very conjectural.

        In any case it was far greater than the carbon contained in all the optimistically estimated fossil fuel reserves on our planet today.

        Whether this was in the form of CO2 or CH4 is also anyone’s guess.

        How quickly the CH4 converted to CO2 may be easier to guesstimate, but if the CH4 release was greater than its conversion to CO2 there would have been an atmospheric CH4 buildup theoretically causing GH warming.

        Was the CH4 concentration 1 ppmv prior to PETM or 0.5 ppmv? Did the release increase this 20-fold or 40-fold to 20 ppmv over a longer period?

        If so, what was the GH effect from CH4 as opposed to that of CO2?

        Who knows?

        The whole PETM calculation is so full of unknowns that it is a very poor example to “prove” any 2xCO2 climate sensitivity (as many studies acknowledge).

        Almost all of the recorded PETM extinctions were marine rather than land-based. Were the marine mass extinctions caused by a gradual depletion of oxygen in the ocean resulting from massive submarine CH4 releases rather than the temperature increase?

        Again, who knows?

        It is far better to look at empirical data from today’s actual physical observations (such as those from Spencer et al.) rather than to rely on dicey paleoclimate reconstructions.

        We see how wrong these can be just from the much more recent reconstructions resulting in the since discredited “hockey stick” (which purported to eliminate a historically well-documented Medieval Warm Period, so that the claim could be made that “the warmth of the last half century is unusual in at least the previous 1,300 years”).

        Let’s cap off this discussion, Marco. It is not headed anywhere, since you only have highly conjectural data on which to base your assumptions.

        Max

  168. Marco

    You wrote to Bob_FJ

    Most climate models are specifically aimed at long-term forecasts. Short-term variability is thus smoothed out, for the simple reason that we cannot (yet) predict the short-term variations and that these would thus provide a useless mathematical burden on the models (they already have to deal with many other factors).

    Are you actually claiming that the models can do a better job of predicting 80 to 100 years in advance than they can predicting 10 years in advance?

    I hope not, since this suggestion would defy all logic, because unknown outliers are much more likely to kick the whole prediction in the head the longer the time period is.

    There is no reason to assume that the “natural variability” (a.k.a. natural forcing) that the Met Office is now blaming for the current cooling despite record increases in CO2 will have a “sell by” time limit after which it ceases to affect our climate. And there is also no guarantee that some other natural forcing factor (such as an unusually inactive sun) may not suddenly begin to influence our climate in a quite unexpected way.

    Remember that the prediction was made in 1860 that Manchester would be covered by two meters of horse manure by the year 1920, as a result of the rapidly increasing number of horse carriages.

    Max

    • Max, I think you know fully well what I meant. Most models do not include short-term variation, such as PDO, AO, NAO, volcanoes, etc, nor the solar cycle, simply because they are difficult to predict on such short time scales. Hence, they are removed to make the calculations faster.

      The obvious problem is that we do not control the solar output, and that the models cannot predict variations in solar output. However, I find it to be rather questionable to put our faith in a reduction in solar output, such that it will offset the warming by the greenhouse effect. Excluding the possible effects of a dimming sun, the long-term predictions are thus more accurate than the short-term.

      Just as an example: I can quite accurately predict the time it takes me to get from my home to my work (on bike), especially when I have an idea of the direction of the wind. However, my predictions for the time spent on certain sections of the route can vary wildly. A traffic light not cooperating here or there, people not letting me pass there or elsewhere, it’s all variable around the route.

      • Very nice analogy at the end there,

        a quick clarification –

        it isn’t that models exclude PDO, AO, etc, but rather, they don’t try to predict specific instances/episodes/phases of these things. Rather, the climatic state that could be more easily predicted includes the general character of these things – not the specific geometry, but the texture.

      • Marco

        You are not seriously comparing your bike ride to work with the complexity of our planet’s climate, are you? That would be as silly as using the “heads and tails” analogy.

        It is clear that in such complex systems as our climate, where much is still unknown, the longer the prediction time period the greater is the possibility of unknown “outliers” kicking the whole prediction in the head.

        Landesman has shown this from the mathematical standpoint.

        Taleb has shown it from the practical standpoint.

        The past decade’s temperature record has shown it from the actually observed standpoint.

        To argue otherwise is foolish.

        Max

    • And there is also no guarantee that some other natural forcing factor (such as an unusually inactive sun) may not suddenly begin to influence our climate in a quite unexpected way.

      Of course most projections are made assuming solar forcing doesn’t change much.

      If people can start to predict significant changes in solar forcing, then projections will be made based on that.

      In the meantime, there have been climate modelling studies of what would happen if solar forcing changes. So we do have something to go on in that event.

      If you don’t know that the solar forcing will change, how do you intend to plan for it? There is a difference between having appropriately broad planning to cover a probability range, and just planning for only the unlikely outcomes while having little preparation for the expected outcomes. Like I asked before, do you plan for a dry day and only a dry day because the forecast is rain?

      Remember that the prediction was made in 1860 that Manchester would be covered by two meters of horse manure by the year 1920, as a result of the rapidly increasing number of horse carriages.

      If that was a prediction of horse carriage use, then it was wrong. If it was a prediction of horse manure CONTINGENT on assumed horse carriage use, then it was not shown to be wrong.

      Likewise, AGW predictions would not be falsified by a large reduction of CO2 emissions that limits warming.

      Climate predictions are contingent on factors, some within our control – and most likely those within our control are dominant in so far as potential for change is concerned within the next 100 – 1000+ years.

  169. Blouis79

    You have hit the nail on the head when you write:

    Models with positive feedbacks will invariably be unstable. Models with negative feedbacks will invariably be more stable

    One of the basic weak points in James E. Hansen’s model simulations is that they have been programmed to result in a highly unstable climate.

    As he testified before the U.S. Congress:

    Crystallizing scientific data and analysis reveal that the Earth is close to dangerous climate change, to tipping points of the system with the potential for irreversible deleterious effects.

    And:

    The Earth’s history shows that climate is remarkably sensitive to global forcings. Positive feedbacks dominate. This has allowed the entire planet to be whipsawed between climate states. Huge natural climate changes, from glacial to interglacial states, have been driven by very weak, very slow forcings, and positive feedbacks.

    While there have been many major swings in our climate it is pure conjecture to claim that these “have been driven by very weak, very slow forcings, and positive feedbacks”.

    Hansen then goes on to proclaim:

    The dangerous level of CO2 is at most 450 ppm, and it is probably less.

    The point is quite simply that model simulations can be made to show anything one wants them to show. Hansen has demonstrated that he is an AGW activist hiding in the cloak of an objective (tax-payer funded) scientist.

    As such, his dire warnings should be taken with a large grain of salt.

    It is far more likely, as Richard Lindzen and others have suggested, that our planet has several natural “checks and balances” (i.e. negative feedbacks) that have kept it habitable for ourselves plus the many other species.

    For example, Roy Spencer et al. have found based on CERES observations (rather than simply model simulations) that outgoing SW + LW radiation increases by 6.5 W/m^2 per degree C warming, versus 3.3 W/m^2 for the temperature effect alone, resulting in a net negative feedback with warming.
    http://www.weatherquestions.com/Recent-Evidence-Reduced-Sensitivity-NYC-3-4-08.pps
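
    For scale: if one simply took those two numbers at face value (a big ‘if’ – see the responses below), the implied equilibrium warming per CO2 doubling would be the doubling forcing divided by the response rate – a minimal sketch:

        F_2X = 3.7              # radiative forcing per CO2 doubling, W/m^2
        for lam in (3.3, 6.5):  # W m^-2 K^-1: Planck-only response vs. the quoted value
            print(lam, round(F_2X / lam, 2))  # ~1.12 K and ~0.57 K per doubling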

    The “whiplash effect” with imminent “tipping points” and runaway warming is a figment of the imagination of Hansen’s climate models, which does not pass the “reality test”.

    Max

    • Max, both Spencer and Lindzen are remarkably silent when you ask them to explain the massive temperature increases during interglacials, the PETM, and a host of other events in the history of the earth. If the earth is so well-balanced by negative feedbacks, temperatures should hardly EVER go up. The temperatures during the Eocene should not have been possible either. The supposed strong negative feedbacks are falsified by the history of the earth!

      • No Marco.

        As it actually turns out the “supposed strong negative feedbacks are” NOT “falsified by the history of the earth” (as you claim).

        Exactly the opposite is true. There has not been an irreversible “tipping point” with runaway warming as some (such as James E. Hansen) conjure up. This is because there are “checks and balances” (i.e. negative feedbacks) in our climate system.

        That is why we (and the other inhabitants of our planet) are still here and doing quite well, thank you.

        Max

      • There has not been an irreversible “tipping point” with runaway warming as some (such as James E. Hansen) conjure up.

        If there were indeed Snowball Earth episodes in the Paleoproterozoic and Neoproterozoic, those would have involved runaway feedbacks in both directions (around a hysteresis loop). That is inapplicable for the temperature range we are facing with AGW (or regular ice ages, for that matter). So far as I know (*?*), a runaway water vapor feedback is also outside the range of AGW and PETM; this is certainly true *if* the hot aftermath of a ‘Snowball’ (perhaps 50 deg C ?) didn’t trigger such runaway water vapor feedback. (Note that a runaway water vapor feedback doesn’t actually have to result in permanent conditions – it still has to be forced externally to be maintained; a planet could stray into such territory and then come back out of it).

        One idea pertaining to AGW and possibly to the end-Permian extinction involves warming-induced methane releases. There is also potential positive feedback from the carbon-cycle (at least relative to now, with the accumulation in the atmosphere being less than emissions).

        But we don’t really ‘need’ these things to be concerned, and these things could happen without being ‘runaway’. But it must also be pointed out that runaway feedbacks and tipping points are not always about going all the way from frozen to fried and back. A runaway feedback can be any feedback wherein, within some range of temperatures, the climate would not be stable. A ‘tipping point’ might be anything that is hard to reverse once it happens – hysteresis – one idea I’ve heard, for example: if a large part of the Arctic sea ice is lost for enough time, the fresher meltwater may disperse more than otherwise and the freezing point of the sea would drop, making refreezing more difficult.

        I don’t actually know what specifically Hansen was talking about, but we really don’t need runaway feedback to be concerned, though it doesn’t hurt to be aware of possibilities. The candidates for reality are not limited to Hansen’s and Lindzen’s.

      • – If Hansen was referring to any significant loss of ice sheets: neither the word tipping point (?) nor runaway is really applicable there, but it is a good example of something that is hard to reverse quickly. Once an ice sheet thins, the surface of the ice, due to its lower elevation, will tend to be warmer even for the same climate. It takes time for snow to accumulate.

        Scientists have trouble predicting just how fast ice sheets will respond. I am aware that faster-than expected sea level rise from more rapid ice sheet response is one of Hansen’s concerns.

    • For example, Roy Spencer et al. have found based on CERES observations (rather than simply model simulations) that outgoing SW + LW radiation increases by 6.5 W/m^2 per degree C warming, versus 3.3 W/m^2 for the temperature effect alone, resulting in a net negative feedback with warming.

      Show me this. (I’m not going to download a file from an unfamiliar source).

  170. Blouis79,
    I would be a little more careful when reading from an internet site, even if the author defines himself as an expert. Check twice, at the very minimum.

    • Riccardo

      You wrote to Blouis79 regarding Peter Landesman, author of The Mathematics of Global Warming:

      I would be a little more careful when reading from an internet site, even if the author defines himself as an expert. Check twice, at the very minimum

      Good advice.

      I checked out Peter Landesman.

      He is a PhD in Mathematics from City College of New York, author of Generalized Galois theory of differential equations published by The American Mathematical Society (his PhD thesis)
      http://www.ams.org/tran/2008-360-08/S0002-9947-08-04586-8/home.html

      He has also been guest lecturer on classification of generalized Gm-extensions and the Galois theory of linear ordinary differential equations at the Kolchin Seminar of Differential Algebra, Division of Science, City College of New York
      http://www.sci.ccny.cuny.edu/~ksda/gradcenter2005.html

      Looks like his qualifications stack up well against many of the mathematicians who are acting as computer programming scientists, etc. on the GCMs (without any of the biases that may come from being an “insider” and supporter of the “mainstream paradigm”).

      I would say that, when it comes to the complexities and the wide divergence in results obtained when working with several parallel differential equations, which may be non-linear, Landesman should know what he is talking about.

      As he points out in his essay, The Mathematics of Global Warming, even if the problem can be overcome that

      “it may be too difficult to collect enough data to accurately determine the initial conditions of the model”

      “the equations of the model may be non-linear”

      This means that no simplification of the equations can accurately predict the properties of the solutions of the differential equations. The solutions are often unstable. This means that a small variation in initial conditions will lead to large variations some time later. This property makes it impossible to compute solutions over long time periods.
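
      The sensitive-dependence point itself is textbook material – a minimal sketch with the Lorenz (1963) system, in which two trajectories initially differing by one part in a billion eventually decorrelate completely (this illustrates only the mathematical point; whether it rules out forced, averaged climate projections is the separate question discussed in the responses below):

          def lorenz_step(x, y, z, dt=0.001, s=10.0, r=28.0, b=8.0 / 3.0):
              """One forward-Euler step of the Lorenz 1963 system."""
              return (x + dt * s * (y - x),
                      y + dt * (x * (r - z) - y),
                      z + dt * (x * y - b * z))

          run1 = (1.0, 1.0, 1.0)
          run2 = (1.0 + 1e-9, 1.0, 1.0)  # perturbed initial condition
          for i in range(40001):         # integrate 40 model time units
              if i % 10000 == 0:
                  print(i // 1000, abs(run1[0] - run2[0]))  # separation grows ~exponentially
              run1 = lorenz_step(*run1)
              run2 = lorenz_step(*run2)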

      Coming at it from a different angle of uncertainty, probability and common sense, Nassim Taleb makes exactly the same point in his book, The Black Swan, in which he discusses what he calls “the scandal of prediction”, i.e. “we forget about unpredictability when it is our turn to predict”.

      A quote:

      We’ve seen that a) we tend to both tunnel and think “narrowly” (epistemic arrogance), and b) our prediction record is highly overestimated – many people who think they can predict actually can’t.

      I’d vote with Blouis79 and say Landesman’s paper should be taken seriously, especially by those trying to make climate predictions based on very complex computer model simulations of as yet poorly understood processes, and (even more so) those who are making policy decisions based on these predictions.

      Max

      • A rather unimpressive comment.

        If (your interpretation of?) Nassim Taleb is correct, should we not then discount everything Nassim Taleb has written on the subject of prediction, since he apparently has studied it and has narrowly focussed on it, etc.?

  171. Marco, re your response inserted in my January 16, 2010 @ 8:58 pm:

    Bob: you made the comment from Trenberth a stumbling block for the energy budgets indicated in the IPCC reports. Energy budgets that *did not deal with the current plateau*. It is nonsense that the inability to get a good insight over the current plateau automatically means that you can’t know anything about the period prior to that. It is very well possible events occur that suddenly make an analysis break apart. It does not mean the prior analysis is wrong. And THAT is the context of the discussion.

    Sorry Marco, but you are asserting that the unknowns involved in the “bad news plateau” are unique to that plateau, which is pure speculation (i.e. unscientific). The UK Met Office and others have described it as a natural cycle, and this seems a much more plausible hypothesis. For instance, it looks remarkably similar to that recorded around 1940. If it is a cycle, then by definition one might expect a period of opposite sign (in whatever causes the plateau) to precede it. (Please study the 1940 condition.)

    Regarding the “models”: are you really this dishonest? Most climate models are specifically aimed at long-term forecasts. Short-term variability is thus smoothed out, for the simple reason that we cannot (yet) predict the short-term variations and that these would thus provide a useless mathematical burden on the models (they already have to deal with many other factors).

    AND Patrick 027 wrote

    But it does fall within model behavior. Any individual run generates internal variability.

    1) See comments by Manacker and Blouis79
    2) Well actually, I was wanting to show the IPCC straight line projection as seen by policy-makers, but couldn’t be bothered to go find it, so instead referred to models. (and, as I said, they don’t show it)
    3) How could the models possibly show the “bad news” if we don’t know what data to input to them?
    4) OK, the IPCC straight line is significantly empirically wrong with an unexpected (UNKNOWN cause) dip-step early on. It may well be that this will be “corrected” by an UNKNOWN bump-up-step, however, we will have to wait and see.

    • The ‘straight line’ you are referring to is an average over multiple runs. RealClimate has a post showing what individual model runs look like, and I think you can find that within the IPCC report too. If a model produced a single run that was a straight line – no El Ninos or La Ninas, etc. – well, then that model would not be considered a good model (unless the forcing conditions were such as to eliminate such things, for example, a model of the climate of a flat piece of rock in space constantly facing the sun … there would be variability but not of the internally-generated sort).

      Averaging over multiple runs tends to eliminate the ‘noise’ of weather variability to bring out the signal of climate change with respect to longer time-averages. However, the texture of that ‘noise’ can also be analyzed for trends, etc, – they are a part of the climate.

      Averaging over multiple runs is a bit like, using a shorter-time period analogy, taking the snowfall for a particular winter, without looking at all the individual snowfall events. For complete understanding, you’d want to know if the character of those events were changing, but for some purposes you might want a graph showing how total snowfall changes over time.
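
      To illustrate the averaging point, here is a minimal sketch (with made-up numbers – not output from any actual GCM): each “run” is the same forced trend plus independent internally generated variability, and the ensemble mean suppresses the noise that any single run shows.

```python
import random

random.seed(0)     # deterministic for the example
YEARS = 30
N_RUNS = 20
TREND = 0.02       # assumed forced warming, deg C per year (illustrative)
NOISE_SD = 0.15    # assumed interannual variability, deg C (illustrative)

# Each run: the same forced trend plus independent 'weather' noise.
runs = [[TREND * t + random.gauss(0.0, NOISE_SD) for t in range(YEARS)]
        for _ in range(N_RUNS)]

# Ensemble mean across runs, year by year.
ensemble_mean = [sum(r[t] for r in runs) / N_RUNS for t in range(YEARS)]

# A single run can show flat or even cooling stretches; the mean stays
# close to the forced trend, with noise reduced by ~sqrt(N_RUNS).
print(runs[0][-1], ensemble_mean[-1], TREND * (YEARS - 1))
```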

  172. Riccardo, while I linked an American Thinker post, I have stated my own opinion from common sense science and control system and computer model application.

    manacker, you have found the problem. [url]https://chriscolose.wordpress.com/2008/12/10/an-update-to-kiehl-and-trenberth-1997/#comment-1493[/url]

    From the dumb mistakes department….
    Forcing is related [b]logarithmically[/b] to change in GHG concentration. Myhre et al. 1998 [url]http://folk.uio.no/gunnarmy/paper/myhre_grl98.pdf[/url]
    ΔF = α ln(C/C0)

    Forcing is assumed to be [b]linearly[/b] related to temperature change.
    ΔTs = λ·RF, where λ is the climate sensitivity parameter (IPCC AR4/Ramaswamy 2001)

    So a log relationship that delivers an annual incremental change of 0.035 W/m^2 is morphed into a linear CO2 doubling temperature of 3.7 W/m^2, back-projected linearly to today to arrive at the magical 0.9 W/m^2 in Hansen’s computer model and ratified by the correction to the 6.4 W/m^2 TOA out-of-balance error in the Kiehl/Trenberth Global Energy Budget to match the 0.9 W/m^2 assumed RF required to produce the right temperature change.

    Pure genius.

    • So a log relationship that delivers an annual incremental change of 0.035 W/m^2 is morphed into a linear CO2 doubling temperature of 3.7 W/m^2,

      The actual science makes more sense than your analysis of it. The 3.7 W/m2 and 0.035 W/m2 values come from the same relationship (change in radiative forcing ~= 5.35 * ln(CO2 final amount / CO2 initial amount)).

      Radiative forcing imposed with no response is then equal to a net imbalance; Equilibrium is reached when the climate response has changed the tropopause-level flux to cancel the forcing; until then, the remaining uncanceled forcing is equal to the imbalance.
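
      A quick numeric check of that point – a minimal sketch assuming the Myhre et al. coefficient of 5.35 W/m2 and, purely for illustration, a present-day level of ~385 ppm growing by ~2.5 ppm in a year:

```python
import math

ALPHA = 5.35  # W/m2, coefficient from Myhre et al. 1998

def forcing(c_final, c_initial):
    """Radiative forcing for a CO2 change: dF = ALPHA * ln(C/C0)."""
    return ALPHA * math.log(c_final / c_initial)

# Doubling CO2, from the same relation: ~3.71 W/m2.
print(forcing(2.0, 1.0))

# One year's increase (assumed illustrative values: 385 -> 387.5 ppm):
# ~0.035 W/m2, the annual increment quoted above.
print(forcing(387.5, 385.0))
```

      So the 3.7 W/m2 and ~0.035 W/m2 figures are two evaluations of one logarithmic formula, not a log relation “morphed” into a linear one.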

    • Blouis79

      Yeah. Circular logic at its best, as I tried to explain to Patrick027, who hasn’t quite grasped it yet (maybe he just doesn’t want to).

      Max

  173. Patrick 027, Re our exchange that you inserted in my January 16, 2010 @ 8:58 pm

    BTW, [Patrick] there is no uncertainty shown in the K & T 2009 update image at the head of this thread, despite the authors being aware of the “bad news”, and presumably ditto for the 2007 version.
    [1] Well, for one thing, there is no specification in the diagram of where the 0.9 W/m2 heat storage rate is actually going.

    AND Marco, has written; see my January 16, 2010 @ 8:58 pm:

    “…As Trenberth also notes in his Daily Camera opinion, [2] the energy budget from 2003-2008 is a problem, [3] NOT the energy budget for the prior period…”

    [1] Where the heat storage rate of 0.9 is going in the 2009 update is no more vague than in the following values: 161, 396 & 333 (all W/m^2). They are ALL portrayed as global-year averages, land and sea, at the surface. The 0.9 is also drawn ON the surface for land and sea. Neither is it shown as lost in the rising 17 (thermals) or 80 (E-T), which are both shown as constant, fixed lower-atmosphere flux loops.
    [2] If the recent energy budget is a problem, (because of the “bad news plateau”), then what is that 2009 depiction at the head of this thread? (showing no caveats)
    [3] The 1997 version does not show any heat storage rate, but because the surface temperature record shows increasing temperatures, energy-in must be greater than energy-out at the surface, and intuitively, it should be more than 0.9, because 0.9 relates to the plateau. Also, the 2007 version IS WITHIN the “problem period” of 2003 – 2008.
    So how can you (Marco) assert that the IPCC versions of 1997 and 2007 are OK, but that the 2009 version is not OK?

    • Heat storage can occur without surface warming via increased heating in the depths of the ocean.

    • … and I would expect the 0.9 W/m2 value is an intermediate (decadal?) scale average; it won’t be the same every year.

      • Patrick027

        We laid the 0.9 W/m^2 value to rest some time ago.

        It’s not “an intermediate (decadal?) scale average”, but rather an unsubstantiated “plug value”.

        It’s dead.

        Let it “rest in peace”.

        Max

  174. The point is quite simply that model simulations can be made to show anything one wants them to show.

    No, the known physics constrains allowable behavior. For parameterized relationships (sub-grid scale processes), observed relationships or values are used to constrain the parameterization. To the extent that GCMs are tuned, they are tuned for the average climate, NOT to reproduce a trend.

  175. Marco

    We got a bit sidetracked with your bicycle ride to work, but let me ask you again specifically what I asked earlier (but you have so far evaded):

    Are you actually claiming that the models can do a better job of predicting 80 to 100 years in advance than they can predicting 10 years in advance?

    If your answer is “yes”, please explain why this should be so in your opinion.

    Please try to be specific in your answer, avoiding oversimplified analogies.

    Thanks.

    Max

  176. manacker,
    I can read for myself, and anyone would see that his competence is in mathematics. Indeed, he made only general claims about the solution of sets of differential equations that anyone with a scientific degree knows. But he apparently does not know very well how climate models work, and indeed he didn’t make any real check on them; there’s nothing specific to climate models in that article. He also repeatedly confuses climate and weather, which is a further indication that he didn’t study the problem.

    But I know, you’re not interested in these details; you only need whatever thing you find over the internet that fits your needs. We’re talking about nothing.

    • Riccardo

      You wrote:

      But I know, you’re not interested in these details; you only need whatever thing you find over the internet that fits your needs. We’re talking about nothing.

      Blah, blah.

      Landesman turned out to be more qualified to discuss the pitfalls of trying to make model predictions with systems of coupled non-linear differential equations than you thought (because you did not check his credentials, but simply assumed he was unqualified).

      This was his point, Riccardo.

      The model predictions are worthless.

      It’s pretty simple actually.

      But, hey, you don’t even need a mathematical expert like Landesman to point this out. All you have to do is look at the temperature trend after 2000 (cooling of 0.1°C) as compared to the IPCC projection (warming of 0.2°C).

      Ouch! That tells it all.

      Max

  177. Bob_FJ

    You wrote:

    I was wanting to show the IPCC straight line projection as seen by policy-makers, but couldn’t be bothered to go find it, so instead referred to models. (and, as I said, they don’t show it)

    It is rather humorous, when you look at it a bit more closely.

    IPCC SPM 2007 Figure SPM.5. (p.14) shows these projections to year 2100. This is dicey enough, but for a chuckle, you can also refer to AR4 WG1 Chapter 10, Figure 10.4 (p.762), where this same crystal ball prediction is extended to the year 2300. Seriously, Bob, this is no joke. And for a real “roll on the floor” belly laugh you can find the projections extended to the year 3000 (believe it or not!) in Figure 10.34 (p.823).

    The ignorance of these guys is only exceeded by their arrogance, keeping in mind they could not even forecast the observed cooling after 2000.

    But leaving these preposterous projections aside, there are several virtual model runs, using different “scenarios” and “storylines” (and using the questionable 2xCO2 equilibrium climate sensitivity of 3.2K).

    A closer look shows that two “scenarios” involve the combustion (by 2100) of more fossil fuels than are very optimistically estimated to exist on our planet today.

    These are the two “scenarios” (A2 and A1FI) which show the highest increase in temperature by 2100, i.e. 3.4°C and 4.0°C above the 1980-1999 average, or 3.1°C to 3.7°C above today’s average.

    They can be tossed out, because there is no anthropogenic way to reach the projected atmospheric CO2 levels (around 1300 and 1600 ppmv, respectively).

    So let’s look at the projections, which are physically possible (B1, A1T, B2 and A1B).

    These show projected temperature increase of 1.8°, 2.4°, 2.4° and 2.8°C, respectively, over 1980-1999 average, or 1.5°, 2.1°, 2.1° and 2.5°C above today’s average.

    The assumed CO2 compounded annual growth rate (CAGR) for these cases is 0.48%, 0.65%, 0.80% and 0.86%, respectively (the compounding arithmetic is sketched below).

    For comparison, the 1989-2008 CAGR was 0.45%; this has slowed down a bit more recently, 2004-2008 CAGR was 0.42%.

    Case B1 (0.48% CAGR) sounds reasonable, with case A1T (0.65%) stretching the imagination a bit. Cases B2 and A1B (0.80% and 0.86%) are not reasonable and can also be tossed out.
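
    For anyone checking the compounding arithmetic behind these rates, here is a minimal sketch of what a constant CAGR implies for end-of-century CO2 (the ~385 ppm starting level and 2008 base year are my illustrative assumptions, not figures taken from the scenarios themselves):

```python
C0 = 385.0  # ppm, assumed 2008 CO2 level (illustrative)

# CO2 level implied by holding each stated CAGR constant out to 2100.
for label, cagr in (("B1", 0.0048), ("A1T", 0.0065),
                    ("B2", 0.0080), ("A1B", 0.0086)):
    c2100 = C0 * (1.0 + cagr) ** (2100 - 2008)
    print(label, round(c2100), "ppm")  # ~598, ~699, ~801, ~846 ppm
```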

    So we have a “reasonable” projection (all other things being equal) of 1.5° to 2.1°C, provided the 2xCO2 equilibrium CS of 3.2°C, as assumed by the models, is also “reasonable”.

    As you know, this is based on model simulations of strong net positive feedbacks from water (as vapor, liquid droplets and ice crystals), which have been put into serious doubt by recent physical observations.

    As you also know, these observations, from CERES and ERBE satellites, showed a net increase in total outgoing SW + LW radiation with warming, rather than a decrease, as assumed by all the climate models. In other words, we have seen a net negative rather than positive feedback.

    Taking this into account, the calculated theoretical GH warming by 2100 would be well below 1°C (yawn).

    So the EU ministers that “promised no more than 2°C warming by 2100” can breathe easy. They do not have to do anything to reach this. (Besides, they won’t be around, anyway.)

    In fact, if the current cooling continues, they may need to find “mitigation” proposals to stop a 1+°C drop in temperature by 2100.

    How about imposing a tax on carbon? Would that help?

    Max

    • The ignorance of these guys is only exceeded by their arrogance, keeping in mind they could not even forecast the observed cooling after 2000.

      In other words, you are a lost cause. You just don’t get it.

      • Sorry, Patrick.

        I got it.

        You (and the IPCC forecasters) just didn’t, as was evidenced by their lousy prediction.

        Do better next time?

        How about for the next 100 or even 1,000 years?

        I don’t think so, Patrick. Do you?

        Max

      • But the prediction has not been shown to be lousy.

        Somewhat tangential to that, but:

        Bear in mind, it is a predicted relationship, and thus the actual results are contingent on input. It would not falsify the models if anthropogenic emissions changed, or if the solar forcing changed either.

        Hence, some would object to the term prediction and refer to it instead as a projection. But for me it works either way, because I know what is meant.

  178. Marco and Patrick
    Concerning your various comments on climate models and how various unknowns may or may not be embraced in them – for instance the current “bad news plateau” – the following comments by a hydrological engineer, concerning his experience in hydrological modelling and his interactions with climate change models and factors, should be of interest to you (at comment #9097, page 61):


    This writer is new to the site but also has relevant earlier posts, including his experience at #9071 and #9076, etc.
    If you can’t be bothered to read it, perhaps you could at least read this peer reviewed paper:
    http://www.itia.ntua.gr/en/docinfo/900/

    Oh, and you might find the photo etc at #9081 interesting

    • Bob, a poster is ‘peer reviewed’ to about the same extent as an op-ed in a magazine is peer reviewed. It’s quite interesting our dear hydrologist modeller presents it as “peer reviewed”…
      On top of that: several GCMs are KNOWN to still have problems with regional climate change (notably a focus area for the next IPCC report). They are actually quite good for the global climate. It seems our Greek scientists are a bit…errr….fast on the trigger?

      And I can’t be bothered with one picture as supposed evidence for prolonged droughts. Hint: it isn’t.

  179. Blouis79
    Your html links did not work because you need to use less-than and greater-than symbols (angle brackets), not these guys: [ and ]. I demonstrated the latter as substitutes earlier on because otherwise the angle brackets would have activated the commands and disappeared.
    See, if necessary:

    An update to Kiehl and Trenberth 1997


    And, below that, January 10, 2010 @ 11:48 pm.

  180. Marco, I sense that recently Patrick has become rather testy, and somewhat incoherent in response to some posts to him. Whilst giving him time to relax for a while, I thought I’d catch up on some lesser points of yours. For instance, you wrote:

    3. Gavin Schmidt may have been on the cc list many times, been mentioned in the e-mails many times, but as an author he is there a ‘whopping’ 5-6 times.

    You entirely miss the point. I would not expect Gavin to lead for Mann at RC, or Hansen at GISS; would you? However, he featured in various ways 131 times in the Climategate Emails, which clearly shows that he was a member of the cabal, very well in touch with their programme on the American side (outside of CRU itself).

    For instance, see the following Climategate Email, which describes planned censorship at RC offered to CRU (my bold added), and which clearly is a CC instruction to Schmidt from Mann. (As you may be aware, Gavin, at NASA, is Mann’s wordsmith at RC.)

    From: “Michael E. Mann” To: Tim Osborn, Keith Briffa
    Subject: update. Date: Thu, 09 Feb 2006 16:51:53 -0500
    Cc: Gavin Schmidt

    guys, I see that Science has already gone online w/ the new issue, so we
    put up the RC post. By now, you’ve probably read that nasty McIntyre
    thing. Apparently, he violated the embargo on his website (I don’t go
    there personally, but so I’m informed).

    Anyway, I wanted you guys to know that you’re free to use RC in any way
    you think would be helpful. Gavin and I are going to be careful about
    what comments we screen through, and we’ll be very careful to answer any
    questions that come up to any extent we can. On the other hand, you
    might want to visit the thread and post replies yourself. We can hold
    comments up in the queue and contact you about whether or not you think
    they should be screened through or not, and if so, any comments you’d
    like us to include.

    You’re also welcome to do a followup guest post, etc. think of RC as a
    resource that is at your disposal to combat any disinformation put
    forward by the McIntyres of the world. Just let us know. We’ll use our
    best discretion to make sure the skeptics don’t get to use the RC
    comments as a megaphone…

    Mike

    Any comments?
    There are other Emails discussing control of the peer review process and control of journals. Might you be interested in these too?

    • Bob: care to explain the problem with a group of people trying to counter misinformation? And please provide evidence that Gavin is “Mann’s wordsmith at RC”. ANY blog has EVERY right to censor comments (Steve McIntyre has done so on multiple occasions, and Watts does as well), and in this case it was particularly required: I have seen the McIntyre and Watts cheering crowd at work; within 15 comments the discussion becomes one major mudslinging match with loads of complete nonsense thrown in. You will have seen it, too, but perhaps never recognised it. Just look at the fraud allegations against Briffa in several newspapers: JOURNALISTS already further ‘interpreting’ McIntyre’s claims in a direction he himself claims (albeit in my opinion rather weakly) he never implied. Add the religious anti-AGW zealots, and Gerlich & Tscheuschner will be brought up as evidence Mann committed fraud by not correcting for the UHI that Anthony Watts has proven…

      No need to discuss the “control”-issues. We will probably never agree on that either. Suffice it to say that I’m not a big fan of journals where editors put politics before science (as Chris de Freitas apparently did), nor of putting references to faulty papers in a review without expressly noting that the field does not consider the analysis valid or significant.

      • Marco;

        “Bob: care to explain the problem with a group of people trying to counter misinformation?…”

        For a start, if it is misinformation that is to be posted, it is best to face it head-on, and show that it IS indeed misinformation rather than inconvenient information.

        “…ANY blog has EVERY right to censor comments (Steve McIntyre has done so on multiple occasions, and Watts does as well), and in this case it was particularly required: …”

        Agreed; however, there are big differences in scope. For instance, BRIEFLY, even George Monbiot’s blog at the Guardian (which is a rather extreme pro-AGW site) actually identifies MANY posts that have been deleted by the moderator, whereas RC is known to simply remove posts at the moderation stage. This gives the faithful the impression that no counter-argument exists.
        I have had various inconvenient posts vanish in this way only at RC; for instance, see items 1 & 2 above at:

        An update to Kiehl and Trenberth 1997

        “…And please provide evidence that Gavin is “Mann’s wordsmith at RC”…”
        The word ‘wordsmith’, whilst subjective, may not be the best choice. Any time spent at RC shows that Mann, who is in “high office” at RC, keeps a low profile, whereas Gavin Schmidt is all over the place. Got it?

        I’ve kept it brief, and I’m not going to flog through the rest of your comments. Thus, any rational readers can form their own opinions.

      • Whoops, sorry: the last blockquote above should only embrace the first two lines. The next six lines should be separate; they are my comment on the two above.

      • Bob: when Gavin relaxed the moderation on RC, the threads became inundated with repetitions of old and debunked claims. Within 15-20 posts the discussion wasn’t related to the actual topic, it went all over the place with loads of RTFFAQ. WITH the moderation, this chaos formation takes a lot longer. I don’t see the benefit of the “this post has been removed by the moderator” remark you will see at the Guardian.

        Of course, your “inconvenient comments” are likely to have been repetitions of previous posts that were already debunked or answered, as Patrick also noted previously.

  181. Of greater relevance here than so-called ‘climate-gate’ (if that looks suspicious to you, try looking into Exxon and compare):

    the e-folding time of disequilibrium = heat capacity * climate sensitivity

    Contributions to effective heat capacity of climate system and climate lag time

    Global average estimates, MJ/(m2 K)

    Input:

    ** Land surface – 20 m depth, density 2600 kg/m3, specific heat 733 J/(kg K) (the last two values correspond to the inorganic portion of soil, from Hartmann, p.85; the first value corresponds to a penetration depth for a forcing with a time scale near 25.4 years at a typical soil thermal diffusivity of 5e-7 m2/s – using a thermal conductivity of 2 W/(m K), typical for rocks, the thermal diffusivity is about 1e-6 m2/s, in which case the same penetration depth corresponds to a time scale of about 12 years; for such thermal diffusion, the penetration depth is proportional to the square root of (thermal diffusivity * time scale).)

    *** Assuming 20 % increase in water vapor over 3 K increase, where present water vapor is about 33 kg/m2

    H2O Latent heat of vaporization: 2.5 MJ/kg
    H2O Latent heat of fusion (melting): 334 kJ/kg

    Water specific heat = 4186 J/(kg K)
    specific heat of air = 1004 J/(kg K)

    Land area ~= 29 % of global area.
    Assuming troposphere is ~= 85 % of atmosphere by mass

    Atmospheric mass ~= 10,070.6 kg/m2 (though the inputs justify no more than 3 significant figures)

    ———————————-

    ** obviously, the land heat capacity is just to get a sense of what it could be; the amount depends on the time scale itself…

    *** water vapor increase: this would increase for higher temperatures

    Fortunately those are small contributors

    Total organic C at the surface averages to ~ 4 kg/m2 (4.4 or 4.0 kg/m2 from two different sources at end of paragraph); living biomass organic C ~ 1 kg/m2 global average; even assuming living matter is ~ 5 % C by mass (with about 90 % H2O), living biomass probably has a very small heat capacity relative to other components.
    (http://carboncycle.aos.wisc.edu/index.php?page=global-carbon-cycle, Hartmann p.322)

    Rivers and lakes would make a small contribution relative to the ocean.

    ——————–

    Contributions to heat capacity of climate system, MJ/(m2 K)

    ** Land surface, 20 m depth:
    11.1

    *** 20 % increase in water vapor per 3 K warming:
    5.6

    Atmosphere: 10.1
    (troposphere: 8.6)

    melting ice, in m liquid over area of ocean per K warming:
    1/15 m / K: 15.8
    1/6 m / K: 39.5
    1/3 m / K: 79
    2/3 m / K: 158
    5/3 m / K: 395

    Top ___ m of ocean:
    70: 208
    100: 297
    200: 594
    300: 892
    500: 1486
    3800 (whole ocean): 11,294

    ————————–

    Contributions to lag time of climate response (the e-folding time of disequilibrium)
    = heat capacity * climate sensitivity
    (remember 3600 * 24 * 365.25 s per year)

    At climate sensitivities (K / (W/m2)): 0.811, 0.270, 0.135

    (corresponding to approx. 3 K, 1 K, and 0.5 K equilibrium response to doubling CO2 with 3.7 W/m2 forcing)

    (Note that this calculation assumes that the same temperature change occurs in all thermal masses, including the stratosphere. Fortunately, the special case of the stratosphere makes little difference to the total; nonetheless it is included below to give a rough sense of its potential.)

    Years:

    ** Land surface, 20 m depth:
    0.284, 0.095, 0.047

    *** 20 % increase in water vapor per 3 K warming:
    0.143, 0.048, 0.024

    Atmosphere: 0.260, 0.087, 0.043
    (troposphere: 0.221, 0.074, 0.037)
    (stratosphere: 0.039, 0.013, 0.006)

    melting ice, in m liquid over area of ocean per K warming:
    1/15 m / K: 0.406, 0.135, 0.068
    1/6 m / K: 1.02, 0.338, 0.169
    1/3 m / K: 2.03, 0.677, 0.338
    2/3 m / K: 4.06, 1.35, 0.677
    5/3 m / K: 10.2, 3.38, 1.69

    Top ___ m of ocean:
    70: 5.35, 1.78, 0.891
    100: 7.64, 2.55, 1.27
    200: 15.3, 5.09, 2.55
    300: 22.9, 7.64, 3.82
    500: 38.2, 12.7, 6.36
    3800 (whole ocean): 290., 96.7, 48.4

    TOTAL, excluding stratosphere and melting ice, and including top ___ of ocean:
    70: 5.99, 2.00, 0.999
    100: 8.28, 2.76, 1.38
    200: 15.9, 5.31, 2.65
    300: 23.6, 7.85, 3.93
    500: 38.8, 12.9, 6.47
    3800 (whole ocean): 291, 96.9, 48.5

    ————————–
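
    To make the arithmetic above easy to check, here is a minimal sketch (the constants are the same ones used above; treat it as a back-of-envelope aid, not a climate model):

```python
import math

SECONDS_PER_YEAR = 3600 * 24 * 365.25
OCEAN_FRACTION = 0.71  # ocean share of global area assumed above

def ocean_heat_capacity(depth_m):
    """Global-average heat capacity, J/(m2 K), of the top depth_m of ocean:
    specific heat * density * depth * ocean fraction."""
    return 4186.0 * 1000.0 * depth_m * OCEAN_FRACTION

# Lag time = heat capacity * climate sensitivity, for the top 70 m of ocean
# (~208 MJ/(m2 K)), at the three sensitivities used above.
for sens in (0.811, 0.270, 0.135):  # K/(W/m2): ~3, 1, 0.5 K per doubling
    lag_years = ocean_heat_capacity(70.0) * sens / SECONDS_PER_YEAR
    print(round(lag_years, 2))  # ~5.35, 1.78, 0.89 years, as in the table

# Thermal-diffusion penetration depth ~ sqrt(diffusivity * time scale):
# sqrt(5e-7 m2/s * 25.4 yr) ~ 20 m, the land depth assumed above.
print(math.sqrt(5e-7 * 25.4 * SECONDS_PER_YEAR))
```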

  182. Hansen’s supervisor, John Theon, in Jan 2009, before Climategate:
    http://epw.senate.gov/public/index.cfm?FuseAction=Minority.Blogs&ContentRecord_id=1a5e6e32-802a-23ad-40ed-ecd53cd3d320

    From: Jtheon [mailto:jtheon@XXXXXXX]
    Sent: Thursday, January 15, 2009 10:05 PM
    To: Morano, Marc (EPW)

    Subject: Climate models are useless

    Marc, First, I sent several e-mails to you with an error in the address and they have been returned to me. So I’m resending them in one combined e-mail.

    Yes, one could say that I was, in effect, Hansen’s supervisor because I had to justify his funding, allocate his resources, and evaluate his results. I did not have the authority to give him his annual performance evaluation. He was never muzzled even though he violated NASA’s official agency position on climate forecasting (i.e., we did not know enough to forecast climate change or mankind’s effect on it). He thus embarrassed NASA by coming out with his claims of global warming in 1988 in his testimony before Congress.

    My own belief concerning anthropogenic climate change is that the models do not realistically simulate the climate system because there are many very important sub-grid scale processes that the models either replicate poorly or completely omit. Furthermore, some scientists have manipulated the observed data to justify their model results. In doing so, they neither explain what they have modified in the observations, nor explain how they did it. They have resisted making their work transparent so that it can be replicated independently by other scientists. This is clearly contrary to how science should be done. Thus there is no rational justification for using climate model forecasts to determine public policy.

    With best wishes, John

  183. Patrick027

    Your detailed estimations of heat capacities, latent heats, etc. to arrive at an “equilibrium time lag” are very nice, but they still do not answer the question.

    If the energy has already been absorbed by the total planetary climate system, but just has not warmed everything to an equilibrium state, exactly where is this energy that is “hidden in the pipeline”?

    What empirical data do we have based on actual physical observations, which show us this “hidden energy”?

    The postulated energy is either in the system somewhere (where it can be detected and measured) or it is not in the system (i.e. does not exist, except in someone’s imagination or computer model).

    Recent measurements show us that (a) the (globally and annually averaged land and sea) surface temperature is cooling (HadCRUT), (b) the troposphere is cooling (UAH) and (c) the upper ocean is cooling (Argo). Where is the “hidden energy” hiding?

    Max

    • “The postulated energy is either in the system somewhere (where it can be detected and measured) or it is not in the system (i.e. does not exist, except in someone’s imagination or computer model).”

      Bear in mind that the phrase ‘in the pipeline’ might be called colloquial in this context – whatever – it is only a phrase.

      It is not ‘energy’ that is hidden. Nothing is really hidden (well, there is data that is hard to obtain, but that’s somewhat of a different can of worms – now, please don’t ask me where these worms are hidden 🙂 ).

      The energy is neither in the system nor is it imaginary in that sense. It is a physical prediction. When a disequilibrium is introduced, it doesn’t immediately vanish; the climate response takes time. During this time, heat (for a heat flux disequilibrium) continues to accumulate at the rate of the disequilibrium. If the disequilibrium has not yet disappeared, then more heat is accumulating. Given the heat capacity of the system and the required change to climate to bring about equilibrium, heat will tend to accumulate for some time into the future if the disequilibrium is not infinitesimal.

      It is the remaining climate change necessary to balance out the as-yet remaining imbalance, and the associated heat accumulations (and anything else), that is ‘in the pipeline’.

      • Patrick027

        What you have written sounds logical, except for the fact that the “net imbalance” (which K+T show on their cartoon) is supposedly coming from the GH impact from the annual increase in atmospheric CO2 PLUS an “imbalance” that is still “in the pipeline”.

        The first part makes perfect sense to me, in accordance with the GH theory.

        The second part sounds like a bit of “voodoo” science to me, Patrick.

        If the added energy (W/m^2) has already entered our climate system, but is “hiding under a rock” somewhere, we should be able to locate and measure it.

        Since there has been no recent increase in either the global surface temperature (HadCRUT), the global tropospheric temperature (UAH) or the upper ocean temperature (Argo), the energy cannot be found in our system. The observations actually show a slight decrease in all of these temperatures at least since 2005, so it seems likely that there is less energy in the climate system today than in 2005, rather than more.

        Since atmospheric temperature has not increased, there has been no increase in atmospheric water vapor content, so the missing energy cannot be “hidden” there as latent heat.

        What about the energy in melting ice? A high estimate of all the net ice melting per year from the Antarctic and Greenland ice sheets, non-polar glaciers and Arctic plus Antarctic sea ice is around 824 Gt per year. The total energy from latent heat of fusion represents less than 0.02 W/m^2, so this is not where it is “hidden”.
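
        A quick check of that arithmetic – a minimal sketch using the figures just quoted, with the Earth’s surface area and seconds per year as the only added constants:

```python
MELT_KG_PER_YEAR = 824e12   # 824 Gt/yr, the high estimate quoted above
LATENT_FUSION = 334e3       # J/kg, latent heat of fusion of ice
EARTH_AREA = 5.1e14         # m2, Earth's surface area
SECONDS_PER_YEAR = 3.156e7

# Global-average flux implied by that melt rate.
flux = MELT_KG_PER_YEAR * LATENT_FUSION / (EARTH_AREA * SECONDS_PER_YEAR)
print(flux)  # ~0.017 W/m2, i.e. less than 0.02 W/m^2 as stated
```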

        So where is the “hidden energy”?

        You wrote:

        The energy is neither in the system nor is it imaginary in that sense. It is a physical prediction.

        A “physical prediction” sounds rather hypothetical to me, Patrick. As a rational skeptic (in the scientific sense) I am looking for empirical data based on actual physical observation to confirm that the hypothesized “physical prediction” is, indeed, valid (and not simply based on “circular logic” or poorly substantiated model simulations).

        So far you have been unable to convince me.

        Keep trying.

        Max

  184. … For constant climate sensitivity and effective heat capacity, and where the lag time (e-folding time of disequilibrium) is r, the radiative forcing relative to some baseline level is RF, and the radiative disequilibrium is Rde,

    For RF = RF1 after t = t1, and RF = 0 before t = t1, then:

    Rde = 0 before t = t1
    Rde = RF1 at t = t1
    after t = t1, Rde = RF1*exp[ -(t-t1)/r ]

    And where RF is changing over time, the Rde at time t0 is equal to a sum of decaying disequilibriums, each originating from a change in RF at some prior time:

    Rde
    = integral (from t= -infinity to t0) of [ ( d(RF)/dt ) * exp[ -(t0 – t)/r ] * dt ]
    = exp(-t0/r) * integral (from t= -infinity to t0) of [ ( d(RF)/dt ) * exp[ t/r ] * dt ]

    This is approximately true if d(RF)/dt is replaced by an average of d(RF)/dt over time periods sufficiently smaller than r.

    —–

    Notice that if d(RF)/dt ~= constant going back to t where t0 – t is large relative to r, then

    Rde ~= d(RF)/dt * exp(-t0/r) * integral (from t = -infinity to t0) of [ exp[ t/r ] * dt ]

    = d(RF)/dt * exp(-t0/r) * r * exp(t0/r)

    = d(RF)/dt * r

    See http://www.realclimate.org/index.php/archives/2009/12/unforced-variations/comment-page-24/#comment-152435 for some background.
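
    A numerical check of that last limit – a minimal sketch with illustrative values (r = 10 years and d(RF)/dt = 0.04 W/m2 per year are my assumptions, not fitted numbers):

```python
import math

R = 10.0          # lag time r, years (assumed)
DRF_DT = 0.04     # d(RF)/dt, W/m2 per year (assumed)
T0 = 100.0        # evaluate Rde after a ramp long compared to r
DT = 0.01         # integration step, years
N_STEPS = 10_000  # T0 / DT steps

# Rde at t0 as the sum of decaying disequilibriums from prior forcing changes.
rde = sum(DRF_DT * math.exp(-(T0 - i * DT) / R) * DT for i in range(N_STEPS))

print(rde, DRF_DT * R)  # both ~0.4 W/m2, i.e. Rde ~= d(RF)/dt * r
```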

  185. from
    “Biogenic Methane, Hydrogen Escape, and the Irreversible Oxidation of Early Earth” – David C. Catling, Kevin J. Zahnle, Christopher P. McKay
    http://www.sciencemag.org/cgi/content/abstract/sci;293/5531/839

    lower stratosphere at present has about 3 ppm H2O, 1.7 ppm CH4, 0.55 ppm H2

    from class handout:
    stratospheric H2O: 3 ppm;
    residence time: 1.6 years
    sources of stratospheric H2O, trillion g/year:
    oxidation of CH4: 0.11
    injection from below:
    Hadley cell: 0.22
    severe storms: 0.8

    However, Hartmann, p.325: “Methane oxidation by OH is the dominant source of water vapor in the stratosphere.” – which disagrees with the handout cited above.

    Hartmann, p.325: “The primary removal mechanism [of CH4] is oxidation by hydroxyl (OH) in the atmosphere.”

  186. Marco, further to my January 19, 2010 @ 9:13 pm:
    You should be aware that hydrological engineers are members of the earth sciences community who are subjected to significant pressure from ecological and AGW activists, or from others in authority or business concerned with these considerations. Whilst hydrologists are generally under this pressure, it would seem that at least some tend to be more practical than the academics. Have you ever noticed that applied scientists such as other engineers and geologists, even some biologists, are often notably highly sceptical of AGW?

    Concerning your comment inserted in my post:

    [1] Bob, a poster is ‘peer reviewed’ to about as much extent as an op-ed in a magazine is peer reviewed. It’s quite interesting our dear hydrologist modeller presents it as “peer reviewed”…
    [2] On top of that: several GCMs are KNOWN to still have problems with regional climate change (notably a focus area for the next IPCC report). They are actually quite good for the global climate. It seems our Greek scientists are a bit…errr….fast on the trigger?
    [3] And I can’t be bothered with one picture as supposed evidence for prolonged droughts. Hint: it isn’t.”

    [1] If you are claiming that the published paper from Athens University is an op-ed in a magazine, then here are a few details of it:
    Anagnostopoulos, G. G., D. Koutsoyiannis, A. Efstratiadis, A. Christofides, and N. Mamassis, Credibility of climate predictions revisited, European Geosciences Union General Assembly 2009, Geophysical Research Abstracts, Vol. 11, Vienna, 611, European Geosciences Union, 2009.
    BTW, did you check-out their rather nice poster version as presented at the general assembly in Vienna 2009? That wouldn’t have been more than a flick, if you did, I guess!
    I think the apparent sneer that you firstly levelled at an experienced hydrological engineer does not reflect well on you. I find his posts to be very lucid and rational. For example, you should find his latest on modelling to be interesting:

    [2] Well, as you may know, last June, Gavin wrote this; my bold:

    “…In my case, [I’m busy] it is because of the preparations for the next IPCC assessment and the need for our group to have a functioning and reasonably realistic climate model with which to start the new round of simulations.

    The trouble is that mainstream politicians etc have been conned into believing that the earlier less realistic models are for real.

    [3] If you are not going to actually read things properly, you would be more credible not commenting at all. Obviously you did not notice the following, extracted from the history link that was also given. The photo that you spurn is thus from the time of the so-called “Great Eastern Drought”:
    “…Drought occurs constantly and a part of Australia is always being affected by it (1). Australia has been struck by 13 major droughts since 1860, excluding the current drought of 2002–2007 (2). These major droughts include that of 1895–1903, often referred to as The Federation Drought, and The Great Eastern Drought of 1914–1915, which were among the most devastating in terms of lack of rainfall and the effects on primary production and the economy (3). It is through these major droughts that many lessons have been learnt…”

    Nor, presumably, did you notice the other photos and the many references therein?

    • Have you ever noticed that applied scientists such as other engineers and geologists, even some biologists, are often notably highly sceptical of AGW?

      So what? Have they studied it? What are their reasons?

      The trouble is that mainstream politicians etc have been conned into believing that the earlier less realistic models are for real.

      That’s just BS.

      No model besides the thing itself (anything can be considered a computer model of itself, at least philosophically) will ever be perfect. But must every last atom be counted before you buy food? Just because a model is an improvement on an earlier model does not mean that everything about the earlier model was junk. 80 % does not equal 0 %. So please be reasonable.

    • Sigh. Have you ever been involved in research at all? I have done my share of abstract/poster reviewing, and there are in essence three questions such a review needs to answer:
      1. Which section should the poster be in? (compare to op-ed: what topic is the op-ed discussing?)
      2. Does it appear to be novel? If so, is it novel enough to perhaps use it as an oral presentation? (compare to op-ed: is it something new, where shall we publish it, page 2 or page 159?)
      3. Are there any glaring mistakes in the abstract? (compare to op-ed: do we see any immediate factual mistakes, like discussing Clinton while in actuality Bush was the president, or something similar?)

      Compare that to a REAL review of a paper, where one looks closely at the methodology, the figures, and whether the conclusions fit the data. I will wait until our Greek hydrologists get their paper published in a proper journal. Let’s see how well their poster stands up to real scrutiny.

      2. Schmidt is working on new models, yes, which are as far as I can tell more focused on regional climate. Do I need to repeat that we already know the GCMs have problems with regional climate?

      3. Not knowing all the facts doesn’t seem to bother you when reacting…
      The fact still remains that the droughts in the past don’t take anything away from the current drought, and most certainly not from climate change caused by anthropogenic CO2 emissions.

  187. PETM – it also occurs to me that some estimates ‘floating around’ of the amount of C released might be based on the temperature change.

    • Patrick027

      Regarding C release in PETM, I thought these estimates were based on isotope ratios. At any rate, if they vary from 2,000 to 10,000 GtC, they are very dicey and not worth much.

      And if, as you say, they have been estimated based on the temperature increase they are supposed to have caused, then it is circular logic to use them to estimate 2xCO2 climate sensitivity.

      Marco may disagree, but I think we agree that the PETM is a poor example for estimating climate sensitivity.

      Max

      • The PETM is just one of MANY examples I mentioned. And no, temperature changes have not been determined from carbon release. The problems are in the determination of the latter, since the 2xCO2=3C climate sensitivity indicates we’re missing a positive forcing, or we severely underestimate C-release during the PETM. See also Zeebe et al. http://www.nature.com/ngeo/journal/v2/n8/full/ngeo578.html

      • And if, as you say, they have been estimated based on the temperature increase they are supposed to have caused, then it is circular logic to use them to estimate 2xCO2 climate sensitivity.

        Yes, of course, and by the same logic, if the climate sensitivity were falsified by such an estimate, it would be ‘circular falsification’.

        I brought it up because it occurred to me that some scientists might offer an estimate of the CO2 level change based on the paleoclimate record, and based on the baseline CO2 level. If the baseline CO2 level is different, that would affect the results.

        And the emission estimate would in that case be based on some understanding of the carbon cycle.

        Now, of course, there are scientists who are looking for data to estimate the C changes without assumed climate sensitivities.

        From IPCC AR4 Ch6, PETM warming occurred over a 1,000 to 10,000 year time scale.

        p.442

        Approximately 55 Ma, an abrupt warming (in this case of the order of 1 to 10 kyr) by several degrees Celsius is indicated by changes in 18O isotope and Mg/Ca records (Kennett and Stott, 1991; Zachos et al., 2003; Tripati and Elderfield, 2004). The warming and associated environmental impact was felt at all latitudes, and in both the surface and deep ocean. The warmth lasted approximately 100 kyr.

        and

        The estimated magnitude of carbon release for this time period is of the order of 1 to 2 × 10^18 g [= 1000 to 2000 Gt] of carbon (Dickens et al., 1997), a similar magnitude to that associated with greenhouse gas releases during the coming century. Moreover, the period of recovery through natural carbon sequestration processes, about 100 kyr, is similar to that forecast for the future. As in the case of the Pliocene, the high-latitude warming during this event was substantial (~20°C; Moran et al., 2006) and considerably higher than produced by GCM simulations for the event (Sluijs et al., 2006) or in general for increased greenhouse gas experiments (Chapter 10). Although there is still too much uncertainty in the data to derive a quantitative estimate of climate sensitivity from the PETM, the event is a striking example of massive carbon release and related extreme climatic warming.

        The IPCC agrees with you on the point of there being too much uncertainty to estimate a climate sensitivity from the PETM; however, it is interesting that GCM simulations didn’t produce as much warming as was indicated, at least over part of the Earth.

  188. Patrick027

    Thanks for the theory and the link to the RC blurb, but you have still not answered my question.

    Where is the “hidden energy” hiding?

    How can it be detected and measured by actual physical observations (not model simulations)?

    If we cannot detect or measure it, how do we know it exists?

    Max

    • Except where data coverage is incomplete, there is no hidden energy.

      The ‘pipeline’ phrase is simply referring to this reasonable expectation: if there is still a radiative imbalance (persistent through interannual variability, etc.), then heat is still accumulating (or, more generally, being ‘drained’ if the imbalance were of the other sign). In a linearized approximation to climate behavior with a single lag time (constant equilibrium sensitivity and time-scale invariant heat capacity), the imbalance would, if forcing were left unchanged from now on, decay exponentially into the future (averaging over shorter-term internally-driven fluctuations); it would not go to zero instantly, thus there is some heat left to be accumulated, which is not yet accumulated. (A minimal numeric sketch of this follows at the end of this comment.)

      And, the prior history of radiative forcing changes can be compared to heat accumulation (mainly in the ocean) to estimate what the e-folding time of imbalance is and thus just how much heat has yet to accumulate before reaching equilibrium.

      There are some caveats of course – climate sensitivity might not be constant over the range of temperatures, and it is known that the heat capacity ‘acting’ on the surface and atmospheric temperatures depends on the timescale involved.

      Considering that last point: over sufficient time (a thousand-plus years?), nearly the whole ocean will be involved, whereas over a year or so, it might just be the top 70 m of ocean; also, ice melting (including sea ice) has, so far as I know, recently (not the past year per se – I’m referring to decades) accelerated. Ice sheets might respond erratically (?), but likely the full effect of their effective heat capacity (latent heat per unit temperature acts like a heat capacity for climate) won’t be realized on multidecadal timescales.

      This implies that the portion of the present-day radiation imbalance which is the remnant of significantly older forcing changes may be decaying with a longer e-folding time than the imbalances due to more recent changes. If radiative forcing were, starting at present, held constant, then the rate of exponential decay of the imbalance would tend to slow down over time, so that the total heat accumulation yet to occur may be greater than what is implied by the recent heat accumulation rate’s implied decay time.

      Aside from that, it would be odd for heat accumulation to stop on a dime without some explanatory forcing change. A sharp decrease in the e-folding time of imbalance could be accomplished by a ‘withholding of heat capacity’, such as due to a change in the thermohaline circulation or a completion of ice melt for the time being – in which case, the surface and atmospheric temperature change would accelerate to approach equilibrium more quickly – OR by entering a range of conditions in which climate sensitivity is much smaller, which doesn’t seem likely, at least not with such sharpness.

      Aside from those two possibilities, a sharp change in heat accumulation could be accomplished by some internal variability, in which case, the longer term externally-forced trend would tend to be a ‘best fit’ to the data with imperfect correlation…
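
      Here is the one-box sketch promised above (all values are my illustrative assumptions – sensitivity 0.8 K/(W/m2), heat capacity 8e8 J/(m2 K), forcing 1.6 W/m2 switched on and then held fixed – chosen only to show the shape of the response, not to match the real Earth):

```python
SENS = 0.8    # climate sensitivity, K per (W/m2) (assumed)
C = 8e8       # heat capacity, J/(m2 K) (assumed; a few hundred m of ocean)
RF = 1.6      # W/m2, forcing held constant from t = 0 (assumed)
DT = 86400.0  # time step: one day, in seconds

temp = 0.0
for day in range(50 * 365):           # integrate 50 years
    imbalance = RF - temp / SENS      # W/m2 not yet offset by the response
    temp += imbalance * DT / C        # heat still accumulating
    if day % (10 * 365) == 0:
        print(day // 365, round(temp, 3), round(imbalance, 3))

# temp approaches SENS * RF = 1.28 K while the imbalance decays away with
# e-folding time C * SENS ~ 20 years: warming 'in the pipeline', nothing hidden.
```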

    • See http://www.realclimate.org/index.php/archives/2005/05/planetary-energy-imbalance/ .

      Note the graph of heat accumulation in the ocean. The rate of heat accumulation in the ocean (which should be most of the heat accumulation) was about 0.60 +/- 0.10 W/m2 from 1993 to 2003 (including both end years, apparently(?)).

      The relative lack of heat accumulation in the early part of the graph is presumably due to the Pinatubo eruption.
      I suspect the dip in heat content around 1999 was due to the 1998 El Nino.

      The rate of heat accumulation is equal to the imbalance. The complex (due to volcanic eruptions) history of forcing makes it hard to determine, just from looking at the graphs, the lag time necessary to result in that imbalance, and I wouldn’t know how to account for variations in heat capacity for different timescales.

      However, the model considered (which I presume is not a ‘slab ocean’ model but one which has three-dimensional oceanic circulation, and thus in principle can model the complexity of the climate system’s heat capacity) did fairly well compared to observations for both heat and temperature. That both temperature and heat match well suggests (with possible caveats with that information alone, but that’s not the only information the scientists have from either observations or models) that the heat capacity was in total modelled well. That the heat content matches well suggests that the imbalance was also well-modelled. The imbalance alone doesn’t determine the climate sensitivity; if forcing is known and the heat capacity matches, then the sensitivity should tend to match. There is still some uncertainty in the actual forcing history, though.

      The overall global surface temperature is also well modelled in this and other studies. While impressive, this may be due to an error in the forcings combined with compensating errors in the climate sensitivity (2.7 C for a doubling of CO2 in this model) or the mixing of heat into the deep ocean. Looking at the surface temperature and the ocean heat content changes together though allows us to pin down the total unrealised forcing (the net radiation imbalance) and demonstrate that the models are consistent with both the surface and ocean changes. It is still however conceivable that a different combination of the aerosol forcing (in particular (no pun intended!)) and climate sensitivity may give the same result, underlining the continuing need to improve the independent estimates of the forcings.

    • … Looking at IPCC AR4 WGI Ch 2 pp. 131-133

      Adding the solar RF to the anthropogenic RF:

      RF since 1750: 1.72 W/m2, range 0.66 to 2.70 W/m2. (Note an extra decimal place beyond the significant figures given; because the probability of two events is generally less than the probability of either, simply adding the uncertainties tends to exaggerate the uncertainty of the total, but the solar uncertainty range is only -0.06 to +0.18 W/m2 about the estimated 0.12 W/m2 RF. Offhand I’m not sure if these ranges are 95 % confidence intervals, but I’m sure this is specified somewhere.)

      Considering the climate model with a sensitivity of 2.7 K/doubling CO2, with heating rates and temperature matching observations well, and assuming this model was run with the solar plus anthropogenic forcing of 1.72 W/m2: if the forcing were actually 2.70 W/m2, that *could suggest an actual sensitivity of 1.72 K/doubling CO2 (still quite a bit higher than some values suggested by Lindzen, etc.), whereas if the forcing were actually 0.66 W/m2, that *could suggest an actual sensitivity of 7.04 K/doubling. (This isn’t quite the whole story, because some surface condition forcings of convection or _ could indirectly cause changes in radiative fluxes; but with 70 % of the surface being ocean and only some fraction of land being irrigated or desertified or deforested or cultivated, etc., I’m guessing not a large effect – “Other surface property changes can affect climate through processes that cannot be quantified by RF; these have a very low level of scientific understanding.” – but also, from the same part of IPCC AR4 Ch2: “Other aspects of aerosol-cloud interactions (e.g., cloud lifetime, semi-direct effect) are not considered to be an RF (see Chapter 7).”)

      *If the climate sensitivity is lower or higher, then for the same heat capacity the response time would be faster or slower, and thus the imbalance would be smaller or larger; but with the forcing larger or smaller, the imbalance, and thus the heat accumulation, could occur at the same rate.
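
      The rescaling used above is simple proportionality – a minimal sketch (the three “true forcing” values are just the low, central and high estimates quoted above):

```python
MODEL_SENSITIVITY = 2.7  # K per doubling of CO2, in the model considered
MODEL_FORCING = 1.72     # W/m2, forcing assumed in the model run

# Same observed response and heat uptake, different assumed true forcing:
# the implied sensitivity scales inversely with the forcing.
for true_forcing in (2.70, 1.72, 0.66):
    implied = MODEL_SENSITIVITY * MODEL_FORCING / true_forcing
    print(true_forcing, round(implied, 2))  # 1.72, 2.7, 7.04 K per doubling
```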

      • but the solar uncertainty range is only -0.06 to + 0.18 W/m2

        meaning a range (W/m2) of 0.12 – 0.06 = 0.06 to 0.12 + 0.18 = 0.30.

      • And of course the above calculated sensitivities for different forcings to match the observations and model assume a constant proportionality – i.e., that the total efficacy-weighted forcing has, for different estimates of the total forcing change since 1750, varied in time at rates proportional to the total ~2005 minus 1750 forcing – which of course won’t be true (not exactly, anyway), but that’s as far as I can go on my own at this time. I suspect scientists have gone into greater depth with these things.

      • Two further points here

        – note that the IPCC AR4 WGI expresses an awareness of potential climatic effects of land cover changes besides via direct radiative forcing or radiative forcing via greenhouse gases, and also an awareness of forcing efficacy variation and regional effects besides that associated with global average forcing; while there are uncertainties, the position regarding the importance of global average forcing is not in such complete ignorance of complexities as some would assume. (Don’t assume that brief summary statements describe everything of which scientists are aware.)

        – I illustrated above how observations could be used to imply a range of climate sensitivities for a range of forcings, by comparison to a model with a particular climate sensitivity and forcing input. This is not the full story; the modelled sensitivity cannot simply be varied ad hoc without potentially becoming unrealistic in the physics and parameterizations. (Also, a sufficient history of forcing might be used to further constrain the possible combinations of forcings and sensitivities that would still match the observations…)

      • Patrick027

        the solar uncertainty range is only -0.06 to + 0.18 W/m2 from the estimated 0.12 W/m2 RF.

        The IPCC “level of scientific understanding” of solar forcing is stated to be “low”.

        But, more importantly, the number you cited includes only the forcing from measurable direct solar irradiance, ignoring studies by several solar scientists (based on pre-industrial climate change versus solar activity). These studies attribute roughly half of the observed 20th century warming to the unusually high level of solar activity (highest in several thousand years), although the exact mechanism for this empirically observed solar warming has not been determined.

        IPCC demonstrates a bit of a “myopic fixation” on “anthropogenic forcing” and essentially ignores “natural forcing”. Yet today the Met Office is attributing the cooling after 2000 (despite record CO2 levels) to “natural variability” (a.k.a. natural forcing), essentially refuting the extremely low IPCC estimate of natural forcing from 1750-2005.

        Max

      • The IPCC “level of scientific understanding” of solar forcing is stated to be “low”.

        Yes, and even given that, the IPCC position is that some large fraction of 20th century warming has been caused by anthropogenic forcing, etc…

        It shouldn’t be necessary to get every detail right before any conclusions can be made. Consider that if the well-understood mechanisms do account for the lion’s share of something, then the lesser-understood components aren’t so likely to be responsible for most of the same thing.

        If solar forcing were much larger, then the relative lack of understanding of solar effects would have a greater impact on the fuzziness of the whole picture.

        These studies attribute roughly half of the observed 20th century warming to the unusually high level of solar activity (highest in several thousand years), although the exact mechanism for this empirically observed solar warming has not been determined.

        But has it been empirically observed, or was it assumed and then the data interpreted to fit the assumption? I think it’s been the latter for those studies in general. Assuming 100% of the variability within a certain range of frequencies is due to solar forcing could yield an overestimate of the strength of the solar effects. The correlations between clouds and cosmic rays in particular tend not to hold up over time.

        IPCC demonstrates a bit of a “myopic fixation” on “anthropogenic forcing” and essentially ignores “natural forcing”. Yet today the Met Office is attributing the cooling after 2000 (despite record CO2 levels) to “natural variability” (a.k.a. natural forcing), essentially refuting the extremely low IPCC estimate of natural forcing from 1750-2005.

        Natural variability on one time scale that reverses itself doesn’t contribute to a longer-term trend – at least not in such a direct way as by natural forcing trends.

        Attributing some portion of recent slowing of warming to the sun (which, by the way, Trenberth (or was it Kiehl – I think it was Trenberth) himself includes in a paper on the subject) does not refute the relative smallness of the natural forcing from 1750 to 2005. It’s not as if (as some very myopic-minded people/person like to say) the entire 20th century warming has been erased – all years since 2000 have still been warmer than almost all years prior to 2000 within the historical record at least.

        (this seems related to your problem with CO2 correlation with temperature over the last 100 years. You seem to focus on segments of time, which, when taken in complete isolation, reveal a lack of correlation or correlation of the wrong sign – this is an erroneous method because it ignores the trends among those time segments. (And knowing that there is unforced variability as well as other forcings at play, a less than perfect correlation is expected – it is NOT generally expected that a single factor correlates perfectly with climate, even if that factor is quite important. Correlation is not an all-or-nothing thing.) Of course, it is for the same reason risky to come to conclusions from a one-time change, as the last 100 years have been, but looking out farther (the last few hundred years, the last 1000 years, the last 20,000 years, etc.), the correlation remains; of course, we also have knowledge of physical mechanisms for greenhouse forcing of climate.)
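
        A minimal sketch of that pitfall, with synthetic data (nothing below is a real temperature or forcing record): a series tied to a forcing by a clear long-term relation can still show weak or wrong-sign correlations within short segments taken in isolation.

        ```python
        # Synthetic illustration: short segments of a trending, noisy series
        # can correlate weakly (or with the wrong sign) with the forcing,
        # even though the full series correlates strongly.
        import numpy as np

        rng = np.random.default_rng(0)
        t = np.arange(150)                                   # 150 "years"
        forcing = 0.01 * t                                   # steadily rising forcing
        temp = forcing + 0.15 * rng.standard_normal(t.size)  # trend + variability

        print(np.corrcoef(forcing, temp)[0, 1])              # ~0.95 over the full series
        for start in range(0, 150, 15):                      # 15-"year" segments
            seg = slice(start, start + 15)
            print(start, np.corrcoef(forcing[seg], temp[seg])[0, 1])
        ```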

        Do you think the Met Office suddenly has a large disagreement with IPCC consensus, most scientists involved, etc.?

        The IPCC does not ignore natural forcing. Period.

        Models can’t take cosmic rays into account because the physical mechanisms and/or robust correlations (for parameterization) that would make them significant relative to other factors have not been worked out. And there just isn’t generally good reason to expect that a newly discovered importance would dramatically cut the relevance of factors now thought important.

  189. Blouis79, in your January 20, 2010 @ 4:56 pm you provided a number of links concerning NASA Emails obtained under FOI, which are to me, intuitively, somewhat parallel to the Climategate Emails. Whilst I have not yet found time to go through all of them and their additional internal links fully, the following is of considerable first-sight importance from what I’ve seen so far:

    PDF: 783_nasa_docs.pdf

    For instance, just the first few pages that I’ve managed to read so far, out of the 215 pages (or 783 documents) in that file, clearly show Gavin’s deep involvement in the GISS temperature record as fathered by his boss, Hansen.

    However, Gavin has variously denied such involvement, such as talked of here:
    http://www.climatechangefraud.com/climate-reports/5510-gavin-schmidt-wants-it-known-he-has-no-connection-with-the-giss-temperature-record

    That’s the same guy who currently wants to develop “a more functioning and reasonably realistic climate model”, BTW!
    I wonder if Marco, Riccardo, and Patrick, (and even Ian Forrester, who may still be looking down here from his fleecy clouds above), will be brave enough to RATIONALLY go through ALL of your primary links, AND importantly their internals!

    • Bob, you may want to check the comments in the link you gave. Christopher Booker *is* a liar, and once again has decided to (deliberately, in my opinion) misinterpret what someone has said. Please, for once, come with a credible source.

  190. Claims of long disequilibrium times seem founded in measurement error and not science. Every patch of earth heats and cools on a 24-hour cycle. Sun, wind, clouds, and rain can all change within that cycle – so all the major positive and negative forcing elements vary by two orders of magnitude more than the assumed net annual positive forcing. It is likely that the equilibrium response to perturbation is complete within a few cycles/days.

    The physical response to absorbed radiated heat at the surface is conduction away from it, which only reduces the net surface temperature effect rather than increasing it, and does not take years.

    A biological response to CO2 could take years, since growing plants in response to higher CO2 takes that sort of time frame. The resulting increased water vapour and clouds and carbon capture and storage all provide some negative feedback to the climate system. The system response to perturbation will be to return to equilibrium by negative feedback.

    All effects caused by radiative disequilibrium only serve to reduce the measured effect.

    It is impossible to measure a very small difference in equilibrium state (less than 1 K) when large excursions due to daily and annual seasonal cycles cover it up.

    Interestingly, climate models assume the biosphere carbon sink *reduces* rather than increases in response to warming. IPCC AR4 Ch8 8.2.3.1 p605: “Friedlingstein et al. (2006) found that in all models examined, the sink decreases in the future as the climate warms.” and “However, it is not clear how well current climate models can capture the impact of future warming on the terrestrial carbon balance. A systematic evaluation of AOGCMs with the carbon cycle represented would help increase confidence in the contribution of the terrestrial surface resulting from future warming.”

    • Blouis79

      Your analysis, showing that long disequilibrium times are unlikely to be real, makes sense.

      I believe these have only been conjured up because without them the warming from increased CO2, as observed over the long-term modern temperature record (HadCRUT), is minor, and the postulated strongly net positive feedbacks are not supported by the actual observations.

      The logic goes a bit like this:

      Based on various paleoclimate studies and model simulations the estimated 2xCO2 climate sensitivity is around 3°C. IPCC AR4, p. 633 puts this at 3.2°C±0.7°C.

      The theoretical 2xCO2 GH impact from CO2 alone, without feedbacks, is around 1°C.

      To arrive at 3.2°C requires the postulation of strongly net positive feedbacks.

      Including these postulated net positive feedbacks (2xCO2 impact = 3.2°C) the CO2 increase from 1850 to 2005 should have caused a temperature increase of 1.3°C.

      However, we only observed a warming of 0.65°C, of which solar scientists attribute around half to the unusually high level of solar activity in the 20th century.

      Without any feedbacks, the increase in CO2 would have caused a theoretical warming of around 0.4°C, whereas around 0.35°C of the observed warming can be attributed to CO2.

      This gives a fairly good check.

      From this we can conclude that the 2xCO2 climate sensitivity is a bit under 1°C, rather than 3.2°C, as postulated by the climate models.
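
      For reference, a minimal sketch of the arithmetic behind the steps above, using the standard logarithmic CO2 scaling dT = S * ln(C/C0) / ln(2); the CO2 concentrations (~285 ppm in 1850, ~380 ppm in 2005) are assumptions added here, not values stated above:

      ```python
      # Check of the arithmetic above: equilibrium warming for a CO2 change,
      # given a 2xCO2 sensitivity S, is dT = S * ln(C/C0) / ln(2).
      # CO2 values (~285 ppm in 1850, ~380 ppm in 2005) are assumed here.
      import math

      def warming(sensitivity_2x, c_start, c_end):
          return sensitivity_2x * math.log(c_end / c_start) / math.log(2)

      c_1850, c_2005 = 285.0, 380.0
      print(warming(3.2, c_1850, c_2005))  # ~1.33 C: the "1.3 C" figure
      print(warming(1.0, c_1850, c_2005))  # ~0.41 C: the "around 0.4 C" figure
      ```

      Note that this check treats the observed warming as an equilibrium response, which is exactly the step disputed in the replies below.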

      But this is unsatisfactory to the alarmists, because it means that AGW is not a serious threat.

      So instead of accepting the lower CO2 climate sensitivity, as observed, the assumption is made that a major portion of the GH warming from 1850 to 2005 is still “hidden somewhere in the pipeline”, until “climate equilibrium” is reached.

      This postulation has been refuted by the recent cooling of the atmosphere at the surface (HadCRUT) and troposphere (UAH), plus cooling of the ocean (Argo), as pointed out earlier.

      I believe we can lay to rest the claims of long disequilibrium and the “hidden in the pipeline” hypothesis. This is “voodoo science”.

      Max

      • Your analysis, showing that long disequilibrium times are unlikely to be real, makes sense.

        Please, you two, think before you post. Because I’m getting the impression that you might be trying to be stupid.

        However, we only observed a warming of 0.65°C, of which solar scientists attribute around half to the unusually high level of solar activity in the 20th century.

        You say that as if it’s all solar scientists. I presume many solar scientists know better. Not all study results are correct for every study – if they were, you couldn’t be taking the position you are.

        But anyway, note that there have been changes in CO2 AND other well-mixed greenhouse gases AND stratospheric ozone AND tropospheric ozone AND the sun AND aerosols, natural and anthropogenic. If you tally up the effects from all positive forcing contributions, you would (accounting for disequilibrium) get greater warming than observed.

        From this we can conclude that the 2xCO2 climate sensitivity is a bit under 1°C, rather than 3.2°C, as postulated by the climate models.

        Here’s a thought – pretend this low sensitivity is the consensus. Now introduce all the other studies, experiments, modelling, and data. Your consensus position would crumble.

        So instead of accepting the lower CO2 climate sensitivity, as observed, the assumption is made that a major portion of the GH warming from 1850 to 2005 is still “hidden somewhere in the pipeline”, until “climate equilibrium” is reached.

        What’s wrong with that – it makes perfect sense to think that.

        This postulation has been refuted by the recent cooling of the atmosphere at the surface (HadCRUT) and troposphere (UAH), plus cooling of the ocean (Argo), as pointed out earlier.

        There has been a slowdown in warming, but it’s not outside the range of what’s expected from internal variability. Furthermore, are YOU now ignoring solar forcing?

        And a general comment, in case it might be relevant – don’t assume that the raw data paints the correct picture. Just as there are urban heat islands and issues with station relocation, there have also been issues with satellite data calibration, changing radiosonde and ocean data biases, and changing amounts of data actually taken.

  191. Marco

    You wrote to Bob_FJ that Christopher Booker *is* a liar.

    Can you be a bit more specific? Exactly which statement in the Booker article cited by Bob is false, and what specific evidence do you have to support your statement that he is lying.

    Please try to be specific, since it otherwise looks as though you are evading the topic under discussion by simply making an “ad hom” attack on Booker.

    Max

    • Booker’s claims about Gavin Schmidt are a lie. The evidence is Gavin Schmidt himself. Ask *him* if Booker is right. He’ll set Booker straight.

      Of course, you of all people shouldn’t be claiming “evasion”. You have consistently tried to evade any and all evidence that doesn’t fit your pre-conceived conclusion. When I point out all the events in the earth’s history where 2xCO2 climate sensitivity needs to be invoked to come even close to the increase in temperature…you decide to attack the PETM and based on uncertainties there claim there is no evidence for 2xCO2.

      Oh, and calling somebody who has been proven to lie on many occasions a liar is NOT an ad hominem. Please learn about logical fallacies.

      • Marco

        You are digging yourself a hole.

        First you justify your calling Booker a “liar” with:

        Booker’s claims about Gavin Schmidt are a lie. The evidence is Gavin Schmidt himself. Ask *him* if Booker is right. He’ll set Booker straight.

        Who is telling the truth here? What specific claims were made by Booker that have been refuted by Schmidt? What evidence has either provided to support the statement made? Get specific, Marco, otherwise you have nothing to talk about.

        Then you proclaim another generality:

        You have consistently tried to evade any and all evidence that doesn’t fit your pre-conceived conclusion.

        Is this statement true? Or is this a lie? If so, would it make you a “liar”?

        Marco, my advice to you: DO NOT USE THE WORD “LIAR” in describing those who may have a different opinion on AGW than you do.

        Instead, rebut or refute what these individuals are actually claiming, if you can, rather than attacking their character or persona by calling them a “liar”.

        And yes, calling someone a “liar” is, by definition, an “ad hominem” attack, while rebutting or refuting what they said or wrote is not.

        Got it?

        Max

  192. Claims of long disequilibrium times seem founded in measurement error and not science. Every patch of earth heats and cools on a 24-hour cycle. Sun, wind, clouds, and rain can all change within that cycle – so all the major positive and negative forcing elements vary by two orders of magnitude more than the assumed net annual positive forcing. It is likely that the equilibrium response to perturbation is complete within a few cycles/days.

    1.
    The effective heat capacity for such rapid cycling is less than that for longer-period cycling. This is especially true for land; for a typical soil thermal diffusivity, the diurnal temperature cycle fades through a shallow surface layer on the order of 10 cm deep, which means that the effective heat capacity acting to damp the diurnal cycle is on the order of the heat capacity of the top 10 cm. Even the annual cycle has limited penetration (see the sketch below).
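
    A quick check of those depths, using the skin-depth formula d = sqrt(2*kappa/omega) for a periodic temperature wave in a diffusive medium; the soil diffusivity below is a typical textbook value, assumed here rather than taken from the text:

    ```python
    # Penetration ("skin") depth of a periodic surface temperature cycle.
    import math

    def skin_depth(kappa, period_s):
        """e-folding depth d = sqrt(2*kappa/omega) of a temperature wave."""
        omega = 2 * math.pi / period_s
        return math.sqrt(2 * kappa / omega)

    kappa_soil = 5e-7                    # m^2/s, typical soil (assumed)
    day, year = 86400.0, 3.156e7         # seconds
    print(skin_depth(kappa_soil, day))   # ~0.12 m: the "order of 10 cm"
    print(skin_depth(kappa_soil, year))  # ~2.2 m: the annual cycle reaches farther
    ```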

    2.
    Half the Earth is in daytime and half is at night at any one moment in time. One hemisphere is in winter when the other is in summer. The heat capacity of the system (even the small amount of the land surface) stores heat from the daytime and from the summer and releases it during the night and during the winter, and the temperature variation is significantly less than if the Earth were held fixed relative to the sun so that night and day would be permanent wherever found. (Also, given seasonal and annual-average horizontal variations in solar heating, horizontal fluxes of heat in the air and in the ocean make the horizontal temperature variation significantly less than what it would otherwise be.)

    3.
    so all the major large positive and negative forcing elements change by two orders of magnitude higher than the net annual assumed net positive forcing. It is likely that the equilibrium response to perturbation is complete within a few cycles/days.

    Absolutely not. For a given heat capacity, the response of temperature to a more rapid cycling is simply a reduced amplitude of temperature cycling. The fact that the diurnal cycle is completed over 24 hours has nothing to do with whether temperature or anything else ever gets near instantaneous or hourly-averaged equilibrium at noon or at night – of course, for longer-time-averaged equilibrium, there tend to be two times in the day when the temperature is near equilibrium with the forcing – at some point between sunrise and noon, and at some point between noon and sunset.

    Consider pushing a mass on a spring back and forth with some force. If the mass is large enough, the mass might not budge much at all if the force cycling is rapid, even though the mass completes its displacement cycles with the same frequency. The fact that the forced displacement completes a cycle with the same period as the forcing cycle says nothing of whether the spring has shifted sufficiently to balance the forcing; the amplitude of displacement could get much larger if the cycling period were lengthened.
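
    The same point in a zero-dimensional energy-balance analogue, C dT/dt = F0 cos(wt) - lambda*T: the steady-state temperature amplitude is F0/sqrt(lambda^2 + (C*w)^2), which collapses toward zero for fast forcing cycles even though each response cycle still completes in one forcing period. All parameter values below are illustrative assumptions.

    ```python
    # Amplitude of T in C*dT/dt = F0*cos(w*t) - lam*T (sinusoidal steady state).
    import math

    def response_amplitude(F0, lam, C, period_s):
        w = 2 * math.pi / period_s
        return F0 / math.hypot(lam, C * w)

    lam = 2.0      # W/m^2/K, illustrative feedback parameter (assumed)
    C = 4.0e8      # J/m^2/K, roughly a 100 m ocean mixed layer (assumed)
    F0 = 1.0       # W/m^2 forcing amplitude
    day, century = 86400.0, 3.156e9
    print(response_amplitude(F0, lam, C, day))      # ~3e-5 K: fast cycle barely moves T
    print(response_amplitude(F0, lam, C, century))  # ~0.46 K: nears equilibrium F0/lam = 0.5 K
    ```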

    The physical response to absorbed radiated heat at the surface is conduction away from it, which only reduces the net surface temperature effect, not increases it, and does not take years.

    Conduction into or out of the crust is only significant in the short term; as the temperature variation forced at the surface penetrates deeper, the temperature gradient necessarily shrinks, slowing the heat flux. Consider that a few tens of K per km drives most of the geothermal heat flux with a global average less than 0.1 W/m2. A temperature change forced at the surface for a time scale on the order of 10,000 years would only penetrate through the crust on the order of 150 m or maybe 200 m or a bit more, depending on soil and rock type (see Hartmann, p.85, and discussion above about heat capacity), while essentially the whole ocean (average depth about 3800 m) should have changed temperature to near equilibrium with surface conditions within such time.
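
    A quick check on the numbers in that paragraph (the conductivity and diffusivity values below are typical soil/rock figures, assumed here rather than taken from Hartmann):

    ```python
    import math

    # Conductive flux F = k * dT/dz for a typical crustal gradient:
    k = 2.5                 # W/m/K, typical rock conductivity (assumed)
    gradient = 25e-3        # K/m, i.e. 25 K per km
    print(k * gradient)     # ~0.06 W/m^2: "less than 0.1 W/m2"

    # Penetration depth sqrt(2*kappa/omega) of a 10,000-year surface cycle,
    # over a plausible range of soil/rock diffusivities:
    omega = 2 * math.pi / (1e4 * 3.156e7)
    for kappa in (2e-7, 5e-7, 1e-6):    # m^2/s (assumed range)
        print(kappa, math.sqrt(2 * kappa / omega))  # ~140 m to ~320 m, bracketing
                                                    # the 150-200 m quoted above
    ```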

    The heat loss (or gain upon surface cooling) becomes insignificant with sufficient time, and the rest of the heat accumulated from radiative imbalance (that which is not removed by LW emission) from the surface or atmosphere goes where? To different places within the surface or atmosphere or down into the ocean, to melt ice, etc.

    And that heat capacity doesn’t reduce the equilibrium response, it just lengthens the time to reach equilibrium.

    It doesn’t take years for heat to accumulate from diurnal cycles BECAUSE diurnal cycles don’t last that long.

    A biological response to CO2 could take years, since growing plants in response to higher CO2 takes that sort of time frame.

    In so far as rates of CO2 uptake or release go, CO2 fertilization would be immediate, but increased photosynthetic fixation should, if continued, eventually lead to increased respiration – or increased transport to another reservoir (as a consequence of more C available to respire); (PS a decrease in respiration would increase the residence time in the organic state and cause a build-up of C that would tend to lead to an increase in respiration) – in either case, the net change in stored C results from an accumulation/depletion over the time period in which imbalances occurred.
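
    A toy one-reservoir sketch of that accumulation argument, assuming respiration proportional to stored carbon (R = C/tau); all numbers are illustrative, not measured fluxes:

    ```python
    # Toy reservoir: dC/dt = uptake - C/tau. A step increase in uptake
    # builds up storage until respiration (C/tau) catches up; the stored
    # change is the time-integral of the transient imbalance.
    tau = 20.0                      # years, illustrative turnover time (assumed)
    uptake0, uptake1 = 60.0, 63.0   # GtC/yr before/after a fertilization step (assumed)

    dt = 0.1
    C = uptake0 * tau               # start in equilibrium (1200 GtC)
    for _ in range(int(100 / dt)):  # integrate 100 years (forward Euler)
        C += dt * (uptake1 - C / tau)
    print(C, uptake1 * tau)         # C approaches the new equilibrium, 1260 GtC
    ```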

    The effects of climate change on the C cycle will not be fully realized at the same time.

    The resulting increased water vapour and clouds and carbon capture and storage all provide some negative feedback to the climate system.

    water vapor – most likely positive feedback
    clouds – maybe small positive feedback in total, but maybe not, and maybe with both positive and negative contributions from different changes of different cloud types in different places and times.
    carbon capture and storage – right now, a negative feedback, I think mainly to the CO2 itself, and in so far as it is, it is not a feedback to climate. The CO2 uptake is accounted for – obviously, scientists are not assuming that CO2 in the atmosphere has risen to much higher than it actually has. CO2 feedback can’t be part of the climate sensitivity to CO2, but it can be a part of the feedback to anthropogenic CO2.

    The system response to perturbation will be to return to equilibrium by negative feedback.

    Yes, if you are including the increased LW emission as a function of temperature. The return to equilibrium is not a return to the previous climatic state if the forcing remains changed.

    All effects caused by radiative disequilibrium only serve to reduce the measured effect.

    No. Positive feedbacks reduce the amount that the disequilibrium can decay per unit change, thus prolonging the disequilibrium and increasing the accumulated change necessary to reach equilibrium.
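
    A minimal zero-dimensional sketch of this: with C dT/dt = F - lambda*T, a stronger net positive feedback means a smaller lambda, which raises the equilibrium response F/lambda and lengthens the e-folding time C/lambda at the same time. The mixed-layer heat capacity below is an illustrative assumption; the real system, coupled to the deep ocean, responds more slowly still.

    ```python
    # C dT/dt = F - lam*T: smaller lam (stronger net positive feedback)
    # means larger equilibrium warming F/lam AND a slower approach to it.
    C = 4.0e8        # J/m^2/K, ~100 m ocean mixed layer (assumed)
    F = 3.7          # W/m^2, roughly a 2xCO2 forcing
    year = 3.156e7

    for lam in (3.7, 1.2):   # W/m^2/K: about 1 K vs about 3 K per doubling
        print("equilibrium:", F / lam, "K;",
              "e-folding time:", C / lam / year, "years")
    ```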

    It is impossible to measure a very small difference in equilibrium state (less than 1degK) when large excursions due to daily and annnual season cycles cover it up.

    Um, no, have you seen the graphs (and not just temperature)?

    At any one moment it may be sunny and dry or there may be heavy rain, nonetheless the average rainfall per unit time over a month or year is VERY important to the water supply.

    Interstingly, climate models assume the biosphere carbon sink *reduces* rather than increases in response to warming. IPCC AR4 Ch8 8.2.3.1 p605: “Friedlingstein et al. (2006) found that in all models examined, the sink decreases in the future as the climate warms.” and “However, it is not clear how well current climate models can capture the impact of future warming on the terrestrial carbon balance. A systematic evaluation of AOGCMs with the carbon cycle represented would help increase confidence in the contribution of the terrestrial surface resulting from future warming.”

    If the assumed model output were used to create the model that produced the assumed output, then there was no point in using a model, was there? I disagree with your use of the word ‘assumed’.

    • Clarification:

      The heat loss (or gain upon surface cooling) *to the underlying crust* becomes insignificant with sufficient time,

  193. Marco, Reur comment inserted in my January 21, 2010 @ 2:53 am

    Bob, you may want to check the comments in the link you gave. Christopher Booker *is* a liar, and once again has decided to (deliberately, in my opinion) misinterpret what someone has said. Please, for once, come with a credible source.

    Ooh, that’s a tad strong! (You are upset; I can tell)
    If you don’t like the link I gave, here is the original from the UK Daily Telegraph, which I’ve read through again, and can’t see anything sus’.
    http://www.telegraph.co.uk/comment/columnists/christopherbooker/6475667/Gavin-Schmidt-a-correction.html

    Gavin Schmidt: a correction
    Dr Schmidt wants it known he has no connection with the GISS temperature record,
    writes Christopher Booker
    Published: 7:07PM GMT 31 Oct 2009 (in the UK Daily Telegraph)

    Relevant extract:
    Dr Schmidt wishes us to point out that he is not “involved” in Dr Hansen’s GISS temperature record, which is one of the four official sources of global temperature data relied on by the UN’s Intergovernmental Panel on Climate Change and by governments all over the world. I am of course happy to publish the correction he asked for, but I am intrigued that Dr Schmidt should want to dissociate himself from this increasingly controversial source of temperature figures.
    Like others, it seems I was misled by the fact that twice in the past two years, when GISS has come under fire for publishing seriously inaccurate data, it was Dr Schmidt who acted as its public spokesman. The first was in 2007

    Note that Booker is not exhibiting “lying behaviour” when he publishes a correction, and at the same time admits/ describes how he might have got it wrong in the first place; with a logical/rational explanation!

    If you think I misunderstood what you wrote, perhaps you could elaborate, where in your opinion, Booker is (deliberately) lying?

    (BTW, your adding of the word ‘deliberately’ to ‘lying’ is an interesting, presumably tautological, slip on your part. Is it some kind of conditional or caveat?)

    I see that you are silent on the other points that I raised.

    • Gavin Schmidt is not running GISTEMP. He’s not involved in its day-to-day use. He is, however, very much aware of what it does, considering he uses it for the climate modeling. The LIE of Christopher Booker is to claim that Schmidt tries to distance himself from GISTEMP. I added the word “deliberately” to lying, since Booker has frequently lied simply because he is too stupid to understand what he is talking about. In this case he does know what he has been talking about, and tries to twist and turn to create a controversy which isn’t there.

      Just so you know, you would find me on your way when one of my direct colleagues is attacked. I’m not afraid to put myself out in the open. Neither is Gavin Schmidt when people talk crap about GISTEMP.

      • Marco, Reur:

        “…The LIE of Christopher Booker is to claim that Schmidt tries to distance himself from GISTEMP… …I added the word “deliberately” to lying, since Booker has frequently lied simply because he is too stupid to understand what he is talking about…”

        Maybe Marco, your mother tongue is not English, (?), but anyway, please be advised that ‘to lie’ means to ‘deliberately say something false’. IF you think that Booker has said something that is wrong elsewhere, (?), as in your allegation of him not understanding the topic, then it may be classed as a misunderstanding/error, but NOT as a lie.

        So let’s review what Booker actually wrote:
        1) Subject: Gavin Schmidt: a correction. Dr Schmidt wants it known he has no connection with the GISS temperature record, writes Christopher Booker
        2) Extract a) Dr Schmidt wishes us to point out that he is not “involved” in Dr Hansen’s GISS temperature record,
        3) Extract b) I am of course happy to publish the correction he asked for, but I am intrigued that Dr Schmidt should want to dissociate himself from this increasingly controversial source of temperature figures.
        4) Extract c) Like others, it seems I was misled by the fact that twice in the past two years, when GISS has come under fire for publishing seriously inaccurate data, it was Dr Schmidt who acted as its public spokesman. The first was in 2007

        Please note that the word ‘intrigued’ is conditional in the context, meaning that he is uncertain of the reasons. Please also note that the word ‘should’ is also conditional in the context, as in ‘might’ or ‘is likely’.

        Given that Gavin requested the correction, the inference is very apparent to me that he does not want it to be believed that he has anything to do with the preparation of the controversial GISTEMP data (and that Booker has not lied in his article).

      • a) English is indeed not my mother tongue. I needed to make some type of distinction between the various errors in Booker’s writings. Mistakes (ever heard of “honest mistakes”?), lies, which are mainly due to his inability to understand the explanations he gets (repeating falsehoods despite getting an explanation is NOT a mistake or error, but here it is because of stupidity), and deliberate lies (where he DOES understand the explanation, but still alters what it says).

        b) Booker claims GISTEMP is “increasingly controversial”. It’s only “controversial” in the mind of the denioidiots who don’t understand, or are not willing to understand, how it works. This is a (deliberate) lie.

        c) Booker claims he was misled, but he knows very well (a career in journalism at the very least should teach you that) that people who use something, without being the actual authors or source, are frequently ‘spokesmen’. That is, they correct falsehoods. He wasn’t misled, he just wanted to link Gavin Schmidt to GISTEMP, so he can dismiss Schmidt simply by linking him to GISTEMP (by calling the latter controversial). This I consider a deliberate lie.

    • Regarding your claims that the NASA e-mails show he is very much involved in GISTEMP: the desire to pin something on Gavin must be REALLY big. Gavin was contacted by reporters (in particular Andy Revkin). He makes it (also) clear to Andy Revkin that he isn’t directly involved in GISTEMP. That does not mean he does not know what GISTEMP does and how it works; he is willing to defend it against the often-bogus attacks.

      I’d go after anyone making false claims about the programmes I use on a daily basis, too, despite not being involved in any of them.

      • Marco Reur January 26, 2010 @ 1:51 am penultimately above (Reply button was missing)
        a) Please understand that a lie by dictionary definition can only be a ‘deliberate untruth’, and it is nonsensical to describe an error or misinformation as an ‘unintended deliberate untruth’, be it for any reason. Furthermore, there is no evidence of lying behaviour in the article, and various things that you have said appear to be mistakenly speculative.
        It is very bad behaviour on your part to assert so emphatically in a public forum that Booker is a LIAR, without supplying any evidence. Similarly, you also libellously assert that he is stupid, whereas his writings are of a popular and high standard that demonstrates the opposite. If you were to say ‘I think…’, that would not be quite so bad.
        Oh, and BTW, the colloquialism “honest mistakes” is also bad English, because a mistake cannot be deliberate.

        b) It is increasingly controversial because of several demonstrably wrong data sets published by GISS that were identified by observant outsiders, and might well have remained unnoticed internally; this suggests poor quality control at GISS. There have also been a number of mysterious “corrections” to other data that have stirred debate. Why do you claim: This is a (deliberate) lie?

        c) Well here again is the relevant paragraph that Booker wrote:
        “…Like others, it seems I was misled by the fact that twice in the past two years, when GISS has come under fire for publishing seriously inaccurate data, it was Dr Schmidt who acted as its public spokesman. The first was in 2007…”
        Notice his use of the conditional ‘seems’, which can express uncertainty or add politeness to a statement.
        Also, the MS Works dictionary gives for ‘mislead’: 3) lead somebody in wrong direction: to lead somebody in a wrong direction
        What you say appears to be speculation without any foundation whatsoever.

  194. Marco, Reur comment inserted in my January 20, 2010 @ 2:34 am

    Bob: [1] when Gavin relaxed the moderation on RC, the threads became inundated with repetitions of old and debunked claims. Within 15-20 posts the discussion wasn’t related to the actual topic; it went all over the place with loads of RTFFAQ. WITH the moderation, this chaos formation takes a lot longer. [2] I don’t see the benefit of the “this post has been removed by the moderator” remark you will see at the Guardian.
    [3] Of course, your “inconvenient comments” are likely to have been repetitions of previous posts that were already debunked or answered, as Patrick also noted previously.

    [1] But the moderator should prevent growth of OT posts by perhaps referring offenders to a different thread, cautioning them, closing the thread down, or using other options that a moderator has. Other sites seem to cope with this moderation process OK.
    Here is one example of how silly and biased some RC moderation has been for me:
    On one thread, I demonstrated how EMR from the Earth’s surface is not simply upwards (as illustrated only in K & T depictions), but is invisibly hemispherical, by making an analogy with the sun (although the sun’s plasma only has an apparent/imaginary surface). I commented that the fundamental reason why the sun appears to be a disc of uniform brightness is that EMR radiates equally in ALL directions.

    I was then told that I was wrong because there is “limb darkening” due to opacity effects in the plasma viewed tangentially, nearing the rim/limb of the sun. An argument then developed as to the relative importance of this WRT the point I was making, and, among other things, I was told to go look at the sun at sunset. My response included photos that showed NO evidence of limb darkening (within the normal definition capability of the observation).
    My post was deleted with no comment, so I emailed Gavin enquiring as to why, and he replied, in full:
    No. Limb darkening is real, well understood and completely off-topic. I have no interest whatsoever in your inability to understand it. Sorry.
    So, it would seem that suddenly, OT was the reason for deletion in a multi-authored exchange, with no one cautioned. However, it continued after the deletion from the other side, with a typically rather silly Hank Roberts post that was allowed. Presumably, when there was no apparent response from me, they all assumed that I had retired from the debate!

    Oh, and BTW, contrary to Gavin’s assertion, I DO understand limb darkening – it’s simple theory, and I declared so – but it can’t be seen through my arc welding mask or at sunset, so it is unimportant in visibly demonstrating my point about hemispherical radiation from a flat surface!

    [2] The Guardian is far from ideal in its moderation, but it is better than RC and it can be surprising to count the number of posts that don’t toe the line with the moderator, especially recently.

    [3] The speculations that you and Patrick have made are wrong. The posts that I’ve had deleted at RC were unique and challenging responses to others. It seems to me that Gavin found them to be too inconvenient.

    • You propose they close a thread down when it digresses too much? Apart from the fact that they have done so a few times, it would be an easy way to get the whole idea of having a blog informing about the science becoming one big screaming match. Just put in a lot of nonsense comments, and the thread will be shut down! Great.
      Of course, when you indicate that a post is deleted, no one knows why, just that it has been. And that adds what information exactly? (beyond perhaps others noting your post has been deleted)
      And Gavin indeed found them inconvenient: inconveniently off-topic. That moderation may be biased and that some people are not told to stop being off-topic is likely because they have often provided substantial and constructive comments in the past.

      • Marco, you wrote:

        “You propose they close a thread down when it digresses too much? Apart from the fact that they have done so a few times…”

        No, I do not propose/favour that approach but mentioned that it was one of a bunch of options available to moderators (in that case, most likely only if they lose control on topic). BTW, Gavin has also closed down some threads prematurely for OTHER reasons. The record short runs that I’m aware of are one of only three days, and another of ten days.

        “…Of course, when you indicate that a post is deleted, no one knows why, just that it has been. And that adds what information exactly?…”

        If you had paid attention you would have noted that I consider this practice at the Guardian to be superior to blind deletions at RC, but still NOT good enough. The reason that it is better than NOTHING is that there is at least an indication of dissent from the views of the massed faithful, whereas at RC the impression is arguably that there are no/few dissenters, or that those who have been allowed a brief flourish have seemingly ended up retiring with their tail between their legs, when this is simply not true. (They have been invisibly excommunicated.) Moderators at many other sites do a good job of advising posters who offend their site rules or go OT, by providing appropriate explanations that are visible to the other posters. This is what OUGHT to be done at RC!

        “…And Gavin indeed found them inconvenient: inconveniently off-topic. That moderation may be biased and that some people are not told to stop being off-topic is likely because they have often provided substantial and constructive comments in the past…”

        I’m not sure if I understand what you wrote, but if a poster is significantly OT or infringing any other rules, it does not matter if s/he was previously of good behaviour. The moderator should in the first instance at least caution the offender, (if deviation is significant), and then take further action if necessary in the event of repetition or unsatisfactory related responses.

        You have NOT responded to the bulk of my post above starting with this line:
        Here is one example of how silly and biased some RC moderation has been for me:
        Did it not register with you that Gavin was aggressively biased in his Email response to me? Could you perhaps read it ALL again with diligence and then offer your wisdom?
        BTW, I could also accuse Gavin of LYING about my understanding, although maybe as an excuse for him, he did not properly read and remember my earlier posts.

  195. Patrick027

    Thanks for your response. It was rather lengthy, but I will try to cover the points you raised.

    You opined to Blouis and myself:

    I’m getting the impression that you might be trying to be stupid.

    Wrong impression, Patrick. I could say the same about you, but will refrain, as it would contribute nothing to our conversation.

    You say that as if it’s all solar scientists [which support the premise that around half of the 20th century warming can be attributed to the sun]. I presume many solar scientists know better. Not all study results are correct for every study – if they were, you couldn’t be taking the position you are.

    Regarding the number of solar scientists who have concluded that around half of the 20th century warming can be attributed to the unusually high level of solar activity, I have seen around 10 studies (by more than 10 solar scientists) to that effect. That is why I “take this position”. I will be glad to cite references, if you’d like. In addition, IPCC concedes that its “level of scientific understanding” of solar forcing is “low”, so it is quite natural that I would look elsewhere for this information.

    But anyway, note that there have been changes in CO2 AND other well-mixed greenhouse gases AND stratospheric ozone AND tropospheric ozone AND the sun AND aerosols, natural and anthropogenic. If you tally up the effects from all positive forcing contributions, you would (accounting for disequilibrium) get greater warming than observed.

    IPCC (SPM 2007, p.4) tells us that the radiative forcing of ALL anthropogenic components (1750-2005) was 1.6 W/m^2, while that for CO2 alone was 1.66 W/m^2. This tells me that all other anthropogenic factors (which you mentioned) essentially cancel one another out and there is no “greater warming” from “tallying up the effects from all positive forcing contributions” as you surmise.

    Here’s a thought – pretend this low sensitivity [a bit under 1°C] is the consensus. Now introduce all the other studies, experiments, modelling, and data. Your consenus position would crumble.

    I am not particularly interested in “consensus” (a “political” rather than a “scientific” concept). Data derived from model simulations or paleoclimate reconstructions are less interesting to me than empirical data derived from actual physical observations of today, such as the studies I cited earlier, which do not support a strong net positive feedback resulting in a 2xCO2 climate sensitivity of 3+°C. If you can provide links to “all the other studies” you cite, I’ll be glad to look at them. I have not seen any studies providing empirical data based on physical observations from today, which support this high climate sensitivity.

    Concerning the postulated “time lag” and the assumption that a major portion of the GH warming from 1850 to 2005 is still “hidden somewhere in the pipeline” until “climate equilibrium” is reached, you write:

    What’s wrong with that – it makes perfect sense to think that.

    It also makes perfect sense to think otherwise (Occam’s razor), inasmuch as there are no empirical data based on actual physical observations to support this postulation.

    To the recent cooling of the surface and tropospheric temperature (HadCRUT and UAH) plus the ocean (Argo) you wrote:

    There has been a slowdown in warming, but it’s not outside the range of what’s expected from internal variability. Furthermore, are YOU now ignoring solar forcing?

    First of all, there has been a “cooling”, rather than a “slowdown in warming”. The models predicted a warming of 0.2°C and we have seen a cooling of around 0.1°C. Whether this is “outside the range of what’s expected” is uninteresting to me.

    No, Patrick, I am NOT ignoring solar forcing and, yes, solar activity has dropped to a very low level as Solar Cycle 24 is having a hard time getting started. In addition the cooling after 2000 is being attributed to “natural variability” (a.k.a. natural forcing factors), which I can fully accept. I find it a bit hard to accept, however, that these same natural forcing factors have been able to more than offset the GH impact of record CO2 concentrations since 2000, but were deemed by IPCC to be essentially insignificant from 1750 to 2005. This does not make sense.

    Finally you wrote:

    And a general comment, in case it might be relevant – don’t assume that the raw data paints the correct picture. Just as there are urban heat islands and issues with station relocation, there have also been issues with satellite data calibration, changing radiosonde and ocean data biases, and changing amounts of data actually taken.

    You are changing the subject here. The data we have are the data we have. I am not questioning the HadCRUT surface temperature record here, even though there are many indications that the data collection may have been dicey and that it gives an exaggerated picture of 20th century warming due to the UHI effect, particularly in the second half of the century. The more comprehensive satellite record was corrected for orbital decay a few years back and checks well with radiosonde data. Ocean temperature data were pretty dicey prior to the installation of the Argo system. But, Patrick, I am not questioning the record; that is another argument.

    I am just saying that the record as physically observed does not support the premise of a climate sensitivity of 3+°C, nor does it support the premise of “energy hidden in the pipeline”.

    And you are not bringing the arguments to convince me otherwise.

    Max

    • I will be glad to cite references, if you’d like.

      Go ahead.

      In addition, IPCC concedes that its “level of scientific understanding” of solar forcing is “low”, so it is quite natural that I would look elsewhere for this information.

      No, not “ITS understanding is low”, but rather, “THE understanding is low”.

      IPCC (SPM 2007, p.4) tells us that the radiative forcing of ALL anthropogenic components (1750-2005) was 1.6 W/m^2, while that for CO2 alone was 1.66 W/m^2. This tells me that all other anthropogenic factors (which you mentioned) essentially cancel one another out and there is no “greater warming” from “tallying up the effects from all positive forcing contributions” as you surmise.

      That forcing from CO2 roughly equals total forcing is just a numerically convenient situation. It could as easily be said that 3/4 of CO2 and half of CH4 and 1/2 of solar forcing and … (I’m not going to go through the numbers; you should be able to get the point without it) would add up to the total. The fact is that the cooling effects of anthropogenic aerosol forcing and stratospheric ozone depletion and (not significantly, at least globally) land cover albedo don’t selectively cancel* out portions of only other anthropogenic forcings – if only the solar forcing had increased, that increase could be canceled* out by sufficient aerosol forcing.

      (*taking into account efficacy, and realizing that some aspects of climate may vary in different proportion to global average temperature for different forcings.)
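
      To make the tallying point concrete, the AR4 best-estimate forcing components can be summed directly; the values below are approximate figures from AR4 Figure SPM.2 and should be verified against the report:

      ```python
      # Approximate AR4 (2007) best-estimate forcings, W/m^2, 1750-2005.
      # Values are from memory of Figure SPM.2 -- verify before relying on them.
      forcings = {
          "CO2": 1.66, "CH4": 0.48, "halocarbons": 0.34, "N2O": 0.16,
          "tropospheric O3": 0.35, "stratospheric O3": -0.05,
          "stratospheric H2O from CH4": 0.07,
          "land-use albedo": -0.20, "black carbon on snow": 0.10,
          "aerosol (direct)": -0.50, "aerosol (cloud albedo)": -0.70,
          "contrails": 0.01,
      }
      print(sum(v for v in forcings.values() if v > 0))  # ~3.2: positive terms alone
      print(sum(forcings.values()))   # ~1.7, near the ~1.6 net anthropogenic
                                      # (AR4's 1.6 combines probability
                                      # distributions, not a simple sum)
      ```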

      I am not particularly interested in “consensus” (a “political” rather than a “scientific” concept).

      Is it only politics that has physics textbooks describing General Relativity?

      Data derived from model simulations or paleoclimate reconstructions are less interesting to me than empirical data derived from actual physical observations of today, such as the studies I cited earlier, which do not support a strong net positive feedback resulting in a 2xCO2 climate sensitivity of 3+°C.

      But those studies haven’t convinced a lot of people. If you think you’ve found a solid study, please bring it up, describe the logic and method of it.

      If you can provide links to “all the other studies” you cite, I’ll be glad to look at them. I have not seen any studies providing empirical data based on physical observations from today, which support this high climate sensitivity.

      Try looking at references for IPCC AR4 WGI – try perusing chapters 3 through 5, and then 9.

      Concerning the postulated “time lag”
      It also makes perfect sense to think otherwise (Occam’s razor), inasmuch as there are no empirical data based on actual physical observations to support this postulation.

      Well, yes there is. Oceanic heat accumulation, for one. And Occam’s razor doesn’t say that you shouldn’t expect the physics of heat capacity to apply to some ‘new’ situation just because it’s a ‘new’ situation. (On a similar note, Occam’s razor actually backs the concept of (in the absence of evidence for a tessellated or closed universe (finite without boundaries within the dimensions of space, etc.)) an infinite universe rather than a finite universe with edges, because ‘edges’ would be something entirely new and also very strange).

      On a related note, there is a problem with constant complaints that predictions are not scientific. Of course they are not proven or falsified until the time comes, but predictions nonetheless are made based on OTHER scientific knowledge. The use of scientific knowledge to project potential climate changes, and the use of those projections for policy making, is not pure science and shouldn’t be pure science – it is engineering, an application of knowledge. But of course, science on some level must be done or have been done to make such projections and continue to improve on them.

      First of all, there has been a “cooling”, rather than a “slowdown in warming”. The models predicted a warming of 0.2°C and we have seen a cooling of around 0.1°C.

      Trend depends on the timescale.
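
      A minimal sketch of that dependence, with synthetic data: least-squares trends fit to short windows of a noisy warming series scatter widely, and are routinely negative, even when the long-term trend is robustly positive.

      ```python
      # Short-window trends in a noisy warming series (synthetic data).
      import numpy as np

      rng = np.random.default_rng(1)
      years = np.arange(1900, 2010)
      temp = 0.007 * (years - 1900) + 0.12 * rng.standard_normal(years.size)

      def trend(x, y):
          """Least-squares slope, in degrees per decade."""
          return 10 * np.polyfit(x, y, 1)[0]

      print(trend(years, temp))                   # ~0.07 K/decade over the century
      for start in range(1900, 2001, 10):
          w = (years >= start) & (years < start + 10)
          print(start, trend(years[w], temp[w]))  # 10-yr windows: scattered, some negative
      ```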

      Whether this is “outside the range of what’s expected” is uninteresting to me.

      It should be interesting to you.

      No, Patrick, I am NOT ignoring solar forcing and, yes, solar activity has dropped to a very low level as Solar Cycle 24 is having a hard time getting started. In addition the cooling after 2000 is being attributed to “natural variability” (a.k.a. natural forcing factors), which I can fully accept. I find it a bit hard to accept, however, that these same natural forcing factors have been able to more than offset the GH impact of record CO2 concentrations since 2000, but were deemed by IPCC to be essentially insignificant from 1750 to 2005. This does not make sense.

      Of course it doesn’t make sense, because they HAVE NOT more than offset the CO2 forcing, etc., change since 1750 (that you would think otherwise I suspect is related to your trouble seeing the obvious CO2 – temperature correlation over the last century or two). Even if it cools 0.1 K, that’s still a small fraction of the warming over the last century or two.

      there are many indications that the data collection may have been dicey and that it gives an exaggerated picture of 20th century warming due to the UHI effect, particularly in the second half of the century.

      Well, I’ve heard that one before.

      And you are not bringing the arguments to convince me otherwise.

      I probably shouldn’t try; I suspect you’ll turn out to be one of those many people who can’t be convinced. Which has little to do with whether the arguments make sense or not.

      • Patrick027

        I will get back to you on the rest of your post, but one point needs comment now.

        IPCC states that ITS “level of scientific understanding” of solar forcing is “low”, not THE “LOSU”.

        By definition, since IPCC is not “omniscient”, IPCC cannot comment on others’ LOSU, only on ITS OWN LOSU.

        Or do you maybe believe that IPCC is “omniscient”?

        Get it?

        Max

      • Patrick027

        I have already commented on your rather strange statement on the IPPC “low” “level of scientific understanding” of solar forcing, but will repeat it to make sure you understand. You wrote:

        No, not “ITS understanding is low”, but rather, “THE understanding is low”.

        It would be utterly arrogant of IPCC to assume that it is “omniscient”; therefore it cannot make the statement that the “LOSU” of solar forcing is “low” – only that its own “LOSU” is “low”.

        Here are links to a few of the solar studies I cited, which conclude on average that around half of the 0.65°C global warming of the 20th century can be attributed to the unusually high level of solar activity (highest in several thousand years).

        Scafetta and West
        http://www.scribd.com/doc/334163/Phenomenological-solar-contribution-to-the-19002000-global-surface-warming

        We estimate that the sun contributed as much as 45-50% of the 1900-2000 global warming, and 25-35% of the 1980-2000 global warming

        Lean et al.

        PDF: lean1995.pdf

        A new reconstruction of annual solar irradiance accounts for 74% of the variations in NH surface temperature anomalies from 1610 to 1800 and 56% of the variance from 1800 to the present. Our results indicate that solar variability may have contributed a NH warming of 0.51°C from the seventeenth century to the present, in good agreement with a general circulation climate model simulation. About half of the observed 0.55°C warming from 1860 to the present may reflect natural variability arising from solar radiative forcing, although since 1970 less than one third of the 0.36°C surface warming [0.11°C] is attributable to solar variability.

        Solanki et al.

        PDF: nature02995.pdf

        http://www.ncdc.noaa.gov/paleo/pubs/solanki2004/solanki2004.html

        According to our reconstruction, the level of solar activity during the past 70 years is exceptional, and the previous period of equally high activity occurred more than 8,000 years ago.

        Soon et al.
        http://adsabs.harvard.edu/abs/1996ApJ...472..891S

        If the solar irradiance profiles found from the climate simulations are required to be consistent with recent satellite observations, then the composite solar profile reconstructed by Hoyt & Schatten, combined with the anthropogenic greenhouse forcing, explains the highest fraction of the variance of observed global mean temperatures. In this case, the solar and greenhouse combination accounts for 92% of the observed long-term temperature variance during 1880-1993. The simulation implies that the solar part of the forcing alone would account for 71% of the global mean temperature variance, compared to 51% for the greenhouse gases’ part alone.”

        Shaviv and Veizer
        http://www.gsajournals.org/perlserv/?request=get-static&name=i1052-5173-14-3-e4&ct=1

        Once this solar amplification is included, the paleoclimate data is consistent with a solar (direct and indirect) contribution of 0.32 ± 0.11 °C toward global warming over the past century.

        Geerts and Linacre
        http://www-das.uwyo.edu/~geerts/cwx/notes/chap02/sunspots.html

        Recent research indicates that the combined effects of sunspot-induced changes in solar irradiance and increases in atmospheric greenhouse gases offer the best explanation yet for the observed rise in average global temperature over the last century. Using a global climate model based on energy conservation, Lane et al constructed a profile of atmospheric climate “forcing” due to combined changes in solar irradiance and emissions of greenhouse gases between 1880 and 1993. They found that the temperature variations predicted by their model accounted for up to 92% of the temperature changes actually observed over the period – an excellent match for that period. Their results also suggest that the sensitivity of climate to the effects of solar irradiance is about 27% higher than its sensitivity to forcing by greenhouse gases.

        Stott et al.

        PDF: StottEtAl.pdf

        It is found that current climate models underestimate the observed climate response to solar forcing over the twentieth century as a whole, indicating that the climate system has a greater sensitivity to solar forcing than do models. The results from this research show that increases in solar irradiance are likely to have had a greater influence on global-mean temperatures in the first half of the twentieth century than the combined effects of changes in anthropogenic forcings. Nevertheless the results confirm previous analyses showing that greenhouse gas increases explain most of the global warming observed in the second half of the twentieth century.

        E. Palle Bago and C. J. Butler
        http://www.solarstorms.org/CloudCover.html

        Thus, the total global temperature rise derived from the combined effects of an irradiance increase and a decrease in low cloud factor over the past century is 0.35, 0.67 or 0.45 degree C, depending on which of the three activity indices are employed. The mean value 0.49 degree C is close to the observed increase in global temperature 0.55 degree C since 1900 (Lean and Rind, 1998; Jones and Briffa, 1992).
        Thus we find that, subject to the above assumptions, most of the global warming over the past century can be accounted for by the combined direct (solar irradiance) and indirect (cosmic ray induced cloudiness) effects of solar activity without the need for any artificial amplification factor.

        Moving on to “other anthropogenic forcing” beside CO2.

        Patrick, you need to read IPCC SPM 2007, p.4 very closely. It tells us in no uncertain terms that the radiative forcing from increased CO2 over the period 1750-2005 was 1.66 W/m^2, and that the radiative forcing from “total net anthropogenic” factors over the same period was 1.6 W/m^2. This tells me that all the other anthropogenic factors beside CO2 (land use, aerosols, other GHGs, etc.) essentially cancelled one another out, so that there was no “greater warming” from “tallying up the effects from all positive forcing contributions” as you suggested.

        To the two studies I cited, which show that net outgoing SW + LW radiation increases with surface temperature you opined:

        But those studies haven’t convinced a lot of people. If you think you’ve found a solid study, please bring it up, describe the logic and method of it.

        The logic and method of both studies was based on the physical observation of total outgoing SW + LW radiation over a longer period of time with increased surface temperature; both studies showed that with higher temperature this increased rather than decreased (as assumed by the model simulations).

        You have not given me any references to studies based on actual physical observations which show a positive cloud feedback. If you know of such studies, please cite them. Otherwise accept the studies I have cited.

        Instead of citing any references, you write:

        Try looking at references for IPCC AR4 WGI – try perusing chapters 3 through 5, and then 9.

        I have “perused” these ad nauseam. There are no such references, Patrick.

        The studies I cited are solid studies, based on actual ERBE and CERES satellite observations over the tropics. In a separate post I have also cited a model study, based on superparameterization, which concludes that cloud feedback over all latitudes is strongly negative, rather than strongly positive, as assumed by IPCC.

        To the lack of evidence for the “hidden in the pipeline” hypothesis you wrote.

        Well, yes there is. Oceanic heat accumulation, for one. And Occam’s razor doesn’t say that you shouldn’t expect the physics of heat capacity to apply to some ‘new’ situation just because it’s a ‘new’ situation. (On a similar note, Occam’s razor actually backs the concept of (in the absence of evidence for a tessellated or closed universe (finite without boundaries within the dimensions of space, etc.)) an infinite universe rather than a finite universe with edges, because ‘edges’ would be something entirely new and also very strange).

        Oceanic heat accumulation means an increase in oceanic temperature. Readings prior to Argo were quite primitive and unreliable, but the more reliable Argo measurements since 2003 tell us the upper ocean has not warmed, but that it has cooled instead.

        Occam’s razor tells us that the simplest explanation is the best. It only warmed by 0.65°C over the 20th century of which solar scientists tell us around half can be attributed to increased solar activity (see above), leaving around 0.3°C for CO2. The assumed 2xCO2 climate sensitivity of 3.2°C would indicate that the 20th century warming from CO2 should have been 1.3°C. The simplest conclusion here is that the assumed 2xCO2 climate sensitivity is too high, and should probably be a bit below 1°C. A much more complicated conclusion is that the assumed climate sensitivity is correct, but that a portion of the warming which has actually occurred over the entire 20th century due to increased CO2 is “hiding” unseen and not measurable somewhere “in the pipeline”.
        Gimme a break, Patrick. “Occam” would roll over in his grave at such a convoluted postulation.
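
        To spell out the arithmetic behind that 1.3°C figure, here is a minimal sketch, using the IPCC AR4 CO2 forcing of 1.66 W/m^2 quoted earlier and assuming (as I do here) that the equilibrium response shows up with no lag:

        # Expected 20th century CO2 warming if the IPCC sensitivity were right
        # and the response were fully realized (no "pipeline")
        F_2X  = 3.7    # W/m^2 per doubling of CO2 (standard value)
        S_2X  = 3.2    # K per doubling (IPCC central estimate)
        F_CO2 = 1.66   # W/m^2, IPCC CO2 forcing 1750-2005
        lam = S_2X / F_2X      # sensitivity parameter, ~0.86 K/(W/m^2)
        print(lam * F_CO2)     # -> ~1.4 K, roughly the 1.3°C cited above

        (The exact number depends on the assumed 2xCO2 forcing; the point is that it is several times the ~0.3°C left over for CO2.)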

        You talk briefly about predictions being “proven” or “falsified”. One climate model prediction that has clearly been falsified is that the 21st century will see GH warming at a rate of 0.2°C per decade. The first decade has seen cooling at a rate of about 0.1°C per decade instead. You talk of “timescale”. How long, in your mind, does this cooling have to continue in order for you to concede that the warming prediction has been falsified?

        You wrote:

        Even if it cools 0.1 K, that’s still a small fraction of the warming over the last century or two.

        We have the global HadCRUT record since 1850. It shows a linear warming rate of 0.041°C per decade, or 0.65°C, up to 2000, and a cooling rate of around 0.1°C per decade since then. In other words, in just short of one decade there was cooling equal to around one-sixth of the warming we saw over 15 decades. I’d say that this is not “a small fraction of the warming over the last century or two”, as you stated.
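
        Spelled out as a quick check (decade counts approximate):

        warming = 0.041 * 15      # ~0.62°C over 1850-2000 at 0.041°C/decade
        cooling = 0.1 * 0.9       # ~0.09°C over the not-quite-decade since 2000
        print(cooling / warming)  # -> ~0.15, i.e. around one-sixth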

        As I pointed out to you, I have not questioned the HadCRUT or UAH temperature records, the Argo record on oceanic temperature, the Mauna Loa CO2 measurements after 1958 or the IPCC guesstimates of CO2 prior to 1958 based on ice core data.

        The data are what we have; questioning them would be a totally different discussion, so it did not need to be brought up here, as you did.

        Patrick, you believe your “arguments make sense” (to paraphrase Mandy Rice Davies: of course you would, wouldn’t you?).

        I suspect you’ll turn out to be one of those many people who can’t be convinced. Which has little to do with whether the arguments make sense or not.

        Wrong, Patrick. It has everything “to do with whether the arguments make sense or not”.

        They just aren’t very convincing to me.

        Max

      • Of course the IPCC is not omniscient, but they were out to look at ‘the’ level of scientific understanding.

      • This tells me that all the other anthropogenic factors beside CO2 (land use, aerosols, other GHGs, etc.) essentially cancelled one another out

        Yes, I agree with that (aside from a few caveats about efficacy and effects specific to type of forcing).

        But it is also true that this is just a convenient coincidence; there is nothing so special about the forcing of CO2 such that it cannot be cancelled while other forcings can be.

    • Of course it doesn’t make sense, because they HAVE NOT more than offset the CO2 forcing,

      To put it another way, if you win $ 1,000,000 and then lose $ 100,000, do you have $900,000 left, or are you now bankrupt? Your tendency seems to go towards the conclusion of bankruptcy, as if you only care about the money you’ve made or lost in the last 10 days, regardless of how much you have from before.

      • One climate model prediction that has clearly been falsified is that the 21st century will see GH warming at a rate of 0.2°C per decade.

        THE PREDICTION WAS NOT THAT EACH AND EVERY DECADE WOULD SEE 0.2 K WARMING.

        NO IT HAS NOT BEEN FALSIFIED!

        The simplest conclusion here is that the assumed 2xCO2 climate sensitivity is too high, and should probably be a bit below 1°C. A much more complicated conclusion is that the assumed climate sensitivity is correct, but that a portion of the warming which has actually occurred over the entire 20th century due to increased CO2 is “hiding” unseen and not measurable somewhere “in the pipeline”.
        Gimme a break, Patrick. “Occam” would roll over in his grave at such a convoluted postulation.

        IF THE MECHANISM FOR A LAG TIME TO A RESPONSE IS FIRMLY ESTABLISHED, THEN THERE IS NO SENSE IN SIMPLIFYING THE EXPLANATION TO THE POINT OF REMOVING THAT MECHANISM (WHICH APPLIES TO ALL FORCINGS, NOT JUST CO2).

        The studies I cited are solid studies, based on actual ERBE and CERES satellite observations

        The Lindzen and Choi study only comes to the conclusions it does if the data used is cherry-picked in a certain way. Use all the data and you get a different answer.

        The author (Norris) you cited for broader evidence of negative cloud feedback also coauthored a MORE RECENT paper suggesting that cloud feedback would be positive, and I’m still not sure (honestly, haven’t had the time to go through it) that the first paper truly concluded what you thought it did, because a part of the paper seemed to disagree…

        The link you gave to Spencer’s CERES study required a download and I was not familiar with the site, so if you could explain it or post another link, that would be appreciated. But I have to say based on other things Spencer has done, I’m not holding onto hope to be impressed.

        They just aren’t very convincing to me.

        But you did agree with Blous79’s argument for lack of a lag time based on the fact that the 24 hour diurnal cycle is only 24 hours long. I really shouldn’t expect you to be convinced by a good argument.

  196. The surface temperature response (ΔTs): ΔTs = λ·RF, where λ is the climate sensitivity parameter, surely should be able to be *robustly* derived from analysing mountains of daily point data all over the globe. The arguments about the exact number for “climate sensitivity” may well reflect unaccounted negative feedback effects over the long term. FWIW I think the “climate sensitivity parameter” should be renamed something like “surface temperature radiative forcing index”.
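
    In principle that derivation is a straight regression; here is a sketch with synthetic numbers standing in for the “mountains of daily point data” (the 0.8 K/(W/m^2) slope and the noise level are invented for illustration):

    import numpy as np

    # Illustrative only: recover lambda in dTs = lambda * RF by least squares.
    rng = np.random.default_rng(0)
    rf  = np.linspace(0.0, 3.7, 200)                # hypothetical forcings, W/m^2
    dts = 0.8 * rf + rng.normal(0.0, 0.1, rf.size)  # noisy "observed" responses, K
    lam_hat, _ = np.polyfit(rf, dts, 1)             # slope = estimated lambda
    print(round(float(lam_hat), 2))                 # -> ~0.8 K/(W/m^2)

    The practical catch is that the real response lags the forcing (ocean heat capacity), so a same-time regression like this would underestimate the equilibrium λ.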

    IPCC AR4 WG1 8.6.3.2 Clouds claims that climate scientists have no idea at all whether clouds cause surface warming or cooling. What planet do they live on????? The surface temperature differences on sunny and cloudy days are manifestly obvious to the other 6 billion inhabitants.

    “By reflecting solar radiation back to space (the albedo effect of clouds) and by trapping infrared radiation emitted by the surface and the lower troposphere (the greenhouse effect of clouds), clouds exert two competing effects on the Earth’s radiation budget. […] In response to global warming, the cooling effect of clouds on climate might be enhanced or weakened, thereby producing a radiative feedback to climate warming (Randall et al., 2006; NRC, 2003; Zhang, 2004; Stephens, 2005; Bony et al., 2006).

    In many climate models, details in the representation of clouds can substantially affect the model estimates of cloud feedback and climate sensitivity (e.g., Senior and Mitchell, 1993; Le Treut et al., 1994; Yao and Del Genio, 2002; Zhang, 2004; Stainforth et al., 2005; Yokohata et al., 2005). Moreover, the spread of climate sensitivity estimates among current models arises primarily from inter-model differences in cloud feedbacks (Colman, 2003a; Soden and Held, 2006; Webb et al., 2006; Section 8.6.2, Figure 8.14). Therefore, cloud feedbacks remain the largest source of uncertainty in climate sensitivity estimates.”

    IPCC AR4 WG1 Ch8 on climate models says on p591: “Most AOGCMs no longer use flux adjustments, which were previously required to maintain a stable climate.”

    This suggests to me that the climate models lack sufficient modeling of the negative feedbacks that stabilize the real climate on earth. The fact that flux adjustments are no longer required reflects that the models are improving, and further modeling of negative feedback effects is likely to stabilize the models further.

    IPCC AR4 WG1 Ch 8:
    8.6.3.2.4 Conclusion on cloud feedbacks

    “Despite some advances in the understanding of the physical processes that control the cloud response to climate change and in the evaluation of some components of cloud feedbacks in current models, it is not yet possible to assess which of the model estimates of cloud feedback is the most reliable. However, progress has been made in the identification of the cloud types, the dynamical regimes and the regions of the globe responsible for the large spread of cloud feedback estimates among current models. This is likely to foster more specific observational analyses and model evaluations that will improve future assessments of climate change cloud feedbacks.”

    I’d like to see a simple grey-body climate model at the top of the atmosphere, with positive and negative feedbacks and responses to cosmic rays, solar activity and geothermal activity. I saw something recently about sulfate effects from volcanoes causing aerosol cooling that explains recent cooling. (I would expect naturally increased volcanic activity with warming.)

    All the attempts to model what goes on inside the grey box are inaccurate (still getting better with every iteration) and are a major distraction from the reality on earth – poor input data, urban/airport heat islands, 1200 km radius temperature averaging and a relatively stable climate.

    IPCC knows the models have not been tested appropriately. IPCC AR4 WG1 8.6.4 How to Assess Our Relative Confidence in Feedbacks Simulated by Different Models?

    “A number of diagnostic tests have been proposed since the TAR (see Section 8.6.3), but few of them have been applied to a majority of the models currently in use. Moreover, it is not yet clear which tests are critical for constraining future projections. Consequently, a set of model metrics that might be used to narrow the range of plausible climate change feedbacks and climate sensitivity has yet to be developed.”

    IPCC does not quantify the likely statistical errors caused by modeling, because a realistic attempt to do this would result in huge error bars. The best supercomputer weather models are good for 3 days. The GCMs are really attempting to model similar effects to weather in the total atmospheric response (not a simple grey body model), so the likely accuracy of the GCMs is probably about 3 years.

    • The GCMs are really attempting to model similar effects to weather in the total atmospheric response (not a simple grey body model), so the likely accuracy of the GCMs is probably about 3 years.

      Do you think that in 10 years time, it will snow in the tropics while palm trees grow in Greenland? No? Well congratulations, you just predicted climate!

  197. IGBP report 58 is a bit more emphatic on aerosols and clouds – p22 of report
    http://www.igbp.net/page.php?pid=222

    SHORTCOMING IDENTIFIED in ASSESSABLE MATERIAL FOR IPCC AR4:
    Uncertainty in aerosol-cloud interaction and associated indirect radiative effects
    SURVEY RESPONSES:
    Major reasons:
    • Lack of understanding of fundamental processes
    • Insufficient model parameterizations
    • Lack of observations and data quality
    Negative consequences:
    • Large uncertainty in GCMs and in estimating climate sensitivity
    • Uncertainty in prediction of regional precipitation
    Possible solutions:
    • Ground-based, balloon-based and aircraft-borne column measurements, and collocating measures of clouds, aerosols and soil moisture from satellites such as CALIPSO
    • Improved process research (eventually including ice clouds), on a more global scale than GEWEX does now; high-resolution regional modelling (e.g., in low shallow clouds) and better representation in GCMs
    Linked issues:
    • Volcanic forcing uncertainty; solar variability inadequately addressed

  198. IGBP report 58 p23 specifically on clouds also uses rather scientifically emphatic wording:

    SHORTCOMING IDENTIFIED in ASSESSABLE MATERIAL FOR IPCC AR4:
    Models differ considerably in their estimates of the strength of different feedbacks in the climate system; the response of clouds to global climate change is particularly uncertain
    SURVEY RESPONSES:
    Major reasons:
    • Lack of understanding of fundamental processes
    • Inadequate model parameterization of processes; model resolution issues (e.g., when accommodating poorly-parameterized small-scale processes)
    • Response of tropical low clouds particularly uncertain
    Negative consequences:
    • Not understanding feedbacks a key problem of climate models
    Possible solutions:
    • Better observations of clouds, e.g., using CloudSat
    • Constrain radiative forcing
    • Link cloud-resolving models to AOGCM
    • Improve parameterization of convection processes
    • Reduce uncertainties in cloud feedbacks, e.g., through collaborative efforts between the cloud feedback model intercomparison project (CFMIP) and the GEWEX cloud system study (GCSS)
    • Develop a proven set of model metrics (before including them in a future IPCC assessment), e.g., through working groups, comparing for feedbacks, to be validated from observations; use perturbed parameter ensembles to indicate sensitivity and spread of feedbacks
    Linked issues:
    • What will be the use of model metrics, particularly with respect to future climate change assessments?
    • Identify where the issue of model performance is (1) scale, i.e., likely to be addressed by simply increasing resolution, (2) parameterization, e.g., convective processes, (3) more physical process understanding/data is required before (2) can be achieved, e.g., soil moisture and land use feedbacks

    • IF the glass is half empty, it is at the same time half full.

      • Patrick,
        Please provide mathematical proof concerning the fullness or emptiness of a glass. If possible, please do it in less than 1,000 words, but certainly no more than 2,000 words!

      • Blouis79 and Patrick027

        I realize that this is just a model simulation, but it uses superparameterization in order to get a better picture of cloud feedback with warming than the GCMs cited by IPCC are able to do (with the caveat “cloud feedbacks remain the largest source of uncertainty”).

        The study by Wyant et al., entitled “Climate sensitivity and cloud response of a GCM with a superparameterization” shows that at all latitudes the feedback from clouds with warming is negative, rather than positive as assumed by the climate models without superparameterization cited by IPCC.
        ftp://eos.atmos.washington.edu/pub/breth/papers/2006/SPGRL.pdf

        This provides confirmation of the physical observations at tropical latitudes by Lindzen and Choi plus Spencer et al., while covering a wider range of latitudes to give a global cloud feedback.

        [1] The climate sensitivity of an atmospheric GCM that uses a cloud-resolving model as a convective superparameterization is analyzed by comparing simulations with specified climatological sea surface temperature (SST) and with the SST increased by 2 K. The model has weaker climate sensitivity than most GCMs, but comparable climate sensitivity to recent aqua-planet simulations of a global cloud-resolving model. The weak sensitivity is primarily due to an increase in low cloud fraction and liquid water in tropical regions of moderate subsidence as well as substantial increases in high-latitude cloud fraction.

        [11] The global annual mean changes in shortwave cloud forcing (SWCF) and longwave cloud forcing (LWCF) and net cloud forcing for SP-CAM are -1.94 W m-2, 0.17 W m-2, and -1.77 W m-2, respectively.

        And

        Shortwave cloud forcing becomes more negative at all latitudes, except for narrow bands near 40N and 40S, indicating more cloud cover and/or thicker clouds at most latitudes. The change in zonal-mean longwave cloud forcing is relatively small and negative in the tropics and stronger and positive poleward of 40N and 40S, where it partly offsets the shortwave cloud forcing change. Thus the net cloud forcing change is negative at most latitudes, and it is of comparable size in the tropics and the extra-tropics.

        The study points out that the overall negative feedback from clouds in both the tropics and the extra-tropics is primarily due to increases in low cloud fraction with warming.

        For a 2K temperature increase, the cloud forcing from increased reflection of incoming SW radiation (-1.94 W/m^2) is far greater than the increased absorption of outgoing LW radiation (+0.17 W/m^2), for a net negative feedback (–1.77 W/m^2).

        This equals (-1.77/2 =) -0.89 W/m^2°K, compared to the positive feedback of +0.69 W/m^2°K assumed by the GCMs cited by IPCC (AR4, Ch.8, p.630).
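
        The conversion is simply the change in net cloud forcing divided by the imposed 2 K SST increase (values as quoted above):

        d_swcf, d_lwcf, d_sst = -1.94, +0.17, 2.0  # W/m^2, W/m^2, K (Wyant et al.)
        print((d_swcf + d_lwcf) / d_sst)           # -> -0.885 W/m^2 per K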

        IPCC estimates that cloud feedback would be positive, increasing the 2xCO2 climate sensitivity by +1.3°C (from +1.9 to +3.2°C).

        This study shows that the net impact on CS would be negative, resulting in a 2xCO2 CS of below +1.0°C.

        Max

      • Very interesting paper, Max. Given that the IGBP says poor parameterization of clouds is a problem, superparameterization is a fix. Wyant et al claim the first superparameterized GCM simulation ever done.

        Further research by the same group suggests their previous study overestimates the cloud response (net cloud feedback is still negative), and that the required resolution is computationally infeasible for a global cloud-resolving model but may be feasible for a correctly superparameterized GCM.
        ftp://eos.atmos.washington.edu/pub/breth/papers/2009/SPCAM-LowCldSens-CRM-JAMES.pdf

        It remains for more observations to be done to confirm the net negative feedback of clouds. (Though all one really has to do is walk outside on a partly cloudy day for a few minutes.)

        I still suspect part of the root problem is buried in the search for model solutions to demonstrate warming caused by surface temperature measurement errors (undercorrection for urban/airport heat islands). When the measurement problems are clarified, then the modeling brains will be able to see clearly.

      • manacker – this is very interesting, thank you (I mean it).

        I estimated, from the Soden et al 2008 graph at https://chriscolose.wordpress.com/2009/10/08/re-visiting-cff/, using the LW emission as a function of temperature response (rough estimate -3.76 W/m2 per K; this is for a 255 K blackbody, and since LW fluxes at the tropopause level come from a range of temperatures below and are counteracted by LW fluxes from above, actuality could be a little different, but not by much, since that value gives about 1 K per doubling of CO2, which is close to the actual value without other feedbacks) and the various feedbacks given in the graph:

        A range of climate sensitivities per doubling(calculated as -3.7 W/m2 / (net W/m2 response to a 1 K increase))
        of:
        1.28 K/doubling to 4.33 K/doubling
        with cloud feedback removed:
        1.32 K/doubling to 1.95 K/doubling
        with -1.77/2 = -0.885 W/m2 per K cloud feedback put in:
        1.00 K/doubling to 1.33 K/doubling

        Interesting. Still not as low as 0.5 K/doubling CO2, though. And the initial low-end values were lower than what the models actually produced, so I must not have included something, or maybe I should adjust the Planck response value down a bit?

        Just to see: changing the Planck response (LW emission as function of temperature) to -3.3 W/m2:

        1.52 K/doubling to 9.37 K/doubling
        with cloud feedback removed:
        1.57 K/doubling to 2.57 K/doubling
        with -1.77/2 = -0.885 W/m2 per K cloud feedback put in:
        1.14 K/doubling to 1.59 K/doubling

        Now the low-end for the initial values seems about right but the high end is too high.

        Well, something’s missing in there. Anyway, this is interesting.
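
        For the record, the bookkeeping above is just the standard linear feedback relation, with the Planck response included as one of the λ terms (all in W/(m2 K)):

        $$ S \;=\; \frac{-F_{2\times}}{\lambda_{\mathrm{Planck}} + \sum_i \lambda_i}\,, \qquad F_{2\times} \approx 3.7\ \mathrm{W/m^2} $$

        Swapping a cloud feedback means replacing the cloud term inside the denominator and re-inverting, so sensitivities do not add or subtract linearly.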

        Questions – is the -1.77 W/m2 per 2 K (i.e., -0.885 W/m2 per K) the actual cloud feedback in the model? Because sometimes changes in cloud forcings are estimated differently than cloud feedbacks. For example, if cloud forcing is always evaluated after water vapor, then an increase in water vapor by itself reduces the LW warming effect of many clouds and reduces the cloud forcing without any actual cloud feedback.

        And of course, we’ll have to see if these results hold up…

        Blous79: It remains for more observations to be done to confirm the net negative feedback of clouds. (Though all one really has to do is walk outside on a partly cloudy day for a few minutes.)

        The first part is true, but your parenthetical statement is almost a non sequitur. It is an observation of an effect of clouds as they are. You have to know how cloud amounts and distributions and characteristics WILL CHANGE before you can base a feedback estimate on such observations.

      • Patrick 027: “IF the glass is half empty, it is at the same time half full.”

        The null hypothesis is that climate is stable, because of negative feedbacks.

        Anyone wishing to disprove this needs to be able to demonstrate:
        * observations that support that the climate is unstable in the long term
        * observations that support that negative feedbacks don’t exist

        At this point, both the observations and the best cloud models on the planet do not reject the null hypothesis.

        The fact that low clouds block sunlight and are obviously cooling in effect is unquestionable. The fact that this mechanism operates to at least a surface temperature of 40degC is plainly evident.

        So perhaps someone can show me some observational data of low clouds not causing cooling at a higher temperature than 40degC.

      • Blouis79 –

        Generally, when climatologists refer to a net feedback as being positive or negative, they are not including the Planck response (the change in LW emission due to temperature without any changes in optical properties or composition or physical state). This is just a matter of labelling; the Planck response is included in the math and the modelling.

        With only the Planck response, the climate sensitivity to a doubling of CO2 would be about 1 K. In climatology lingo, net positive feedbacks (except for the Planck response) increase the climate sensitivity. However, as long as the total feedback (including Planck response) is negative, the sensitivity remains finite and positive.

        Stating that the climate sensitivity is likely between 2 and 4+ K/doubling CO2 implies that the climate is stable; just not as stable as if it were smaller.

        If the null hypothesis is that the climate is stable (at least outside of snowball and runaway water vapor territory and for Charney sensitivity), the IPCC and most scientists are not arguing otherwise.

        The alternative null hypothesis that additional CO2 has no effect is in trouble, though, and has been for a long time.

        I think you misunderstood my point about cloud feedbacks being less obvious than what you imply.

  199. Bob_FJ and Marco

    Booker wrote
    http://www.climatechangefraud.com/climate-reports/5510-gavin-schmidt-wants-it-known-he-has-no-connection-with-the-giss-temperature-record:

    Dr Schmidt wishes us to point out that he is not “involved” in Dr Hansen’s GISS temperature record, which is one of the four official sources of global temperature data relied on by the UN’s Intergovernmental Panel on Climate Change and by governments all over the world. I am of course happy to publish the correction he asked for, but I am intrigued that Dr Schmidt should want to dissociate himself from this increasingly controversial source of temperature figures.

    Like others, it seems I was misled by the fact that twice in the past two years, when GISS has come under fire for publishing seriously inaccurate data, it was Dr Schmidt who acted as its public spokesman. The first was in 2007, when Dr Hansen’s data was revealed to have been systematically “adjusted” to show recent temperatures as higher than those reported by the other three official sources. This embarrassing business, which resulted in GISS having to revise its figures, was exposed by two science blogs, Watts Up With That, run by Anthony Watts, and Steve McIntyre’s Climate Audit.

    The second intervention came this time last year, when GISS had startlingly shown the previous month as the hottest October on record. The same two expert blogs revealed, as the reason for this improbable spike, that GISS had reproduced many of its September figures for two months running. Dr Schmidt may have had no responsibility for this error, but it was he who was wheeled on to explain this hilarious blunder to the world – with the somewhat curious plea that one of the four official sources relied on by the IPCC did not have sufficient resources to maintain proper quality control on its data.

    So Booker first erroneously assumed that Schmidt was “involved” in the GISS temperature record, since Schmidt had twice published explanations for errors found in the record, and then, after objection from Schmidt, corrected his position, stating: “I am of course happy to publish the correction he asked for”.

    Sounds like a clear misunderstanding that was cleared up and no “big deal” to me, but maybe I am missing something.

    Max

    • “but I am intrigued that Dr Schmidt should want to dissociate himself from this increasingly controversial source of temperature figures.”

      And that’s the whole issue here: Booker claiming GISTEMP is “increasingly controversial”, and Booker claiming Gavin is ‘dissociating himself’. He isn’t.

      Two lies.

      A third lie is:
      “Dr Hansen’s data was revealed to have been systematically “adjusted” to show recent temperatures as higher than those reported by the other three official sources.”

      GISTEMP was NOT systematically adjusted to show recent temperatures as higher than those reported by the other three. (Booker is probably also mixing up two issues here: Watts misunderstanding of anomalies and base periods, and the USHCN issue). There was an issue with the data from USHCN.

      Face it, Booker’s reporting on climate is a major trainwreck. Mostly wrong, often lying. Delingpole isn’t much different. The latter is now repeating D’Aleo and Smith’s lies about GISTEMP deliberately dropping stations. Neither understands the concept of anomalies either. One wonders whether they have been taught by Watts, perhaps…

      • Marco

        I am not going into a “this guy said this, but that guy said that” debate with you, because that would be a senseless waste of time, but I will address the points you described as “lies” by Booker.

        Booker claimed GISTEMP is “increasingly controversial”, and provided two examples of errors that had to be corrected.

        The two examples are real, so this is not a “lie”.

        If I Google ”problems with GISTEMP temperature record”, I get 35,000 hits (including duplicates/multiples).

        Anthony Watts has pointed out the siting problems with many of the stations in the US GHCN network, which could lead to a spurious warming signal
        http://www.surfacestations.org/

        A rather positive article describing the many problems with GISTEMP:

        GIStemp – A Human View

        NOAA/NCDC: GHCN – The Global Analysis

        A not-so-positive article:
        http://www.americanthinker.com/2010/01/climategate_cru_was_but_the_ti.html

        And another, which talks about non-corrected urban biases
        http://www.climate-skeptic.com/2008/05/urban-heat-bias.html

        And one that talks about “urban adjustments” made “in the wrong direction” or not made at all.

        Positive and Negative Urban Adjustments

        So it is safe to say, based on all the stuff out there, that there is some “controversy” surrounding the GISTEMP record, so Booker has not “lied” here.

        Gavin Schmidt has apparently gone on record to Booker saying that he has nothing to do with putting together the GISTEMP record, i.e. “disassociating himself”, as Booker has written. So, again, Booker has not “lied” here, Marco.

        You wrote:

        A third lie is:
        “Dr Hansen’s data was revealed to have been systematically “adjusted” to show recent temperatures as higher than those reported by the other three official sources.”
        GISTEMP was NOT systematically adjusted to show recent temperatures as higher than those reported by the other three. (Booker is probably also mixing up two issues here: Watts misunderstanding of anomalies and base periods, and the USHCN issue). There was an issue with the data from USHCN.

        Now I know that the record gets adjusted and corrected ex post facto (because I have seen these changes). Some were only local and “forced” (the US “hottest year” adjustment), but many are global and internal (and probably quite harmless).

        So Booker has not “lied” when he wrote that the record has been “adjusted” ex post facto.

        The GISS record does, indeed, show less cooling after 2000 than the other three records (HadCRUT, UAH, RSS), i.e. GISS shows “higher temperatures than those reported by the other three official sources”, so Booker has also not “lied” here.

        If Booker has stated that the record was “systematically adjusted to show recent temperatures as higher than those reported by the other three official sources”, then he should provide evidence that this is the case.

        If he made up this claim without such evidence in order to mislead his readers, he “lied” on this point.

        Max

      • Max, That was a nice lucid comment that gets to the core. Since you seem rather busy in your exchange with Patrick:
        Marco, re your comments:

        My emphasis added

        “[1]but I am intrigued that Dr Schmidt should want to dissociate himself from [2] this increasingly controversial source of temperature figures.
        And that’s the whole issue here: Booker claiming GISTEMP is “increasingly controversial”, and Booker claiming Gavin is ‘dissociating himself’. He isn’t. Two lies.

        I guess you had not seen my post above, explaining some finer points of English (which is not your mother tongue), when you wrote this:

        [1] Paraphrasing: Gavin requested a correction (whilst he obviously uses GISTEMP) that he is not involved in the creation of that data. Paraphrasing: Booker says that he is curious as to why Gavin seems to want to be disassociated from involvement in the base data. Sorry Marco, there are no lies here except in your imagination.
        [2] It is true that the GISTEMP data has been increasingly controversial in recent times. The fact that you and RC etc disagree with what has been claimed elsewhere is what makes it a controversy. There is no lie here either, regardless of who may be right or wrong.

        A third lie is:
        “Dr Hansen’s data was revealed to have been systematically “adjusted” to show recent temperatures as higher than those reported by the other three official sources.”
        GISTEMP was NOT systematically adjusted to show recent temperatures as higher than those reported by the other three. (Booker is probably also mixing up two issues here: Watts misunderstanding of anomalies and base periods, and the USHCN issue). There was an issue with the data from USHCN.
        Face it, Booker’s reporting on climate is a major trainwreck. Mostly wrong, often lying. Delingpole isn’t much different. The latter is now repeating D’Aleo and Smith’s lies about GISTEMP deliberately dropping stations. Neither understand the concept of anomalies either. One wonders whether they have been taught by Watts, perhaps…

        There is so much mud slung here that it is hard to know where to start! For instance, are you serious in claiming that Watts, D’Aleo and Smith are unaware that the GISS anomaly baseline (zero) is different? Why does GISS uniquely depress the so-called 1998 super El Nino, etc, and, and…..
        But look, let’s see how you respond to the above before continuing.

  200. Blouis79

    Thanks for link to interesting follow-up study by Blossey et al.

    This study’s results seem to show a net cloud feedback of –0.1 to –2.8 W/m^2°K, with the average of the runs showing: SW: –1.26 W/m^2°K; LW: –0.07 W/m^2°K; total SW+LW: –1.33 W/m^2°K.

    This is actually somewhat stronger (more negative) than the result of the earlier study.

    Interesting is that all the results showed a net negative feedback from clouds with warming. Also interesting is that three of the six results showed that even the LW feedback was negative, with two showing positive LW feedback and one showing no LW feedback. So even the LW feedback tends to be negative, somewhat in support of Lindzen’s “infrared iris” hypothesis.

    Whatever else it shows, this study helps to clear up IPCC’s “largest uncertainty” on “cloud feedbacks”.

    Max

  201. Max –

    It will take some time to go through the solar links. Some of the names (Soon, Shaviv and Veizer) sound familiar, in a way that makes me reluctant to devote any time to those papers because I think they may already have been discounted. It would be interesting to see how many of these papers (prior to 2007, give or take) are referenced by the IPCC.

    An initial response on the rest, based only on the parts in your comment:

    Solanki
    – doesn’t state the magnitude of the climatic effect; could easily be true and still not disagree with IPCC etc.

    Geerts and Linacre
    Their results also suggest that the sensitivity of climate to the effects of solar irradiance is about 27% higher than its sensitivity to forcing by greenhouse gases.
    So try going with a 0.3 W/m2 high-end solar forcing and multiply by 1.27, then divide by 1.6 – still less than 1/4 of the net anthropogenic forcing and of CO2 alone.

    Stott:
    Nevertheless the results confirm previous analyses showing that greenhouse gas increases explain most of the global warming observed in the second half of the twentieth century.

    Lean et al.
    – describes a Northern Hemisphere warming of 0.51 K since the 1600s due to solar effects, in good agreement with a GCM simulation
    -states that less than 1/3 of the 0.36 K warming since 1970 can be attributed to solar variability

    E. Palle Bago and C.J. Butler
    without the need for any artificial amplification factor.
    There’s nothing artificial about it – whether we have the value close to right or not, the potential for feedbacks is obvious.

    • Patrick027

      Thanks for your “initial response” to the solar studies I cited:

      Let’s go through it.

      Solanki
      – doesn’t state the magnitude of the climatic effect; could easily be true and still not disagree with IPCC etc.

      States the observed fact that the level of 20th century solar activity was unusual in at least 8,000 years.

      Geerts and Linacre
      -Their results also suggest that the sensitivity of climate to the effects of solar irradiance is about 27% higher than its sensitivity to forcing by greenhouse gases.

      So try going with a 0.3 W/m2 high-end solar forcing and multiply by 1.27, then divide by 1.6 – still less than 1/4 of the net anthropogenic forcing and of CO2 alone.

      You have a basic error (and circular logic) in your calculation, Patrick. How about going with (IPCC): 1.66 W/m2 CO2 forcing plus 0.98 W/m2 forcing from other GHGs (CH4, N2O, halocarbons) = 2.64 W/m2 (from all GHGs). Solar = 1.27*2.64 = 3.35, equal to twice the net anthropogenic forcing of CO2 alone (not “less than ¼”, as you erroneously calculated).

      Stott:
      Nevertheless the results confirm previous analyses showing that greenhouse gas increases explain most of the global warming observed in the second half of the twentieth century.

      Oh-oh, Patrick! You are “cherry-picking”
      You have overlooked the previous sentence:

      The results from this research show that increases in solar irradiance are likely to have had a greater influence on global-mean temperatures in the first half of the twentieth century than the combined effects of changes in anthropogenic forcings.

      Stott, as well as most of the studies, agree that only a small portion of late 20th century warming can be attributed to solar forcing (between 25% and 35%), but that a major portion of early 20th century warming can be attributed to solar forcing, with the average over the entire 20th century at slightly more than 50%.

      Read the whole study, Patrick. Not just the sentence you “like”.

      Lean et al.
      – describes a Northern Hemisphere warming of 0.51 K since the 1600s due to solar effects, in good agreement with a GCM simulation
      -states that less than 1/3 of the 0.36 K warming since 1970 can be attributed to solar variability

      Again, Patrick, you are “cherry picking” one sentence, but omitting another. Lean clearly states :

      About half of the observed 0.55°C warming from 1860 to the present may reflect natural variability arising from solar radiative forcing.

      Don’t just cherry-pick out the part “since 1970”, which you like, Patrick. It’s the long-term solar impact that is important here.

      E. Palle Bago and C.J. Butler
      without the need for any artificial amplification factor.
      There’s nothing artificial about it – whether we have the value close to right or not, the potential for feedbacks is obvious.

      “Potential” is a good word, Patrick.

      But is the overall net feedback “strongly positive” (as assumed by the IPCC climate models) or “strongly negative” (as observed from ERBE and CERES satellites) or “insignificant”? You do not have a definitive answer to this question.

      I would have chosen the word “theoretical”, but the author is correct in stating that this “amplification factor” is “artificial” (as it is not supported by empirical data derived from actual physical observations).

      You have by no means refuted the solar studies I cited, which showed, on average, that a bit more than half of the global warming observed over the 20th century can be attributed to the unusually high level of solar activity (highest in several thousand years).

      And that was the point, Patrick.

      Max

      • Regarding ‘cherry picking’, well yes, if that were the end of what I would say then certainly the criticism would be deserved, but this was an initial response, and I was just noting some of the limitations placed on solar forcing’s role by the same studies. I guess I should have said – “while other parts indicate a greater role, they still place limitations on recent solar contributions to warming”… But I haven’t even really dug into these yet.

        But regarding
        Solar = 1.27*2.64 = 3.35 equals twice the net anthropogenic forcing of CO2 alone

        Did the study state that solar forcing was 2.64 W/m2? Maybe in an unquoted portion (which would then likely be wrong), but the quoted portion only says that the SENSITIVITY to solar forcing is 1.27 times what it is for greenhouse gases. That statement by itself indicates nothing of what the solar forcing actually is. So I took a high-end estimate of solar forcing and multiplied it by 1.27.

      • I would have chosen the word “theoretical”,

        That works for me, too.

        but the author is correct in stating that this “amplification factor” is “artificial” (as it is not supported by empirical data derived from actual physical observations).

        It would be artificial insofar as it turns out to be wrong, but the way the author wrote this made it seem as if the concept of an amplification factor was somehow made up to fit the attribution of some amount of warming to some amount of forcing, rather than derived from physics and – yes – observations on which parameterizations have been based.

        But is the overall net feedback “strongly positive” (as assumed by the IPCC climate models) or “strongly negative” (as observed from ERBE and CERES satellites) or “insignificant”? You do not have a definitive answer to this question.

        But you don’t have a definitive answer either (I’ve provided websites, you’ve provided websites, so we’ve both provided answers indirectly, but you don’t seem to be counting that).

        The new site for Spencer/CERES work still requires me to download a file.

        However, I did find a brief description here:
        http://www.drroyspencer.com/category/blogarticle/
        (Saturday, January 9th, 2010 )

        The data itself is interesting. Spencer’s interpretation seems biased. While he notes that a lack of warming or cooling during the period means the cloud changes might not be a feedback, it could equally be suggested that the small amount of warming which did occur over at least part of the period implies a large positive cloud feedback. Or maybe there is a large negative cloud feedback. Or that this is a short-term internal variability that could reverse itself.

        There is not much to base a low sensitivity on from that.

        However, Spencer wants to argue that some significant part of the overall warming is due to internal variability with cloud feedback. Of course that’s imaginable, but if it’s possible, why so much in the last 100 years and not so much before? Coincidence? Could be.

        I just don’t see much to go on there – at least by itself.

        Another more recent post showing attribution of warming to AMO, PDO, and ENSO (SOI) illustrates some problems with his reasoning and also some possible misunderstandings on his part…

      • But (if the graph wasn’t accidentally inverted) could that CERES data show either a lack of cosmic ray-cloud forcing, or perhaps suggest the existence of a solar-cloud relationship that actually reduces the effective solar forcing?

        And there again, I’d say maybe not, but if I had greater enthusiasm for short term correlations, I might say otherwise.

  202. Max –

    Patrick, you believe your “arguments make sense” (to paraphrase Mandy Rice Davies: of course you would, wouldn’t you?)

    That works both ways.


    I have “perused” these ad nauseam. There are no such references, Patrick.

    I thought they had long lists of references at the end of each chapter.

    • Patrick027

      Yes. IPCC has “long lists of references at the end of each chapter”.

      But none of them cite empirical data based on actual physical observations supporting the climate model estimates of strongly positive feedback from clouds, and that is what I asked you to provide.

      Max

  203. Patrick027

    Thanks for your last post from January 26, 2010 @ 2:40 pm. Let’s go through it.

    My statement:

    One climate model prediction that has clearly been falsified is that the 21st century will see GH warming at a rate of 0.2°C per decade.

    Your response:

    THE PREDICTION WAS NOT THAT EACH AND EVERY DECADE WOULD SEE 0.2 K WARMING.
    NO IT HAS NOT BEEN FALSIFIED!

    The prediction reads:

    For the next two decades, a warming of about 0.2°C per decade is projected for a range of SRES emission scenarios. Even if the concentrations of all greenhouse gases and aerosols had been kept constant at year 2000 levels, a further warming of about 0.1°C per decade would be expected.

    “The next two decades” sounds quite clear to me, Patrick. And they missed the prediction pretty badly, as it has turned out so far, so the prediction is clearly “falsified”.

    Your next statement:

    IF THE MECHANISM FOR A LAG TIME TO A RESPONSE IS FIRMLY ESTABLISHED, THEN THERE IS NO SENSE IN SIMPLIFYING THE EXPLANATION TO THE POINT OF REMOVING THAT MECHANISM (WHICH APPLIES TO ALL FORCINGS, NOT JUST CO2).

    A mighty big “IF”, Patrick. The “hidden in the pipeline” postulation only makes sense if there is something really there in the “pipeline” (which can be found and physically measured).

    The record shows that it’s not in sensible heat of the atmosphere (at the surface or in the troposphere) or the upper ocean, it’s not in latent heat from higher atmospheric water vapor content, and it’s not in latent heat of fusion from the relatively small amount of melting ice.

    So it’s NOT THERE, and a theoretical “established mechanism” is meaningless; in fact, it has been refuted by the observed facts.

    To my statement:

    The studies I cited are solid studies, based on actual ERBE and CERES satellite observations

    You opined:

    The Lindzen and Choi study only comes to the conclusions it does if the data used is cherry-picked in a certain way. Use all the data and you get a different answer.

    Sorry, Patrick. That’s just what you say, but you have not brought any evidence to back up your statement.

    Then you stated:

    The link you gave to Spencer’s CERES study required a download and I was not familiar with the site, so if you could explain it or post another link, that would be appreciated. But I have to say based on other things Spencer has done, I’m not holding onto hope to be impressed.

    Here is the link to the Spencer et al. study.
    http://www.weatherquestions.com/Recent-Evidence-Reduced-Sensitivity-NYC-3-4-08.pps

    Read it and “be impressed”.

    In addition, there has been a cloud superparameterization study, which shows a fairly strong net negative feedback from clouds, not only over the tropics (as observed by Spencer et al.), but at all latitudes. It may be of interest to you as just another bit of evidence supporting the premise of a net negative feedback from clouds with warming (this one shows it to be around –0.89 W/m^2°K, as compared to the IPCC estimate from model simulations without superparameterization of +0.69 W/m^2°K).
    ftp://eos.atmos.washington.edu/pub/breth/papers/2006/SPGRL.pdf

    Blouis has referred me to an even later study by the same authors, which also confirms the net negative cloud feedback.

    Patrick, to summarize, the case for a 3.2°C climate sensitivity looks pretty bleak, I’m afraid, primarily because of the erroneous model assumptions made for cloud feedbacks.

    These now appear to be strongly negative with warming, primarily as a result of increased low-level clouds (which reflect incoming SW radiation) plus, apparently, a slight decrease in high-level clouds (which absorb and slow down outgoing LW radiation).

    So the statement in early 2007 by IPCC, “Cloud feedbacks remain the largest source of uncertainty”, may be getting cleared up (not exactly as IPCC had anticipated), and the overall 2xCO2 climate sensitivity of 3.2°C seems not to be supported by the latest findings on “cloud feedbacks”.

    Max

  204. 1. ‘Hidden in the pipeline’ is just a phrase. It isn’t there yet. But, “So it’s NOT THERE, and a theoretical “established mechanism” is meaningless; in fact, it has been refuted by the observed facts.” – it seems you are doubting the mere existence of heat capacity, and what observed facts are you looking at:
    http://www.realclimate.org/index.php/archives/2009/12/updates-to-model-data-comparisons/
    http://www.realclimate.org/index.php/archives/2010/01/2009-temperatures-by-jim-hansen/
    (see the 5 and 11-year running means. Seems the last decade is about 0.2 K warmer than the prior decade. And shorter term trends are within the range of shorter term trends produced by models).

    2.
    Sorry, Patrick. That’s just what you say, but you have not brought any evidence to back up your statement.
    Well I thought I had already posted these sites:
    http://www.realclimate.org/index.php/archives/2010/01/lindzen-and-choi-unraveled/
    http://www.realclimate.org/index.php/archives/2010/01/first-published-response-to-lindzen-and-choi/
    http://www.realclimate.org/index.php/archives/2010/01/lc-grl-comments-on-peer-review-and-peer-reviewed-comments/

    3.
    (this one shows this to be around –0.89 W/m^2°K as compared to the IPCC estimate from model simulations without superparameterization of +0.69 W/m^2°K.)
    Yes, that was very interesting and I’m glad you posted the link.
    For comparison: Suppose a climate model has a 0.69 W/(m2 K) cloud feedback and a sensitivity of 0.730 K/(W/m2), or 2.7 K/doubling CO2, where doubling CO2 has a forcing of 3.7 W/m2. The feedback including the Planck response (the Planck response is about -3.76 W/(m2 K) for a 255 K blackbody, and similar for the climate system) would be -1/climate sensitivity = -1.37 W/(m2 K). Removing a cloud feedback of 0.69 W/(m2 K) then gives the feedback -2.06 W/(m2 K) and a sensitivity of 0.485 K/(W/m2) or 1.80 K/doubling CO2; then adding a -0.885 W/(m2 K) cloud feedback yields a feedback of -2.95 W/(m2 K) and a sensitivity of 0.340 K/(W/m2) or 1.26 K/doubling CO2.
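
    The same chain as a short script, in case the sign conventions get confusing (numbers exactly as above):

    F2X = 3.7                        # W/m^2 per doubling of CO2

    def per_doubling(net_feedback):  # net feedback in W/(m^2 K), must be < 0
        return -F2X / net_feedback   # equilibrium warming per doubling, K

    lam = -1.0 / 0.730                       # 2.7 K/doubling model: fb = -1.37
    print(per_doubling(lam))                 # -> 2.70 K/doubling (sanity check)
    print(per_doubling(lam - 0.69))          # remove +0.69 cloud fb -> 1.80
    print(per_doubling(lam - 0.69 - 0.885))  # add -0.885 cloud fb   -> 1.26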

    But the actual SP-CAM model (superparameterized clouds) of
    ftp://eos.atmos.washington.edu/pub/breth/papers/2006/SPGRL.pdf :
    (my emphasis added)

    [6] SP-CAM, described in detail in K05, is based on the NCAR CAM which is the atmosphere component of the NCAR Community Climate System Model (CCSM) [Blackmon et al., 2001]. A development version of CAM 3 with the *semi-Lagrangian dynamical core* is configured to run at a T42 horizontal grid (2.8° × 2.8° spacing) with 30 levels (domain top at 3.6 hPa), and a time-step of 30 min. In each of the 8192 grid-columns of the MMF, the conventional moist-physics parameterizations including all cloud parameterizations are replaced with a CRM. The CRM also replaces CAM vertical diffusion and PBL parameterizations.

    has a climate sensitivity of 0.41 K/(W/m2) or (ignoring sig. figs for the moment) 1.517 K/doubling CO2, which implies a feedback of -2.44 W/(m2 K); *if* this model had produced a sensitivity of 2.7 K/doubling with a 0.69 W/(m2 K) cloud feedback, and the cloud feedback is instead -0.885 W/(m2 K), then the other feedbacks would have to have increased by 0.506 W/(m2 K).

    • Patrick027

      We seem to be dancing around in circles here.

      To the “hidden in the pipeline” postulation you state:

      ‘Hidden in the pipeline’ is just a phrase. It isn’t there yet. But, “So it’s NOT THERE, and a theoretical “established mechanism” is meaningless; in fact, it has been refuted by the observed facts.” – it seems you are doubting the mere existence of heat capacity, and what observed facts are you looking at

      Not doubting “heat capacity” at all, just doubting the “voodoo science” of “hidden energy” that cannot be seen or measured anywhere in the “observed facts”: the atmosphere at the surface as well as the troposphere is cooling (HadCRUT, UAH), as is the ocean (Argo), so there is no added “sensible heat”. There is no increase in water vapor (so there is no added “latent heat”) and the tiny bit of ice that has melted hardly contains enough latent heat to be measurable. So where, Patrick, is this “hidden energy”? Has it disappeared into outer space or somehow been miraculously transferred to the depths of the deep ocean? (If so, we’ll never see it again, so we can forget about it). Or is it “hiding under a rock” somewhere? Your “heat capacity” argument disproves the “hidden in the pipeline” postulation.

      As far as recent temperatures go, you opined:

      (see the 5 and 11-year running means. Seems the last decade is about 0.2 K warmer than the prior decade. And shorter term trends are within the range of shorter term trends produced by models).

      Patrick, just look at the linear trend after 2000. All records show cooling. The “last decade is warmer than the previous” is a meaningless argument, Patrick. It is the trend that counts. If you want to compare years, the last five years are cooler than the previous five years (see the monthly HadCRUT record):
      The average temperature anomaly in 2000 through 2004 (the first five years) was 0.462degC.
      The average temperature anomaly in 2005 through 2009 (the last five years) was 0.414degC.

      Now I do not claim that this is a trend analysis (any more than your “nineties”/“noughties” comparison), but it tells me that the period 2005-2009 was around 0.048degC cooler than the preceding period 2000-2004. Big deal.

      The “models” predicted that “the next two decades” would see a warming of about 0.2°C per decade; instead the first nine years have seen a cooling of about 0.1°C per decade. This is a lousy prediction, Patrick, no matter how you slice it.

      Patrick, when you cite RealClimate as “scientific evidence” to support the “hidden in the pipeline” hypothesis, you must be kidding!

      Show me a scientific study based on empirical data from actual physical observations (not questionable paleoclimate reconstructions) to (a) support the “hidden in the pipeline” hypothesis or (b) provide evidence that the total net feedback is positive, as assumed by the model simulations. If you cannot, then we have to accept the studies that are out there (Spencer et al. and Lindzen and Choi) and discount the “hidden in the pipeline” postulation as “voodoo science”.

      The superparameterization study I cited showed a fairly strong negative global feedback from clouds of –0.89W/m2K.

      IPCC assumed a strong positive feedback of +0.69 W/m2K and stated (AR4 Ch. 8, p.633) that this feedback would increase the 2xCO2 climate sensitivity by +1.3°C from +1.9°C±0.15°C to +3.2°C±0.7°C, so it is clear that removing this cloud feedback would put the 2xCO2 CS back at +1.9°C and adding in a negative cloud feedback slightly greater than the positive one that was removed would result in a CS that is significantly below +1.9°C, most likely somewhere around +0.7°C to +1.0°C, which would all make sense, in light of the Spencer et al. observations for the tropics.

      Patrick, you can wiggle and squirm as much as you want to and postulate all sorts of theoretical calculations, but the model-generated “hidden in the pipeline” hypothesis and with it the strong net positive feedback plus the 2xCO2 climate sensitivity of 3.2°C have been refuted by the facts on the ground.

      Max

  205. … in order to make the climate sensitivity what it is in this model.

    ———
    PS: the paper defines climate sensitivity as (using global means) surface temperature change / change in net outgoing radiation.

    Whether the radiation flux is taken at TOA or tropopause level is not specified. With respect to forcing, climate sensitivity is often or typically given for the tropopause level forced net flux change with equilibrated stratosphere.

    But since (in the global time average, after equilibration) the stratosphere is almost in radiative equilibrium, the net flux (SW and LW) at the tropopause is about the same as at TOA (above the stratosphere, etc.).

    The study left enough spin-up time to allow stratospheric equilibration (Holton p. 410, referring to the stratosphere: “the radiative relaxation time is short compared to the annual cycle”), so a change in net radiation flux at TOA should be the same as that at the tropopause level. Thus the tropopause level forcing (equilibrated stratosphere) would be the forcing that balances such a climate-dependent flux in order to sustain the climatic state.
    ———-

    Interesting point though: other climate sensitivities from other models put through the same test (a fixed 2 K SST increase):

    model ______________________________ K/(W/m2) __ implied K/doubling CO2
    SP-CAM (cloud-resolving model) _____ 0.41 ______ 1.517
    CAM 3.0, semi-Lagrangian dyn. core _ 0.41 ______ 1.517
    CAM 3.0, Eulerian dynamical core ___ 0.54 ______ 1.998
    GFDL AM2.12b _______________________ 0.65 ______ 2.405
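
    The “implied K/doubling” column is just each sensitivity multiplied by the 3.7 W/m2 doubling forcing:

    for model, sens in [("SP-CAM", 0.41), ("CAM 3.0 Eulerian", 0.54),
                        ("GFDL AM2.12b", 0.65)]:
        print(model, round(3.7 * sens, 3))   # -> 1.517, 1.998, 2.405 K/doubling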

    SP-CAM was a CAM 3 with a semi-Lagrangian dynamical core and a cloud-resolving superparameterization. If that was the same CAM 3 as the CAM 3.0 without the superparameterization, then the superparameterization didn’t change the global average outgoing radiation response and thus the climate sensitivity. Maybe that’s just a coincidence, or maybe CAM 3.0 by itself has decent cloud parameterization? It would be interesting to see how superparameterization improves other GCM models (and how the other models listed for comparison have performed under more complete simulations of climate forcing), and:

    A next step with SP-CAM is to couple it to a slab-ocean model so that cloud responses in more realistic climate change scenarios can be evaluated.

    … and then an ocean and atmosphere GCM, because:

    we analyze the climate change as the SST is uniformly increased by 2 K.

    This won’t include the direct radiative effect of CO2 on the heating/cooling of cloud layers (how significant would that be?) (or perhaps some other things related to stratospheric cooling, etc.)

    Some large scale dynamics and the distribution of feedbacks might be affected by variations from global mean SST increase.

    BUT:

    This combined model, the SP-CAM, shows substantial improvement over CAM in simulating important aspects of the modern climate, including producing a realistic Madden-Julian oscillation and a more realistic diurnal cycle of convection over land (K05).

    That’s something!

  206. About the more recent superparameterization work:
    ftp://eos.atmos.washington.edu/pub/breth/papers/2009/SPCAM-LowCldSens-CRM-JAMES.pdf
    (Blossey et al 2009)

    … haven’t gone through more of it yet, but:

    Hence, the negative low cloud feedbacks in SP-CAM may be
    exaggerated by under-resolution of trade cumulus boundary layers.

    So the cloud feedback (or some part of it) might be less in the negative direction than in the earlier study:
    ftp://eos.atmos.washington.edu/pub/breth/papers/2006/SPGRL.pdf
    (Wyant et al 2006)

    • Patrick027

      On net cloud feedback with 2K warming, the 2006 study using superparameterization concludes:

      The global annual mean changes in shortwave cloud forcing (SWCF), longwave cloud forcing (LWCF), and net cloud forcing for SP-CAM are -1.94 W m-2, +0.17 W m-2, and -1.77 W m-2, respectively.

      This translates to a global net negative cloud feedback of -0.89 W/m2K, as compared to the net positive feedback of +0.69 W/m2K assumed by the IPCC models without superparameterization.

      The more recent paper lists SW + LW results between -5.2 W/m2 and -0.2 W/m2 (for 2 K), with an average of -2.26 W/m2, i.e. -1.13 W/m2K (a bit larger in magnitude than in the first study).

      In both cases, the superparameterization results show a strong net negative cloud feedback, which essentially changes the 2xCO2 climate sensitivity from 3.2degC (IPCC) to somewhere below 1degC.

      Max

      • No manacker, I haven’t gone through the more recent study yet, so maybe I misunderstood something there, but I don’t have confidence in your assertions because you are wrong about the 2006 study: in spite of the change in cloud feedback there, the sensitivity only drops to 0.41 K/(W/m2) (clearly stated in the paper), which implies 1.5 K per doubling of CO2, NOT less than 1 K per doubling.
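
        To make the arithmetic explicit – a rough sketch, assuming feedbacks simply add to or subtract from the no-cloud-feedback response implied by the AR4 numbers quoted earlier in this thread (real models do not combine feedbacks this linearly):

            F_2x = 3.7                    # W/m2 per CO2 doubling (standard value)
            cs_no_cloud = 1.9             # K; 2xCO2 sensitivity with cloud feedback removed (AR4 figure quoted above)
            lam_ref = F_2x / cs_no_cloud  # ~1.95 W/m2K net response without cloud feedback

            for label, lam_cloud in [("+0.69 W/m2K (AR4 mean cloud feedback)", 0.69),
                                     ("-0.89 W/m2K (Wyant et al 2006, SP-CAM)", -0.89)]:
                print(label, "->", round(F_2x / (lam_ref - lam_cloud), 2), "K per doubling")

        Even with the full -0.89 W/m2K substituted, this crude algebra lands near 1.3 K per doubling – in the same neighborhood as the 0.41 K/(W/m2) stated in the paper, and well above 1 K.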

  207. Patrick027

    The original study using superparameterization to determine the net global cloud feedback found this to be a strongly negative –0.89 W/m2K, compared to a strongly positive net feedback of +0.69 W/m2K (IPCC models with no superparameterization).

    The later study you and Blouis both cited shows (Table2) the results from the various runs.

    For the 2K increase in temperature, the SW cloud feedback varies from –0.2 to –5.4 W/m2 (average of –2.5 W/m2), and the LW cloud feedback varies from +0.2 to –0.5 W/m2 (average of –0.2 W/m2). The average total SW+LW cloud feedback is –2.7 W/m2 for the 2K increase, or –1.3 W/m2K (slightly larger in magnitude than, but in the same range as, the result from the first study).

    It appears that this improved methodology is helping clear up IPCC’s “largest source of uncertainty” on “cloud feedbacks”. The SW portion is strongly negative and even the LW portion is marginally negative.

    These new data also point to a 2xCO2 climate sensitivity of below 1°C, rather than 3.2°C, as estimated by the IPCC models without superparameterization.

    Max

  208. Patrick027

    To your January 28, 12:55pm.

    We are beginning to beat a dead horse on the solar studies, which on average concluded that around half of the observed warming over the 20th century can be attributed to the unusually high level of solar activity (the highest in several thousand years).

    Geerts and Linacre concluded

    “the sensitivity of climate to the effects of solar irradiance is about 27% higher than its sensitivity to forcing by greenhouse gases”.

    To this conclusion you opined:

    Did the study state that solar forcing was 2.64 W/m2? Maybe in an unquoted portion (which would then likely be wrong), but the quoted portion only says that the SENSITIVITY is 1.27 times that for solar forcing what it is for greenhouse gases. That statement by itself indicates nothing of what the solar forcing actually is. So I took a high end estimate of solar forcing and multiplied it by 1.27.

    Your calculation approach is flawed, Patrick.

    If the total observed 20th century warming was 0.65°C and the warming (i.e. sensitivity) attributable to changes in solar forcing was 1.27 times that attributable to GHGs and other anthropogenic factors, then we can calculate the warming from anthropogenic factors = 0.65 / 2.27 = 0.29°C (45% of total) and that from solar forcing = 0.29 * 1.27 = 0.36°C (55% of total).

    So, compared to some other studies, which put the solar portion at around half of the total, G+L were on the higher side, with slightly more than half (55%) attributable to solar forcing.

    Max

    • If the total observed 20th century warming was 0.65°C and the warming (i.e. sensitivity) attributable to changes in solar forcing was 1.27 times that attributable to GHGs and other anthropogenic factors, then we can calculate the warming from anthropogenic factors = 0.65 / 2.27 = 0.29°C (45% of total) and that from solar forcing = 0.29 * 1.27 = 0.36°C (55% of total).

      The problem is, the sensitivity is warming PER UNIT FORCING. My approach stands.
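
      A minimal sketch of the difference between the two readings (the sensitivity and forcing values below are placeholders for illustration, not numbers from Geerts and Linacre):

          total_warming = 0.65   # degC, observed 20th-century warming
          ratio = 1.27           # solar sensitivity / GHG sensitivity (quoted above)

          # Reading 1: treat 1.27 as a ratio of warming *shares*:
          ghg_share = total_warming / (1 + ratio)
          print(round(ghg_share, 2), round(ratio * ghg_share, 2))      # 0.29 and 0.36 degC

          # Reading 2: sensitivity is warming per unit forcing, so the
          # split also depends on the forcings themselves:
          s_ghg = 0.3                    # K/(W/m2), placeholder GHG sensitivity
          F_ghg, F_solar = 2.4, 0.3      # W/m2, placeholder forcings
          print(round(s_ghg * F_ghg, 2), round(ratio * s_ghg * F_solar, 2))  # a very different split

      With sensitivity defined per unit forcing, the 1.27 ratio says nothing about how the warming splits until the two forcings are specified.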

  209. Patrick027

    Let’s cap this discussion off, as it is becoming repetitive.

    I have posted links to several studies which all point to a climate sensitivity of around 1C or less.

    These are based on physical observations (CERES/ERBE) as well as superparameterization studies on clouds, which confirm that the total outgoing SW+LW radiation increases with temperature, thereby resulting in a net negative feedback.

    Then I have posted links to several solar studies, which attribute roughly half of the observed 20th century warming to the unusually high level of solar activity, leaving the other half for anthropogenic factors. These studies state that the solar portion was higher in the early 20th century than it was after 1970. This all fits very well with a 2xCO2 sensitivity of around 1C.

    You have brought no links to studies based on recent physical observations, which confirm the premise of a 2xCO2 sensitivity of 3.2C (as assumed by IPCC model simulations), but simply refer me to IPCC (which does not list such studies).

    So that’s where we stand today. You are unable to bring empirical data based on physical observations from today (not dicey paleoclimate reconstructions) to support the postulation of the 3.2C sensitivity (and its related postulation of energy “hidden in the pipeline” despite a recently cooling planet).

    The case for these strange postulations does not look good, Patrick, but there is no point beating this dog any further.

    Thanks for an interesting discussion.

    Max

    • Criticism of Lindzen and Choi:
      Just please read through this (1 of 3 of the websites I’ve posted before – doesn’t require you to download anything, by the way); I’m sorry the actual scientific paper requires subscription but here’s an author’s explanation:
      http://www.realclimate.org/index.php/archives/2010/01/lindzen-and-choi-unraveled/

      I guess I could sum up as follows:

      The LC09 conclusion requires picking a subset of the data to analyze (in the sense of cherry-picking, though I am not directly accusing LC09 of that intention).

      LC09, even given their results, miscalculate the implied sensitivity – it would be higher with the correct calculation.

      It is risky to base climate sensitivity on such a subset of the global climate system.

      • Patrick027

        You have cited a report which questions the calculations made by L+C.

        The part of L+C that still stands is the observation from ERBE satellites that total outgoing SW + LW radiation has increased with surface temperature, rather than decreased (as simulated by the models).

        One can argue about the details of the calculations, but this points out that something is happening with warming to increase outgoing radiation. From the numbers it appears that the primary increase is in SW radiation, more than likely reflected from increased low-level clouds which form at higher surface temperatures. In addition, the outgoing LW radiation appears to have increased with warming as well, possibly providing observational support for Lindzen’s “infrared thermostat” hypothesis, whereby high-level heat-trapping clouds decrease with surface warming.

        Who knows the exact “why” at this time? And maybe the L+C calculation of how much this impacts overall feedback has errors, but there is no question that total outgoing radiation increases with warming, providing some sort of “negative feedback” rather than a “positive feedback” as simulated by the climate models.

        And that is the key finding of L+C.

        Max

      • The part of L+C that still stands is the observation from ERBE satellites that total outgoing SW + LW radiation has increased with surface temperature, rather than decreased (as simulated by the models).

        One can argue about the details of the calculations, but there is no question that total outgoing radiation increases with warming, providing some sort of “negative feedback” rather than a “positive feedback” as simulated by the climate models.

        Did you read through the criticisms AT ALL?

        And that is the key finding of L+C.

        The key finding was wrong.

  210. Marco

    We exchanged posts on the controversy surrounding the GISS temperature record, Hansen and Schmidt, so I did some more checking.

    If I Google “GISS temperature record controversy”, I get 54,900 hits (including many duplicates and multiples).

    Here is a recent NASA press release
    http://www.giss.nasa.gov/research/news/20100121b/

    Gavin Schmidt, a climatologist at NASA’s Goddard Institute for Space Studies (GISS) in New York City, studies why and how Earth’s climate varies over time. He offered some context on the annual surface temperature record, a data set that’s generated considerable interest — and some controversy — in the past.

    (Bold type by me.)

    And on a blogger post by John Ray on “chartmanship and manipulated data from GISS”:
    http://www.bloggernews.net/113402
    Quoting a comment from F. James Cripwell

    I am suspicious that Jim Hansen and Gavin Schmidt are closely connected with the NASA/GISS data, but they are very competent scientists with impressive credentials. If you ask for a linear least squares regression analysis, you find a linear trend of increasing temperatures. However, if you ask for a non-linear analysis, NASA/GISS shows an increasing trend, but the other three show that temperatures has passed through a maximum, and are now decreasing.

    Quoting a study by D’Aleo and Smith published by SPPI, this report describes GISS data manipulation by eliminating measurements from Canadian Arctic stations from the record and “cherry picking” the rest:
    http://climaterealists.com/index.php?id=4958&linkbox=true&position=9

    Mr. D’Aleo and Mr. Smith say NOAA and another U.S. agency, the NASA Goddard Institute for Space Studies (GISS) have not only reduced the total number of Canadian weather stations in the database, but have “cherry picked” the ones that remain by choosing sites in relatively warmer places, including more southerly locations, or sites closer to airports, cities or the sea — which has a warming effect on winter weather.
    Over the past two decades, they say, “the percentage of [Canadian] stations in the lower elevations tripled and those at higher elevations, above 300 feet, were reduced in half.”

    D’Aleo also points out

    Click to access NOAAroleinclimategate.pdf

    The global data bases have serious problems that render them useless for determining accurate long term temperature trends. Especially since most of the issues produce a warm bias in the data

    There was the September/October 2008 temperature screw-up that led GISS to proclaim a record hot October (until it was discovered that September data were reported for October) plus this hilarious report on WUWT explaining the Russian temperature anomaly on GISS:

    GISS, NOAA, GHCN and the odd Russian temperature anomaly – "It's all pipes!"

    OK. Enough said.

    So there is undoubtedly some controversy surrounding the GISS temperature record (as NASA itself concedes), plus some indications of data manipulation via station elimination and “cherry picking”.

    So maybe Booker was not that far off

    Max

    • A NASA website on satellite temperatures, from 1997 (prior to the more widespread controversy), demonstrated a lack of warming a decade ago.
      http://science.nasa.gov/newhome/headlines/essd06oct97_1.htm

      • Blouis79

        Regarding the UAH satellite temperature record, there was a retroactive slight upward correction to the record around three years ago to correct for “orbital decay”. The report you cited predated this correction.

        The corrected record shows a warming trend since 1979, but at a much lower rate than that shown by the two principal surface temperature records (HadCRUT and GISTEMP).

        This presents a dilemma for those who believe that AGW is the primary cause of post 1979 warming: GH warming should actually show more rapid warming in the troposphere than at the surface. The observed fact that this is not happening either (a) raises doubts concerning AGW as the principal cause of the warming or (b) demonstrates that the UHI effect (and possibly some data manipulation) is causing the surface record to show more warming than has really occurred.

        I have seen no definitive responses to this dilemma to date from those who believe that AGW is the principal cause for post-1979 warming.

        Maybe Patrick027 has some ideas how this dilemma can be explained.

        Max

      • This presents a dilemma for those who believe that AGW is the primary cause of post 1979 warming: GH warming should actually show more rapid warming in the troposphere than at the surface. The observed fact that this is not happening either (a) raises doubts concerning AGW as the principal cause of the warming or (b) demonstrates that the UHI effect (and possibly some data manipulation) is causing the surface record to show more warming than has really occurred.

        1. That presents a dilemma for any attribution of the warming – solar, greenhouse, or perhaps other things as well – because (outside higher latitudes) the level of maximum warming is expected to occur at some height above the surface as a general feature of generic warming. Something special has to occur for this not to be the case.

        However, it’s conceivable… (diurnal cycle largest at the surface and very high above the tropopause, problems with measurements, uncertainty in observations, etc.)

        (Come back to that when time allows, jump to point 2.)

        2. UHI has been effectively ruled out as a cause for the temperature record. There are no cities in the ocean. There are no urban centers floating on the sea ice. And UHI is something scientists look for and try to eliminate in analyzing station data. (And the issue is not so much UHI as changes in UHI at a given station over time.)

  211. FROM
    http://www.realclimate.org/wiki/index.php?title=Roy_Spencer

    “Feedback vs. Chaotic Radiative Forcing: “Smoking Gun” Evidence for an Insensitive Climate System? Roy W. Spencer and William D. Braswell, Climate Science: Roger Pielke Sr. Research Group News, July 17, 2008”

    (nice simple model, e-folding time of disequilibrium)
    http://tamino.wordpress.com/2008/07/28/spencers-folly/

    http://tamino.wordpress.com/2008/07/30/spencers-folly-2/

    http://tamino.wordpress.com/2008/08/01/spencers-folly-3/

    The link to the original Spencer paper doesn’t work, but from Tamino’s 3-part response, it seems this was Spencer’s work showing a graph of radiation changes vs temperature, where there is some long-term drift and some short term variability with a different slope.

    Tamino points out that the short-term responses would make the climate sensitivity appear closer to ~1 K per CO2 doubling, because feedbacks other than the temperature (Planck) response take time. Of course, water vapor and clouds shouldn’t take much time, so how much that matters for those feedbacks depends on the timescale involved.
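
    (For reference, the “nice simple model” is a one-box energy balance, C dT/dt = F − λT, with e-folding time τ = C/λ. A minimal sketch with illustrative parameter values – these are assumptions, not Tamino’s or Spencer’s actual numbers – shows how far short of equilibrium the response sits on short timescales:)

        import math

        C   = 8.4e8     # J/(m2 K), heat capacity (~200 m ocean mixed layer; assumed)
        lam = 1.25      # W/(m2 K), feedback parameter (~3 K per doubling; assumed)
        F   = 3.7       # W/m2, step forcing (2xCO2)
        yr  = 3.156e7   # seconds per year
        tau = C / lam   # e-folding time of disequilibrium; ~21 years here

        for t in (5, 10, 20, 50, 100):  # years after the step forcing
            T = (F / lam) * (1.0 - math.exp(-t * yr / tau))
            print(f"{t:3d} yr: {T:.2f} K of {F / lam:.2f} K at equilibrium")

    After 10 years the realized warming is only ~37% of the equilibrium value, which is why sensitivity inferred from short-term responses comes out biased low.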

    ——-

    However, other work by Spencer seems to assume that some portion of cloud feedback is not in response to the forced changes in global average temperature and associated changes. Which, of course, is true – there is internal variability, some of which involves rearrangements of heat, etc., and some of which involves fluctuations in clouds, and presumably humidity, snow cover, etc. However…

    http://www.realclimate.org/index.php/archives/2008/05/how-to-cook-a-graph-in-three-easy-lessons/

    Roy has given himself a lot of elbow room to play around in: you have the choice of any two variability indices among dozens available, you make an arbitrary linear combination of them to suit your purposes, you choose whatever mixed layer depth you want, and you finish it all off by allowing yourself the luxury of diddling the initial condition. With all those degrees of freedom, I daresay you could fit the temperature record using hog-belly futures and New Zealand sheep population. Anybody want to try?

    (See also:
    http://www.realclimate.org/index.php/archives/2007/05/fun-with-correlations/ )

    Spencer did something similar here:

    http://www.drroyspencer.com/category/blogarticle/
    Wednesday, January 27th, 2010

    In which he makes a linear combination of the AMO, PDO, and SOI (ENSO), assumes that a temperature anomaly exists which is a linear combination of these indices, and assumes that this accounts for much of the difference between model output (averaged over internal variability, I think) and the historical temperature record.

    Except that he does this for one portion of the time period to establish a possible temperature effect of this combined AMO/PDO/SOI index.

    And then projects the effect onto the remaining time period to show that a portion of the model trend is accounted for by the AMO/PDO/SOI index – and so the model output climate sensitivity is too high.
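
    A minimal sketch of why such a fit-then-project procedure is fragile – here the “indices” are pure random noise (an illustrative assumption; no real AMO/PDO/SOI data), yet the in-sample fit improves with every added index:

        import numpy as np

        rng = np.random.default_rng(1)
        n = 130                                   # years of 'residual' to explain
        resid = rng.normal(0.0, 0.1, n)           # fake model-minus-obs residual
        train, test = slice(0, 60), slice(60, n)  # fit early, project late

        for k in (3, 10, 30):                     # number of arbitrary 'indices'
            idx = rng.normal(0.0, 1.0, (n, k))    # pure noise standing in for indices
            beta, *_ = np.linalg.lstsq(idx[train], resid[train], rcond=None)
            fit = idx @ beta
            r_in  = np.corrcoef(fit[train], resid[train])[0, 1]
            r_out = np.corrcoef(fit[test],  resid[test])[0, 1]
            print(k, round(r_in, 2), round(r_out, 2))  # in-sample r tends to grow with k;
                                                       # out-of-sample r hovers near zero

    The in-sample correlation rises with every extra degree of freedom even though the “indices” carry no information at all, while the projected (out-of-sample) skill stays near zero – the “hog-belly futures” point from the quote above.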

    Problems:

    1. This approach assumes that the model output was correct in the earlier time period. True, if a hypothesis or theory contradicts itself, then it can’t be true. However, the contradiction is contingent on the explanation of the difference between model output (averaged over at least some of the internal variability) and observations, and so the contradiction could just as easily be used to argue that Spencer’s derived relationship of temperature to the linear combination of AMO, PDO, and SOI is wrong. Alternatively, the model could be too sensitive only to the forcings that dominated the later portion of the time period, but there isn’t evidence within this work to specifically point to that conclusion.

    2. None of this is to say that the AMO, PDO, and SOI don’t contribute to the internal variability of the global average temperature. However, changes in the AMO, PDO, SOI, or any other such mode of internal variability can also be part of a feedback to a forced change. So even if some portion of a longer-term trend can be accounted for by such internal variability modes (including any radiative feedbacks to those modes, such as via clouds), this doesn’t automatically remove that portion from the climate response to a forcing.

    (PS some time ago, I recall reading that a scientist had found a pattern of temperature change in the paleoclimatic record which resembled that of the 20th century – warming, then cooling, then warming – BUT with an important difference: the warming periods of the 20th century had more warming, and the cooling periods had less cooling, compared to the paleoclimatic record. The difference implies an additional warming trend over the 20th century absent in the similar paleoclimatic record. I don’t know what became of this specific finding, though.)


  212. … Some other problems with that post by Spencer:

    ( http://www.drroyspencer.com/category/blogarticle/
    Wednesday, January 27th, 2010 )

    1.

    There are a couple of notable features in the above chart. First, the average warming trend across all 17 climate models (+0.64 deg C per century) exactly matches the observed trend…I didn’t plot the trend lines, which lie on top of each other. This agreement might be expected since the models have been adjusted by the various modeling groups to best explain the 20th Century climate.

    Models do exist based on statistics (matching model output to a data set) and this is entirely appropriate for some applications.

    But climate models are generally based on knowledge of actual physical relationships. Even the parameterizations are based on observations. To the extent that parameterizations are tuned to have model output match observations, this is done for time-averaged climate states, not for trends. Thus it is an actual test of models to match trends.

    2.

    An optimum linear combination of the PDO, AMO, and SOI that best matches the models’ “unexplained temperature variability” is shown as the dashed magenta line in the next graph.

    a. Problem of assuming that those indices are, or are correlated with, the cause of the ‘unexplained temperature variability’ – see above ( http://www.realclimate.org/index.php/archives/2008/05/how-to-cook-a-graph-in-three-easy-lessons/ , http://www.realclimate.org/index.php/archives/2007/05/fun-with-correlations/ )

    b. A labelling issue – specific instances of unforced variability have very low predictability beyond time horizons limited by ‘butterfly effects’, but the existence and the general texture/statistics of such variability are predictable in principle. Models do produce internal variability – not exactly correctly, and to varying degrees of accuracy for different modes of variability – and at least most of the observed variability in global average temperature falls within model variability (http://www.realclimate.org/index.php/archives/2008/05/what-the-ipcc-models-really-say/ , see also graphs pp. 684-686 in IPCC AR4 WGI Ch9 **)

    3.

    The Logical Absurdity of Some Climate Sensitivity Arguments
    This demonstrates one of the absurdities (Dick Lindzen’s term, as I recall) in the way current climate change theory works: For a given observed temperature change, the smaller the forcing that caused it, the greater the inferred sensitivity of the climate system. This is why Jim Hansen believes in catastrophic global warming: since he thinks he knows for sure that a relatively tiny forcing caused the Ice Ages, then the greater forcing produced by our CO2 emissions will result in even more dramatic climate change!

    But taken to its logical conclusion, this relationship between the strength of the forcing, and the inferred sensitivity of the climate system, leads to the absurd notion that an infinitesimally small forcing causes nearly infinite climate sensitivity(!) As I have mentioned before, this is analogous to an ancient tribe of people thinking their moral shortcomings were responsible for lightning, storms, and other whims of nature.

    This absurdity is avoided if we simply admit that we do not know all of the natural forcings involved in climate change. And the greater the number of natural forcings involved, then the less we have to worry about human-caused global warming.

    3a. If a smaller forcing does actually cause the same change, then the sensitivity was indeed larger. The problem is in assuming that a particular temperature change was caused by a particular forcing or combination of forcings. But effects do have to have causes.

    What this really boils down to is how much of the temperature change is internal variability or due to unidentified forcings.

    The problem with Spencer’s assertion is that it was not simply assumed at the outset that the identified external forcings caused all or most of the longer-term trends. Models were developed based on physics and observations, produced output with some climate sensitivity, and that output has a significant amount of similarity to observed changes. These models do not generally produce absurdly high or negative climate sensitivities with respect to long-term trends; the physical basis should tend to prevent that. More detailed work has been done than just looking at the global average temperature change. And then there are the paleoclimatic comparisons, etc. Scientists are aware of internal variability and natural variability as a whole, and the balance of natural vs anthropogenic contributions has been an issue scientists have been looking at, including in work contributing to IPCC documents. This alone is not to say that the IPCC is automatically correct in its conclusions, but it just isn’t true that a baseless assumption was made about the importance of anthropogenic forcing.

    The truly external forcing of glacial-interglacial variations was indeed small in the global annual average; orbital forcing has its major effects via latitudinal and seasonal rearrangements of available solar radiation. Such changes affect snow cover and are more or less favorable to the formation and growth, or decay and disintegration, of ice sheets. The albedo feedback does contribute a global average feedback. Then there are other feedbacks, notably a positive CO2 feedback. The ice sheet, CO2, and vegetation feedbacks take time to respond and thus are not part of the feedbacks considered in Charney sensitivity, which is the sensitivity that tends to apply more, and perhaps more predictably (some feedbacks may be stochastic in nature – events of low predictability which nonetheless, in principle, might be nearly inevitable over sufficient time under sufficient conditions – and ice sheet responses are hard to predict and contingent on various factors), to shorter-term variations such as the next 50 years. To calculate a Charney sensitivity applicable to glacial-interglacial changes, the ice sheet and CO2 changes (and some others) are treated as forcings.

    ——
    **PS also from p.685:

    The observed trend over the entire 20th century (Figure 9.6, top left panel) shows warming almost everywhere with the exception of the southeastern USA, northern North Atlantic, and isolated grid boxes in Africa and South America (see also Figure 3.9). Such a pattern of warming is not associated with known modes of internal climate variability. For example, while El Niño or El Niño-like decadal variability results in unusually warm annual temperatures, the spatial pattern associated with such a warming is more structured, with cooling in the North Pacific and South Pacific (see, e.g., Zhang et al., 1997). In contrast, the trends in climate model simulations that include anthropogenic and natural forcing (Figure 9.6, second row) show a pattern of spatially near-uniform warming similar to that observed. There is much greater similarity between the general evolution of the warming in observations and that simulated by models when anthropogenic and natural forcings are included than when only natural forcing is included (Figure 9.6, third row). Figure 9.6 (fourth row) shows that climate models are only able to reproduce the observed patterns of zonal mean near-surface temperature trends over the 1901 to 2005 and 1979 to 2005 periods when they include anthropogenic forcings and fail to do so when they exclude anthropogenic forcings. Although there is less warming at low latitudes than at high northern latitudes, there is also less internal variability at low latitudes, which results in a greater separation of the climate simulations with and without anthropogenic forcings.

    … and then some additional discussion of how models only simulate observed warming since 1970 in particular when anthropogenic forcing is included, but then interestingly (p.686):

    Modelling studies are also in moderately good agreement with observations during the first half of the 20th century when both anthropogenic and natural forcings are considered, although assessments of which forcings are important differ, with some studies finding that solar forcing is more important (Meehl et al., 2004) while other studies find that volcanic forcing (Broccoli et al., 2003) or internal variability (Delworth and Knutson, 2000) could be more important. Differences between simulations including greenhouse gas forcing only and those that also include the cooling effects of sulphate aerosols (e.g., Tett et al., 2002) indicate that the cooling effects of sulphate aerosols may account for some of the lack of observational warming between 1950 and 1970, despite increasing greenhouse gas concentrations, as was proposed by Schwartz (1993). In contrast, Nagashima et al. (2006) find that carbonaceous aerosols are required for the MIROC model (see Table 8.1 for a description) to provide a statistically consistent representation of observed changes in near-surface temperature in the middle part of the 20th century. The mid-century cooling that the model simulates in some regions is also observed, and is caused in the model by regional negative surface forcing from organic and black carbon associated with biomass burning. Variations in the Atlantic Multi-decadal Oscillation (see Section 3.6.6 for a more detailed discussion) could account for some of the evolution of global and hemispheric mean temperatures during the instrumental period (Schlesinger and Ramankutty, 1994; Andronova and Schlesinger, 2000; Delworth and Mann, 2000); Knight et al. (2005) estimate that variations in the Atlantic Multi-decadal Oscillation could account for up to 0.2°C peak-to-trough variability in NH mean decadal temperatures.

    ___________

    PS another point:

    As noted in the second-to-last blockquote, modes of internal variability don’t necessarily or generally produce the same spatial and temporal patterns that would correlate to changes caused by some external forcing – and the differences also exist among different modes of internal variability (obviously), and to some extent among different forcing agents, although the similarities of climate response to different forcing agents with the same global average forcing value can outweigh the differences if the forcings are not too idiosyncratic (orbital forcing is very idiosyncratic, as is biological evolution, and, at least in some ways, continental drift).

    On the latter point: hence the potential for different climate sensitivities to different forcings. Note that this has not escaped IPCC documents. This can also have importance for regional climatic effects. However, notice that both solar and greenhouse-gas forcing, and global warming/cooling in general, have similar tendencies in the latitudinal and seasonal distributions of temperature change within the troposphere and at the surface (but have different effects on the diurnal temperature cycle and on the stratosphere); *(I think)* the magnitude of vertical redistribution of radiant heating and cooling within the troposphere and surface via the water vapor feedback might also overwhelm the differences among various greenhouse gases and between those and solar forcing *(? but I’m not quite sure)* – anthropogenic aerosols, on the other hand, can have some important effects on the vertical distribution of radiant heating/cooling below the tropopause level (which can and will affect convection and circulation patterns).

    On the point about internal variability – this suggests that it is *not necessarily the case* (it could still turn out to be approximately true?, but that’s not obvious, at least from what I personally know) that the feedback response to global average temperature changes attributable to internal variability would be the same as that involved in externally-forced changes.

    In fact, if particular modes of internal variability exist (which is true: QBO, ENSO, NAM (AO) and SAM, NAO, PDO, PNA, AMO), this could be because the feedbacks that respond to those forms of change are more positive or less negative than those that respond to other forms of change – in particular for modes that have a slow-evolving component in time series – except that this might not be the case if phases of such modes have intrinsically limited periods, which is the case for the QBO at least (the QBO is a bit like an internal clock of the climate system; it’s the closest thing to a regular cycle that is not forced by any external cycle; it has a combination of positive and negative feedbacks). However, it must also be noted that these positive feedbacks are not necessarily or generally radiative feedbacks that contribute to regional patterns of temperature change.

    For NAM and SAM in particular, there is a positive feedback of momentum flux to distributions of momentum, and in some ways this may work against a negative radiative feedback that would tend to push circulation patterns back to an externally radiatively-forced equilibrium.

    On shorter time scales, baroclinic and barotropic instability of Rossby waves exists because of positive feedback in the interaction of potential vorticity variations across a reversal of the potential vorticity gradient; yet the wavy perturbations that grow as a result can and do eventually reach amplitudes where simple exponential growth can no longer apply – they don’t grow forever; they are limited by the available heat and momentum distributions.

    On still shorter timescales, and with some similarity to the baroclinic and barotropic instabilities, there is an obvious positive feedback in the growth of small-horizontal-scale vertical overturning convection in unstable lapse rates (or, for cumulus convection, lapse rates greater than the moist adiabatic lapse rate) – and notice that convection cells can sustain their organization (when not too turbulent) and/or general existence in part by giving underlying or overlying air an initial lifting or sinking. However, the vertical heat transport has a negative feedback on the vertical (conditional) instability, so the rate of convection doesn’t continue to grow or shrink but tends to follow the rate of differential heating/cooling, tending (locally, and when/where pure radiative equilibrium is unstable to convection) to sustain a lapse rate of near-neutral stability (relative to moist adiabats for moist convection).

  213. We have not been told how much of the models is actually modelled versus driven by long-term data.

    A proper physics driven simulation model should be able to take an initial state and generate future physics based behaviour without any further input.

    Adding *external* system perturbations to the model should improve the accuracy of the model output – eg cosmic rays affecting clouds, solar activity.

    Similarly, adding internal system perturbations should do the same thing. Required internal system perturbations include at least:
    * greenhouse gas emissions
    * geothermal activity
    * anthropogenic thermal pollution

    Any input data should be transparently disclosed, since fossil fuels are limited in supply and their consumption cannot increase linearly forever.

    I suspect that the present models use long-term temperature data as input rather than output – wading through the model code is a pain.
    eg http://www.giss.nasa.gov/tools/modelE/modelEsrc/

    C**** SURFACE INTERACTION AND GROUND CALCULATION
    C****
    C**** NOTE THAT FLUXES ARE APPLIED IN TOP-DOWN ORDER SO THAT THE
    C**** FLUXES FROM ONE MODULE CAN BE SUBSEQUENTLY APPLIED TO THAT BELOW

    The construction above in ModelE harks of a runaway train rather than a model that could ever find equilibrium.

    • A proper physics driven simulation model should be able to take an initial state and generate future physics based behaviour without any further input.

      That describes the climate models of the sort used by the IPCC. Grid resolution is limited by computing power, and sub-grid scale phenomena have to be parameterized, but those parameterizations are constrained by observations of the actual phenomena; and where they are tuned in order for output to fit some reality, that reality is a temporal average and not a trend. Climate models are not tuned to fit climate trends, and thus it is a valid test to compare results to climate trends.

      Some models are tuned to fit trends or specific instances of fluctuations, but these are found in work by Spencer (to show that a greater amount of climate change is natural – with questionable methodology), and in some of the work that supposedly shows a greater contribution of solar forcing to the longer-term climate trend (via bad methodology).

  214. this suggests that it is *not necessarily the case* (it could still turn out to be approximately true?, but that’s not obvious, at least from what I personally know)

    A possible null hypothesis would be that the global radiative feedbacks are the same for any particular global average temperature variation, but that’s not the same as knowing it. (Similar null hypothesis that is actually a necessary condition of the former: sensitivity of global average temperature to global average forcing is the same or similar among different forcers – this is actually known not to be true. Notice though how knowing that two things might not be equal doesn’t imply that one choice is more likely to be the larger of the two.) Another similar null hypothesis is that (Charney?) climate sensitivity is the same across different climatic states – also again essentially known not to be true (Snowball Earth (is that included in Charney sensitivity, though?), runaway water vapor feedback) – but not known not to be almost true over smaller ranges, so far as I know.

    • but not known not to be almost true over smaller ranges, so far as I know.

      … about the present state, and for Charney sensitivity – (?) or at least not including longer-term feedbacks involved in glacial-interglacial transitions – maybe (it’s not a priori obvious that the CO2 feedback would act the same way in warming up from a peak interglacial state or cooling from a peak glacial state; ice sheets are another matter…).

  215. Changes to the ModelE soil module give the distinct impression that the physics model is crude and perpetually refined, which makes it all the harder to get any idea whether the model is an approximation of reality or a figment of the programmer’s imagination.

    c**** SLE001 E001M12 SOMTQ SLB211M9
    c**** (same as frank''s soils64+2bare_soils+old runoff)
    c**** change to evap calculation to prevent negative runoff
    c**** soils45 but with snowmelt subroutine snmlt changed
    c**** to melt snow before 1st layer ground ice.
    ccc comments from new soils
    c**** 8/11/97 - modified to include snow model
    c**** 10/29/96 - aroot and broot back to original values; added
    c**** call to cpars to change vegetation parameters for pilps.
    c**** 9/7/96 - back to heat capacities/10
    c**** 5/14/96 - added soils64 surface runoff changes/corrections
    c**** 11/13/95 - changed aroot and broot for rainf for 1.5m root depth
    c**** 10/11/95 - back to full heat capacity, to avoid cd oscillations.
    c**** changes for pilps: (reversed)
    c**** use soils100.com instead of soils45.com
    c**** set im=36,jm=24
    c**** set sdata,vdata and fdata to real*4
    c**** divide canopy heat capacities by 10.d0
    c**** change aroot of grass to 1.0d0, to reduce root depth.
    c**** end of changes for pilps
    c**** changes for pilps: (kept)
    c**** modify gdtm surface flux timestep limits
    c**** define new diagnostics
    c**** zero out diagnostics at start of advnc
    c**** end of changes for pilps
    c**** modified for 2 types of bare soils
    c****
    c**** soils62 soils45 soils45 cdfxa 04/27/95
    c**** same as soils45 but with snowmelt subroutine snmlt changed
    c**** to melt snow before 1st layer ground ice.
    ccc end comments from new soils
    c**** also corrects evaps calculation.
    c**** also includes masking effects in radation fluxes.
    c**** modifies timestep for canopy fluxes.
    c**** soils45 10/4/93
    c**** uses bedrock as a soil texture. soil depth of 3.5m
    c**** everywhere, where layers can have bedrock.
    c**** requires sm693.data instead of sm691.data.
    c**** sdata needs to be changed in calling program.
    c**** soils44b 8/25/93
    c**** uses snow conductivity of .088 w m-1 c-1 instead of .3d0
    c**** soils44 8/16/93
    c**** adds bedrock for heat calculations, to fill out the
    c**** number of layers to ngm.
    c**** soils43 6/25/93
    c**** comments out call to fhlmt heat flux limits.
    c**** uses ghinij to return wfc1, eliminates rewfc.
    c**** soils42 6/15/93
    c**** adds snow insulation
    c**** soils41 5/24/93
    c**** uses snow masking depth from vegetation height to determine
    c**** fraction of snow that is exposed.
    c**** reth must be called prior to retp.
    c**** soils40 5/10/93
    c**** removes snow from canopy and places it on vegetated soil.
    c**** soils39 4/19/93
    c**** modifications for real*8 or real*4 runs. common block
    c**** ordering changed for efficient alignment. sdata,fdata,
    c**** and vdata are explicitly real*4. on ibm rs/6000, should
    c**** be compiled with -qdpc=e option for real*8 operation.
    c**** to run real*4, change implicit statement in include file.
    c**** soils38 2/9/93
    c**** adds heat flux correction to handle varying coefficients
    c**** of drag.
    c**** soils37 1/25/93
    c**** changes soil crusting parameter ku/d from .05 per hour to .1d0,
    c**** to agree with morin et al.
    c**** soils36 11/12/92
    c**** calculates heat conductivity of soils using devries method.
    c**** changes loam material heat capacity and conductivity
    c**** to mineral values.
    c**** soils35 10/27/92
    c**** includes effect of soil crusting for infiltration by
    c**** modifying hydraulic conductivity calculation of layer
    c**** 1 in hydra.
    c**** soils34 8/28/92
    c**** uses effective leaf area index alaie for purposes of
    c**** canopy conductance calculation.
    c**** soils33 8/9/92
    c**** changes canopy water storage capacity to .1mm per lai from 1.d0
    c**** soils32 7/15/92
    c**** 1) for purposes of infiltration only, reduces soil conductivity
    c**** by (1-thetr*fice) instead of (1-fice).
    c**** 2) betad is reduced by fraction of ice in each layer.
    c**** 3) transpired water is removed by betad fraction in each layer,
    c**** instead of by fraction of roots. prevents negative runoff.
    c**** 4) speeds up hydra by using do loop instead of if check,
    c**** by using interpolation point from bisection instead of logs,
    c**** and by avoiding unnecessary calls to hydra. also elimates call
    c**** to hydra in ma89ezm9.f.
    c**** soils31 7/1/92
    c**** 1) fixes fraction of roots when soil depth is less than root
    c**** depth, thus fixing non-conservation of water.
    c**** soils30 6/4/92
    c**** 1) uses actual final snow depth in flux limit calculations,
    c**** instead of upper and lower limits. fixes spurious drying
    c**** of first layer.
    c**** Added gpp, GPP terms 4/25/03 nyk
    c**** Added decks parameter cond_scheme 5/1/03 nyk

  216. Patrick027

    You wrote:

    UHI has been effectively ruled out as a cause for the temperature record. There are no cities in the ocean. There are no urban centers floating on the sea ice.

    Well, actually it has NOT been ruled out simply because the IPCC has claimed this, based on some doubtful “calm/windy night” studies by Parker et al. and an earlier study by Jones et al. using data from China, Russia and Australia, which concluded that less than 10% of the globally observed warming trend could be attributed to UHI distortion.

    Click to access URBAN_HEAT_ISLAND.pdf

    There have been studies from literally all around the world, which demonstrate that the UHI distortion is real and is significant. Many of these are discussed in the reference cited above.

    There is no real dispute that weather data from cities, as collected by meteorological stations, is contaminated by urban heat island (UHI) bias, and that this has to be removed to identify climatic trends (e.g. Peterson 2003). The dispute centers on whether corrections applied by the researchers on whom the IPCC relies for generating its climatic data are adequate for removing the contamination.

    As the Bard would say, “therein lies the rub.”

    But let’s do a simple bit of arithmetic.

    Oke (1973) finds evidence that the UHI (in °C) increases according to the formula

    UHI = 0.73*log10 (pop)

    where pop denotes population.

    This means that a village with a population of 10 has a warm bias of 0.73°C, a village with 100 has a warm bias of 1.46°C, a town with 1,000 people has a warm bias of 2.2°C, and a large city with a million people has a warm bias of 4.4°C (Oke, 1973).

    These numbers are not insignificant.
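
    (The quoted relation is easy to tabulate – a minimal sketch, taking the 0.73 coefficient at face value:)

        import math

        def uhi_bias(pop):
            """Oke (1973) empirical UHI warm bias in K, as quoted above."""
            return 0.73 * math.log10(pop)

        for pop in (10, 100, 1_000, 100_000, 1_000_000):
            print(f"{pop:>9,} inhabitants: {uhi_bias(pop):.2f} K")

    This reproduces the 0.73, 1.46, 2.2 and 4.4°C figures above, and the 3.6°C for 100,000 inhabitants mentioned just below.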

    The study by Jones showed a UHI distortion but used inconsistent definitions for urban areas, for example classifying communities in China with up to 100,000 inhabitants (3.6°C of UHI warming, according to Oke) as ‘rural’.

    As the report points out:

    Their concluding claim that urbanization represents “at most” one-tenth of the global trend is not derived or proved in the paper, it simply appears in the conclusion as an unsupported conjecture.

    A major problem with the surface temperature record is that surface weather stations have historically been placed where people live (and, hence, want to know what the temperature is). These locations have grown exponentially with increased world population and urban plus economic development, with buildings, concrete and asphalt surfaces, automotive traffic, air conditioners in summer and building heating in winter, etc. All of these factors are listed by NOAA in its U.S. Climate Reference Network classification as contributors to readings that are between 1 and 5°C higher than the actual temperature away from these sources. They have contributed to the UHI distortion, which does not affect the satellite record, since that record covers the troposphere all over the globe, away from these sources of error.

    Around two-thirds of the weather stations, mostly in remote and rural locations in northern latitudes and many in the former Soviet Union, were shut down between 1975 and 1995, with over 60% of these shut down in the 4-year period 1990-1993. This coincides exactly with a sharp increase in the calculated global mean temperature (particularly in the Northern Hemisphere), lending additional credence to a significant UHI distortion of the surface temperature record. There is good reason to believe that, prior to the breakup of the Soviet Union, these remote Siberian locations systematically reported lower-than-actual temperatures in order to qualify for added subsidies from the central government, which were tied to low temperatures; as this distorted record was removed, it produced a spurious warming trend. For a graph showing this correlation see:
    http://www.uoguelph.ca/~rmckitri/research/nvst.html

    Click to access intellicast.essay.pdf

    Finally, meteorologist Anthony Watts has examined two-thirds of the 1,221 weather stations making up the U.S. Historical Climatology Network and published the results. Of those examined, more than half fall short of federal guidelines for optimum placement. Some examples include weather stations placed near sewage treatment plants, parking lots, and near cars, buildings and air-conditioners – all artificial heat sources which cause spurious higher temperature readings, providing physical confirmation of a root cause for a significant UHI effect on the record.
    http://www.surfacestations.org/downloads/USHCN_stationlist.xls

    Watts gives the example, with photographs, of two fairly closely located weather stations, both located north of Sacramento, CA: one (Orland, CA) is properly positioned in a grassy area with trees around, while the other (nearby Marysville, CA) is located near an asphalt parking lot with buildings and air-conditioning units nearby. A comparison of the NASA GISS temperature records of the two stations over the 70-year period 1937-2006 shows that the improperly sited station has a spurious increase in temperature of around 0.2°C per decade (1.4°C total) relative to the well-positioned station, again confirming a significant UHI distortion.
    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425725910040&data_set=1&num_neighbors=1
    http://data.giss.nasa.gov/cgi-bin/gistemp/gistemp_station.py?id=425745000030&data_set=1&num_neighbors=1

    You point out that the UHI distortion would not affect temperatures taken over the ocean, and this is true.

    So the distortion only affects the 30% of the temperature record that comes from land (assuming the “global average” is truly that), meaning a 1°C average distortion of the land record would result in a 0.3°C distortion of the globally averaged record. This would represent around half of the observed warming over the 20th century.

    So is the uncorrected distortion 50%, 25% or even only 10% of the record (as Jones et al. have postulated)?

    Who knows?

    But one thing is sure.

    There is no good reason to believe the IPCC claim:

    Urban heat island effects are real but local, and have a negligible influence (less than 0.006°C per decade over land and zero over the oceans) on these values.

    Just looking at all the studies out there, I would estimate that IPCC is off by a factor of at least ten, and that the UHI impact on the global record over land has been at least 0.06°C per decade (or 0.6°C for the 20th century). This would equal a 20th century total distortion of the globally averaged land and sea surface record of at least 0.2°C, out of the observed 0.6°C.

    This would also go a long way toward explaining the observed 0.025°C per decade discrepancy in warming rates between the surface and satellite/balloon records after 1979.

    Max

    • Max, your

      Just looking at all the studies out there, I would estimate that IPCC is off by a factor of at least ten, and that the UHI impact on the global record over land has been at least 0.06°C per decade (or 0.6°C for the 20th century).

      is assuming that the temperature record was put together without adequate or near-adequate corrections to all the problems you mentioned, problems which I don’t dispute exist at least qualitatively.

      Couldn’t I just as well assume that the remaining uncorrected bias in the satellite and radiosonde records accounts for the model-observations discrepancy in lapse rate trends? Actually, though, if you look at the different data sets and their ranges of uncertainty, can it be concluded that there is a discrepancy?

      Went to http://woodfortrees.org/plot/,

      looked at: (global means)

      RSS MSU lower troposphere
      UAH NSSTC lower troposphere
      GISTEMP land-ocean
      HADSST2 sea surface

      Linear trends since 1979 (K/decade, and % of GISTEMP):

      GISTEMP: 0.16151 ______ 100
      HADSST2: 0.134203 _____ 83.1
      RSS____: 0.152749 _____ 94.6
      UAH____: 0.126731 _____ 78.5

      Note that the rate of warming is expected to be greater over land than over the ocean (for near surface temperatures at least).

    • Oke (1973) finds evidence that the UHI (in °C) increases according to the formula
      (rewritten with units)
      UHI = 0.73 K * log10(pop)

      Interesting.

      That would suggest that a uniform doubling of population would account for a UHI warming of 0.73 K * log10(2) = 0.220 K.

      But if half of the initial population lived in places with no population growth, then an overall doubling of population would occur with the other half tripling. In that case the UHI change (assuming equal weighting among the initial population – which would tend to imply that population centers, to the extent that there are any, are evenly distributed over the land surface, with no clustering) would be 0.73 K * [0.5 * log10(3) + 0.5 * log10(1)] = 0.174 K of warming.
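
      A minimal sketch of that arithmetic (base-10 logs, per the Oke relation):

          import math

          b = 0.73  # K per tenfold population increase (Oke 1973)

          uniform_doubling = b * math.log10(2)                                 # everywhere doubles
          half_tripling    = b * (0.5 * math.log10(3) + 0.5 * math.log10(1))   # half triples, half static
          print(round(uniform_doubling, 3), round(half_tripling, 3))           # 0.22 vs 0.174

      Same total population growth, but a noticeably different implied UHI increment depending on how the growth is clustered.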

      One region with relatively less warming has been the Southeast U.S. What has the population trend been there relative to the Midwest or Alaska or France, etc.?

      I would think that the formula would be somewhat affected by the concentration of the population within the urban area, and the technology, infrastructure, and lifestyle/culture.

      The urban heat island can actually be negative (local cooling) in dry areas where a major local anthropogenic effect may be enhanced evaporative cooling from irrigation (IPCC AR4 WGI Ch3 SM). I’d expect enhanced UHI in winter in urban areas surrounded by snow cover on short vegetation or fields, simply because of the tendency of urban snow cover to be intentionally reduced or otherwise turn less white, and of course the local albedo feedback from UHI.

      Was there something about those large communities in China that made them different from other UHIs?

      From the GISTEMP and HADSST2 trends, using the 71% / 29% ocean/land area approximation, an inferred land trend is 0.228 K/decade, which is 0.094 K/decade higher than the HADSST2 trend. Not all of that can be due to UHI, because the temperature over land is expected to change faster than the SSTs.
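
      (Spelled out – a sketch using the WoodForTrees trend values listed earlier:)

          global_trend = 0.16151   # K/decade, GISTEMP land-ocean
          ocean_trend  = 0.134203  # K/decade, HADSST2
          # global = 0.71*ocean + 0.29*land  =>  solve for land:
          land_trend = (global_trend - 0.71 * ocean_trend) / 0.29
          print(round(land_trend, 3))                         # ~0.228 K/decade
          print(round(land_trend - ocean_trend, 3))           # ~0.094 K/decade land-ocean gap
          print(round(0.29 * (land_trend - ocean_trend), 4))  # ~0.027 K/decade effect on the global trend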

    • and that the UHI impact on the global record over land has been at least 0.06°C per decade (or 0.6°C for the 20th century). This would equal a 20th century total distortion of the globally averaged land and sea surface record of at least 0.2°C, out of the observed 0.6°C.

      Are you applying a recent trend to a longer time period over which the trend was smaller?

      Even assuming that 100 % of the difference between land and ocean surface/near surface temperature trends is from UHI, that would contribute (based on GISTEMP trend – HADSST2 trend) ~ 0.027 K/decade, or 17 % of the GISTEMP trend, and then, assuming constant proportions, 17 % of the total change over the last century.

  217. Patrick027

    I mentioned that the discrepancy between the surface record and the satellite record has been 0.025°C per decade.

    The 4 principal records show (1979 to today):

    Surface:
    0.152°C per decade (HadCRUT)
    0.167°C per decade (GISTEMP)

    Satellite:
    0.127°C per decade (both UAH and RSS)

    So the discrepancy of 0.025°C per decade was based on the HadCRUT record. Using the GISTEMP record, the discrepancy would be 0.04°C per decade.

    Sort of refutes the IPCC claim (AR4 SPM 2007, p.5):

    New analyses of balloon-borne and satellite measurements of lower- and mid-tropospheric temperature show warming rates that are similar to those of the surface temperature record and are consistent within their respective uncertainties, largely reconciling a discrepancy noted in the TAR.

    An even bigger untruth is IPCC’s statement (AR4, FAQ 3.1, p.103):

    For global observations since the late 1950s, the most recent versions of all available data sets show that the troposphere has warmed at a slightly greater rate than the surface.

    This is in accord with physical expectations and most model results, which demonstrate the role of increasing greenhouse gases in tropospheric warming

    Hard to explain how IPCC could get it so absolutely wrong.

    Max

  218. Why are our RSS values so different?

  219. Patrick027

    I would recommend that you download the reported records for HadCRUT, GISTEMP, RSS and UAH and then plot them in Excel and draw the linear trend lines yourself. You will see that the linear trends I have posted are correct.

    The two surface records show a higher warming trend than the two satellite records, as I stated.

    I cannot vouch for the WoodforTrees estimates that you posted.

    They look generally OK, and seem about right for GISTEMP, a bit too low for HadCRUT, too high for RSS and right for UAH.

    Patrick, it is always best to go back to the raw data and draw your own conclusions, rather than relying on someone else to do this for you.

    The surface record showed a more rapid rate of warming than the satellite record (by about 0.025 to 0.04degC per decade), as I stated.

    IPCC claimed in SPM 2007 that the discrepancy between the records had been reconciled (untrue) and in AR4 Ch.3 even claimed that the satellite record showed a more rapid warming rate than the surface record (very much untrue).

    This is the point I made, which you have been unable to refute.

    Max

  220. Patrick027

    The RSS record does, indeed, show a trend of 0.152 degC per decade (my mistake).

    The HadCRUT record also shows 0.152 degC per decade.

    The GISTEMP record shows 0.167 degC per decade.

    The UAH record shows 0.127 degC per decade.

    Even after this correction, the surface record shows a higher rate of warming than the satellite record, as I stated. The difference is now 0.02 degC per decade (or 14% higher) rather than 0.025 degC per decade, as I stated earlier.

    The IPCC claims are therefore false, as I pointed out.

    Is this observed discrepancy a result of a UHI distortion to the surface record?

    What do you think?

    Max

  221. I read through that part of AR4 WGI Ch 3, perhaps too quickly because I didn’t quite get all the reasoning, but it seemed to be the case that the UAH record was somewhat discounted (more so than RSS, at least) because of greater concerns about inaccuracy.

    … Maybe I’ll look at the Supporting Material for chapter 3.

    But don’t forget that the trends for the lower troposphere are not the same as for the whole troposphere.

  222. Patrick027

    On the UHI discussion we have drifted a bit off topic (“discounting records”, i.e. cherry-picking the ones we like best, etc.), so I will reiterate the basic dilemma.

    Wiki tells us:

    Climate models predict that as the surface warms, so should the global troposphere. Globally, the troposphere should warm about 1.2 times more than the surface; in the tropics, the troposphere should warm about 1.5 times more than the surface.

    IPCC (AR4 WG1 Ch.3 FAQ 3.1, p.103) tells us

    the troposphere has warmed at a slightly greater rate than the surface

    This is obviously untrue, as the record shows.

    The two tropospheric records show warming of (RSS) 0.152 and (UAH) 0.127°C per decade (average 0.140°C per decade), while the two surface records show warming of (GISTEMP) 0.167 and (HadCRUT) 0.152°C per decade (average 0.160°C per decade). The spottier radiosonde record confirms the satellite data, more closely agreeing with UAH than RSS.

    The global warming rate at the surface is 0.02°C per decade (14%) higher than that in the troposphere, instead of 17% lower, as predicted by climate models. This represents a significant discrepancy and a real dilemma.
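
    A quick check of those two percentages, taking the quoted trends at face value:

    surface = (0.167 + 0.152) / 2      # GISTEMP, HadCRUT -> 0.160 degC/decade
    troposphere = (0.152 + 0.127) / 2  # RSS, UAH -> 0.140 degC/decade

    # Observed: the surface trend runs above the tropospheric trend ...
    print(f"surface exceeds troposphere by {100 * (surface / troposphere - 1):.0f} %")   # ~14 %

    # ... whereas an amplification factor of 1.2 would put the surface
    # about 17 % *below* the troposphere:
    print(f"expected: surface {100 * (1 - 1 / 1.2):.0f} % below troposphere")            # ~17 %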

    IPCC attempts to refute this discrepancy in SPM 2007, p.5 stating

    New analyses of balloon-borne and satellite measurements of lower- and mid-tropospheric temperature show warming rates that are similar to those of the surface temperature record and are consistent within their respective uncertainties, largely reconciling a discrepancy noted in the TAR

    This is obviously untrue as stated, as a significant discrepancy still exists.

    The most plausible explanations for this observed discrepancy are:

    a) the surface record has an upward distortion resulting from the UHI effect or some other cause
    b) the observed warming is not caused principally by the GH effect
    c) a combination of the above.

    Back to the question:

    Which do you think it is, Patrick?

    Max

    PS I believe the recent revelations of upward fudging of the HadCRUT as well as GISTEMP records may actually help substantiate the AGW premise. If this record has, indeed, been exaggerated by data manipulation plus UHI distortion, then the more comprehensive satellite record may actually have shown greater warming than the surface record, thereby validating the GH origin for at least a significant part of the warming. Do you think this could be the case?

  223. Patrick027

    On the UHI discussion we have drifted a bit off topic (“discounting records”, i.e. cherry-picking the ones we like best, etc.)

    Well, it would be cherry-picking for me to just choose the record that fits my preconceptions, but it would be for you as well (i.e., to ignore the attempt at better records which happen to be closer to model output). I don’t have the expertise or the time to go through it all myself, as I suspect is the case for you. But some people do. Which is not to say just trust them, of course, but I would want to know why the VG2 or RSS, or versions thereof, are less accurate than other versions or versions of UAH, from somebody who implies such.

    It is not cherry-picking to discount one result rather than another based on the different accuracies/precisions, as best as can be understood without an actual God’s eye view record to compare.

    Anyway, the error-bars on at least some of these records are large enough that they can NEITHER confirm NOR deny that the lapse rate trend has been as modelled, even if the results nominally disagree with model output. And given the range of records and interpretations, while I by myself don’t have the expertise or background to say that the model output has been validated, I do think it CAN NOT BE truthfully said to have been shown to be significantly erroneous.

    IPCC (AR4 WG1 Ch.3 FAQ 3.1, p.103) tells us

    the troposphere has warmed at a slightly greater rate than the surface

    This is obviously untrue, as the record shows.

    What that shows is that the scientists have gravitated away from some versions of the record or preferred other versions or interpretations thereof. Which would be cherry-picking only if not justified.

    The two tropospheric records show warming of (RSS) 0.152 and (UAH) 0.127°C per decade (average 0.140°C per decade), while the two surface records show warming of (GISTEMP) 0.167 and (HadCRUT) 0.152°C per decade (average 0.160°C per decade). The spottier radiosonde record confirms the satellite data, more closely agreeing with UAH than RSS.

    1. The woodfortrees HADCRUT sea surface temperature trend is below the RSS trend, and the trend is expected to be larger over land than over the ocean and thus larger than the global average.

    2. There have been problems with radiosondes, too (in addition to their spottiness). (UHI and OTHER factors (there are other factors besides the small-end mesoscale UHI effect) that could introduce error in the surface air land temperature record, and the potential sources of error in radiosondes, and satellites, and sea-surface temperatures, are discussed in IPCC AR4 WGI Ch3 – in the SM in particular; you might want to check that out.)

    The most plausible explanations for this [formerly apparent but presently perhaps non-existent or at least questionable] observed discrepancy are:

    a) the surface record has an upward distortion resulting from the UHI effect or some other cause
    b) the observed warming is not caused principally by the GH effect
    c) a combination of the above.

    Back to the question:

    Which do you think it is, Patrick?

    1. some or perhaps many weather stations within UHIs might be within smaller cooler pockets like parks.

    2. It isn’t the existence of UHI that matters here but the change over time, along with changes in other local land use (irrigation, etc.) and changes in microsite issues (those dreaded air-conditioners!), including undocumented station relocations. There are techniques used to search out and correct such effects – I’m not saying these work perfectly, but do they not work at all? Consider comparisons among stations within a region; real temperature trends generally are similar within hundreds of kilometers (I’d expect some exceptions do exist, such as at the edges of seasonal snowcover or an expanding or retreating expanse of vegetation). (A minimal sketch of this neighbour-comparison idea follows after this list.)

    3. As mentioned before, the (global average – maybe not for sea-ice) sea surface temperature trend is expected to be less than the global average and the land surface-air trend in particular – how much is the difference, I don’t know offhand; that is now a homework assignment for myself.

    4. Oh, I forgot to mention the borehole data. Changing surface temperatures forced from above are associated with downward-propagating temperature signals, which can be modelled based on soil/rock/etc. thermal properties. (PS I’m a little proud that I thought of using such data myself before I ever heard of it.)

    5. Besides that, one can ask whether the GISTEMP or other global average record is consistent or inconsistent, and to what extent, with other observations (sea ice, snow cover, glaciers, atmospheric circulation changes, biological/ecological indicators, … precipitation (potential local effects on trends to watch out for there, too, including UHI), etc.)

    6. What have the results been of searching out surface stations with problematic siting issues? I mean quantitatively, and not just one or a few anecdotal examples. So far I’ve read that taking out the more problematic stations causes little change in the global record, and one case where adjustments increased the trend (perhaps some of this is mentioned in the websites I’ve posted).

    7. And why discount the (relative lack of) difference in trends between windier and calmer nights?

    8. A lapse rate trend which doesn’t fit the modelled trend for AGW would also tend not to fit the models for solar-forced warming or – so far as I know – most other forcings or even internal variability. If the UAH results as interpreted are true, then either there has been less surface warming or the models are wrong. If the models are so wrong, then it can’t be inferred that the different lapse rate trends are evidence of a lack of anthropogenic surface warming. But it would imply that one source of negative feedback is lacking, so climate sensitivity could be larger, although there would then be implications for water vapor changes as well – oh, maybe the observed water vapor trend could be used as evidence for tropospheric warming?
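
    Here is the neighbour-comparison sketch promised in point 2 – synthetic data only, and a deliberately crude changepoint search; real homogenization algorithms are far more elaborate:

    import numpy as np

    rng = np.random.default_rng(0)
    years = np.arange(1950, 2010)

    # A shared regional climate signal plus station-specific noise.
    regional = 0.015 * (years - years[0]) + rng.normal(0, 0.15, years.size)
    candidate = regional + rng.normal(0, 0.10, years.size)
    candidate[years >= 1985] += 0.4      # artificial 0.4 degC jump in 1985

    # Real trends are coherent over hundreds of km, so the difference
    # series should be roughly flat; a step flags an inhomogeneity.
    diff = candidate - regional
    scores = [abs(diff[:i].mean() - diff[i:].mean())
              for i in range(5, diff.size - 5)]
    print("suspected break near", years[5 + int(np.argmax(scores))])   # ~1985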

    PS I believe the recent revelations of upward fudging of the HadCRUT as well as GISTEMP records

    What fudging?

    • 1. some or perhaps many weather stations within UHIs might be within smaller cooler pockets like parks.

      Please post a photo. There are large numbers of stations known NOT to be in cool pockets. http://www.surfacestations.org/

      2. It isn’t the existence of UHI but the change over time that is important here, along with changes in other local land use (irrigation, etc.), and changes in microsite issues

      CRU/Jones paper on UHI http://www.agu.org/pubs/crossref/2008/2008JD009916.shtml uses a rural reference station with evident microsite issues. The Rothamsted rural “reference station” used in that paper has its own heat island nicely shown on a picture on the web site – a cleared patch of farming land with bare soil.

      The fact that airports are often used as rural references is telling, since airports are clearly warming with growth in air travel. Large airports (Osaka and Singapore for example) are clearly visible on satellite photos as micro-heat islands. http://webmodis.iis.u-tokyo.ac.jp/UHI/

      3. As mentioned before, the (global average – maybe not for sea-ice) sea surface temperature trend is expected to be less than the global average and the land surface-air trend in particular

      Why???? Because of high ocean (water) heat capacity, water is a heat buffer and will demonstrate a slower response to any trend driven by heat from outside the body of water. If the heat emanates from ocean floor geothermal activity, the water will warm first.

      4. Oh, I forgot to mention the borehole data.

      Please quote references. My reading of borehole data suggests it is entirely consistent with Nordell/Gervet’s thermal pollution work which is based on heat energy analysis and heat diffusion in rock and water.

      5. Besides that, one can ask whether the GISTEMP or other global average record is consistent or inconsistent, and to what extent, with other observations (sea ice, snow cover, glaciers, […]etc.)

      IPCC scientists have said there is too much emphasis in IPCC reports on local effects which have not so much to do with long-term global temperature trends.

      6. What have the results been of searching out surface stations with problematic siting issues?

      A NASA scientist said they are surprised about how serious the siting issues are and concerned about the quality of the land-based temperature record.

      • Please post a photo. There are large numbers of stations known NOT to be in cool pockets.

        A photo? That would show almost nothing. We need stats!

        CRU/Jones paper on UHI http://www.agu.org/pubs/crossref/2008/2008JD009916.shtml uses a rural reference station with evident microsite issues. The Rothamsted rural “reference station” used in that paper has its own heat island nicely shown on a picture on the web site – a cleared patch of farming land with bare soil.
        http://www.rothamsted.ac.uk/aen/ecn/images/ECN_AWS_2006.jpg

        Has this soil been progressively more and more cleared over time (outside the annual cycle)?

        The fact that airports are often used as rural references is telling,

        Define often.

      • If the heat emanates from ocean floor geothermal activity, the water will warm first.

        … And it would warm first near the sea floor and upwelling water masses, not near the surface and downwelling water masses.

        And on the off chance you were making more than a purely hypothetical point: no, geothermal heating from below (less than 0.1 W/m2 global average, and not prone to sudden changes, at least on the large scale) cannot account for any significant climate changes.

  224. Patrick027

    Thanks for your reply.

    The four records I cited (GISS, HadCRUT, UAH and RSS) are the most commonly cited, so these are the ones I used. The radiosonde record (with all its blemishes) correlated more closely with UAH than with the others, but I did not use this record.

    You have been unable to refute the fact that the satellite record for the troposphere (average of RSS and UAH) shows a slightly slower rate of warming than the surface record (average of GISS and HadCRUT).

    So my statement stands that IPCC made an untrue statement when they claimed just the opposite with, “the troposphere has warmed at a slightly greater rate than the surface”.

    That was my point, which you were unable to refute.

    IPCC also stated that the discrepancy between the tropospheric and surface records had been largely reconciled, which is obviously also untrue, as it still exists.

    I pointed out to you that a faster rate of warming at the surface compared to the troposphere is either (a) an indication of UHI or some other distortion of the surface record, (b) an indication that the global warming is not caused principally by the GHE, or (c) a combination of (a) and (b).

    You replied with seven interesting points that did not, however, answer the question of why the surface is warming faster than the troposphere.

    So we come to your last point.

    I wrote:

    I believe the recent revelations of upward fudging of the HadCRUT as well as GISTEMP records may actually help substantiate the AGW premise

    To this you asked:

    What fudging?

    “Rewriting History, Time and Time Again”
    by John Goetz

    On average 20% of the historical record was modified 16 times in the last 2 1/2 years. The largest single jump was 0.27 C. This occurred between the Oct 13, 2006 and Jan 15, 2007 records when Aug 2006 changed from an anomaly of +0.43C to +0.70C, a change of nearly 68%.

    Repeated “ex post facto modifications” to a historical record = “fudging”.

    BTW, HadCRUT does exactly the same, as I have personally witnessed.

    Max

  225. Some background: http://data.giss.nasa.gov/gistemp/2005/

    UHI and other issues:

    http://www.realclimate.org/index.php/archives/2007/12/are-temperature-trends-affected-by-economic-activity-ii/

    http://www.skepticalscience.com/surface-temperature-measurements.htm

    http://www.skepticalscience.com/microsite-influences-on-global-temperature.htm
    http://www.skepticalscience.com/Guest-post-in-Guardian-on-microsite-influences.html
    http://www.skepticalscience.com/urban-heat-island-effect.htm
    http://www.skepticalscience.com/On-the-reliability-of-the-US-Surface-Temperature-Record.html

    (radiosondes)
    http://cdiac.esd.ornl.gov/trends/temp/angell/angell.html
    http://ams.allenpress.com/perlserv/?request=get-document&doi=10.1175%2F2763.1

    Click to access i1520-0442-16-13-2288.pdf

    (boreholes)
    http://www.ncdc.noaa.gov/paleo/globalwarming/pollack.html

    ——————

    IPCC AR4 WGI Ch9:

    SEE:
    p.703 (FAQ 9.2 Fig 1)
    p.695 Figure 9.12

    Note the agreement of observed surface changes with model output, and note that this agreement extends to the ocean by itself and to smaller regions. SST increases agree with models.

    What about that ~1940 peak?
    p.694:

    Other studies show success at simulating regional temperatures when models include anthropogenic and natural forcings. Wang et al. (2007) showed that all MMD 20C3M simulations replicated the late 20th-century arctic warming to various degrees, while both forced and control simulations reproduce multi-year arctic warm anomalies similar in magnitude to the observed mid-20th-century warming event.

    ——————–
    Beyond interglacial-glacial variations, the Cenozoic, and PETM:
    Climate sensitivity constrained by CO2 concentrations over the past 420 million years

    Click to access RoyeretalNature07.pdf

  226. Patrick027

    Your last post provided links to various realclimate and skepticalscience blurbs on UHI, radiosondes, boreholes, etc., plus the 1940s peak and Cenozoic plus PETM warming.

    However, it did nothing to refute my statement that (in direct opposition to IPCC claims to the contrary) (a) the surface temperature has risen at a faster rate than the tropospheric temperature since 1979 and (b) that the discrepancy between the two records has not been reconciled.

    You did not take a stand on whether this was (a) the result of UHI distortion or some other inaccuracy in the surface temperature record or (b) an indication that the observed warming was not caused by the GHE or (c) a combination of the above.

    I would say that the most likely answer is (c).

    The observed warming may have been caused to a small extent by the GHE, but to a larger extent by “natural variability” (which is, incidentally, now being blamed for the current cooling after 2000), and the surface record is distorted in an upward direction due to the undercorrected UHI effect, as many studies from all over the world have shown.

    So it looks like we can cap off our exchange here.

    Thanks for an interesting exchange.

    Max

  227. How significant could the UHI or other land station siting issues be if BOTH the global and the sea surface temperature trends match the model output well?

    Max, much of what I have posted only serves to question your assertion, which is to say that it supports NEITHER the claim that the lapse rate trends are significantly different from model output NOR the claim that they agree with the model output. However, I have gotten the impression from some of the links that some of the records which indicate greater warming aloft might be more accurate than the other records – I’m sorry I can’t take a firmer position but that might only be because I can’t remember it all at this moment.

    What you have not shown is:

    that the smaller trends for the troposphere are more accurate,

    that UHI and other siting issues account for a significant fraction of the surface temperature record (you haven’t shown this, you’ve only raised it as a possibility),

    that there is some form of internal variability or natural forcing that would produce more warming at the surface than at some levels in the troposphere, globally averaged, at the relevant time scales, with evidence supporting the occurrence of this.

  228. Patrick027

    You wrote:

    What you have not shown is:

    that the smaller trends for the troposphere are more accurate,

    The coverage of the satellite record is much more comprehensive than, and not subjected to the same level of possible human error as, that of the surface record. It has not gone through the deletion of a large number of predominantly rural stations, a large number of which were located in arctic or sub-arctic locations. It is not based on readings from weather stations, a large number of which have been shown to have “siting problems” (thermometers next to heated buildings, asphalt parking lots, AC exhausts, etc.). And it has not been “corrected” and “readjusted” ex post facto several times, as have the surface records. These facts all point to the conclusion that it is a more accurate record.

    Do you have any facts that point to it being a less accurate record?

    that UHI and other siting issues account for a significant fraction of the surface temperature record (you haven’t shown this, you’ve only raised it as a possibility),

    Not quite right, Patrick. I have indicated that there have been studies from all over the world that show that there is a significant distortion to the record from the UHI effect; if you wish, I can provide links to these studies.

    that there is some form of internal variability or natural forcing that would produce more warming at the surface than at some levels in the troposphere, globally averaged, at the relevant time scales, with evidence supporting the occurrence of this.

    The question of “natural variability” (a.k.a. “natural forcing”) was brought up by the Met Office, as a rationalization for the fact that it has stopped warming after 2000, despite record increases in CO2. While the Met Office was talking about the HadCRUT record, its rationalization was not limited to the surface record (the satellite records have also shown cooling after 2000). The point here is only: if natural forcing could have more than offset record increases in CO2 after 2000, why do we assume that its impact was essentially insignificant over the entire period from 1750 to 2000 (as IPCC has done)?

    Sorry, Patrick, the discrepancy between the tropospheric and the surface records still exists, pointing either to (a) an upward distortion of the surface record, caused by the UHI effect or some other error, (b) a non-GHE cause for the warming or (c) a combination of (a) and (b).

    You have been unable to bring any sound arguments to the contrary.

    Max

  229. The coverage of the satellite record is much more comprehensive than, and not subjected to the same level of possible human error as, that of the surface record,

    You misunderstood my point, because outside of theoretical implications (however sound they may be), the surface record is for the surface and the tropospheric record is for the troposphere, etc. I was referring to the various different trends estimated for tropospheric changes.

    it has not gone through the deletion of a large number of predominantly rural stations, a large number of which were located in arctic or sub-arctic locations.

    The assimilation of data into a temperature record can take such things into account (hint: it is a lot more sophisticated than just averaging all station data, regardless of density of stations, changes in station location, etc, or station existence).

    It is not based on readings from weather stations, a large number of which have been shown to have “siting problems” (thermometers next to heated buildings, asphalt parking lots, AC exhausts, etc.).

    But have you found a quantification of the remaining uncorrected impacts of these on the record that is different from the IPCC estimate?

    And it has not been “corrected” and “readjusted” ex post facto several times, as have the surface records. These facts all point to the conclusion that it is a more accurate record.

    The satellite record, the radiosonde record, the sea surface temperature record, and the surface-air temperature record over land ALL have needed adjustments. It needs to be shown that significant errors remain and that they are in your favor.

    Do you have any facts that point to it being a less accurate record?

    Well, I’ve been under the impression that the surface record for land agrees with that for the oceans. I don’t know offhand if it can be said that other phenomena (sea ice changes, circulation changes, ecological changes, etc.) are in quantitative agreement – although the Arctic sea ice loss in particular has actually been a bit faster than projected, and sea level rise has been near the high end of projections.

    I have indicated that there have been studies from all over the world that show that there is a significant distortion to the record from the UHI effect; if you wish, I can provide links to these studies.

    Go ahead. But remember we are looking for effects on trends, not on the average over the whole record. (And I probably won’t get to them for a while – I’m probably not the person to get to them, anyway – and your solar forcing papers have been on the back burner long enough…)

    —–

    me: that there is some form of internal variability or natural forcing that would produce more warming at the surface than at some levels in the troposphere, globally averaged, at the relevant time scales, with evidence supporting the occurrence of this.

    your response:

    The question of “natural variability” (a.k.a. “natural forcing) was brought up by Met Office, as a rationalization for the fact that it has stopped warming after 2000, despite record increase in CO2. While Met Office was talking about the HadCRUT record, its rationalization was not limited to the surface record (the satellite records have also shown cooling after 2000). The point here is only, if natural forcing could have more than offset record increases in CO2 after 2000, why do we assume that its impact was essentially insignificant over the entire period from 1750 to 2000 (as IPCC has done)?

    has no answer to the question, and demonstrates a misunderstanding. I was referring to either a mode of internal variability or some forcing that would produce some portion of the warming at the surface along with less warming aloft. What would do that on the relevant timescale? Did it happen? And if it can happen, could the models be wrong about the lapse rate change?

    Furthermore, you don’t seem to understand that natural changes can occur throughout a record without contributing much of a trend over longer periods or with changing trend contributions over time.

    Consider fitting a linear trend over various time periods to:

    y = a*t

    y = a*t + b*sin(w*t)

    and, iteratively (a trend plus noise that relaxes back toward the trend line over a time scale u):

    y(t1) = y(t0) – (t1-t0)*[y(t0) – a*t0]/u + a*(t1-t0) + random value
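
    A minimal sketch of that exercise, with arbitrary toy parameters – the point being that trends fitted over short windows scatter even though the underlying rate a is identical in all three series:

    import numpy as np

    rng = np.random.default_rng(1)
    a, b, w, u = 0.02, 0.2, 2 * np.pi / 11.0, 5.0   # toy parameters
    t = np.arange(0.0, 100.0)

    y1 = a * t                             # pure trend
    y2 = a * t + b * np.sin(w * t)         # trend + oscillation

    y3 = np.zeros_like(t)                  # trend + noise relaxing back
    for i in range(1, t.size):             # toward the trend line
        dt = t[i] - t[i - 1]
        y3[i] = (y3[i - 1] - dt * (y3[i - 1] - a * t[i - 1]) / u
                 + a * dt + rng.normal(0, 0.1))

    for y, label in ((y1, "y1"), (y2, "y2"), (y3, "y3")):
        for n in (10, 30, 100):            # fitting-window lengths
            slope = np.polyfit(t[-n:], y[-n:], 1)[0]
            print(f"{label}, last {n:3d} steps: {slope:+.4f} per step")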

    the discrepancy between the tropospheric and the surface records still exists

    Not generally with statistical significance.

    pointing either to (a) a upward distortion of the surface record, caused by the UHI effect or some other error,

    A logical inference, but without further investigation it remains possible either that the data are wrong or that the models are wrong about the lapse rate trends.

    (b) a non-GHE cause for the warming

    why would a non-GHE cause explain the trends if a GHE cause doesn’t? You have to find an actual forcing or mode of internal variability which could do this, and you haven’t.

    Actually, I can think of a possibility. While surface radiative heating and net radiative cooling of the air drive localized vertical convection to maintain a convective lapse rate, the horizontal and temporal distribution of the net surface radiant heating is quite uneven (generally negative at night, and at sufficiently high latitudes, particularly in winter), so this convection doesn’t occur everywhere or at all times. Given the muted response of the troposphere to the diurnal cycle (relative to the surface and near-surface over land under some conditions in particular), and given the horizontal transports of heat within the air (not necessarily in parallel and in proportion with those within the ocean), the temperature of the troposphere may follow trends near the surface for some regions and times more than others. In particular, warming can occur near the surface at high latitudes (and at night, and in regions of general large-scale subsidence) without convectively warming the rest of the troposphere, which is heated by transports from lower latitudes (and by daytime convection and by regions with localized overturning, etc.).

    1. If surface warming were more or less concentrated in some regions (and times), the global average lapse rate response could be different.

    2. If rearrangements of conditions caused a more even distribution of convection, the average surface temperature could warm more (or cool less) than the troposphere, relative to what would otherwise occur.

    3. An unforced reduction of large-horizontal-scale overturning, which transports heat both vertically and horizontally and occurs even when the atmosphere is stable to localized convection, would tend to reduce the lapse rate and encourage more localized convection. However, this would also tend to increase the large-scale horizontal temperature variations, which would then tend to drive more of the large-horizontal-scale overturning. So this shouldn’t tend to occur (without forced climate changes that change the horizontal temperature gradients) much on time scales longer than those related to such circulation and its drivers, at least not without some rearrangement of the circulation …

    But shouldn’t there be some evidence of any of these occuring if they are?

    4. Maybe the (recent trend in) spatial arrangement of aerosol forcing has a cooling effect at the surface with some correlation to the surface heat sources for the troposphere, thus having an effect as in 2, to an extent greater than modelled? Perhaps a shift of cooling aerosols from North America and Europe to the tropics (??? depends on where in the tropics, etc.). Larger black-carbon surface heating at high latitudes could have the same effect.

    • 2 b. Reduced surface moisture will cause higher surface and near-surface temperatures relative to a convectively (regionally/locally) coupled troposphere above – depending on drying or moistening trends and their regional/seasonal arrangements, relative to temperatures and atmospheric circulations … (?)

  230. Patrick027

    Thanks for your last post.

    I agree with your statement that the satellite (tropospheric) record and surface record measure two different things.

    The troposphere just happens to be warming at a slightly slower rate than the surface, although GH theory tells us the opposite should be the case. Your response has not explained why you think that this discrepancy exists.

    You have also not addressed my point that IPCC erroneously claimed faster warming in the troposphere than at the surface, nor have you had any comment on why IPCC claimed the discrepancy between the records had been reconciled, when it clearly has not.

    My point was simply that since the troposphere shows slower warming than the surface we either have a non-GH cause of warming or a distortion of one or the other record.

    There are many studies out there from all over the world pointing to an upward distortion of the surface record due to the UHI effect, land use changes, poor station siting and elimination of a majority of rural stations in sub-arctic and arctic locations.

    To this you point out:

    The assimilation of data into a temperature record can take such things into account (hint: it is a lot more sophisticated than just averaging all station data, regardless of density of stations, changes in station location, etc, or station existence).

    And

    have you found a quantification of the remaining uncorrected impacts of these on the record that is different from the IPCC estimate?

    I’d say the operative word in your first sentence is “can”. But there is no evidence that these things have actually been taken into account, and since the surface record still shows faster warming than the tropospheric record, it is likely that these things have not been “taken into account” enough to compensate for this distortion, otherwise this discrepancy would not exist even after “correction”.

    To my point that the historical surface record keeps getting re-written “ex post facto”, you replied:

    The satellite record, the radiosonde record, the sea surface temperature record, and the surface-air temperature record over land ALL have needed adjustments. It needs to be shown that significant errors remain and that they are in your favor.

    Yes. ALL records have needed adjustments. But only the surface records have been subjected to frequent after-the-fact “re-writing of history”, which is what I was referring to.

    To my question regarding the satellite record:

    Do you have any facts that point to it being a less accurate record?

    You did not reply directly, but rather entered a discussion of land and sea temperature records, sea ice changes, circulation changes, ecological changes, Arctic sea ice loss and sea level rise. These are all interesting topics, but do not provide an answer to my question.

    Then you asked whether there was “either a mode of internal variability or some forcing that would produce some portion of the warming at the surface along with less warming aloft”.

    I am not aware of such a purported “mode of internal variability”, and I have also not seen it cited by IPCC as a possible explanation for the observed discrepancy between the warming rates of the surface and tropospheric records, which was the topic of our exchange.

    You brought up several alternate suggestions to explain the observed discrepancy, all of which are interesting but highly speculative.

    To summarize, the discrepancy exists, with the surface apparently warming at a faster rate than the troposphere, despite IPCC claims to the contrary.

    Whether this is telling us (the two simplest explanations) that the warming is not principally attributable to the GHE or that the apparent warming at the surface is distorted in an upward direction due to UHI or some other under-corrected effect, is unclear, and you have been unable to shed any real light on these questions.

    We should probably cap off our discussion, as it is getting repetitive without the basic questions being resolved.

    But thanks for an interesting exchange.

    Max

    • Max –

      Some of my statements are of a speculative nature – they’re supposed to be. I certainly was not asserting that any of those things would serve to explain the apparent discrepancy. If the discrepancy is real, perhaps these are part of model error. But I actually don’t expect this to be the case; I think a decreasing lower-tropospheric lapse rate (global average, in general – not below cloud-base levels, not at high latitudes, not in all regional conditions, etc.) makes sense with surface and tropospheric warming in general.

      You keep making the point that the expected result of GH forcing changes is greater warming in general within the troposphere than at the surface in the global average, as if to imply this were unique to GH forcing; it is not, it is not even unique to forced changes – and if the model results are wrong in this respect, it doesn’t really disprove the importance of anthropogenic forcing, etc.

      I don’t have the expertise to judge which analyses of which satellite records are most accurate, but if the IPCC states that the observed trend is in agreement with models, then presumably the judgement was made that some analyses were more or less accurate, etc. On the other hand, the multitude of records and analyses and their error bars doesn’t allow in an obvious way for falsification of models. I’m not sure if you’re just ignoring these points or if you are just reasserting that the judgement of the scientists was wrong. (I did point out borehole and sea surface temperature agreement; if the land station-based record has such error, then why isn’t the SST record different, etc.?) Well, let’s look at Ian Forrester’s links, then…

  231. Anyone who is interested in the facts behind tropospheric warming should look at this paper:

    http://w w w.realclimate.org/docs/santer_etal_IJoC_08_fact_sheet.pdf

    It shows that the arguments made by the deniers are wrong.

    See also this paper:

    http://w w w.climatescience.gov/Library/sap/sap1-1/third-draft/default.htm

    This should put to rest the nonsense put forward by denier manacker who is trying to muddy the waters. Do not believe him, he is anti-science and only provides repeated arguments which have been proven to be wrong.

    I will not be responding to any comments by manacker since he will keep on repeating his falsehoods again and again and again ad infinitum. Do not pay any heed to him.

  232. Patrick027

    Some new input on the problem of upward distortion of the surface temperature record due to poor station siting.

    Click to access surfacestationsreport_spring09.pdf

    Looks like 89% of US stations have this problem.

    Do you think it is reasonable to assume that this is a US problem only? (I don’t.)

    Do you think it is reasonable to assume that the assimilation of data into the temperature record has taken this into account and corrected for it? (I don’t.)

    Max

  233. Ian Forrester

    The Santer et al. rationalization, which you cited, appears to be:

    The observations of tropospheric rate of warming don’t fit the model simulations, so the observations must be wrong. (Duh!)

    But this has nothing to do with the fact that the troposphere is warming more slowly than the surface, in direct opposition to the claims by IPCC (and this was the topic of discussion).

    So your post does not put anything “to rest” as you erroneously claim.

    Sorry, Ian. You’re wrong again. Keep trying.

    Max

  234. manacker, just where does it say that the troposphere has to warm faster than the surface? This is just something introduced by the AGW deniers.

    Bernard J has a good discussion of this misconception by AGW deniers over at Deltoid:

    http://scienceblogs.com/deltoid/2009/12/russian_analysis_confirms_20th.php#comment-2194319

  235. Ian Forrester, Reur February 17, 2010 @ 9:45 pm

    Your links would not work for me even when I pasted them into my browser. The first one I fiddled with by changing your w w w to www, but it still did not work for me.
    I see it is by Ben Santer on RC, so what is it? An opinion piece I would imagine? I would not value it much what with reading some of Santer’s opinions and whatnot in the Climategate Emails.

    BTW, I notice that Patrick 027 and Max, both of whom actually discuss the science in great detail (unlike you), seem to be ignoring your words of wisdom. (three exchanges since yours)

  236. Bob_FJ, thanks for the laugh, you support rubbish by a TV weather presenter but won’t read scientific information by a respected scientist.

    Your arrogant and insulting behaviour towards science and scientists shows you to be what you really are, an ignorant ranting AGW denier.

    Patrick is doing his best to answer manacker’s continual repetition of completely false and wrong arguments. A word of advice, just because manacker repeats rubbish over and over again does not make it true. Why don’t you actually read some real science and learn something?

  237. Ian Forrester Reur February 18, 2010 @ 10:10 pm

    Poor Ian Forrester, you really cannot see the wood for the trees!
    Notice that Patrick 027 is paying no heed to your infinite wisdoms!
    Thanks for the laugh!

    Did you catch that I could not open your two links?
    Any chance you could provide good links that you imply are the ultimate truth of real science? Then perhaps I could, as you say, actually read some real science and learn something.
    I’m always willing to learn!

    BTW, you claim that I: “…support rubbish by a TV weather presenter but won’t read scientific information by a respected scientist.”
    Any chance that you could elaborate on that and advise what I should read?
    Thanks (again) for the laugh; you are a really hilarious guy!

  238. FJ Fool, I’m glad you find your anti-science rubbish funny. I can assure you that all reasonable people are very concerned about the effects of climate change.

    Too bad that you cannot open the links, they work for me. Maybe your computer is programmed to ignore science and only deal in fantasy.

    You people are pathetic.

  239. Ian Forrester

    You ask:

    manacker, just where does it say that the troposphere has to warm faster than the surface? This is just something introduced by the AGW deniers.

    Sorry, Ian, you are wrong again.

    It was not “introduced by the AGW deniers”, but by IPCC.

    IPCC AR4 WG1 Ch. 3 FAQ, p.103 tells us (bold type by me):

    Above the surface, global observations since the late 1950s show that the troposphere (up to about 10 km) has warmed at a slightly greater rate than the surface, while the stratosphere (about 10–30 km) has cooled markedly since 1979. This is in accord with physical expectations and most model results.

    and

    For global observations since the late 1950s, the most recent versions of all available data sets show that the troposphere has warmed at a slightly greater rate than the surface, while the stratosphere has cooled markedly since 1979. This is in accord with physical expectations and most model results, which demonstrate the role of increasing greenhouse gases in tropospheric warming and stratospheric cooling

    And finally in AR4 WG1 Ch. 9, p.680 IPCC refers to “fingerprints” of greenhouse warming, and in Figure 9.1 c) and f) shows the zonal mean atmospheric temperature change as simulated by the models from greenhouse warming at different heights and latitudes. This shows a tropospheric “hot spot” over the tropics to mid-latitudes between around 5 to 13 km altitude of around 0.6°C per century greater warming than at the surface.

    Actual physical observations do not show this “hot spot” or “fingerprint” of greenhouse forcing, but rather that the troposphere has warmed at a slightly slower rate than the surface.

    Max

  240. Ian Forrester

    Pardon me for cutting into your exchange with Bob_FJ, but one thing you wrote caught my eye:

    I can assure you that all reasonable people are very concerned about the effects of climate change.

    Polls taken in several countries show that you are wrong on this point (as you have consistently been on all points you have made so far).

    Most respondents do not think climate change is a serious concern.

    Max

  241. manacker, if they do not acknowledge that climate change is a problem then they have been duped by people like you and therefore are not behaving in a reasonable manner.

    I can assure you that the science is correct and problems will be encountered and are being encountered now. Get your head out of the sand, you are pathetic.

    And as for your continued lack of scientific awareness I would first like to point out that the tropical troposphere hotspot is not a “fingerprint” for greenhouse gas warming but is a fingerprint for any type of warming. Secondly, the latest data show that it is in fact present.

    There have been many problems with MSU data particularly for UAH.

    However, it is now accepted that the tropospheric temperature increase is higher than the surface (accepted by scientists but not by retired TV weather reporters). I’m sure you could find many papers to confirm this if you really wanted to find out. However, I will cite one paper and leave the others for you to find:

    http://www.nature.com/nature/journal/v429/n6987/full/nature02524.html

    From 1979 to 2001, temperatures observed globally by the mid-tropospheric channel of the satellite-borne Microwave Sounding Unit (MSU channel 2), as well as the inferred temperatures in the lower troposphere, show only small warming trends of less than 0.1 K per decade (refs 1–3). Surface temperatures based on in situ observations however, exhibit a larger warming of approx. 0.17 K per decade (refs 4, 5), and global climate models forced by combined anthropogenic and natural factors project an increase in tropospheric temperatures that is somewhat larger than the surface temperature increase (refs 6–8). Here we show that trends in MSU channel 2 temperatures are weak because the instrument partly records stratospheric temperatures whose large cooling trend (ref. 9) offsets the contributions of tropospheric warming. We quantify the stratospheric contribution to MSU channel 2 temperatures using MSU channel 4, which records only stratospheric temperatures. The resulting trend of reconstructed tropospheric temperatures from satellite data is physically consistent with the observed surface temperature trend. For the tropics, the tropospheric warming is approx. 1.6 times the surface warming, as expected for a moist adiabatic lapse rate.
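
    A minimal sketch of the correction the abstract describes: remove the stratospheric cooling that leaks into MSU channel 2 by adding a negatively weighted channel-4 trend. The coefficients and input trends below are illustrative assumptions of roughly the magnitudes discussed in that literature, not the fitted values – consult the paper itself for those:

    A2, A4 = 1.15, -0.15    # assumed weights for channels 2 and 4

    t2_trend = 0.09         # K/decade, assumed raw MSU channel 2 trend
    t4_trend = -0.45        # K/decade, assumed stratospheric (ch. 4) trend

    # Channel 4 cools strongly, so removing its (scaled) contribution
    # uncovers a larger tropospheric warming trend.
    tropo_trend = A2 * t2_trend + A4 * t4_trend
    print(f"reconstructed tropospheric trend: {tropo_trend:.2f} K/decade")  # ~0.17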

  242. BobFJ – I haven’t been ignoring Ian Forrester; I went to the first link (more on that later, to be addressed to manacker) and intend to go to the second – yes, I had trouble getting the first link (even after w w w conversion to www) to work, but it did eventually.

    manacker –

    Okay, I have to be brief at the moment, but the Santer et al. pdf document doesn’t simply assert greater confidence in model results than in observations (though that can be a reasonable opinion – it’s happened before that observations were corrected (via new analysis of obs or new obs) in favor of model results).

    The point, which is in part what I’ve been saying, is that:

    1. Accounting for uncertainties in observations – INCLUDING the inherent limitation on statistical significance of a trend fitted to data that doesn’t actually form a perfect line (something I’ve been aware of but haven’t highlighted up to now; see the sketch after these two points) – the observations don’t disagree with model results with statistical significance (even if nominally different).

    2. New data and new analyses of old data (calibration issues, etc. – have to go back to the document again to find the details, but see also IPCC AR4 WGI chapter 3 and SM) lean away from old lapse rate trends and toward modelled expectations (though how far toward, I don’t remember offhand – see other links I posted earlier).
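
    A minimal sketch of the significance point in (1): the 2-sigma range on an OLS slope fitted to noisy monthly anomalies. The data here are synthetic white noise around an assumed trend; real anomaly series are autocorrelated, which makes the true error bars wider still:

    import numpy as np

    rng = np.random.default_rng(2)
    t = np.arange(12 * 30) / 12.0                  # 30 years, monthly, in years
    y = 0.015 * t + rng.normal(0, 0.25, t.size)    # assumed trend + noise

    coef, cov = np.polyfit(t, y, deg=1, cov=True)
    slope, se = coef[0], np.sqrt(cov[0, 0])        # slope and its std. error
    print(f"trend: {10 * slope:.3f} +/- {10 * 2 * se:.3f} degC/decade (2 sigma)")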

  243. Polls taken in several countries show that you are wrong on this point (as you have consistently been on all points you have made so far).

    What do those people think about evolution? How many believe in the lunar landing conspiracy theory? Ian was referring to reasonable people (and presumably he had the subset that is well-informed in mind).

  244. Patrick027

    Ian Forrester made a claim:

    I can assure you that all reasonable people are very concerned about the effects of climate change.

    I pointed out to him that his conclusion that “all reasonable people are very concerned about the effects of climate change” has been put into serious question by the polls taken in several countries, which show that concerns about climate change rank very low among many respondents.

    You have drifted off topic with your question and statement:

    What do those people think about evolution? How many believe in the lunar landing conspiracy theory? Ian was refering to reasonable people (and presumably he had the subset that is well-informed in mind).

    Ian has said ALL reasonable people. I am sure that among the poll respondents that rated climate change low among their concerns there were some “reasonable people”.

    One could only agree with Ian if one modified the definition of “reasonable people” to “people who are very concerned about the effects of climate change”, and that would not be “reasonable”, Patrick, as I am sure you must agree.

    I can find no dictionary that provides me this very restricted definition of “reasonable”.

    “Informed” is also no criterion. Certainly Professor Richard Lindzen is well “informed” on our planet’s climate, to a much greater level of knowledge than either you, Ian or me. Yet he is not “very concerned about the effects of climate change”, as his various publications have made clear.

    By the same token, there is no reason to believe that all those who rated climate change low are not “informed”.

    You are on a slippery slope here, Patrick. Stay away from sweeping generalities about what ALL people think and stick with climate science, where you have a valid opinion.

    Ian was wrong, and your statement (or question) added nothing to change this.

    Max

    • The point was that your cited poll doesn’t contradict Ian purely by its fact.

      The dictionary won’t tell you that reasonable, well-informed people accept the theory and facts of evolution, the reality of the Apollo lunar landings, etc. You have to apply the definition to knowledge of the world to reach those conclusions.

      Certainly Professor Richard Lindzen is well “informed” on our planet’s climate to a much greater level of knowledge that either you, Ian or me.

      There are some skeptics, perhaps Lindzen among them, who do have much more experience than I do and, I would guess, greater expertise in at least some areas (tropical convection).

      However, this does not preclude them from being wrong nor from being unreasonable, and wrong they have often been shown to be (factually, or via follow-up studies), and they have been caught being unreasonable (the logic doesn’t hold). Perhaps especially for the BIG PICTURE.

      Lindzen could know many things that I don’t, but he’s been wrong about the relative role of water vapor in the greenhouse effect, his studies supporting negative feedbacks generally haven’t held up, and he’s mangled logic.

      It only takes ignorance of one or a few core salient facts, or illogical thinking on one or a few occasions, to reach a very wrong conclusion, especially if arguments are constructed based on only corners of the big picture.

    • By the way, doesn’t your ‘Black Swan’ argument imply that I am more likely to be correct than Lindzen, since/assuming Lindzen knows more?

  245. Patrick027

    Your two points (about models versus observations and new analyses of old data, for example on lapse rate trends) are interesting.

    They do not, however, change the fact that physical measurements of tropospheric temperature by satellites since 1979 show a slower rate of warming than physical measurements of surface temperature, in direct contradiction to claims by IPCC (the topic of our discussion).

    I believe the evidence shows that this discrepancy can be attributed to 2 principal causes:

    – not all of the observed warming during this period was caused by the GHE
    – the surface temperature record is distorted in an upward direction by the UHI effect, land use changes, poor station siting, etc.

    I have heard nothing from you to refute this.

    Max

    • I have heard nothing from you to refute this.

      Seriously?

      In at least one of the links I provided (look for Tamino, I think), an analysis was mentioned that showed a greater reduction in the lapse rate in the observations than in the model output.

      And see also links from Ian Forrester.

  246. Patrick 027: I should have been more specific when I wrote that you were not heeding the wisdom of Ian Forrester. I meant the following that he does not even obey himself:

    I will not be responding to any comments by manacker since he will keep on repeating his falsehoods again and again and again ad infinitum. Do not pay any heed to him.

    Ian Forrester: You wrote this:

    Too bad that you [Bob_FJ] cannot open the links, they work for me. Maybe your computer is programmed to ignore science and only deal in fantasy.

    Patrick confirms that he too found your links to be visibly bad on his “scientifically programmed” computer screen. Please be a good boy, and post proper links so that I can actually read some real science and learn something?

    • Just to clarify: It did appear to be a proper link (after removing spaces between w’s), it just initially came up with a ‘file damaged’ message. I tried again once or twice – I may have hit the refresh button – and then it worked just fine.

  247. Ian Forrester

    Sorry. All the rationalizations in the world to the contrary do not change the fact that the observed warming rate by satellites for the troposphere shows a slower rate of warming than that observed at the surface, nor that the “tropospheric hot spot” predicted by model simulations (but not actually observed) is for GH warming (not for any warming).

    Max

    Response– Yes, it is for “any warming”, which is just basic atmospheric thermodynamics, if you’d bother to read a textbook or take an intro class. That you continue to go around the internet making idiotic statements just speaks volumes to your ignorance on the subject. And as many, many studies have noted, there are a lot of issues with the data here, so it’s hard to say with the confidence you do that a discrepancy is real. This would all require that you bother to read a bit or stay up-to-date with the issues instead of continuing to repeat the same denial talking points over and over. We wouldn’t want that.– chris

  248. Ian Forrester

    Do yourself a favor, Ian.

    Download the surface and satellite records from 1979 through 2009.

    Plot them in Excel.

    Draw a linear trend line for each.

    When you do this you will see that the records show the following decadal warming trends in °C per decade:

    0.164 NCDC
    0.152 HadCRUT
    0.158 Average Surface

    0.130 UAH
    0.147 RSS
    0.138 Average Troposphere

    You will then be able to confirm that the observed warming at the surface was around 0.02°C per decade faster than the tropospheric warming rate, in direct contradiction to claims by IPCC.

    Max

  249. Chris

    My, my! Looks like you are getting a bit testy.

    Yeah. I’ve seen your treatise on whether or not the hotspot is a fingerprint of GH warming:

    Skeptics/Denialists Part 2: Hotspots and Repetition

    where you write:

    Tropospheric warming in the tropics is a signature of greenhouse warming, but it is more accurate to say that it is not a unique signature (i.e., you get this “hotspot” with all types of forcings).

    But if you check IPCC AR4 WG1 Ch. 9, Figure 9.1, you will see that Figure 9.1c) shows all warming and 9.1f) shows the GH fingerprint, a tropospheric “hot spot” over the tropics and lower middle latitudes between around 5 to 13 km altitude of around 0.6°C per century greater warming than at the surface. For comparison Figure 9.1a) (solar forcing) does not show this “hot spot”.

    For a peer-reviewed study on this by Douglass et al. check:

    Click to access climatemodel.pdf

    Monckton has written this paper on this subject

    Click to access moncktongreenhousewarming.pdf

    For additional info on this check:
    http://www.climate-skeptic.com/tag/fingerprint

    On the other hand, Roy Spencer agrees with you that there is no “hot spot” “fingerprint” associated with GH warming, as shown in IPCC AR4 Ch.9:
    http://www.drroyspencer.com/2009/10/hotspots-and-fingerprints/

    But regardless of whether IPCC or you and Roy Spencer are right on the hotspot as an indicator of GH warming, no one argues that GH warming should occur more rapidly at the surface than in the troposphere, and that is what the temperature record since 1979 shows.

    I think we can agree on that point.

    Max

  250. I will be gone for a week, so if you have any responses, I will get back to you after 28 Feb.

    Max

  251. manacker, that is what I did in the link I provided (or at least someone else loaded the data into Excel and I chose which data to run the regression on). Are you that stupid or do you just not like being shown that you are wrong?

    GISS data gives a higher rate of increase than HadCrut since it covers the Arctic which is known to be warming faster (ever hear of arctic amplification?). Also there are still problems with UAH data. Their data are always the lowest of the sets of global data. Of course the UAH data are calculated by well known deniers and twisters of data.

    Do yourself a favour and read up on the science, stay away from denier websites and try and understand the real science.

    You are a pathetic troll and AGW denier.

  252. Ian Forrester

    Don’t be a dummy.

    The Wood for Trees stuff you cited was cherry-picked to only include RSS and GISS, and you did not even show the linear rate of warming (GISS is higher than RSS).

    I showed you two satellite (UAH, RSS) and two surface (NCDC and HadCRUT) records, with the linear rates of warming for each. The surface record shows faster warming than the satellite record, as I indicated. Check it out for yourself using the raw data as published rather than a Wood for Trees rehash, if you are able to do so.

    Do not make silly accusations that someone is lying when you do not have all the facts, Ian. It only makes you look silly.

    Max

    PS Will be back in a week.

  253. You just don’t know good science when you are exposed to it. UAH is fudged (too many errors, all in the one direction, to have any confidence in the data), and the satellites don’t cover all of the globe.

    You are cherry picking.

    Cherry picking is selecting data to prove you are right when in fact you aren’t. What I did was choose the data which give maximum coverage and thus are closer to the real numbers. This is a perfectly acceptable thing to do. My choice was based on science (knowing which data sets are most accurate); yours was because you wanted a certain result. You have no scientific reason for picking those data sets except that they showed what you wanted them to show, i.e. you cherry picked them.

    In addition, satellite data are compromised because the troposphere data are contaminated with a small amount of stratospheric signal; since the stratosphere is cooling, this biases the tropospheric trend low.

    You are pathetic. You are the one who is being silly when you keep repeating wrong information after you have been told again and again that you are wrong.

  254. I have a basic question that I hope somebody can help me understand about these diagrams. Since a black body would emit radiation in all directions, why are the “emitted by atmosphere” and “back radiation” values not equal? What accounts for the (333 – (169+30)) difference? I think the answer is that the atmosphere is not just one layer that can be treated as a single black body, but I’m having trouble describing this to a friend. Can somebody give me an intuitive explanation of why those terms do not have to be equal?

    • jp – you are absolutely correct. The atmosphere as a whole emits more radiant power downward to the surface than it does upward to space because the temperature generally decreases going upward, and parts of the atmosphere partially block radiation from other parts. There is some wavelength dependence – at wavelengths and in conditions where the atmosphere is more transparent, the difference* will be less; going towards wavelengths with greater opacity, the difference* tends to increase and the combined emission from the surface and atmosphere that reaches space shrinks*; but at some wavelengths the opacity is large enough for the emission to space to increase with greater opacity because of increasing emission from the upper stratosphere.

      *- relative to blackbody radiation, which also varies over wavelength.

      There are multiple ways of describing how this works which are mathematically equivalent.

      (jump to last paragraph (before footnote) for intuitive qualitative description)

      One is to consider many individual layers that are each partially transparent; if one considers pairs of layers and the net radiation exchanged by such pairs (from emission to absorption, following through any scattering/reflection and not counting scattering/reflection within a layer as energy received), the satisfaction of the second law of thermodynamics becomes very clear.

      Another approach, though, is to consider the radiant flux in a direction from all layers behind to all layers ahead and beyond. From any vantage point embedded deep within an isothermal material with sufficient opacity – with a sufficient portion of that opacity coming from absorption, and with all emission at local thermodynamic equilibrium – the radiant intensity (flux per unit area per unit solid angle in a particular direction) and flux per unit area (power through a unit area from all directions that cross through the unit area from the same side) are the same** in all directions (the direction of a flux per unit area is perpendicular to the unit area considered), and thus the net flux and net intensity are zero in all directions. When there is some temperature variation, however, wherein there are cooler and warmer regions, there can be a net flux in some direction. That net flux tends to be larger for larger temperature variations, for temperature variations that are concentrated in space relative to the opacity (so that the warm region can be ‘seen’ from the cold region and vice versa), and for regions of anomalous temperature that are large enough relative to the opacity (so that the temperature variations can be ‘seen’ at all). Following the radiant intensity along a path, the value is always tending to ‘catch up’ to that of blackbody radiation in equilibrium with the local temperature, as photons from other places are absorbed and photons from that location are emitted along the path; but it will depend on conditions at prior points along the path, where the photons originated (note that scattering and reflection may result in photon origins distributed along branching paths or over a volume of space). Higher opacity reduces the distances that photons can travel and thus reduces the influence of conditions at greater distances, so that the radiant intensity depends more on conditions nearby and catches up to its thermodynamic-equilibrium value over shorter distances. This means that the radiant intensities and fluxes in opposite directions are more similar at higher opacities, so the net intensities and fluxes are smaller (unless the temperature variations all occur on smaller spatial scales with not much of a large-scale trend over space).

      For me, a good intuitive qualitative approach is to imagine what it looks like when you are in a fog – the farther away something is, the more it is hidden by the fog, and that goes for the fog itself (fog hides more fog). An actual fog blocks viewing of images by scattering; it also blocks radiant energy from reaching across a distance from a source, but this isn’t the same as blocking an image; a greenhouse effect can be based on scattering, but for Earthly conditions, scattering is minor relative to absorption and emission, so imagine that the fog is actually made of bits of carbon that are glowing incandescently. Now imagine how it looks if the temperature of the fog varies over distances.

      **(except for effects of directionally-dependent index of refraction, which is not a big issue for the atmosphere; refraction bends rays so that the solid angle that envelopes a set of rays changes, increasing and decreasing the intensity of radiation along a path. This is related to total internal reflection. This satisfies the second law of thermodynamics when one accounts for blackbody radiation’s dependence on the refractive index. However, total internal reflection prevents a blackbody embedded in a high-refractive index material from appearing brighter when viewed outside of that material – the intensity of blackbody radiation and of radiation from any particular source depends on the index of refraction at the location it is measured).
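
      To make the layer picture concrete, here is a minimal numerical sketch – a grey, plane-parallel toy model with made-up layer temperatures and a made-up per-layer emissivity, not the calculation behind the diagram above. Each layer emits eps*sigma*T^4 both upward and downward, attenuated by the layers in between; because the warm layers sit at the bottom, the atmosphere’s downward emission at the surface exceeds its upward emission to space.

      ```python
      # Toy grey atmosphere: N layers, each with emissivity eps, temperatures
      # falling with height (illustrative values only, not the TFK09 numbers).
      SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
      N, eps = 10, 0.4
      temps = [285.0 - i * (65.0 / (N - 1)) for i in range(N)]  # index 0 = lowest layer

      # Downward flux reaching the surface: layer i is screened by the i layers below it.
      back_radiation = sum(eps * SIGMA * T**4 * (1 - eps) ** i
                           for i, T in enumerate(temps))
      # Upward flux reaching space: layer i is screened by the N-1-i layers above it.
      to_space = sum(eps * SIGMA * T**4 * (1 - eps) ** (N - 1 - i)
                     for i, T in enumerate(temps))

      print(f"atmospheric emission down at surface: {back_radiation:5.1f} W/m2")
      print(f"atmospheric emission up to space:     {to_space:5.1f} W/m2")
      # down > up, purely because temperature decreases with height.
      ```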

  255. Discarded?

  256. Ian Forrester; please take note of the following:
    Patrick 027, Reur: https://chriscolose.wordpress.com/2008/12/10/an-update-to-kiehl-and-trenberth-1997/#comment-1991

    Just to clarify: It did appear to be a proper link (after removing spaces between w’s), it just initially came up with a ‘file damaged’ message. I tried again once or twice – I may have hit the refresh button – and then it worked just fine.

    Thanks for your advice. I tried it again, as before with www in lieu of Ian’s erroneous w w w, but again got the message: “Oops! This link appears to be broken”. That was after pasting it into my Google home page message bar. BTW, I’m afflicted with Vista and use the latest version of Vista’s IE. I then tried the link in my Google search thingy and it worked. Have not read it yet, and became depressed after seeing the long list of authors (they do not give me a warm feeling). I don’t know if any are qualified statisticians, do you? (See second link below)
    Perhaps one evening if there is nothing else to do, and after a glass or two of Cab-Merlot, I may have a go.

    Testing “discarded” comment with links removed:

    In the process of Google search, I saw that there were many adverse commentaries on Santer et al 2008. For instance, try for starters:
    I won’t overburden you with too much disappointment at this stage!

  257. Bob_FJ, is climatefraudit the only site you ever read? No wonder you know nothing about climate science.

  258. Patrick 027:
    Of the various critiques of the Realclimate/Santer et al paper, I found the three that I linked to above to be the most detailed and interesting. Have you found time to read them yet? I highly recommend them.

    Ian Forrester:
    I even read Realclimate, and I’m having a substantial exchange on the “Daily Mangle” thread at the moment

  259. Ian Forrester

    Wrong again, Ian,

    I cited two satellite (UAH, RSS) and two surface records (HadCRUT, NCDC), while you cherry-picked out one satellite and one surface record to try to prove your point.

    Max

  260. Patrick027

    Thanks for RealClimate links.

    Here’s one you may have missed, which tells a different story:

    Click to access ndx_christy.pdf

    No matter how you twist and turn it, Patrick, the problem remains:

    Satellite record of tropospheric warming shows slower warming rate than surface record.

    Is it due to UHI at surface? Is it due to non-greenhouse warming that starts at surface rather than in the troposphere?

    Who knows?

    But it is there, and there is no point denying it.

    I do not believe that it makes much sense for you to post various RealClimate or other links attempting to deny or rationalize away this observed dilemma, so why don’t we break off this part of our exchange?

    Max

  261. Max, you still haven’t accounted for how the surface temperature over the ocean has risen and how the heat content of the oceans has increased (the combination of the two makes internal variability a less likely contender).

  262. Patrick027

    The ocean water temperature has apparently cooled a bit since the Argo system was put into service (replacing the very inaccurate measurements before Argo), which would mean that the heat content of the oceans has decreased instead of increased.

    Internal variability (or natural forcing factors, however one prefers to call this) was cited by the UK Met Office as the reason for the observed cooling after 2000 of the globally and annually averaged land and sea surface temperature anomaly as reported by HadCRUT.

    Makes sense to me.

    Max

  263. Ian Forrester

    Manacker said:

    which would mean that the heat content of the oceans has decreased instead of increased.

    That is not what recent papers are saying.

    Check out this paper:

    Click to access VonSchukmann_et_al_2009_inpress.pdf

    or

    http://www.agu.org/pubs/crossref/2009/2008JC005237.shtml

    Looks like a lot of the heat (energy) is being stored at deeper depths than previously thought.

  264. Patrick 027

    Also, if the heat isn’t increasing (and/or land ice is not melting somewhere), how can sea level rise be explained?

  265. Ian Forrester

    OK.

    The record shows that the globally and annually averaged land and sea surface temperature has cooled after 2000.

    The record also shows that since the more reliable Argo measures were installed, the upper ocean is also cooling.

    These observations tell us that the heat content of our climate system has decreased. But since this answer does not fit with the AGW theory, we are now hypothesizing that this theoretical “added heat” is being hidden out of sight in the deep ocean.

    Gimme a break, Ian. This is gibberish.

    Max

  266. Ian Forrester

    Look Manacker, I hate to see solid science labeled as “gibberish.” You are not a scientist, and everything you say here is chosen from well-known denier sites and has been shown to be rubbish.

    You are despicable when you resort to your childish rants when denigrating solid science.

    The heat content has been measured, which you would have found if you had read the papers I linked to.

    Good grief, why are deniers so stupid?

  267. Ian Forrester

    You raise a good point about sea level rise.

    It has apparently not risen for the past few years.

    But the satellite altimetry readings are so inaccurate (according to Carl Wunsch) that they are not a reliable indicator.

    The old tide gauge records have shown no acceleration of sea level rise, but rather a steady rise, with large decadal swings in the rate of rise.

    Non-polar glaciers are melting on average.

    Both Antarctic and Greenland ice caps had a net increase in mass over the 10-year period 1993-2003, but Greenland is apparently losing mass now, while Antarctica is roughly in balance.

    So is the sea level really rising today and, if so, what is causing this?

    Back in 2007, IPCC estimated a 1961-2003 rise of 1.8 mm/year, but could only “account for” 1.1 mm/year of this. In the same report IPCC estimated a 1993-2003 rise of 3.1 mm/year, based on satellite altimetry.

    As two of the NOAA scientists involved with satellite altimetry reported for 1992-2003:

    Click to access EGU04-J-05276.pdf

    The TOPEX/Poseidon mission is nearing the completion of its twelveth year. The remarkable length of the record implies that the global rate of sea level change can be estimated from this single altimeter with striking reliability. The currently accepted value is 2.5±0.5 mm/year.

    However, every few years we learn about mishaps or drifts in the altimeter instruments, errors in the data processing or instabilities in the ancillary data that result in rates of change that easily exceed the formal error estimate, if not the rate estimate itself.

    And

    It seems that the more missions are added to the melting pot, the more uncertain the altimetric sea level change results become.

    All this seems to confirm that the sea level measurements are so inaccurate today that it is hard to draw any conclusions.

    Max

  268. Ian Forrester

    Check “Cooling of the Global Ocean since 2003” by Craig Loehle:

    Click to access OceanCoolingE&E.pdf

    Max

  269. Ian Forrester

    More rubbish from the anti-science geologists.

    Do you ever read a real science paper? E & E is not even fit to wrap stinky fish in.

    The oceans are deeper than 700 m. Science is not static as you deniers would like to think, but new methods, new data etc are always being added to the picture so that we now know a lot more about where the “extra heat” that Trenberth wondered about has gone.

  270. Ian Forrester

    You wrote:

    More rubbish from the anti-science geologists.

    Is geology not a science?

    Hmmm…

    Max

  271. Pete Ridley

    I see that Ian Forrester continues with his vile invective against anyone who dares to challenge The (significant human-made global climate change) Hypothesis. After recently being subjected to similar abuse from Ian myself, I researched his contributions on the Web over many years. In Ian’s opinion any who challenge AGW are moronic, stupid, illiterate, dishonest, devious, ignorant, arrogant, extravagant, indecent, rude, pathetic, selfish, deniers, trolls, lying slime-balls, don’t know what they are talking about, haven’t a clue how science works, insult intelligent people, live in a fantasy world, are on an anti-science crusade or suffer from Dunning Kruger syndrome (all of these can be found in his numerous blog comments). It doesn’t matter who they are, even respected scientists are subjected to his invective. There are many examples, e.g. on desmogblog, scienceblogs or through Grist and of course here on Chris’s blogs.

    Exchanges with Ian appear to me as a prime example of the blind acceptance by many intelligent people of the UN-sponsored propaganda about our use of fossil fuels causing catastrophic global climate change and the vicious way that they attack anyone who challenges that propaganda. Ian is no fool and when debating his own specialist area (biochemistry) he does so in a perfectly reasonable manner. When trying to debate the subject of global climate processes and drivers, apparently way beyond his area of expertise, he seems completely different. I find his Jekyll & Hyde character to be very puzzling; in fact at one time I mistakenly thought that these were two different Ian Forresters.

    In order to understand what makes so many DAGWers act in the manner exemplified by Ian it is necessary to find out as much as possible about their background, such as upbringing, education, training, business and social activities. Ian provides a useful case study because he doesn’t hide behind a false name, so much background information is readily available on the Internet. I have started a thread on Australian Senator Steve Fielding’s blog with the title “Climate Change – What Makes a DAGWer Angry” (Note 1), using Ian as a case study. If any of you are interested in trying to figure out the psychology of people like Ian, then please join in. I’d particularly like to have someone who can offer expertise in human psychology. Ian has refused to join in the debate there, which is disappointing (please Ian, let’s have your input).

    Ian, since you are involved on this blog you may be interested in something that I picked up today relating to your real area of expertise, GM crops. Several years ago you commented elsewhere (Note 2) on this, voicing your objections to the activities of Monsanto and not appearing to favour the introduction of GM canola in Canada. (You also appeared to be very supportive of Prince Charles and it made me wonder if you were of Scottish stock. My experience is that the Scots are more supportive of the Royal Family than we English).

    How do you feel about this abair.gov.au “Media Release 2 March 2010” (Note 3) in which QUOTE: Tom Shenstone, Director General, Policy, Head of Research at Agriculture and Agri-Food Canada provided an overview of Canada’s wheat marketing arrangements and grains industry. He also discussed Canada’s experience in adopting genetically modified crops which began over a decade ago. “Eighty to 90 per cent of Canadian canola is now genetically modified and there was not much resistance to its introduction by farmers because they saw it deliver increased returns,” Mr Shenstone said UNQUOTE.

    NOTES:
    1) see http://www.stevefielding.com.au/forums/viewthread/692/P30/
    2) see http://www.grist.org/article/a-princes-dream-far-fetched-fairytale-or-a-real-future-of-food/
    3) see http://www.abareconomics.com/corporate/media/2010_releases/ol_2mar_9_10.html

    Best regards, Pete Ridley

  272. Ian Forrester

    Yes, it is a science; that is what makes the fact that geologists (and other oil-industry-associated deniers) will support an anti-science group such as FOS so serious.

    Anyone who can only cite rubbish from the well-known denier sites is an ASS (anti-science sufferer). They are anathema to scientists everywhere when they denigrate a discipline which most honest scientists have worked in for most of their professional lives.

  273. The part of the 2008/9 paper “EARTH’S GLOBAL ENERGY BUDGET” Trenberth, Fasullo and Kiehl that I found most interesting is the “Discussion” (Page 320) especially QUOTE: It is not possible to give very useful error bars to the estimates. Fasullo and Trenberth (2008a) provide error bars for the TOA radiation quantities, but they are based on temporal and spatial sampling issues, and more fundamental errors associated with instrumentation, calibration, modeling, and so on, can only be assessed in the qualitative manner we have done here, namely, by providing multiple estimates with some sense of their strengths and weaknesses. …… In our analysis, the biggest uncertainty and bias comes from the downward longwave radiation. This source of uncertainty is likely mainly from clouds. Accordingly, as well as providing our best estimate of the Earth’s energy budget (Fig. 1) we have provided a discussion of problems and issues that can hopefully be addressed in the future. UNQUOTE.

    It’s interesting to compare the updated estimated figures with those shown in the DAGWer’s bible, the UN’s IPCC Fourth Assessment Report (AR4) Chapter 1, FAQ 1.1 Fig. 1 (Page 96). Any experts here like to comment on those significant changes and the associated significant uncertainties about the validity of those budget estimates? Methinks it would be wise for global governments to place a pause on all policy strategies being developed and even implemented under the pretext of controlling global climates to prevent DAGW until those significant “problems and issues” have been addressed.

    Best regards, Pete Ridley.

  274. Ian, you said “a discipline which most honest scientists have worked in for most of their professional lives,” but what is this discipline to which you refer? Surely you don’t consider “climate science” to be a discipline, like Climatology, Meteorology, Atmospheric dynamics, Atmospheric physics, Atmospheric chemistry, Solar physics, Historical climatology, Geophysics, Geochemistry, Geology, Soil Science, Oceanography, Glaciology, Palaeoclimatology, Palaeoenvironmental reconstruction, Ecology, Synthetic biology, Biochemistry, Global change biology, Biogeography, Ecophysiology, Ecological genetics, Applied mathematics, Mathematical modelling, Computer science, Numerical modelling, Bayesian inference, Mathematical statistics, Time series analysis, etc. You know what I mean?

    Best regards, Pete Ridley

  275. Pete Ridley

    You refer to the uncertainties expressed by Trenberth et al. on the amount of longwave back radiation:

    In our analysis, the biggest uncertainty and bias comes from the downward longwave radiation.

    This is fundamental to the whole premise of dangerous AGW, so it is astounding that Trenberth et al. would have made such a concession.

    Earlier on this blog (early January) I attempted to get an answer regarding the origin and basis for the “net absorbed 0.9 W/m2” in the Earth’s global, annual energy budget as shown in the cartoon (lead article).

    This is obviously an extremely important but very small difference between some very large numbers.

    I found it strange that the net imbalance in the annual energy budget should be so much higher than that resulting from the annual increase of GHGs (= one eighth to one thirtieth of the 0.9 W/m2 figure).
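
    (As a back-of-envelope check on that ratio, using the standard simplified CO2 forcing expression dF = 5.35 * ln(C/C0) W/m2 and an illustrative rise of about 2 ppmv in a year:)

    ```python
    # Sketch: one year's increment in CO2 forcing, from the simplified
    # expression dF = 5.35 * ln(C/C0) W/m2; 380 -> 382 ppmv is illustrative.
    import math

    annual_dF = 5.35 * math.log(382.0 / 380.0)
    print(f"~{annual_dF:.3f} W/m2 per year")  # ~0.03 W/m2, roughly 1/30 of 0.9
    ```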

    There were a few responses, but no one could give me a satisfactory answer to the origin of and basis for the 0.9 W/m2 figure.

    Finally I saw that this number had not been calculated by Kiehl + Trenberth, but had been taken from a paper by James E. Hansen. This study, in turn, calculated the number based on the circular logic of first assuming the estimated forcing from a doubling of CO2, then comparing the theoretical warming from this forcing with the actually observed warming (1880-2003), and postulating that the difference must be hidden in the pipeline somewhere, thereby arriving at the 0.9 W/m2 “net imbalance” figure.

    In other words, the net absorbed 0.9 W/m2, which is postulated to cause our planet to warm from the GHE, is an unsubstantiated plug number based on circular logic.

    You point out that Trenberth et al. cannot defend the accuracy or even the validity of the estimated back radiation, a number almost 400 times greater than the “net absorbed” figure of 0.9 W/m2, due to uncertainty regarding the net impact of clouds.

    This tells me that the “net imbalance”, which determines whether or not our planet will warm as a result of the GHE, is a totally meaningless number.

    So in this case, a “picture” (the energy balance cartoon at the top of this thread) is definitely NOT worth “a thousand words”.

    Max

  276. Patrick 027

    Methinks it would be wise for global governments to place a pause on all policy strategies being developed and even implemented under the pretext of controlling global climates to prevent DAGW until those significant “problems and issues” have been addressed.

    Best regards, Pete Ridley.

    The problems and issues are not THAT significant; you are over-reacting.

    ——–

    Max –
    There were a few responses, but no one could give me a satisfactory answer to the origin of and basis for the 9 W/m2 figure.

    You might want to look up ‘satisfactory’ in the dictionary.

    ——

    PS

    While inadvisable behavior, Buzz Aldrin’s punching a conspiracy theorist neither proved the conspiracy theorist correct nor Buzz Aldrin wrong.

    • Patrick027

      When you say that the “problems” are not that great, are you referring to the purported “problems” tied to AGW according to IPCC (e.g. melting Himalayan glaciers, African crop failures, severe weather events, etc.)?

      If so, I can fully agree.

      As to “satisfactory”, I believe the meaning is quite clear, and there was no “satisfactory” answer.

      Max

  277. Pete Ridley

    Manacker, I’m surprised no-one else has bothered to comment on how important the uncertainty in those estimates is when subtracting large numbers with significant uncertainties from other large numbers with significant uncertainties. It seems to me to be wishful thinking that a sensible result can be arrived at, but then, I’m not a scientist so have no idea how to manipulate the data in order to arrive at the desired result. Let’s hope that this “can hopefully be addressed in the future” with the attention it warrants.

    Talking about manipulating data, I see that the UK’s Science and Technology Committee has started its enquiry, with Phil Jones in the hot seat (Note 1). He looked extremely uncomfortable under questioning, but that is understandable. What a way to treat a scientist.

    It’s the same kind of treatment that Dr. Michael Mann received following the US enquiries into the statistical manipulations used to derive the “hockey stick”. According to Christopher Booker in his book “The Real Global Warming Disaster”, Dr. Edward Wegman said after his assessment that Mann et al’s analysis was QUOTE: simply incorrect mathematics UNQUOTE, and of the analysis by McIntyre and McKitrick QUOTE: valid and compelling UNQUOTE. Wegman concluded QUOTE: Mann’s assessments that the decade of the 1990s was the hottest decade of the millennium and that 1998 was the hottest year of the millenium cannot be supported by his analysis UNQUOTE.

    What right had Dr. Wegman to say such things? He’s not a scientist, only a professional statistician for 38 years and Chairman of the National Academy of Sciences Committee on Applied and Theoretical Statistics. Fancy treating people from QUOTE: a discipline which most honest scientists have worked in for most of their professional lives. UNQUOTE in such a disrespectful manner. It’s enough to drive someone to calling each of those inquisitors names, like “an ASS (anti-science sufferer)”.

    QUOTE: .. a “picture” (the energy balance cartoon at the top of this thread) is definitely NOT worth .. UNQUOTE “not worth the paper it is written/printed on” (Note 2).

    NOTES:
    1) see http://news.bbc.co.uk/1/hi/sci/tech/8543289.stm
    2) see http://www.macmillandictionary.com/dictionary/american/paper

    Best regards, Pete Ridley

    • Patrick 027

      Need I remind you that the relative uniqueness of recent warmth has been supported by other studies, that the errors in Mann (et al, I think)’s original hockey stick, if there were any, didn’t have a big effect on the big picture, and that McIntyre and McKitrick’s work had serious errors?

      • Pete Ridley

        But Patrick 027, did you miss my comment about Dr. Edward Wegman, who said after his assessment that Mann et al’s analysis was QUOTE: simply incorrect mathematics UNQUOTE, and of the analysis by McIntyre and McKitrick QUOTE: valid and compelling UNQUOTE?

        Did you also miss my comment that Dr. Wegman has been a professional statistician for 38 years and is Chairman of the National Academy of Sciences Committee on Applied and Theoretical Statistics?

        Best regards, Pete Ridley

  278. Pete Ridley

    Someone once wrote that the use of “ad hominem” attacks to demonize those who are rationally skeptical of the DAGW premise (for example as “climate deniers” or “flat earthers”) is actually a desperate and emotional (hence irrational) attempt to derail a rational discussion, which is not going in the favor of the DAGW supporter.

    A similar demonization by association can be seen in James E. Hansen’s “death train” hyperbole.

    To be sure, both sides can fall into this trap. However, it has been my observation that it is far more likely to be used by DAGW supporters than by those who are rationally skeptical of this premise.

    You questioned the psychological factors to explain this phenomenon.

    I am not a trained psychologist, but I think it can be explained by the fervor, almost religious in nature, with which the DAGW supporters defend the DAGW paradigm and attack anyone who questions its validity. If the conviction in a premise is strong enough, the premise can become a “cause”, and the conviction becomes a “belief”. And belief in a cause can be a powerful motivational factor, particularly when it is motivated by an underlying fear of impending disaster.

    There may be some who question the DAGW premise who are motivated by an opposing religious belief, but the great majority are those who are rationally skeptical of the premise in the scientific sense. These individuals are not motivated by the fear of an imminent disaster and lack the fervor of the DAGW “believer”.

    Maybe you have another explanation, but that is the only way I can explain the observed fact that the DAGW supporters resort to the ad hominem demonization of those who question the DAGW premise more frequently than the other way around.

    Max

    • Patrick 027

      underlying fear of impending disaster.

      Interesting.

      Alternatively, there could be a non-religious, scientifically justified fear of either impending disaster or, less dramatically, impending serious net-negative (economic/social/environmental/moral/etc.) consequences relative to alternatives, which motivates some ferocity.

      Combined with that, there could be annoyance, mounting to frustration, then anger (depending on the persistence and intensity and manner), directed at those who cast themselves as knowledgeable and thoughtful but who have clearly missed important points or are running with urban legends, or have illogical arguments, or just make stuff up, and/or are justifiably suspected of having ulterior motives (Exxon et al funding, religious right connections, free market purists (who don’t seem to understand market economics in some cases, as they think that which would spur innovation would do the opposite, and other odd things), and so forth).

      (What is unfortunate is that this can spill over onto innocent people who couldn’t be expected to know any better.)

      Instead of leading to anger, it can instead lead to disappointment. Or both, I suppose. You could question why it should matter to scientists what the public thinks of evolution, but I think some emotion is understandable. (In that issue, there are real consequences to society via medicine and maybe AGW (what people think about paleoclimate or science in general), and more general concern about quality of education; also, some frustration in that the underlying reasons for not buying the science are rooted in odd notions about implications for morality and such that are really not logical.)

      I don’t think Buzz Aldrin had a fear of impending disaster when he hit that conspiracy theorist. I don’t think he was acting out of religious fervor either.

      • Patrick027

        We are waxing philosophical here, but let me respond to your post

        You wrote:

        there could be a non-religious, scientifically justified fear of either impending disaster or, less dramatically, impending serious net-negative (economic/social/environmental/moral/etc.) consequences relative to alternatives, which motivates some ferocity.

        Yeah. Whether you call it “disaster” or use your longer description it is a classical “doomsday scenario” in either case.

        Is it a “fear”? Yes. Fear is a strong emotion, which can “motivate some ferocity”, as you put it.

        Is it “non-religious”? Maybe (at least using the formal definition of “religion”).

        Is it “scientifically justified”? I have not seen the empirical evidence to support the premise that anthropogenic global warming, caused principally by human CO2 emissions, is a potential serious threat, so I would conclude that, until this empirical evidence, based on actual physical observations, is presented, this “fear” is not “scientifically justified”.

        You then continue in justifying the emotional reaction of the supporters of the dangerous AGW premise:

        Combined with that, there could be annoyance, mounting to frustration, then anger (depending on the persistence and intensity and manner), directed at those who cast themselves as knowledgeable and thoughtful but who have clearly missed important points or are running with urban legends, or have illogical arguments, or just make stuff up, and/or are justifiably suspected of having ulterior motives (Exxon et al funding, religious right connections, free market purists (who don’t seem to understand market economics in some cases, as they think that which would spur innovation would do the opposite, and other odd things), and so forth).

        That sentence is quite a mouthful. The emotion of “fear” is being augmented by emotions of “annoyance, mounting to frustration, then anger”. This is the emotional path that drives young children to throwing tantrums. It seems to me that all this emotion would make it very difficult to engage in a rational, factual discussion, possibly explaining why so many DAGW believers resort to emotional outbursts and “ad hominem” attacks rather than sticking with the subject matter.

        The aspersion that the rational skeptics of the DAGW premise “cast themselves as knowledgeable and thoughtful but who have clearly missed important points or are running with urban legends, or have illogical arguments, or just make stuff up” is a red herring, Patrick. Are you “casting yourself as knowledgeable and thoughtful”? On what basis do you believe that you are more so than those who are rationally skeptical of the DAGW premise? How do you define “illogical arguments”?

        The last part is pure hogwash, Patrick

        To say that those who are rationally skeptical of the DAGW premise are “justifiably suspected of having ulterior motives (Exxon et al funding, religious right connections, free market purists (who don’t seem to understand market economics in some cases, as they think that which would spur innovation would do the opposite, and other odd things), and so forth)” is an illogical ramble. There may be some that fit into one or another of these categories, just as there may be all sorts of opportunists, scoundrels, fools, etc. who support the DAGW premise, but there are a great many more on either side who do NOT fit into the categories you (or I) listed.

        Patrick, you have fallen into the trap of attacking the person, rather than the argument.

        Your next part, comparing DAGW with evolution is too far-fetched to be taken seriously, so I will not dignify it with a comment.

        In your last part you opined that there is

        frustration in that the underlying reasons for not buying the science are rooted in odd notions about implications for morality and such that are really not logical

        This is a cop-out, Patrick.

        Wiki defines “rational skepticism” as follows:

        Scientific skepticism or rational skepticism (also spelled scepticism), sometimes referred to as skeptical inquiry, is a practical, epistemological position in which one questions the veracity of claims lacking empirical evidence.

        Those who are rationally skeptical of the DAGW premise are demanding that the “science” be robustly supported by empirical evidence derived from actual physical observations.

        It appears that your “frustration” comes from the fact that you are unable to provide such empirical evidence to support the DAGW premise.

        Max

  279. I don’t have the time or expertise to personally correct all the mistakes here so I’m just focussing on this:

    Max, you think you haven’t been given a satisfactory answer as to why the imbalance is not equal to the annual rate of change of forcing. You have been given this; you just don’t realize it. So let me ask you this: What reason do you have for assuming that it is equal to the annual rate of change in forcing?

    Max and Pete, suppose climate models are way way off, more than one could truly justifiably expect in one particular direction, and the climate sensitivity is half of what was thought. What would the policy implications be? While there are nonlinearities in the economics, would not a good linear approximation be simply to halve the otherwise proposed ‘carbon tax’ (or equivalent policy effect)? Does this resemble what we have now? The tax we have in the U.S. is zero. Zero is half of zero.

    I still need to educate myself on how the paper that is the headline of this thread calculated the imbalance, but oceanic heat content itself indicates something in that range, and agrees with model output.** Furthermore, if your concern about uncertainties stems from uncertainties or errors that may be as large as the radiative forcing and feedbacks, you have to remember that at least some of these small differences between large numbers are actually known quite well; the radiative forcing by greenhouse gas increases is quite well quantified relative to its size. How could this be? Think about a tape measure that may have been inadvertently stretched out so that all measurements are 10% off. An error in the height of a person could be several inches. But the error in the difference in the heights of two people would generally be quite a bit less than that, so long as the same tape measure is used. Also, if both people were standing on their toes, the total heights would be in error more than the difference. And so on if they were standing in water and the parts submerged were ignored, so long as the underlying surface is not too steeply sloped, etc.
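
    (A numeric sketch of the tape-measure point, with made-up heights: a shared calibration error corrupts the absolute values far more than it corrupts a difference taken with the same tape.)

    ```python
    # Made-up numbers: the tape is stretched so every reading is 10% high.
    true_a, true_b = 70.0, 64.0                   # true heights, inches
    scale = 1.10                                  # shared systematic error
    meas_a, meas_b = true_a * scale, true_b * scale

    print(meas_a - true_a)                        # error in one height: 7.0 in
    print((meas_a - meas_b) - (true_a - true_b))  # error in the difference: 0.6 in
    ```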

    **
    http://www.realclimate.org/index.php/archives/2008/06/ocean-heat-content-revisions/
    The numbers from tide gauges (and later, satellites) were higher than what you got by estimating each of those terms separately. (Note that the difference is mainly due to the early part of the record – more recent trends do fit pretty well).

    Initial results from the Argo data seemed to indicate that the ocean cooled quite dramatically from 2003 to 2005 (in strong contradiction to the sea level rise which had continued) (Lyman et al, 2006). But comparisons with other sources of data suggested that this was only seen with the Argo floats themselves. Thus when an error in the instruments was reported in 2007, things seemed to fit again.
    “Initial results” links to: http://www.realclimate.org/index.php/archives/2006/08/ocean-heat-content-latest-numbers/
    “reported” links to: http://www.realclimate.org/index.php/archives/2007/04/ocean-cooling-not/

    So what does this all mean? The first issue is tied to sea level rise. The larger long term trend in ocean warming reported here makes it much easier to reconcile the sea level estimates from thermal expansion with the actual rises. Those estimates do now match. But remember that the second big issue with ocean heat content trends is that they largely reflect the planetary radiative imbalance. This imbalance is also diagnosed in climate models and therefore the comparison serves as an independent check on their overall consistency.

    ————-

    http://www.realclimate.org/index.php/archives/2005/05/planetary-energy-imbalance/ (I hope this isn’t going too far in quoting:)

    The overall global surface temperature is also well modelled in this and other studies. While impressive, this may be due to an error in the forcings combined with compensating errors in the climate sensitivity (2.7 C for a doubling of CO2 in this model) or the mixing of heat into the deep ocean. Looking at the surface temperature and the ocean heat content changes together though allows us to pin down the total unrealised forcing (the net radiation imbalance) and demonstrate that the models are consistent with both the surface and ocean changes. It is still however conceivable that a different combination of the aerosol forcing (in particular (no pun intended!)) and climate sensitivity may give the same result, underlining the continuing need to improve the independent estimates of the forcings.

    So how well does the model do? The figure shows the increase in heat content for 5 different simulations in the ensemble (same climate forcings, but with different weather) matched up against the observations. All lines show approximately the same trend, and the variability between the ensemble runs being greater than the difference with the observations (i.e. this is as good a match as could be expected). The interannual variability, predominantly related to ENSO processes, is different but that too is to be expected given the mainly chaotic nature of tropical Pacific variability, the short time period and the models’ known inadequacy in ENSO modelling. The slope of these lines is then related to the net heat imbalance of around 0.60+/-0.10W/m2 over 1993-2003, and which the models now suggest has grown to around 0.85+/-0.15 W/m2. The distribution of heat in the ocean in the different runs is quite large (figure 3 in the article) but clearly spans the variations in the observations, which is of course just one realisition of the actual climate.

    What does this imply? Firstly, as surface temperatures and the ocean heat content are rising together, it almost certainly rules out intrinsic variability of the climate system as a major cause for the recent warming (since internal climate changes (ENSO, thermohaline variability, etc.) are related to transfers of heat around the system, atmospheric warming would only occur with energy from somewhere else (i.e. the ocean) which would then need to be cooling).

    Secondly, since the ocean warming is shown to be consistent with the land surface changes, this helps validate the surface temperature record, which is then unlikely to be purely an artifact of urban biases etc. Thirdly, since the current unrealised warming “in the pipeline” is related to the net imbalance, 0.85+/-0.15 W/m2 implies an further warming of around 0.5-0.7 C, regardless of future emission increases. This implications are similar to the conclusions discussed recently by Wigely and Meehl et al.. Many different models have now demonstrated that our understanding of current forcings, long-term observations of the land surface and ocean temperature changes and the canonical estimates of climate forcing are all consistent within the uncertainties. Thus since we are reasonably confident in what has happened in the recent past, projections of these same models under plausible future scenarios need to be considered seriously.

    ————

    PS lapse rates:

    http://www.realclimate.org/index.php/archives/2005/08/et-tu-lt/

    In a related paper, Santer et al compare the surface/lower-troposphere coupled tropical variability at different timescales in the data and in model simulations performed for the new IPCC assessment. At monthly timescales (which should not be affected by trends in the model or possible drifts or calibration problems in the satellites or radiosondes) there is a very good match. In both models and data there is the expected enhancement of the variability in the lower-troposhere (based simply on the expected changes in the moist adiabatic lapse rate as the surface temperature changes). The models have large differences in their tropical variability (which depends on their represenation of El Nino-like processes in the Pacific) but the results all fall on a line, indicating that the lower tropospheric amplification is robust across a multitude of cloud and moist convective parameterisations.

    At longer (decadal) time scales, the models still show very similar results (which makes sense since we anticipate that the tropical atmospheric physics involved in the trend should be similar to the physics involved at the monthly and interannual timescales). However, the original UAH 2LT data show very anomalous behaviour, while the new RSS 2LT product (including the latest correction) fits neatly within the range of model results, indicating that this is probably physically more consistent than the original UAH data.

  280. Pete Ridley, you quoted in part:
    March 3, 2010 @ 3:26 pm

    “…Mann’s assessments that the decade of the 1990s was the hottest decade of the millennium and that 1998 was the hottest year of the millenium…”

    Funny how GISS has since depressed 1998 and elevated 2005 (with clockwise rotation elsewhere), thus showing a remorseless climb in temperatures not seen in other records!

  281. I am one of the small army of technically trained people who have voluntarily formed into the quality assurance force which we perceive climatology to lack. Though I had no exposure to climatology prior to 5 months ago, and though I am still substantially ignorant of the details of the IPCC climate models, I have nonetheless discovered foundational errors for these models that are fatal to the pretensions of the IPCC to basing its assessment reports upon scientific findings.

    I’m currently engaged in investigation of a question that arises in relation to the Kiehl-Trenberth diagram that is labelled “Global Energy Flows (W m-2).” Perhaps a climatologist can help me to understand the diagram better.

    I’m unclear on what is meant by an “energy flow.” I’ve Googled on the phrase “energy flow” and come up with nothing.

    Kiehl and Trenberth work for UCAR. On UCAR’s Web site, a page at the URL http://www.windows.ucar.edu/tour/link=/earth/climate/greenhouse_effect_gases.html presents a similar diagram, but here the diagram is labelled “Global Heat Flows (W m-2).” By the two diagrams, UCAR implies that an “energy flow” is a “heat flow,” but if this is true, the heat flux that is labelled “back radiation” flows from relatively cold matter in Earth’s atmosphere to relatively hot matter at Earth’s surface, in violation of the second law of thermodynamics.

    An alternate possibility is that the page on UCAR’s Web site is mislabelled and that an “energy flow” is a radiation intensity. If this is the case then my error-hunting interests are piqued by a comment made by Gerlich and Tscheuschner in their Jan. 6, 2009 paper “Falsification Of The Atmospheric CO2 Greenhouse Effects Within The Frame Of Physics.” On page 59, G&T state that: “…conservation laws (continuity equations, balance equations, budget equations) cannot be written down for intensities. Unfortunately this is done in most climatologic papers, the cardinal error of global climatology.”

    No matter how I interpret “energy flow,” it appears there is an error. Can a climatologist help me to understand this situation better?

    • Patrick 027

      Heat flow does go from hot to cold, true. When it does so by radiation, however, it must be kept in mind that it is a net radiant energy flux from point of emission to point of absorption, which is the difference between fluxes in opposite directions (these are the fluxes that are shown in the diagram). The photons are going in both directions because, at LTE (local thermodynamic equilibrium, a rather good approximation for most of the mass of the atmosphere and surface), at any given wavelength (and polarization and direction) an amount of material emits radiation as a fraction of perfect blackbody radiation for that temperature that is equal to the fraction of radiation incident on it (from the same direction, same polarization and wavelength). Thus, within an isothermal material at LTE that is sufficiently opaque (with a sufficient portion of that opacity coming from absorption), photons are emitted and absorbed at about the same rate and fluxes in any pair of opposite directions tend to cancel; the material is in equilibrium with the photons. When there is a gradient in temperature that occurs over a short enough distance relative to the distances that photons travel between emission and absorption, there is a net flux of radiant energy from hot to cold. But also note that when a hotter or colder pocket is found within material, if that hotter or colder pocket is sufficiently thin or otherwise has optical properties such that it is a relatively weak absorber (and thus a weak emitter), then fluxes of radiant energy may pass through that pocket which are independent of that pocket’s temperature but are dependent on the temperatures of surrounding material or wherever emissions and absorptions are able to occur.

      The radiant power emitted by the surface and absorbed by the atmosphere is generally more than that emitted by the atmosphere that reaches and is absorbed by the surface, because the surface is generally warmer than the atmosphere, in particular that portion of the atmosphere which emits the photons that are absorbed by the surface. And so on for radiant fluxes among layers of atmosphere, between any part of the surface or atmosphere and space, and between any of those and the sun. Etc.

      (LTE: There is a tendency for material to approach LTE over time as molecules, atoms and ions, electrons, etc, interact amongst themselves (thermalization) and exchange energy; at LTE, the energy distribution fits a particular statistical distribution among the various forms it can take. If energy is absorbed by or lost from a system at a sufficiently rapid rate relative to the thermalization rate, then if that energy is going into or coming from a particular subset of forms, this can disrupt LTE. For example, if CO2 molecules absorbed and emitted photons sufficiently fast relative to molecular collisions, then the CO2 molecules could effectively have a different temperature than the gas they are within, and might not have a single definable temperature, but could be said to have relatively more or less energy on average than would be expected at LTE. In the vast majority of the mass of the atmosphere, molecular collisions happen fast enough to keep the air near LTE and keep the temperature of the air about the same as that of the CO2 or other radiatively important molecules or material within it.

      Examples of radiation that is emitted via processes not at LTE are fluorescence and phosphorescence. In these cases, there is a population of particles (such as electrons) that are excited with more energy than they would have if at equilibrium with the whole material they are in, and thus emit radiation at some wavelengths with greater intensity than the blackbody radiation for the temperature of the material. A macroscopic analogy is an incandescent light bulb – the filament is white-hot; because of the spreading of rays from a small source and the scattering (for a frosted bulb) that occurs at the bulb surface, the bulb surface ’emits’ radiation that is less intense than that emitted by the filament, but nonetheless is (at least in visible wavelengths) much more intense than the blackbody radiation that could be emitted by the bulb surface even if it had perfect emissivity (requiring it also to be perfectly opaque). Of course, if we resolve smaller scales, we can still consider different parts of the bulb to be at or near LTE, and also note that the bulb surface is not emitting radiation but is allowing some fraction of photons to pass through it; those parts of the bulb are still large enough to have statistically significant populations of molecules, etc.).
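
      (A minimal numerical sketch of the net-flux point, with illustrative temperatures: two facing black surfaces both radiate, so photons travel cold-to-hot as well as hot-to-cold, but the net heat flow still runs from hot to cold as the second law requires.)

      ```python
      SIGMA = 5.67e-8                 # Stefan-Boltzmann constant, W m^-2 K^-4
      T_warm, T_cool = 288.0, 255.0   # illustrative temperatures, K

      up = SIGMA * T_warm**4          # ~390 W/m2 emitted by the warm surface
      down = SIGMA * T_cool**4        # ~240 W/m2 emitted by the cooler layer
      print(f"up {up:.0f} W/m2, down {down:.0f} W/m2, "
            f"net {up - down:.0f} W/m2 (hot -> cold)")
      ```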

      • Patrick 027:

        Thanks for taking the time to reply! While appreciating your attempt at helping out, I have to report that the content of your reply isn’t what I need.

        I hunt for errors in the foundations of modern climatology. Currently, I’m looking into an error that seems to exist in the Kiehl-Trenberth type of diagram. To help me out, you would have to free yourself from the assumption that the foundations of modern climatology are necessarily without error. If there is error, climatology must logically be rebuilt on a firmer foundation.

        There are 2 ways of interpreting this type of diagram. Under each of these interpretations, this type of diagram is inconsistent with physical reality.

        If an “energy flow” is interpreted as a heat flux, this type of diagram is invalidated by the violation of the second law of thermodynamics by the back-radiation; the violation results from the flow of heat from colder to warmer matter. If, on the other hand, an “energy flow” is interpreted as a radiation intensity, this type of diagram is invalidated by its assumption that there is a conservation law for radiation intensities when there is no such law. According to Gerlich and Tscheuschner, “…conservation laws (continuity equations, balance equations, budget equations) cannot be written down for intensities. Unfortunately this is done in most climatologic papers, the cardinal error of global climatology.”

        If you or any other blogger were to address this issue in a response, I’d be most appreciative.

        Response: But a photon does not know where it is going. If you place two objects next to each other with different temperatures, they are both radiating energy in accordance with Stefan-Boltzmann. Clearly, radiation from the colder object must head toward the hotter object, and vice versa. You have to consider the “net” heat flow to make any sense of thermodynamics in this regard. I can assure you that over a century of atmospheric physics has not been invalidated by amateur thought experiments like those of G&T– chris

  282. Patrick 027

    As I recall, that specific assertion about a single decade or a single year relative to other single decades or years was found to be insufficiently supported – but not found to be incorrect.

  283. Pete Ridley

    Manacker, thanks for your thoughts on “what makes a DAGWer angry”. Not being a psychologist I find your explanation plausible, probably because it fits in with my own opinion. A major puzzle for me is that the vicious DAGWers are not all like that all of the time. Ian Forrester is a prime example. In my thread I suggest that in Ian’s case it stems from frustration about not really understanding those numerous disciplines involved in climate processes and drivers. If you have time why not join in there (Note 1). The daughter of a good friend of mine will be starting her final year studying for a PhD in psychology so I’ve suggested to her that she might like to use this in her thesis with something like “21st Century Climate Change Psychology”.

    Patrick 027, from the enormous number of links that you have provided it appears that you depend almost exclusively upon Realclimate for obtaining your understanding of global climate processes and drivers. Have you ever considered that just maybe Gavin and friends are somewhat biased and you are not getting the full picture? It may be a picture that you prefer but there are others that you should try to consider in order to ensure that you are not restricting your vision.

    Notes 1) see http://www.stevefielding.com.au/forums/viewthread/692/P30/

    Best regards, Pete Ridley

  284. Patrick027

    You wrote:

    Max, you think you haven’t been given a satisfactory answer as to why the imbalance is not equal to the annual rate of change of forcing. You have been given this; you just don’t realize it. So let me ask you this: What reason do you have for assuming that it is equal to the annual rate of change in forcing?

    This is not quite correct, Patrick. I have simply asked for the logic and arithmetic used in arriving at the “net imbalance” figure of 0.9 W/m2 in the annual global energy budget. K+T did not calculate this but simply plugged it in. All I could find was a paper by James E. Hansen, which (as I pointed out) arrived at this number using circular logic (plus some bad arithmetic).

    On this blog I got 3 answers: one from Blouis giving a range of –0.1 to +0.9 W/m2, one from Bob_FJ telling me that Trenberth himself has stated that this is a fantasy number, and one from you, giving no satisfactory explanation at all. In other words, I got no satisfactory explanation other than the Hansen paper, which is flawed for the reasons stated.

    The “hidden in the pipeline” postulation is exactly that: a postulation. As I pointed out, the cooling of the atmosphere at both the surface and in the troposphere after 2000 plus the cooling of the upper ocean since Argo measurements were in service tend to refute this postulation, as the latent heats of fusion (from melting ice) and evaporation (from added atmospheric water vapor) are too small to be where the missing energy is “hidden”. You mention that “oceanic heat content agrees with model output”. This has obviously diminished since the Argo measurements started; is that what the models predicted?

    As to the later “correction” of the observed Argo data to make it agree with the model predictions, that smacks too much of data manipulation, Patrick, so I will ignore it. There have been too many other examples recently.

    You then point out that the radiative forcing of GHGs is well quantified. True (although there are several different approaches for estimating these, which yield slightly different results). And the annual energy imbalance due to the GHE is based on the logarithmic response to the annual change in atmospheric concentration of these GHGs, which can easily be calculated. The radiative forcing from increased CO2 (280 to 379 ppmv, from year 1750 to 2005) is estimated by IPCC to be 1.66 W/m2, and from all anthropogenic factors (incl. CO2) 1.6 W/m2. I have no qualms with these estimates.
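
    The logarithmic dependence is easy to check with the widely used simplified expression RF = 5.35*ln(C/C0) of Myhre et al. (1998). A small Python sketch (the 5.35 coefficient and the 280/379 ppmv endpoints are the values quoted above; the result approximates, but does not exactly reproduce, the IPCC figure):

        import math

        def co2_forcing(c_ppmv, c0_ppmv=280.0):
            """Approximate CO2 radiative forcing (W/m^2), Myhre et al. (1998)."""
            return 5.35 * math.log(c_ppmv / c0_ppmv)

        print(round(co2_forcing(379.0), 2))  # ~1.62 W/m^2, vs the 1.66 cited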

    You cite several RealClimate blurbs.

    The first purports to show that the discrepancy between the warming rate at the surface and in the troposphere has essentially been resolved, agreeing with the models. Yet the discrepancy is still there, showing a more rapid warming at the surface than in the troposphere. We have gone through this here ad nauseam, so there is no point repeating all this.

    The next one purports to show that

    as surface temperatures and the ocean heat content are rising together, it almost certainly rules out intrinsic variability of the climate system as a major cause for the recent warming

    Well, check the thermometers out there, Patrick, even the ones next to AC exhausts and asphalt parking lots. They tell us that the surface temperature has cooled after 2000. The RC graph on “ocean heat content” only goes as far as 2003 and is based on the inaccurate readings prior to Argo. Since 2003 the ocean has cooled. So this paper is based on outdated info.

    The next paper tries to play down the observed ocean cooling since Argo measurements started. It is another case of “the observed facts do not agree with the model projections, so the observed facts must be wrong”. Subsequent studies confirm the ocean cooling, so you can forget Gavin’s blurb.

    The next paper tries the same thing, stating that since sea level is rising, the ocean must be warming. So we take a totally inaccurate and very questionable indicator of sea level to refute an observed temperature drop. Bad science, Patrick.

    Don’t bring RC blurbs as scientific evidence, Patrick. The RC site has “a horse in the race” and is not providing unbiased scientific information on climate change, but rather rationalizations to support the preconceived AGW position.

    Max

  285. Pete Ridley

    Bob_FJ, I’m sure that given the opportunity Mann could find some data manipulation “trick” with which these tree ring experts seem to be so familiar. Those Climategate E-mails tell us a lot about this QUOTE:

    (Note 1) – From: Gary Funkhouser .. To: k.briffa ..Subject: kyrgyzstan and siberian data Date: Thu, 19 Sep 1996 15:37:09 –0700 Keith, .. Once I get a draft of the central and southern siberian data .. I’ll send it to you. I really wish I could be more positive about the Kyrgyzstan material, but I swear I pulled every trick out of my sleeve trying to milk something out of that. .. The data’s tempting but there’s too much variation even within stands. I don’t think it’d be productive to try and juggle the chronology statistics any more than I already have – they just are what they are .. I think I’ll have to look for an option where I can let this little story go as it is.
    Not having seen the sites I can only speculate, but I’d be optimistic if someone could get back there and spend more time collecting samples, particularly at the upper elevations. Cheers, .. Gary Funkhouser Lab. of Tree-Ring Research The University of Arizona
    (Note 2 – the famous “trick” E-mail) – From: Phil Jones .. To: ray bradley .., mann .., mhughes .. Subject: Diagram for WMO Statement Date: Tue, 16 Nov 1999 13:31:15 +0000 Cc: k.briffa .. , t.osborn ..
    Dear Ray, Mike and Malcolm, Once Tim’s got a diagram here we’ll send that either later today or first thing tomorrow. I’ve just completed Mike’s Nature trick of adding in the real temps to each series for the last 20 years (ie from 1981 onwards) amd from 1961 for Keith’s to hide the decline. Mike’s series got the annual land and marine values while the other two got April-Sept for NH land N of 20N. The latter two are real for 1999, while the estimate for 1999 for NH combined is +0.44C wrt 61-90. The Global estimate for 1999 with data through Oct is +0.35C cf. 0.57 for 1998. Thanks for the comments, Ray. Cheers .. Prof. Phil Jones Climatic Research Unit
    (Note 3) – From: “Michael E. Mann” .. To: Ray Bradley .., “Malcolm Hughes” .., Mike MacCracken .., Steve Schneider .., tom crowley .., Tom Wigley .., Jonathan Overpeck .., .. Michael Oppenheimer .. , Keith Briffa .. , Phil Jones .. , Tim Osborn .. , Tim_Profeta .., Ben Santer .. , Gabi Hegerl .. , Ellen Mosley-Thompson .. , “Lonnie G. Thompson” .. , Kevin Trenberth .. Subject: CONFIDENTIAL Fwd: Date: Sun, 26 Oct 2003 13:47:44 -0500
    Dear All, This has been passed along to me by someone whose identity will remain in confidence. Who knows what trickery has been pulled or selective use of data made. Its clear that “Energy and Environment” is being run by the baddies–only a shill for industry would have republished the original Soon and Baliunas paper as submitted to “Climate Research” without even editing it. Now apparently they’re at it again…
    My suggested response is:
    1) to dismiss this as stunt, appearing in a so-called “journal” which is already known to have defied standard practices of peer-review. It is clear, for example, that nobody we know has been asked to “review” this so-called paper
    2) to point out the claim is nonsense since the same basic result has been obtained by numerous other researchers, using different data, elementary compositing techniques, etc.
    Who knows what sleight of hand the authors of this thing have pulled. Of course, the usual suspects are going to try to peddle this crap. The important thing is to deny that this has any intellectual credibility whatsoever and, if contacted by any media, to dismiss this for the stunt that it is..
    Thanks for your help, mike

    two people have a forthcoming ‘Energy & Environment’ paper that’s being unveiled tomoro (monday) that — in the words of one Cato / Marshall/ CEI type — “will claim that Mann arbitrarily ignored paleo data within his own record and substituted other data for missing values that dramatically affected his results. When his exact analysis is rerun with all the data and with no data substitutions, two very large warming spikes will appear that are greater than the 20th century. Personally, I’d offer that this was known by most people who understand Mann’s methodology: it can be quite sensitive to the input data in the early centuries. Anyway, there’s going to be a lot of noise on this one, and knowing Mann’s very thin skin I am afraid he will react strongly, unless he has learned (as I hope he has) from the past….”
    Professor Michael E. Mann Department of Environmental Sciences, Clark Hall University of Virginia
    UNQUOTE.
    Note that all of those E-mails pre-dated the two US enquiries (the House Energy & Commerce Committee and the House Committee on Science), which reported in June/July 2006 (Notes 4 & 5). You really should read Christopher Booker’s book. It’s a mine of information on the UN’s global climate change scam. For more on “trick”, Mann and associates, take a look at the Climate-gate site (Note 6). It includes a very useful search facility.

    As John Daly said “All science is numbers, but not all numbers is science” (Note 7).

    You’ll notice many recognisable “hockey-team” members’ names in those E-mails. As I said previously, Phil Jones was understandably uncomfortable under scrutiny during the Science & Technology Committee enquiry. After all, he and many other supporters of The (significant human-made global climate change) Hypothesis have an awful lot to lose if their actions are found to be unethical or claims about a climate catastrophe arising from our use of fossil fuels are found to be nothing but hot air. The careers of many self-proclaimed experts in “climate science” are under threat, not just among mature academics but also among would-be scientists who have vigorously supported The Hypothesis. I anticipate a lot of red faces in academia soon.

    You can see moves by some to leave the sinking IPCC ship by distancing themselves from the most committed. What was it that Phil Jones’s “right hand man”, vice-chancellor Professor Edward Acton, said at the Science & Technology Committee enquiry (Note 8)? QUOTE: I think the science is robust UNQUOTE and QUOTE: scientists tell us that there is no doubt UNQUOTE, then with such humility QUOTE: I’m not a scientist, I’m a historian UNQUOTE (not guilty your honour, it was him). It must have been so reassuring that Professor Acton was prepared to give him 100% support like that. Have a look at the A/Vs and read the comments at the Wattsupwiththat site. It’ll be a nice break from concentrating on those realclimate prognostications.

    NOTES:
    1) see http://www.climate-gate.com/email.php?eid=12&s=kwtrick
    2) see http://www.climate-gate.com/email.php?eid=154&s=kwtrick
    3) see http://www.climate-gate.com/email.php?eid=376&s=kwtrick
    4) see http://www.nap.edu/catalog.php?record_id=11676#description

    Response: This is well off topic now. No more.– chris

  286. Patrick027

    It looks like Pete Ridley is giving you the same advice on using RealClimate as your source of information as I have done (our two posts crossed).

    Max

  287. In his comment of 3rd March Ian Forrester referred to “FOS” but didn’t explain what this was. For anyone puzzled by this I believe he is referring to the Friends of Science Society (Note 1). I have taken a quick look at its “Position Statement” and find that it fits in well with my own understanding of climate processes and drivers and the political distortions made to the science. In April 2008 Ian threw his usual insulting accusations in a comment on an article “University of Calgary Audit Exposes Friends of Science Wrongdoing” by Richard Littlemore on Desmogblog. Ian lashed into someone with QUOTE: Troll, you are one sick person .. scum bag like you. You are one disgusting person. I don’t know why you are allowed to post such lies and distortion on this site. UNQUOTE. That particular exchange related to funding received by parties on each side of the climate change debate, and Ian’s ire was directed at “Troll” for daring to suggest that people like James Hansen might be making money through their climate change activities and presuming to ask QUOTE: AGW not going the way it should perhaps? UNQUOTE. Several familiar names crop up in that article and in the comments, like Hansen, McKitrick, etc.

    As well as being vicious Ian also appears to be a conspiracy theorist. In a more recent comment on a Deep Climate article “In the beginning: Friends of Science, Talisman Energy and the de Freitas brothers “(Note 2) relating to FoS receiving funds from oil companies he complains that QUOTE: Friends of Science also appear to have a mole in the Calgary Herald who forwards copies of e-mails sent as “Letters to the Editor” thus leaking private information such as e-mail address UNQUOTE. Another of his comments shows his “left-of-centre” political inclination.

    Contributor Cam MacKay commented QUOTE: A veritable paucity of publications and citations, wouldn’t you say? UNQUOTE. That could also be used to describe the evidence available in support of Ian’s claims to having been a scientist for 40 years (I can find no such evidence). Cam has his own blog (Note 2) which includes a challenge to DAGWer claims that QUOTE: Alaska and northern Canada are 5-10C warmer than the average for this time of year, so are North Africa and the Mediterranean UNQUOTE. He shows that for QUOTE: Alert Nunavut, (the northernmost temperature station in the center of Canada’s Arctic) UNQUOTE this claim is pure fantasy. He also provides a very good animation of QUOTE: Nasa’s Arctic Sea Ice Increase UNQUOTE from 1979 to 2009. It shows the waxing and waning of the ice in a cyclical manner, with current changes being far from unusual. A picture paints a thousand words. There is more good stuff to look at on that blog – enjoy.

    NOTES:
    1) see http://www.friendsofscience.org/assets/files/documents/FoS%20Position%20Statemen1.pdf
    2) see http://cammackay.com/

    Best regards, Pete Ridley

  288. Chris

    I am not going to argue with you whether your response to Terry Oldberg (March 4, 2010, 10:11 pm) answered his question, but your last sentence caught my eye:

    I can assure you that over a century of atmospheric physics has not been invalidated by amateur thought experiments like those of G&T– chris

    Your “assurance” is very nice, Chris, but it ignores one basic fact: “over a century” of the restrictions of Newtonian physics were invalidated by a young physicist working in the patent office of Bern. These things are rare, I’ll admit, but they happen. So this line of argumentation is invalid, and it is better to stick with debating the science itself.

    Max

    • I warmly and respectfully submit to Chris that the statement “I can assure you that over a century of atmospheric physics has not been invalidated by amateur thought experiments like those of G&T” logically fails for: a) arguing ad hominem and b) appealing to authority.

      Chris:

      My purpose in posting here is to solicit the help of people more erudite than myself in atmospheric physics in my hunt for error, if any, in the Kiehl-Trenberth type of diagram. My level of erudition is slight. I’m a mechanical, electrical and nuclear engineer by training, with a background in model building.

      One might doubt that supporting the effort of an amateur like myself would be worth the time of a professional. However, in a few months’ worth of work, I’ve already uncovered fatal foundational errors in atmospheric physics; I’d be happy to discuss these with you or any of your colleagues at another time. The ease with which I’ve identified these errors leads me to believe that atmospheric physics lacks an adequate system for quality assurance.

      If you or someone else who is more erudite in atmospheric physics than me were to help me out on the issue of the validity of the Kiehl-Trenberth type of diagram, I’d appreciate it. It seems to me that there is a foundational error in this type of diagram. I’ve set out my reasoning above. What, if anything, is wrong with this reasoning?

      I’ve uncovered evidence supporting the contention that error in the Kiehl-Trenberth type of diagram is obscured by replacement in atmospheric physics of the precise terminology of thermodynamics and heat transfer by an ambiguous terminology. The ambiguity produces the phenomenon in which disagreements, e.g., whether or not there is a second law violation, cannot be resolved by reference to facts and the principles of physics.

      In thermodynamics, a distinction is made between heat, work and internal energy. In atmospheric physics, I observe, these distinctions can be wiped out by the use of the ambiguous term “energy.” Sometimes, atmospheric physicists speak of the “net heat flow” in discussions of possible violations of the second law but the “net heat flow” is a confusing neologism. In standard technical English, it is not the “net heat” that may not flow from cold to hot matter under the second law but rather the “heat.” The notion of the “net heat” seems to reflect confusion of the separate concepts of “heat” and “radiation intensity.” Atmospheric physicists seem to incorrectly equate radiation intensity to heat, producing the need to say it is the “net heat” that may not flow from cold to hot matter.

      Confusion produced by ambiguous or non-standard use of technical English in atmospheric physics is evident in alternate labellings of concepts by people who work for UCAR, including Kiehl and Trenberth. In a Kiehl-Trenberth diagram that is posted at UCAR’s Web site, the entity that flows through the diagram is identified as “heat.” However, in the diagram that is the topic of this blog, this entity is identified as “energy.” If an “energy flow” is a heat flow, then the Kiehl-Trenberth diagram is invalidated by a violation of the second law of thermodynamics by the “back-radiation.” If an “energy flow” is a radiation intensity then, according to Gerlich & Tscheuschner, the Kiehl-Trenberth diagram is invalidated by its presumption that there is a conservation law for radiation intensities when there is no such law. The people at UCAR are obviously confused about how to label the entity that flows through a Kiehl-Trenberth diagram, and they invented this concept! Their confusion provides a portion of the basis for my suspicion that there is an error here.

      • I interpret Kiehl and Trenberth’s diagrams as energy flux diagrams – the movement of energy per unit area per unit time from one entity to another. As such I think that they are valid and useful.

        We can quibble about accuracy (If Miskolczi is correct then the K&T numbers are wrong). But from a thermodynamic point of view I think they work fine.

        In the case of the “back radiation” I think this is a physical reality – if you go outside with a spectroscope, you will measure a flux from atmosphere to ground (and an equal flux sideways!).

        The NET transfer of energy per unit area per second from the surface to the atmosphere is:

        Emitted Radiation – Back Radiation – Emitted Radiation Escaping to Space + Latent Heat carried by evaporated/transpired Water Vapour + Direct Conduction from the Surface to the Air.

        The energy collects in the atmosphere as follows (a rough tally is sketched after this list):
        A. Conduction(17) – at the bottom of the column.
        B. NET Radiation(23) – mostly in the lowest 500m of the column
        C. Latent Heat(80) – when the water vapour condenses, somewhere in the clouds. Note that Latent Heat flux is TWICE as strong as the other two transfer mechanisms COMBINED.
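
        A minimal bookkeeping sketch in Python of the tally above (these are the round W/m^2 values quoted in this comment, not the exact published Trenberth et al. 2009 figures):

            conduction = 17.0     # A. sensible heat at the bottom of the column
            net_radiation = 23.0  # B. emitted minus back-radiation minus window
            latent_heat = 80.0    # C. released where water vapour condenses

            total = conduction + net_radiation + latent_heat
            print(total)  # 120 W/m^2 net surface -> atmosphere
            print(latent_heat / (conduction + net_radiation))  # 2.0: twice the other two combined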

        A couple of observations about the surface:
        1. The sensitivity of the surface to flux changes in either Solar Irradiation or Back radiation is between 0.95 to 0.15 DegC/W/m^2. This is between one third and one half of the sensitivity of the upper atmosphere.
        2. As the temperature of the surface increases, due to whatever cause, the relative strength of the Latent Heat Flux increases compared to the NET Radiation (provided conduction remains constant, which seems reasonable).

        At the higher levels, the energy fluxes into the atmosphere are balanced by radiation of IR to space from two distinct main levels:
        1. A “fuzzy” water vapour horizon (fuzzy as water absorbs radiation across a wide range of frequencies, and because it is so variable in the atmosphere), but roughly at the cloud tops and above. This accounts for about 2/3.
        2. A well defined CO2 horizon. Standard CO2 absorption tables suggest to me that this level is in the tropopause and above (not true for the far wings, but these carry almost zero flux; for wavenumbers 600 through to 750, over 50% of emissions originate above the tropopause). This accounts for about 20% (the rest is from the other gases).

      • I agree with you, Terry. Have you had a look at the work of Bo Nordell on thermal pollution, which does have a basis in heat energy?
        http://www.ltu.se/shb/2.1492/1.5035?l=en

        There are only two ways in which a climate scientist can convert a radiative flux into a temperature change (a small numeric sketch follows this list):
        1. The formula to convert radiative flux (RF) to surface temperature change (ΔTs) is: ΔTs = λRF, where λ is the climate sensitivity parameter, with estimates that vary from near zero through to IPCC’s estimates and higher.
        2. Put it in a climate computer model – an arcane beast made of obscure code (usually Fortran) attempting to model the behaviour of the earth, but with so many assumptions and inaccuracies that it has no chance of predicting anything.
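
        A sketch of approach 1 in Python (the λ values are illustrative assumptions spanning roughly “near zero” to IPCC-like, and 3.7 W/m^2 is the commonly quoted forcing for doubled CO2):

            forcing = 3.7  # W/m^2, oft-quoted value for a CO2 doubling

            for lam in (0.1, 0.3, 0.8, 1.2):  # K per (W/m^2), illustrative
                # Delta_Ts = lambda * RF
                print(lam, round(lam * forcing, 1))  # e.g. 0.8 -> ~3.0 K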

        The climate scientist approach to energy and temperature is most unfulfilling, and yes, conservation of energy flux is pure nonscience, but you won’t be able to convince a climate scientist of that.

        Climate scientists and their computer models as referenced by IPCC think low clouds cause surface warming! Thank goodness this has been disproven by the latest cloud computer models.

    • Your “assurance” is very nice, Chris, but it ignores one basic fact: “over a century” of the restrictions of Newtonian physics were invalidated by a young physicist working in the patent office of Bern. These things are rare, I’ll admit, but they happen. So this line of argumentation is invalid, and it is better to stick with debating the science itself.

      Einstein was a Genius. But Relativity is, under many conditions, a small correction to Newtonian physics, often so small it can be set aside.

      Chris was sticking with the science. G&T reads like it was written about some alternative universe, or perhaps just written by people who don’t know what they’re talking about – and it’s OBVIOUS.

      • Patrick 027

        You ask:

        Are Petty, Schmidt, Pierrehumbert, Kiehl, Trenberth, Fasullo, and many others … are they amateurs?

        This is the wrong question, Patrick.

        I could ask the (equally irrelevant) question, “are Lindzen, Christy, Spencer, Carter, Chylek, de Freitas, Gray and many others…are they amateurs?”

        The “consensus” scientists defending the prevailing AGW paradigm are certainly not “amateurs”, but they might be wrong, as the prevailing “consensus” has been several times in the past.

        I do not necessarily agree with everything that G+T have claimed, but it is up to the “consensus” scientists to unequivocally refute G+T with scientific facts. I have not seen this so far, but if you have, please provide links (no RealClimate blurbs, please).

        Max

        PS To claim that Einstein’s theory of relativity was only a “small correction to Newtonian physics, often so small it can be set aside” is absurd, Patrick, and (I think) you know it.

        Response: No. Any quack can put something on the internet claiming the moon is made of green cheese. Even if it somehow bypasses the peer-review barrier, it is not anyone’s job to refute them. It is self-evidently garbage. As it happens, there are rebuttals available, including Arthur Smith’s (2008), and one just accepted for publication in IJMPB by Eli Rabett, myself, Arthur Smith, and several others. In normal scientific discussions, such a rebuttal would not even be necessary, although we decided to write one just because of the importance of the topic outside of the scientific community and the potential to confuse the laymen. As a proxy for its importance, see how much G&T is discussed at scientific conferences, how much it is cited in the primary literature, etc.

        If you cannot understand how bad G&T is, then no offense, but you really don’t deserve much of an opinion on whether experts are right or wrong. This is like pre-calculus students reading that 2+2=5, believing that this now generates a huge “controversy” in the math community that deserves peer-reviewed rebuttal, and then proceeding to argue with advanced mathematicians about partial differential equations– chris

  289. Pete Ridley

    I see that in response to Terry Oldberg’s comment on 4th March @ 1.25 and 10:11 pm both Patrick 027 and Chris talk about radiated energy and heat as though they are the same thing. This is not my understanding so perhaps they’d be good enough to clarify.

    In the 6th Jan 2009 version 4 of their paper Gerlich and Tscheuschner say QUOTE: By showing that
    (a) there are no common physical laws between the warming phenomenon in glass houses and the fictitious atmospheric green-house effects,
    (b) there are no calculations to determine an average surface temperature of a planet,
    (c) the frequently mentioned difference of 33 °C is a meaningless number calculated wrongly,
    (d) the formulas of cavity radiation are used inappropriately,
    (e) the assumption of a radiative balance is unphysical,
    (f) thermal conductivity and friction must not be set to zero, the atmospheric greenhouse conjecture is falsified. UNQUOTE.

    The G&T paper to which Terry refers attracted both criticism (in 2008 – Note 2) and support (in 2009 – Note 3), but nothing seems to have been resolved as far as I can see. Their argument seems yet to be refuted convincingly. Gerlich has several published works to his name (Note 1), as does Tscheuschner, and both appear to have sound pedigrees, whereas I have come across nothing by Chris or Patrick 027 (other than these blogs). For Chris to say QUOTE: I can assure you .. UNQUOTE and for Patrick to offer his analysis appears to me not to be enough to convince a discerning person that they are both correct and G&T are wrong.

    NOTES:
    1) see http://ruby.fgcu.edu/courses/twimberley/EnviroPhilo/Critical.html
    2) see http://fr.arxiv.org/abs/0802.4324 and http://rabett.blogspot.com/2009/04/die-fachbegutachtung-below-is-elis.html
    3) see http://arxiv.org/ftp/arxiv/papers/0904/0904.2767.pdf

    Best regards, Pete Ridley.

    • Patrick 027

      But G&T never succeeded in showing any of those things, except maybe those we already knew: models and theory don’t assume purely radiative equilibrium (it is understood the climate tends to approach radiative-convective equilibrium, with convection being important in the global average for the troposphere), and they certainly don’t set friction to zero. It is well understood that heat transport by conduction through the air is of negligible importance except within the surface material, in a thin layer of air in contact with the surface, and with regard to cloud microphysics.

      And are Chris and I the only critics? There are plenty of successful qualified scientists who agree. Are Petty, Schmidt, Pierrehumbert, Kiehl, Trenberth, Fasullo, and many others … are they amateurs?

      G&T is really just a bunch of cr_p. It doesn’t take a lot of number crunching to see it either, it’s right there on the upper surface, glaringly obvious.

      If you two are going to go down this rabbit hole, you might as well say 2+2 = -30 and Robin Hood was a time-travelling CIA agent.

      • Even a bunch of IPCC scientists disagree with the IPCC.
        http://environmentalresearchweb.org/cws/article/opinion/35820

        The beauty of science is that everybody has the right to be wrong and the facts will speak for themselves.

        Sadly, it will take many years for scientists to resolve all the measurement and modelling problems that plague climate science.

        I think the politicians actually got it right in Copenhagen. By committing to not let global temperatures rise by more than 2 degrees C/K, the world has an answer that keeps the problem at bay while the scientists take years to figure out what is really happening.

      • Patrick 027:

        I wonder if you and/or Chris would be kind enough to respond to Pete Ridley when he says:

        “I see that in response to Terry Oldberg’s comment on 4th March @ 1.25 and 10:11 pm both Patrick 027 and Chris talk about radiated energy and heat as though they are the same thing. This is not my understanding so perhaps they’d be good enough to clarify. “

      • Patrick 027:

        It sounds as though you’re stating that G&T are wrong when they say “the assumption of a radiative balance is unphysical…” Would you be so kind as to share with us the basis for your conclusion?

      • Patrick027

        I have told you that I do not support the conclusions of G+T.

        I also do not doubt that many highly qualified individuals reject the G+T conclusions.

        I have just not seen an official refutation of their conclusions, as I pointed out to Chris.

        This all has nothing to do with “Robin Hood” or any other silly analogy, but simply writing:

        G&T is really just a bunch of cr_p. It doesn’t take a lot of number crunching to see it either, it’s right there on the upper surface, glaringly obvious.

        is no answer, Patrick. You’ve got to do a bit better than that to be credible.

        Max

  290. Gulp. Small correction to my previous post.

    “1. The sensitivity of the surface to flux changes in either Solar Irradiation or Back radiation is between 0.95 to 0.15 DegC/W/m^2. This is between one third and one half of the sensitivity of the upper atmosphere.

    Should read:

    1. The sensitivity of the surface to flux changes in either Solar Irradiation or Back radiation is between 0.095 to 0.15 DegC/W/m^2. This is between one third and one half of the sensitivity of the upper atmosphere.
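
    For comparison (a back-of-envelope check of my own, not from the post above): the no-feedback Stefan-Boltzmann derivative dT/dF = 1/(4σT^3) gives somewhat larger values than the corrected 0.095–0.15 range:

        SIGMA = 5.670e-8  # W/(m^2 K^4)

        def sb_sensitivity(temp_k):
            """dT/dF for a blackbody: derivative of T w.r.t. emitted flux."""
            return 1.0 / (4.0 * SIGMA * temp_k**3)

        print(round(sb_sensitivity(288.0), 3))  # ~0.185 K per W/m^2, surface
        print(round(sb_sensitivity(255.0), 3))  # ~0.266 K per W/m^2, emission level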

  291. Patrick 027:
    I commented earlier that, amongst the many adverse commentaries on Santer et al. 2008, there was this very interesting cluster:
    Link 1) http://climateaudit.org/2009/05/28/santer-et-al-2008-worse-than-we-thought/

    They really are rather educational, and I’m surprised that you have not commented in any way!
    I’m splitting up the links because the spam filter seems to be objecting:

  292. Blouis79 March 6, 2010@2:39 am:

    Thank you very much for the input!

    You’ve alluded to the idea in climatology of “radiative forcing.” My error-seeking suspicions are raised by this concept. Could there be, in this concept, the tacit assumption of the existence of a balance of radiative fluxes when this “balance” is based upon a non-existent conservation principle? I’d appreciate your commentary on this issue.

  293. Blouis79

    You wrote:

    I think the politicians actually got it right in Copenhagen. By committing to not let global temperatures rise by more than 2 degrees C/K, the world has an answer that keeps the problem at bay while the scientists take years to figure out what is really happening.

    You hit the nail on the head.

    The IPCC temperature projections, which exceed a 2°C rise above today’s temperature by year 2100, have several basic problems.

    First, the “scenarios” project different temperature increases (°C increase relative to 1980-1999) with different rates of increase in atmospheric CO2 (CAGR, % per year):

    Temp. – CO2 – “Scenario”
    1.8°C – 0.48% – B1
    2.4°C – 0.65% – A1T
    2.4°C – 0.80% – B2
    2.8°C – 0.86% – A1B
    3.4°C – 1.29% – A2
    4.0°C – 1.52% – A1FI

    These rates of CO2 increase result in the following projected CO2 concentrations (ppmv) by year 2100:

    B1 – 600
    A1T – 700
    B2 – 800
    A1B – 845
    A2 – 1250
    A1FI – 1540

    Based on optimistic forecasts of total reserves, all the fossil fuels on our planet contain only enough carbon to raise atmospheric CO2 levels to just under 1000 ppmv, so “scenarios” A2 and A1FI can be discarded as unrealistic from the start.

    The actual CAGR of CO2 increase over the past 50 years has been 0.42%, and over the past 20 years, 0.49%. So the 0.48% projected increase for scenario B1 seems reasonable, the 0.65% increase for A1T stretches the imagination a bit, and the 0.80% and 0.86% for scenarios B2 and A1B are not reasonable, so they can also be discarded.
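
    The compound-growth arithmetic behind those CAGR figures is easy to reproduce in Python (the endpoint concentrations below are approximate Mauna Loa values, my assumption: ~316 ppmv in 1959 and ~387 ppmv in 2009):

        def cagr(c_start, c_end, years):
            """Compound annual growth rate between two concentrations."""
            return (c_end / c_start) ** (1.0 / years) - 1.0

        print(round(100 * cagr(316.0, 387.0, 50), 2))  # ~0.41% per year

        # Projecting forward at scenario B1's quoted 0.48% per year:
        c_2100 = 387.0 * 1.0048 ** (2100 - 2009)
        print(round(c_2100))  # ~598 ppmv, close to the 600 listed for B1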

    This leaves us with scenarios B1 and A1T, with projected temperature increases (above 1980-1999 levels) of 1.8° and 2.4°C, respectively.

    The temperature of the last five years (2005 through 2009) averaged 0.3°C higher than the IPCC baseline average (1980-1999), so we are talking about IPCC projected temperature increases beyond today of:
    1.5°C – B1 scenario
    2.1°C – A1T scenario

    So, even using the questionable IPCC 2xCO2 climate sensitivity based on strong net positive feedbacks and all the other exaggerations, we barely arrive at the 2°C increase from today to year 2100 without cutting CO2 emission rates. We therefore do not need to do any “mitigation” or implement any carbon caps or taxes, and the politicians are taking zero risk with their 2°C “commitment”.

    Besides, they’ll all be long gone by year 2100.

    Max

  294. Chris

    Don’t get so worked up.

    I have said from the start that I do not support the conclusions of G+T, simply that they have not been conclusively refuted.

    The Smith paper has a few basic problems and the second rebuttal is still “work in progress”.

    It is the ARGUMENT that is being used that is FALSE, namely that a “whole bunch” of highly qualified individuals accept the dangerous AGW premise and that G+T are therefore wrong.

    This would be just as silly as if I were to say that a “whole bunch” of highly qualified individuals DO NOT accept the dangerous AGW premise and that G+T are therefore RIGHT.

    Both lines of argumentation are silly, Chris, as I am sure you would agree.

    Max

  295. Chris

    Further to my earlier post, I have gone through the rebuttal by A.P. Smith of the G+T paper.

    As I said earlier, Chris, I am not going to claim that the G+T paper is “right” (and that the greenhouse theory is therefore false).

    However, I did not see a clear refutation of it in the Smith rebuttal you cited, but rather a succinct explanation of how the greenhouse theory works, including a calculation method for estimating its impact on a hypothetical “global average temperature”.

    Smith states that the “average temperature is mathematically constrained to be less than the fourth root of the average fourth power of the temperature”.
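
    That constraint (a power-mean inequality) is easy to check numerically; a small Python sketch, using an arbitrary made-up temperature field:

        temps = [210.0, 250.0, 288.0, 300.0, 310.0]  # sample field, kelvin

        mean_t = sum(temps) / len(temps)
        root_mean_t4 = (sum(t**4 for t in temps) / len(temps)) ** 0.25

        print(mean_t, root_mean_t4)  # ~271.6 vs ~278.4
        assert mean_t <= root_mean_t4  # holds for any temperature field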

    Smith concludes that “the only way the fourth power of the surface temperature can exceed this limit” (i.e. a “value determined by the incoming stellar flux and the relative reflectivity and emissivity parameters”) is for the planet “to be covered by an atmosphere that is at least partly opaque to infrared radiation. This is the atmospheric greenhouse effect.”

    G+T state that “the popular climatologic ‘radiation balance’ diagrams describing quasi-one-dimensional situations…do not properly represent the mathematical and physical fundamentals”. G+T also state that the Stefan-Boltzmann equation used in calculating “heat transfer for a radiation-exposed body” is “invalid for real objects”.

    G+T apparently question the method of calculation used as a “standard in global climatology” (p.63), in other words the basis for the conclusion reached by Smith.

    As I understand it, this has to do with whether the “fourth root is drawn before averaging” rather than afterward.

    Maybe this is just a moot point, but it seems strange to me that Smith did not specifically refute this statement in G+T.

    Another objection I saw in G+T (p.66), which Smith also did not refute directly, is to the concept of a “global average temperature”. G+T quote another study (Essex et al.) that says that: “there is no physically meaningful global temperature for the Earth in the context of the issue of global warming” and “a given temperature field can be interpreted as both ‘warming’ and ‘cooling’ simultaneously, making the concept of warming in the context of the issue of global warming physically ill-posed.”

    The point G+T make here (whether it is valid or not) is that one cannot make simplified calculations based on a hypothetical “global average temperature”, since the influence of local factors is far too great. Again, I did not see a refutation of this statement by Smith.

    Smith presents the “proof” that GH warming exists on p.8: “The only way for a planet to be radiatively warmer than the incoming sunlight allows [minus what is reflected from its surface albedo] is for some of the thermal radiation to be blocked from leaving.”

    This is not, in the true sense, a scientific proof that GH warming exists; it is, at best, a “proof by default”, which could be restated as follows:

    “We (think we) know how much energy the sun is bringing in, we (think we) know how much is reflected back into space, and since a calculation shows that something else is going on that we cannot explain otherwise, which results in temperatures that are higher than those we calculated based on our assumptions, we conclude that it is greenhouse warming.”

    A final point I saw was that G+T referred to a paper by Schack, which referred to CO2 as “an absorbent medium”, but not in the context that atmospheric CO2 would radiate heat back to a warmer ground, causing surface warming.

    Again, I am not taking either side on this issue, but it appears to me that the Smith paper was a rebuttal but not a refutation of the specific points raised by G+T.

    Regards,

    Max

  296. PART I:

    Max – “…is no answer, Patrick. You’ve got to do a bit better than that to be credible.”

    1.
    Well, I have gone into this in mind-numbing detail elsewhere, and I get tired of repeating myself repeating repeating myself myself.
    Stay tuned (I might link to a Realclimate page, but specifically MY OWN COMMENTS there (G&T wasn’t the original topic of the post), among others), but for a quicker explanation, see below.

    PS my point about relativity was true; you may have misunderstood my intent: I was not talking about galaxies and the speed of light, I was talking about driving a car (excluding some modern technical support such as GPS), timing a race, building a house, that kind of thing.

    It’s similar to the curvature of the Earth: if you are looking at a small area, a map on a piece of paper works fine because a flat approximation (for the geopotential surfaces if not the actual topography) works.

    The overall point being that sometimes a dramatic new finding, important though it may be, mathematically works out as a small adjustment to the prior accepted theory in many cases (extremes may have been necessary to identify the difference).

    This doesn’t really apply to G&T, however, since they are actually producing a hypothesis that has been preemptively falsified.

    PS See Isaac Asimov: “The Relativity of Wrong”

    Terry Oldberg –

    2.
    However, in a few months’ worth of work, I’ve already uncovered fatal foundational errors in atmospheric physics

    Just so you understand, it is a bit annoying and off-putting for a person to state this when it is obviously incorrect, even if it is understandably not so obvious to the person making the statement. You yourself seem to be aware of the possibility of a misunderstanding on your part, according to other parts of your comments.

    3.
    In thermodynamics, a distinction is made between heat, work and internal energy. In atmospheric physics, I observe, these distinctions can be wiped out by the use of the ambiguous term “energy.”

    A rose by any other name would smell as sweet and be made of carbon compounds and some water, etc.

    Most people use the term ‘heat’ and its derivatives (such as ‘heating’ – not mathematical derivatives) colloquially, in a physically imprecise manner (and sometimes having almost nothing to do with enthalpy or internal energy).

    This is also true among some groups of scientists. This does not mean that their equations are wrong. It’s a labeling issue, that’s all. The math is consistent. Mixups do not result (except such as now, when people think climate theory violates laws of thermodynamics).

    It is especially understandable that diagrams intended for the general public would use imprecise language, not for the purpose of confusion but for the purpose of brevity. I do readily admit that this can go too far and lead to confusion, such as the notion that climate models assume no convection (false) and that the atmosphere emits equally to the surface and to space (false, because of temperature variations within the atmosphere, so that different brightness temperatures are apparent from different places).

    If heat is only ever the net flow of (non-work) energy (with entropy = energy/temperature), then so be it, and in that case, the energy flows such as shown in the Kiehl et al. and other such diagrams are not individually fluxes of heat, but the fluxes in opposite directions have differences that are fluxes of heat.

    But because of how radiation works, it is convenient and useful to show the separate radiant fluxes that add to form fluxes of heat.

    4.
    If an “energy flow” is a radiation intensity then, according to Gerlich & Tscheuschner, the Kiehl-Trenberth diagram is invalidated by its presumption that there is a conservation law for radiation intensities when there is no such law. The people at UCAR are obviously confused about how to label the entity that flows through a Kiehl-Trenberth diagram and they invented this concept! Their confusion provides a portion of the basis for my suspicion that there is an error here.

    Again, there is no confusion among the scientists involved (I’d expect they are, as I am, aware of the various colloquial terms, and know what something is from the context). I know exactly what the fluxes are in such a diagram as in Kiehl et al., whether they use ‘heat’ (colloquially) or ‘energy’. The mathematics is unchanged by such labelling issues.

    There is a conservation law here. Actually there are two – well one isn’t a conservation law but it does require sources and sinks to account for variation from conservation:

    Conservation of radiant intensity except for emission, absorption, scattering (includes diffraction), reflection, refraction – basically a generalized version of Schwarzschild’s equation:

    In the absence of gravitational redshift and lensing, and in the absence of macroscopic diffraction (though I don’t know the details of how, I know this equation can be amended to fit more complex conditions), and setting aside the issue of Raman scattering (partial absorption and a shift in frequency):

    Where

    I(P,Q)

    is the spectral (monochromatic) radiant intensity (a flux per unit area per unit solid angle per unit frequency) at point P in the direction Q along a line of sight (which follows any bends that would occur from refraction or reflection, etc.), and at a given polarization (if photons with the same polarization change orientation along the path, then the polarization for which I is considered at one point must correspond as such with the polarization at other points along the path), per unit of the ‘spectrum’ of polarizations, then, the change in I over a small length dx along that path is:

    For constant index of refraction,

    dI = dE – dA – dS1 + dS2

    when there is no reflection, and across reflecting interfaces,

    change in I = R

    where
    dE/dx is the emission per unit length
    dA/dx is the absorption per unit length
    dS1/dx is the scattering out of the line of sight per unit length (including diffraction by unresolved objects).
    dS2/dx is the scattering into the line of sight per unit length from other directions (including diffraction by unresolved objects).****(see below)

    R is the reflection out of the line of sight. For 100 % reflection, this can be zero if the line of sight follows the reflected ray (there is some choice of which branch to follow when there is partial reflection; this does not mean the math is not specified; the choice of branch determines the value of R).

    Reflection often occurs with scattering (diffuse reflection). In that case, there isn’t a single branch or pair of branches that the path takes, but a range of paths. In that case it is easiest to stick with defining the path along which the equation applies as the forward direction (bent by refraction) going across an interface, even if I goes to zero.

    When the index of refraction varies, dI will be nonzero even if the other terms are zero, but at least for the simple case in which the index of refraction at a point X is independent of Q, where n is the real component of the index of refraction:

    Where I# = I/(n^2)

    Then d(I#) = (dE – dA – dS1 + dS2)/n^2

    change in I# across an interface is R/n^2.

    For dE, dA, dS1, dS2, and R all equal to zero, I# is conserved (so I remains proportional to n^2).

    Note that total internal reflection is implied by this (I decreases as *rays* (here I am using that term to describe paths taken by photons/radiant energy) spread over a larger angle (this is entirely different from rays maintaining constant directions spreading out from a point due to differences between directions – in that case, I can be conserved while the flux per unit area decreases); rays spread over a larger solid angle as n decreases, but the total solid angle of one hemisphere is always 2*pi, so some of the rays originally within a hemisphere of directions must exit that hemisphere. This can happen via gradual bending if n decreases gradually. There is a cone of acceptance, which contains the photon paths at higher n that spread into the whole hemisphere at a lower n if reflection within that cone is zero. If the interface has a texture that scatters rays or bends rays differently at different locations, the concept of the cone of acceptance applies at scales larger than the texture as a grouping of solid angles that is more complexly arranged, and/or is diffused, so that some paths have intermediate probability of photons escaping or being trapped, but, accounting for rays that temporarily escape but, due to the texture, find their way back into the high n material, the total effect on a flux per unit area (an area that is the surface area if the texture were removed) of isotropic radiation (I constant over all directions toward the interface from either side, allowing for a difference between sides) must be the same (if that were not true, a perpetual motion machine could be built – see below).

    Notice that I goes higher (in the absence of absorption, scattering, and reflection) crossing from lower n to higher n as rays are packed into a smaller solid angle.

    You might ask if this is unphysical considering that reflection occurs at an interface between different n materials. Yes, this is true, but the reflection (outside the cone of acceptance for going from high n to low n) approaches zero if the change in n is made gradual on a scale larger than the wavelength, or with fine-scale texturizing that has a similar averaged effect. It is also possible to devise antireflective coatings, which can work perfectly for particular combinations of wavelength and direction, and larger-scale texturization can cause some or most of the reflected rays to reach the interface multiple times, and each time, some fraction (within the cone of acceptance for going from higher to lower n) may be transmitted.

    Note also that n values different from 1 make frequency a more convenient measure of the spectrum than wavelength. When n varies, the value of wavelength and of a consistent unit wavelength (for considering intensity per unit wavelength) change following a path, as a function of n.

    dA = I*acs*dx
    dE = U*ecs*dx
    dS1 = I*scs1*dx
    dS2 = IO*scs2*dx

    where acs, ecs, and scs1 are the absorption cross section, emission cross section, and scattering cross section per unit volume.

    ****(I haven’t studied scattering in enough detail to say much about scs2; it is in effect a scattering cross section per unit volume but acts on incident radiation from other directions; IO is some function of the intensities of radiation in all other directions (including the reverse direction along the same path). However, assuming no violation of the second law of thermodynamics and working backwards, it must be the case that if I is the same in all directions, dS2 = dS1. While subwavelength processes can diverge quite a bit from the characteristics of geometric optics (although conservation of energy, continuity of electric and magnetic fields, etc, still apply), I suspect based on simple visualization that for any pair of directions, the fraction of I from one direction that is scattered into another direction must be the same as the fraction of I coming back from the other direction that is scattered into the same direction, again in the reverse direction.)

    **** (Alternatively, it may be procedurally simpler for dS1 to include scattering ‘back’ into the original direction from the original direction (because scattering distributions are not usually depicted with an infinitesimal hole in them) while, to balance that, IO is a function of intensity over all directions including the direction along which I is being determined.)

    For specular (non-diffuse) reflection, R = rcs*I#.

    (rcs acts as a cross section per unit area at scales that do not resolve the wavelength-scale processes of reflection and refraction at an interface; it is not a per-unit-volume quantity unless dx is applied down to scales near or smaller than wavelengths, etc, at which scales, reflection and refraction across an interface become rather complex processes)

    rcs is the same for radiation going in the opposite direction across an interface, except outside a cone of acceptance. Also, for the branch of reflected radiation, radiation going in the reverse direction must also have the same rcs. If this were not true, it would be possible to build a perpetual motion machine. Presumably analogous behavior applies to diffuse reflection (see above note on scattering). Note that in general, radiation processes, in particular between a pair of points, follow a “you can see me as much as I can see you” rule, which satisfies the second law of thermodynamics (one-way mirrors either use tricks of lighting – it’s easier to see into a lit room than into a dark room – or could conceivably be based on variations of optical properties over polarization and frequency, and variations in lighting accordingly; an intriguing possibility for energy-efficient lighting would be lighting that appears white but emits over a series of intervals with gaps in between, and windows that reflect for one set of intervals and transmit for the other, thus allowing outside light in but keeping all light emitted inside from escaping – it might not be technologically feasible or practical, though, at this point).

    When all emission and absorption is accomplished via processes at local thermodynamic equilibrium (i.e. no fluorescence or phosphorescence, etc.),

    U = Ibb
    dE = Ibb*ecs*dx

    where Ibb is the blackbody radiant intensity for the frequency of radiation and temperature at point P.

    Note that Ibb is also proportional to n^2; thus, total internal reflection and refraction in general do not violate the second law of thermodynamics (if Ibb were not proportional to n^2, it would be possible to build a perpetual motion machine by having a warm blackbody within high n transparent material facing a colder blackbody across a lower n material, and sufficient total internal reflection would cause the radiant flux reaching the colder blackbody from the warmer blackbody to be lower than the radiant flux from the colder blackbody to the warmer blackbody. In order for two isothermal surfaces to emit and absorb equal amounts of radiation from each other across a change in n, the emission must be proportional to n^2 to make up for total internal reflection).

    scs + acs = xcs, the extinction cross section per unit volume (written xcs here to avoid confusion with ecs, the emission cross section defined above).

    In the absence of emission and reflection, and with constant xcs along a path, integrating over x yields the formula for transmission:

    I#(P2) = I#(P1) * exp(-xcs*x)

    where x is measured along the path from P1 to P2 and P1 to P2 is the direction of the radiation.

    xcs*x is the optical thickness of that path length. If xcs varies along a path, it is still true that

    I#(P2) = I#(P1) * exp(-optical thickness from P1 to P2).
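
    A minimal numerical sketch of this in Python (my own illustration, with scattering and reflection set to zero): marching dI = (Ibb – I)*xcs*dx along a path reproduces the exponential transmission law when Ibb = 0, and relaxes I toward Ibb otherwise:

        import math

        def march_intensity(i0, ibb, xcs, length, steps=10000):
            """Integrate dI = (Ibb - I) * xcs * dx along a uniform path."""
            dx = length / steps
            i = i0
            for _ in range(steps):
                i += (ibb - i) * xcs * dx  # emission adds, absorption removes
            return i

        i0, xcs, length = 100.0, 2.0, 1.0   # optical thickness xcs*length = 2
        print(march_intensity(i0, 0.0, xcs, length))   # ~13.5
        print(i0 * math.exp(-xcs * length))            # 13.53..., exact
        print(march_intensity(i0, 50.0, xcs, length))  # ~56.8, pulled toward Ibb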

    Optical thicknesses along successive intervals add linearly; contributions from acs and scs add linearly; and contributions to either acs or scs from different materials or sets of objects that contribute add linearly as well (the same is true of xcs).

    So far as I know, xcs must be the same at a given location for two opposite directions along the same path. This is not necessarily the case for scs and acs separately, for example, if there are particles that are mirrored on one side and blackbodies on the other and there is some alignment. That is not typical of the atmosphere, though.

    But, for two opposing directions along the same path, over the same interval dx, ecs and acs must be the same if all emissions and absorptions occur via processes at local thermodynamic equilibrium (i.e. zero fluorescence and phosphorescence). If this were not the case, there would be violations of the second law of thermodynamics. With ecs in one direction equal to acs in the opposite direction, if the Ibb for that location is equal to the I coming from a direction, then the I emitted from the interval dx in a direction is equal to the I absorbed from that direction. This is because Ibb is the intensity that is in equilibrium with other (non-photon) matter at a given temperature (and n value). If the I from a direction is greater than Ibb, absorption is greater than emission; if I is less than Ibb, emission is greater than absorption. As measured by intensity, radiation with greater brightness temperature than the temperature of a material is able to add more heat to that material than the material loses toward the direction that the radiation came from, and the material will emit more radiant heat toward a direction than it absorbs from that direction if the brightness temperature of the intensity from that direction is lower than the temperature of that material. This is modulated by the optical properties of the material – if its ecs and acs are zero, there is no radiant heat gain or loss.

    It is possible to identify contributing fractions of I coming from different intervals along the path in the direction I is coming from (and different branches if there is partial reflection, and different volumes of space if there is scattering). The distribution can be defined as an emission weighting function; the volume integral of the product of that function with Ibb (a function of local temperature) gives the intensity arriving along the path (we’re assuming local thermodynamic equilibrium in as far as emission is concerned). Because each unit of that distribution has ecs in the forward direction equal to acs in the reverse direction (and because of the properties of scattering and reflection), the emission weighting function is the same as the distribution of absorption of radiation coming back along a path. Which means that the net flux (I forward minus I reverse) between any two pairs of locations, from the contributions to emission and absorption, is always from higher to lower temperature.

    It should be mentioned at this point that the linearities and equalities mentioned (ecs = acs for opposing directions, optical thicknesses add linearly over intervals, etc.) only apply strictly to radiation at one particular frequency, direction (or corresponding weighted-sets of directions as with reflection, scattering, etc.), and where important, polarization. Optical properties (rcs, ecs, acs, scs1 and scs2, etc.) can vary as functions of these variables. Thus, when integrating over polarizations, frequencies, and directions, nonlinearities can result. However, because the net radiant intensity along any path from emission to absorption between a pair of locations is (for local thermodynamic equilibrium) always from higher to lower temperatures, this remains true for the total fluxes over the whole spectrum for all polarizations and all paths between the two points that radiation can take.

  297. Patrick027

    Thanks for your last post (very impressive!).

    You write:

    This doesn’t really apply to G&T, however, since they are actually producing a hypothesis that has been preemptively falsified.

    Can you provide links to studies based on actual physical observations that have preemptively falsified G&T? I have only seen the Smith paper, which does not do this.

    Thanks.

    Max

    PS Let me repeat that I do not necessarily subscribe to the conclusions drawn by G+T, I just have not seen them “preemptively falsified” and wonder why this should be so.

  298. Guys, this and the thread on Alley’s AGU conference are now closed. The comments have gotten too off-topic and rather ridiculous, and are going to be moderated more strictly on the basis of relevancy in the future.
