Some new CMIP6 MSU comparisons

We add some of the CMIP6 models to the updateable MSU comparisons.

After my annual update, I was pointed to some MSU-related diagnostics for many of the CMIP6 models (at least 24 of them) from Po-Chedley et al. (2022), courtesy of Ben Santer. These are slightly different to what we have shown for CMIP5 in that the diagnostic is the tropical corrected-TMT (following Fu et al., 2004), which is a better representation of the mid-troposphere than the classic TMT diagnostic through an adjustment using the lower-stratosphere record (i.e. TMT_corr = 1.1 TMT - 0.1 TLS).
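For concreteness, the Fu et al. (2004) correction is just a fixed weighted combination of the two channels. A minimal sketch with made-up anomaly values (real use would apply this to the actual area-averaged TMT and TLS time series):

```python
import numpy as np

# Hypothetical monthly temperature anomalies (degC) for illustration only
tmt = np.array([0.10, 0.15, 0.22, 0.30])     # mid-troposphere channel (TMT)
tls = np.array([-0.30, -0.35, -0.40, -0.45]) # lower-stratosphere channel (TLS)

# Corrected TMT: up-weight TMT and subtract the (cooling) stratospheric
# contribution, so the result better represents the mid-troposphere
tmt_corr = 1.1 * tmt - 0.1 * tls

print(tmt_corr)
```

Because TLS anomalies are negative (stratospheric cooling), the corrected series warms slightly faster than raw TMT.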


These data, for the historical and SSP3-7.0 scenarios (135 simulations), are for the region 20ºS-20ºN. This allows us to provide an updateable comparison to the equivalent satellite temperature diagnostics from RSS v4, UAH v6 and the new NOAA STAR v5. As with the earlier CMIP6 comparisons, I’ll plot the observational time series against both the full ensemble and the ensemble screened by the transient climate response (TCR), as we recommended in Hausfather et al. (2022), showing both the time series and a trend histogram.
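The "ensemble mean and 95% spread" shown in the figures is straightforward to compute from a stack of simulated time series. A sketch, assuming the anomalies are already in a 2-D array of shape (simulations × years); the synthetic data here are stand-ins, not the real model output:

```python
import numpy as np

rng = np.random.default_rng(0)
n_sims, n_years = 135, 44                  # 135 simulations, 1979-2022
years = np.arange(1979, 1979 + n_years)

# Stand-in for model anomalies: a warming trend plus interannual noise
ens = 0.02 * (years - 1979) + 0.1 * rng.standard_normal((n_sims, n_years))

ens_mean = ens.mean(axis=0)                          # ensemble mean
lo, hi = np.percentile(ens, [2.5, 97.5], axis=0)     # 95% envelope
```

The observational series can then be overplotted against `ens_mean` with the `lo`–`hi` band shaded behind it.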

Graph showing the tropical TMT-corrected observations and model simulations 1979-2022. The models are shown as an ensemble mean and 95% spread for all models, and for a subset of models with climate sensitivity within the IPCC range.
Histogram showing the tropical TMT-corrected trends in observations and model simulations. The subset of models with climate sensitivity within the IPCC range are highlighted in red and contrasted with the CanESM5 ensemble, which has distinctly higher trends.

Two things are clear. First, the 24-model ensemble as a whole is clearly warming faster than the observations, but the histogram shows that this ensemble is heavily skewed by the inclusion of 53 ensemble members from CanESM5/CanESM5-CanOE (green in the histogram), which unfortunately has a very high climate sensitivity (ECS 5.6ºC, TCR 2.7ºC). Second, the TCR-screened ensemble (including only the 15 models with 1.4ºC < TCR < 2.2ºC), shown in red, is closer to the observations in terms of trends, but only 7 simulations (from 6 models) out of 53 have trends within the uncertainties of the observations.
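The TCR screening and the trend diagnostic can both be sketched in a few lines. The TCR values below (other than CanESM5's, which is given above) and the model names are hypothetical placeholders, and the trend is an ordinary least-squares fit expressed per decade:

```python
import numpy as np

# Hypothetical TCR values (degC); CanESM5's 2.7 degC is from the text,
# the others are placeholders for illustration
tcr = {"CanESM5": 2.7, "ModelA": 1.8, "ModelB": 2.1, "ModelC": 2.4}

# Screen as recommended in Hausfather et al. (2022): keep 1.4 < TCR < 2.2
screened = [m for m, v in tcr.items() if 1.4 < v < 2.2]

def trend_per_decade(years, anomalies):
    """Least-squares linear trend, converted from degC/yr to degC/decade."""
    slope = np.polyfit(years, anomalies, 1)[0]
    return 10 * slope

years = np.arange(1979, 2023)
series = 0.025 * (years - 1979)   # idealized 0.25 degC/decade warming

print(screened)                                    # ['ModelA', 'ModelB']
print(round(trend_per_decade(years, series), 2))   # 0.25
```

Doing this per simulation, rather than per model, is what produces the histogram above, which is why a single high-sensitivity model with 53 members can dominate the distribution.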

The above selection of CMIP6 models does not include the range of configurations of the GISS coupled models that we looked at in Casas et al. (2023). Since this is a somewhat differently designed ensemble, I’ll plot that similarly (45 simulations), and note too that these are global means, again for the corrected-TMT product (for the historical and SSP2-4.5 scenarios after 2014). This ensemble samples model structural variability (vertical resolution, model top, interactive composition) and some aspects of forcing uncertainty (notably for aerosols and ozone), as well as the initial-condition (‘weather’) variability we are used to seeing.

Graph showing the global TMT-corrected observations and GISS model simulations 1979-2022. The models are shown as an ensemble mean and 95% spread for all models.

As above, the GISS ensemble diverges slightly from the observations. I’ve also included a line for the AMIP ensemble mean (red) (simulations that use the observed sea surface temperatures as an additional forcing), which shows that the specifics of the interannual variability and observed trend can be matched if the sequence of El Niño and La Niña etc. is matched. For the 1979-2022 trends, the GISS ensemble is a closer match to the observations than the 24-model selection shown above, particularly the GISS-E2.2 simulations, all of which are within the uncertainties of the observational spread.

Histogram showing the global TMT-corrected trends in observations and GISS coupled model simulations.

The point of this exercise is, first, to include CMIP6 in the comparisons. While we know this is a trickier ensemble to work with because of the broad (and unrealistic) spread in climate sensitivity, the point of highlighting the GISS model efforts here too is that we are starting to do a better job of sampling different kinds of uncertainty. The CMIP ensembles are still ‘ensembles of opportunity’, but increasingly we are able to take slices through them to isolate different kinds of sensitivity that are perhaps orthogonal to what has been possible before, and that make a difference to many observational comparisons – not just the MSU records.


  1. S. Po-Chedley, J.T. Fasullo, N. Siler, Z.M. Labe, E.A. Barnes, C.J.W. Bonfils, and B.D. Santer, “Internal variability and forcing influence model–satellite differences in the rate of tropical tropospheric warming”, Proceedings of the National Academy of Sciences, vol. 119, 2022.

  2. Q. Fu, C.M. Johanson, S.G. Warren, and D.J. Seidel, “Contribution of stratospheric cooling to satellite-inferred tropospheric temperature trends”, Nature, vol. 429, pp. 55-58, 2004.

  3. Z. Hausfather, K. Marvel, G.A. Schmidt, J.W. Nielsen-Gammon, and M. Zelinka, “Climate simulations: recognize the ‘hot model’ problem”, Nature, vol. 605, pp. 26-29, 2022.

  4. M.C. Casas, G.A. Schmidt, R.L. Miller, C. Orbe, K. Tsigaridis, L.S. Nazarenko, S.E. Bauer, and D.T. Shindell, “Understanding Model‐Observation Discrepancies in Satellite Retrievals of Atmospheric Temperature Using GISS ModelE”, Journal of Geophysical Research: Atmospheres, vol. 128, 2023.
