Old 06-04-2013, 02:11 PM
Elroch
Default Re: Data snooping and science

Originally Posted by Michael Reach
Elroch, I think you are mistaken. Certainly the wikipedia article doesn't discuss the point. Here's a place that does
"Tuning the climate of a global model"
Note that the paper says that this model didn't need much tuning (still had some, though) because it was based on an earlier model that was carefully tuned to fit the 20th century data. This quote:
"Climate models ability to simulate the 20th century temperature increase with fidelity has become something of a show-stopper as a model unable to reproduce the 20th century would probably not see publication, and as such it has effectively lost its purpose as a model quality measure."

I think it is clear that none of the models are "purely physical" in the sense you mean. There are many models, and they make many choices, and they are constrained ("tuned") by the requirement that they must fit 20th century data. All those choices lead to different predictions and different sensitivities to CO2. I don't understand how one can claim that the choices are minor, when they lead to a range of several degrees C in their predictions (Figure 1 is very striking), as the paper points out. As such, overfitting is a potential issue, and Bayesian statistics should be usable to decide between them afterwards.
Actually, your interpretation is wrong in a crucial way, according to the climate scientists I have consulted. Tuning is not done using any information about temperature trends: it uses only average conditions, in order to make models agree with empirical data that is completely independent of climate change (e.g. weather variations over a year). It is really a way of incorporating empirical knowledge of average short-term behaviour into models via parameters that cannot themselves be measured accurately.

So the tuning you refer to involves no curve fitting to temperature increase at all. Any increase in closeness of fit to historical data is an unsurprising consequence of models becoming increasingly complete and of higher spatial and temporal resolution.
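To make the distinction concrete, here is a minimal toy sketch (not any real GCM, and all numbers are made up for illustration): a single free parameter is tuned so the model reproduces the observed long-term *average* state, and only afterwards is the model run forward under rising CO2. The trend is an output of the tuned model, never an input to the fit.

```python
# Toy illustration only -- a hypothetical one-line "energy balance" model,
# not any actual climate model or tuning procedure.
def model(co2, albedo):
    # Arbitrary toy temperature (deg C) as a function of CO2 (normalised
    # to a baseline of 1.0) and a tunable albedo-like parameter.
    return 30.0 * (1.0 - albedo) + 3.0 * co2

# Step 1: tune albedo so the model matches the observed *mean* climate
# state at baseline CO2. No trend information is used anywhere here.
obs_mean_temp = 14.0   # assumed observed long-term mean temperature
baseline_co2 = 1.0     # baseline CO2 level (normalised)

# Solve model(baseline_co2, albedo) == obs_mean_temp for albedo:
albedo = 1.0 - (obs_mean_temp - 3.0 * baseline_co2) / 30.0

# Step 2: run the tuned model forward under higher CO2. The resulting
# warming is a prediction of the tuned model, not a fitted quantity.
warming = model(2.0, albedo) - model(baseline_co2, albedo)
```

The point of the sketch is the order of operations: the calibration step sees only the time-averaged state, so any agreement (or disagreement) between the model's simulated trend and the historical trend is an independent check, not curve fitting.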

Note that the bottom line of the paper you referred to is that even when they tuned all the parameters that could be tuned, with the aim of creating variation in the predictions, they found less variation than they had thought possible. In consequence, the conclusion that damaging global warming is likely this century under a range of scenarios is made more robust, not less.

[If you consider this as a policy issue for the world, bear in mind that even a significant risk of disastrous consequences would justify quite extreme measures: this is a case of highly asymmetric risk. That is why there is majority agreement that such measures are necessary, and it is potentially catastrophic that there is not unanimity.]