What about the consistent treatment of uncertainties in practice?
Take, for example, the leaked IPCC report (AR5), due to be published in September 2013.

And there are guidance notes for the lead authors.

The AR5 will rely on two metrics for communicating the degree of certainty in key findings:

  • Confidence in the validity of a finding, based on the type, amount, quality, and consistency of evidence (e.g., mechanistic understanding, theory, data, models, expert judgment) and the degree of agreement. Confidence is expressed qualitatively.
  • Quantified measures of uncertainty in a finding expressed probabilistically (based on statistical analysis of observations or model results, or expert judgment).

For the assessment of confidence, a two-dimensional quality space is introduced. One dimension is the level of agreement (within the expert group). The second is the evidence (for the statement, result or forecast). The idea is that the assigned confidence should increase as both the agreement in the group and the evidence for the finding increase. Ok, this seems to make sense. (However, what exactly counts as evidence for a fact?)
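The idea of the two-dimensional quality space can be sketched as a simple lookup. This is an illustrative reading, not the official mapping from the guidance note: I assume three levels per dimension and the five confidence qualifiers commonly used by the IPCC, with confidence increasing along both axes.

```python
# Illustrative sketch (not the official IPCC mapping): confidence as a
# function of agreement and evidence, increasing along both dimensions.
AGREEMENT = ["low", "medium", "high"]        # level of agreement in the group
EVIDENCE = ["limited", "medium", "robust"]   # amount/quality of evidence
CONFIDENCE = ["very low", "low", "medium", "high", "very high"]

def confidence(agreement: str, evidence: str) -> str:
    """Map the two quality dimensions onto a single confidence label."""
    score = AGREEMENT.index(agreement) + EVIDENCE.index(evidence)  # 0..4
    return CONFIDENCE[score]

print(confidence("high", "robust"))   # → "very high" (best corner of the space)
print(confidence("low", "limited"))   # → "very low" (worst corner)
```

Note that this collapses the two dimensions into one score, which is exactly the simplification the guidance note avoids: it deliberately keeps agreement and evidence visible as separate qualifiers.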

Five principles are given for how to treat issues of uncertainty, e.g. “Be aware of a tendency of a group to converge on an expressed view and become overconfident in it.” (A very important factor, from my perspective!)

Also, two principles are given for how to review the available information, e.g. “assess issues of uncertainty and risk to the extent possible.” (Should this not be self-evident?)

Then, a “calibrated language” is introduced for expressing uncertainties: a wording for the likelihood scale is defined, such as “virtually certain” (99–100% probability) or “very likely” (90–100% probability), and so forth.
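The calibrated language can be written down as a lookup table. Only the two terms quoted above are taken directly from the text; the remaining terms and ranges follow the commonly cited IPCC likelihood scale and should be checked against the guidance note itself. Note that the ranges overlap by design (“virtually certain” is a subset of “very likely”), so several terms can apply to the same probability:

```python
# Calibrated likelihood terms and their probability ranges. Only
# "virtually certain" and "very likely" are quoted in the text above;
# the rest follow the commonly cited IPCC scale (assumption).
LIKELIHOOD_SCALE = [
    ("virtually certain",       0.99, 1.00),
    ("extremely likely",        0.95, 1.00),
    ("very likely",             0.90, 1.00),
    ("likely",                  0.66, 1.00),
    ("about as likely as not",  0.33, 0.66),
    ("unlikely",                0.00, 0.33),
    ("very unlikely",           0.00, 0.10),
    ("extremely unlikely",      0.00, 0.05),
    ("exceptionally unlikely",  0.00, 0.01),
]

def likelihood_terms(p: float) -> list[str]:
    """All calibrated terms whose range contains probability p.

    The ranges overlap by design, so several terms can apply at once;
    an author would normally pick the most specific one.
    """
    return [term for term, lo, hi in LIKELIHOOD_SCALE if lo <= p <= hi]

print(likelihood_terms(0.995))
# → ['virtually certain', 'extremely likely', 'very likely', 'likely']
```

The overlap is the interesting design choice: it lets authors stay as vague as their evidence requires, which connects directly to the question below of whether such likelihoods can really be assigned quantitatively.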

My personal conclusion: Ok, this is a good start. However, most of the guiding principles should be a matter of course. Also, I am not sure whether I understand the text correctly: what is the link between the “quantified measures of uncertainty” and the “likelihood scale”? Do the authors of the guideline really think that a likelihood can be assigned quantitatively? It should be clear that this is impossible (since model validation is never absolutely quantifiable!)

From my perspective, the core point is that everything written in the IPCC report is based on some expert agreement. Accordingly, the assignment of likelihoods to scenarios rests on the agreement of the group of scientists who form the Intergovernmental Panel on Climate Change. Hence it is a social process, and the report should be interpreted against its social constructivist background.

Finally, everyone who wants to read the leaked climate report should take a quick look at the short (4-page) guidance note first.