Archives for category: Articles (Scientific)


How embarrassing. It is really a shame. One year of reading books, going to movie theaters, hearing or playing pieces of music, talking to people, inhaling scientific articles, visiting places, staying in a monastery… and I just forget.

Yes, I saw this movie from Woody, that other from Polanski, also Jim Jarmusch… sometimes went to the movies twice a week – simply forgot all… read this fantastic book about “True Professionalism”, some others about “Possibilities”, “Resilient Leadership”… something terrifying by Paul Auster, Bukowski… something great by Ian McEwan – oh yes: “Sweet Tooth”… then this lady Nobel Prize winner with her short stories… read articles by Hannes Leitgeb about “belief revision”… also some other philosopher, Kevin Kelly… heuristics… also the psychologist Gerd Gigerenzer… rediscovered the sex scene “Fuck me” with Willem Dafoe – what a great actor! – read books about accounting, microeconomics, valuation of companies… read books about the art of negotiation… rediscovered Schnittke’s choral “The Master of All Living Things” and the classic Bach piece “Ich steh an Deiner Krippe hier”… forgot for sure more than half of all the masterpieces that I would need to mention here – well… and now, 2014 is ahead! So what?

Stop reading? Refuse to go to the movies? Stop doing anything? – just stay in your bed and sleep. …I wonder what other remedies one could think of.


What about the consistent treatment of uncertainties in practice?
Say, there is the leaked IPCC report (AR5), to be published in September 2013.

And there are guidance notes for the lead authors.

The AR5 will rely on two metrics for communicating the degree of certainty in key findings:

  • Confidence in the validity of a finding, based on the type, amount, quality, and consistency of evidence (e.g., mechanistic understanding, theory, data, models, expert judgment) and on the degree of agreement. Confidence is expressed qualitatively.
  • Quantified measures of uncertainty in a finding, expressed probabilistically (based on statistical analysis of observations or model results, or on expert judgment).

For the assessment of confidence, a two-dimensional quality space is introduced. One dimension is the level of agreement (within the expert group). The second is the evidence (for the statement, result, or forecast). The idea is that the assigned confidence should increase as the agreement in the group and the evidence for the fact increase. Ok, this seems to make sense. (However, what is evidence for a fact?)

Five principles are given for how to treat issues of uncertainty, e.g. “Be aware of a tendency of a group to converge on an expressed view and become overconfident in it.” (A very important factor, from my perspective!)

Also, two principles are given for how to review the available information, e.g. “assess issues of uncertainty and risk to the extent possible.” (Should this not be self-evident?)

Then, a “calibrated language” is introduced for expressing uncertainties: a wording for the likelihood scale is defined, such as “virtually certain” (99–100% probability) or “very likely” (90–100% probability) and so forth.
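As a side note, that calibrated language is essentially a lookup table. A minimal sketch in Python – only “virtually certain” and “very likely” are quoted above; the remaining terms and ranges are my reading of the AR5 guidance note, so treat them as assumptions:

```python
# Calibrated likelihood language, following the AR5 uncertainty guidance note.
# Only "virtually certain" and "very likely" are quoted in the text above;
# the other entries are my reading of the note (assumption).
LIKELIHOOD_SCALE = [
    ("virtually certain",      0.99, 1.00),
    ("very likely",            0.90, 1.00),
    ("likely",                 0.66, 1.00),
    ("about as likely as not", 0.33, 0.66),
    ("unlikely",               0.00, 0.33),
    ("very unlikely",          0.00, 0.10),
    ("exceptionally unlikely", 0.00, 0.01),
]

def calibrated_terms(p):
    """Return every calibrated term whose probability interval contains p."""
    return [term for term, low, high in LIKELIHOOD_SCALE if low <= p <= high]

print(calibrated_terms(0.95))  # ['very likely', 'likely']
```

Note that the intervals nest by design: a statement that is “virtually certain” is, a fortiori, also “very likely” and “likely”.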

My personal conclusion: Ok, this is a good start. However, most of the guiding principles should be a matter of course. Also, I am not sure whether I understand the text correctly: What is the link between the “quantified measures of uncertainty” and the “likelihood scale”? Do the authors of the guideline really think that a likelihood can be assigned quantitatively? It should be clear that this is impossible (since model validation is never absolutely quantifiable!).

From my perspective, the core is that everything written in the IPCC report is based on some expert agreement. Accordingly, the assignment of the likelihood of scenarios is based on the agreement of the group of scientists who form the Intergovernmental Panel on Climate Change. Hence it is a social process, and the report should be interpreted against its social-constructivist background.

Finally, everyone who wants to read the leaked climate report should have a quick look at the short (4-page) guidance note.


Better think twice about established theories, especially in economics. For the gap between theory and reality, check out the paper Prospect Theory: An Analysis of Decision under Risk (1979) by Nobel Prize winner Daniel Kahneman and Amos Tversky.

How do people actually make decisions? In brief, Kahneman & Tversky test the then-established theory of expected utility against empirically obtained data – and the theory fails. For an illustration of their work, consider the following two options:

  1. 50% chance to win 1,000 Euro and 50% chance to win nothing
  2. 450 Euro for sure

What would you choose?

Note that the expected value is larger for option 1 than for option 2. (For each option, multiply each amount that you can win by the chance that you win it and sum over all amounts. This is the expected value. Check that the chances total 100% per option.) However, the human mind seems to decide differently: most people choose the sure 450 Euro.
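The little computation in the parentheses above can be written down in a few lines (the encoding of options as amount/chance pairs is my own):

```python
# Each option is a list of (amount, chance) pairs; the chances must sum to 100%.
option_1 = [(1000, 0.5), (0, 0.5)]  # 50% chance of 1,000 Euro, 50% of nothing
option_2 = [(450, 1.0)]             # 450 Euro for sure

def expected_value(option):
    """Sum of amount * chance over all outcomes of an option."""
    assert abs(sum(chance for _, chance in option) - 1.0) < 1e-9
    return sum(amount * chance for amount, chance in option)

print(expected_value(option_1))  # 500.0
print(expected_value(option_2))  # 450.0
```

So a strict expected-value maximiser would pick option 1 – which is not what most people do.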

Kahneman & Tversky give many more entertaining and illustrative examples (partly based on the work of the economist Maurice Allais from 1953). For example:

Consider PETER, who wears a red T-shirt. He won some money in the lottery and wants to play a decision game with you. He asks you to choose option 1 or 2:

  1. 2,500 with 33% and 2,400 with 66% and 0 with 1%
  2. 2,400 with certainty

Consider ANNA, with a short green skirt and long blond hair. She also has some money left and asks you to choose option 1 or 2:

  1. 2,500 with 33% and 0 with 67%
  2. 2,400 with 34% and 0 with 66%

Apparently, in PETER’s setting, most people choose option 2. In ANNA’s set-up, most people choose option 1. Where is the problem?

(Forget about the short skirt.) Let’s focus on the numbers: In Peter’s case, people value the certain win of 2,400 more highly than 2,500 with 33% plus 2,400 with 66%. Utility theory would postulate that you can write this as the following inequality (with multiplication “*” and fixed “utility assignments” u(2400) and u(2500)):

100*u(2400) > 33*u(2500) + 66*u(2400)

Now, utility theory would postulate that you can treat this with the usual algebraic operations and subtract 66*u(2400) on both sides, which yields:

34*u(2400) > 33*u(2500).

Ok. Hence, whether or not this makes sense depends on the values of u(2400) and u(2500).

Now, write down the analogous inequality for Anna’s question. Most people’s preference for option 1 yields (dropping the worthless u(0) terms):

33*u(2500) > 34*u(2400)

This is just the opposite. Apparently, there is a problem with the theory: a mismatch between the mathematical concepts used here and the real world.
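To make the contradiction explicit: Peter’s choice demands 34*u(2400) > 33*u(2500), Anna’s demands the reverse, so no assignment of utilities can satisfy both. A brute-force check over an (arbitrary) grid of utility values illustrates this:

```python
def consistent(u2400, u2500):
    """True if one utility assignment explains both majority choices."""
    peter = 34 * u2400 > 33 * u2500  # implied by preferring the sure 2,400
    anna = 33 * u2500 > 34 * u2400   # implied by preferring 2,500 with 33%
    return peter and anna

# Scan a grid of candidate utility values (the grid itself is arbitrary):
hits = [(a, b) for a in range(1, 101) for b in range(1, 101) if consistent(a, b)]
print(hits)  # [] -- no utility assignment satisfies both inequalities
```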

Ok, whatever skirt or T-shirt Anna wears, one can wonder whether there is some structural deviation from rational decision-making in the human brain.

In brief: people seem to like high gains with low probability (think of lotteries), and people seem to prefer security with a lower expected value over 50/50 chances with a higher expected value (think of the insurance market).

Kahneman & Tversky propose a value function for scaling gains and losses. (They propose a function which is concave for gains, convex for losses, and steeper for losses than for gains.) Furthermore, they analyse weighting functions for how the human mind handles probabilities. They also notice that decision-making depends on which reference point you choose. The interesting part is that this reference point can be influenced by the alternatives that you have.
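For illustration, here is a value function of the shape they describe – concave for gains, convex and steeper for losses. The numerical parameters are not from the 1979 paper but the median estimates from Tversky & Kahneman’s later (1992) cumulative prospect theory paper, used here purely as an assumption:

```python
ALPHA = 0.88    # curvature: < 1 gives concave gains, convex losses
LAMBDA = 2.25   # loss aversion: losses weigh more than equal-sized gains

def value(x):
    """Subjective value of a gain or loss x, relative to the reference point."""
    if x >= 0:
        return x ** ALPHA
    return -LAMBDA * ((-x) ** ALPHA)

# A 100 Euro loss hurts roughly twice as much as a 100 Euro gain pleases:
print(value(100))   # about 57.5
print(value(-100))  # about -129.5
```

Note how the reference point (x = 0) does all the work here – shift it, and the same outcome flips from a “gain” to a “loss”, which is exactly the framing effect the authors describe.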

My personal conclusion: Honestly, I do not think that value functions and weighting functions really can take into account the complexity of human decision-making.

Assume that we have a decision problem, as above, with two options and two assignments of chances, and that we want to estimate how a certain person (or company) decides. Then we need to assign eight parameters in order to assess how this agent decides. From my perspective, this is a fiddler’s game – fitting curves to data. Not really satisfying from a theoretical perspective. Furthermore, from my perspective, the reference point will have a much larger effect on decision-making than the shape of the scaling functions!

This is why, honestly, I think that it is much more important to assess the reference point – to take the individual setting of the decision problem into account. Nevertheless, this paper is absolutely worth reading. It is inspiring! It gives an illustrative backbone to the gap between mathematical simplicity and complex reality. I love it!

Enjoy reading!

Dear L. J-square Wittgenstein,

well, you wonder how we can know that we know?

There is this chaotic collection of notes of yours, called “On Certainty”. (You died shortly after. I hope that this does not apply to its readers, too?)

You repeatedly ask the same question: if I see my hand, how can I know that this is my hand? This is your key point, right? The question won’t let you go. You are tortured by it. You rephrase and sharpen it several times. Such as:

“Why shouldn’t I think of the earth as flat, but extending without end in every direction (including depth)? But in that case one might still say ‘I know that this mountain existed long before my birth.’ – But suppose I met a man who didn’t believe that?”

Ok, I see your point. Furthermore, you mention the challenge of defining truth and knowledge when someone honestly thinks that he knows something, but it is later shown that this was an error.

However, your notes are so chaotic and unsystematic that it is really hard to read them – or let’s say: to understand what you wanted to say. Or maybe you just changed your mind several times during the writing process? Honestly, this is what I think!

Can you not just get out of your grave for a discussion?

By the way, this is my favourite question: How often do we need to check a mathematical calculation in order to be sure that it is correct? Quite my daily problem as a maths student. (Maybe 3x? And it still can be wrong…)

I love it!

[Ludwig Josef Johann Wittgenstein: On Certainty (Über Gewissheit)
ed. G.E.M. Anscombe and G.H. von Wright
Translated by Denis Paul and G.E.M. Anscombe
Basil Blackwell, Oxford 1969–1975]

Dear Charles Sanders Peirce,

thank you a lot for your great and inspiring piece of work: “The Fixation of Belief”. Let’s briefly go through it.

On reasoning you write: “The object of reasoning is to find out, from the consideration of what we already know, something else which we do not know.” I completely agree with this. And you say: “We are, doubtless, in the main logical animals, but we are not perfectly so. Most of us, for example, are naturally more sanguine and hopeful than logic would justify.” Ok, so far.

Next, you distinguish between different methods for fixation of belief. Namely:

  • Method of tenacity
  • Method of authority
  • Method of “a priori”
  • Scientific method

Wait: “Following the method of authority is the path of peace”?
Are you sure? I would love to discuss this one with you!

And then, come on, you compare the logical method to a bride one has chosen? Honor the others deeply? I think your last paragraph is very humorous:

“The genius of a man’s logical method should be loved and reverenced as his bride, whom he has chosen from all the world. He need not contemn the others; on the contrary, he may honor them deeply, and in doing so he only honors her the more. But she is the one that he has chosen, and he knows that he was right in making that choice. And having made it, he will work and fight for her, and will not complain that there are blows to take, hoping that there may be as many and as hard to give, and will strive to be the worthy knight and champion of her from the blaze of whose splendors he draws his inspiration and his courage.”

Wow! I am blown away. Unfortunately, you are already dead – “The Fixation of Belief” was published in November 1877. However, thank you very much for this article.

All others: Enjoy reading!

[The Fixation of Belief. Popular Science Monthly 12 (November 1877), pp. 1-15; Charles Sanders Peirce]