14 July 2008

Some quick statistics on the calibration quiz

On Saturday I gave a quiz from Ian Ayres' book Super Crunchers, which asked you to provide 90% confidence intervals for ten numerical questions with well-defined answers. Roughly speaking, you should choose your intervals so that you expect to get nine of the ten questions right, and so that you believe each answer is equally likely to be the one you got wrong.
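
To see what good calibration would look like: if every interval really had 90% coverage and the misses were independent, the number of correct answers would follow a binomial distribution with n = 10 and p = 0.9. Here's a minimal sketch in Python (just the textbook binomial formula, nothing fit to the quiz data):

    from math import comb

    def binom_pmf(k, n=10, p=0.9):
        # Probability of exactly k correct answers, assuming each 90%
        # interval truly covers the answer with probability 0.9,
        # independently across questions.
        return comb(n, k) * p**k * (1 - p)**(n - k)

    for k in range(11):
        print(k, round(binom_pmf(k), 4))

Under that model the expected score is n * p = 9, and a score of 9 or 10 should happen about 74% of the time.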

Nineteen people have taken the quiz. Out of the 190 individual answers received, 97 were correct -- slightly over half. The distribution of scores on the quiz is as follows:



Score               2   3   4   5   6   7   8   9   10
Number of people    1   4   3   4   2   3   1   0   1
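
For anyone who wants to check the arithmetic, here's the table above as data (the numbers are copied from the table, so any transcription slip is mine):

    # Score -> number of people, from the table above.
    counts = {2: 1, 3: 4, 4: 3, 5: 4, 6: 2, 7: 3, 8: 1, 9: 0, 10: 1}

    people = sum(counts.values())                    # 19 respondents
    correct = sum(s * n for s, n in counts.items())  # 97 correct answers
    print(correct / (10 * people))                   # hit rate: about 0.51

A well-calibrated group would be sitting near 0.90 rather than 0.51.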

In short, the respondents as a group confirm Ayres' claim that "almost everyone who answers these questions has the opposite problem of overconfidence -- they can't help themselves from reporting ranges that are too small." Ayres cites a book by J. Edward Russo and Paul J. H. Schoemaker, Decision Traps: Ten Barriers to Brilliant Decision-Making and How to Overcome Them, which I haven't read; supposedly "most" people get between three and six questions right. I'm actually somewhat surprised that you as a group don't seem all that different from the general population.
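
To put the Russo and Schoemaker figure in perspective: under the binomial model sketched above, a genuinely calibrated quiz-taker should land in the three-to-six range only rarely. A quick check, under the same independence assumption:

    from math import comb

    # P(3 <= score <= 6) under Binomial(10, 0.9).
    p = sum(comb(10, k) * 0.9**k * 0.1**(10 - k) for k in range(3, 7))
    print(p)  # about 0.013 -- roughly one calibrated person in eighty

So if "most" people really do score between three and six, that's overwhelming evidence of overconfidence rather than bad luck.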

I have some other comments -- which questions seem particularly difficult or easy, what we might say about confidence intervals other than 90 percent -- but I'm hoping more people might answer, so I'll wait for that. (Although if the remaining answers are suspiciously better-calibrated than the answers so far, that might turn out to be not such a good idea.)

4 comments:

Anonymous said...

Writing as the proud owner of the leftmost datapoint... that overconfidence thing was wild. For a bunch of answers, had I opened up my range just a little bit more...

The centers of my ranges were OK, but 90% confidence? I missed that completely.

By the way, books in the Old Testament? How many?

Jonathan

Anonymous said...

Now I'm actually going to have to go back and check my answers. I remember thinking on a couple of the answers that I was picking a pretty tight range, but that I also had an obligation to report my actual 90% confidence interval.

For a couple of them, I noticed that I thought I was either very close, or a million [virtual] miles away, and that bimodality of feeling was hard to adjust to.

For the most part, we readers probably don't know a whole lot about many of the things in the questions. We are as naive as the general population.

Michael Lugo said...

David,

I wasn't saying that my readers would be more knowledgeable about those particular questions -- rather that they'd be more knowledgeable about probability, and in particular what 90% confidence means.

Anonymous said...

Right-oh on the more knowledgeable about probability. I think, though, that we are accustomed through practice to envisioning our distributions as being very central.

When we understand very little about something, our variance is higher [variance being a proxy for accuracy; our accuracy is lower]. When our SDs get big relative to the value of our mean, our distros are going to be flatter. My ego takes a hit when I can't narrow my estimates, even knowing what I know about probability.

Maybe it's that we want to externalize the variance as a feature of the possibilities, and not a characterization of our knowledge. So, I see the wide spread, and think to myself, "That can't be right," because I don't encounter many processes like it. By the by, it's an interesting anthropological problem.