23 October 2007

The spinning woman, part two

Remember the spinning woman that I wrote about last week?

Supposedly, people who saw her spinning clockwise are more "right-brained" (in the pop-science sense of creative, able to see the big picture, and so on) and people who saw her spinning counterclockwise are more "left-brained".

According to the good folks at Freakonomics, if you use college major as a proxy for pop-culture brain-sidedness, it actually works the other way -- "left-brained" people are more likely to see her spinning clockwise. (Of course, this isn't scientific -- it's just Freakonomics readers, and the sample sizes are small.)

Steven Levitt writes:
I often joke about how the information provided by someone who is incredibly terrible at predicting the future (i.e., they always get things wrong) is just as valuable as what you get from someone who is good at predicting the future. I used this strategy with some success by betting the opposite of my father whenever he’d bet a large sum of money on a football team that was sure to cover the spread.

That's definitely true. I've heard that another such "anti-predictor" is Punxsutawney Phil, the "official" groundhog of Groundhog Day. He gets yanked out of hibernation on February 2, and if he sees his shadow we're supposed to have "six more weeks of winter". (This is part of the utterly bizarre American tradition of Groundhog Day.) I read once that he's wrong something like 80% of the time, although I don't have a source for this. The Wikipedia article says that the actual error rate is something like 60% to 70%. Still, it seems like the groundhog is more often wrong than right.


michael said...

I think people taller than 5'7" see her spin left, people under 5'7" see her spin right, and people exactly 5'7" see her as still.

Mark said...

It's interesting that you and Levitt both mention "anti-predictors". A similar phenomenon in machine learning, anti-learning, has been investigated in the last couple of years.

For particular datasets, learning algorithms that assume proximity implies similarity will induce classifiers that predict the opposite of what is the case. You can actually exploit this and create a really good classifier by predicting the opposite of what the bad classifier tells you.

It's quite counter-intuitive.
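The flipping trick Mark describes is easy to demonstrate. Here's a minimal sketch (not from the anti-learning literature, just an illustration of the principle): if a binary classifier is right with probability p, then negating every one of its predictions gives a classifier that is right with probability 1 - p, so a reliably bad predictor is as useful as a reliably good one. The labels and the "bad classifier" below are made up for the example.

```python
import random

random.seed(0)

# Synthetic binary labels, and a deliberately terrible predictor
# that agrees with the true label only about 20% of the time --
# well below the 50% you'd get by coin-flipping.
labels = [random.randint(0, 1) for _ in range(1000)]
bad_preds = [y if random.random() < 0.2 else 1 - y for y in labels]

def accuracy(preds, labels):
    """Fraction of predictions that match the true labels."""
    return sum(p == y for p, y in zip(preds, labels)) / len(labels)

# Negate every prediction: each flipped prediction is correct
# exactly when the bad one was wrong, so the two accuracies
# always sum to 1.
flipped = [1 - p for p in bad_preds]

print("bad classifier:    ", accuracy(bad_preds, labels))
print("flipped classifier:", accuracy(flipped, labels))
```

Running this, the bad classifier lands near 0.2 accuracy and the flipped one near 0.8 — the same logic behind betting against Levitt's father.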

Rettaw said...

Also, it's not hard to make yourself see it spinning the other way; you can accomplish this quite easily by covering up parts of the picture. Unfortunately I've forgotten which part, but once you know, you need only cover it for a short time to change the direction of rotation at will.