Here are three problems that have occurred to me, somewhat randomly, over the last few weeks.
1. I mentioned here that it would probably be relatively easy to rank, say, all the open problems in analysis by their perceived difficulty to expert analysts, and all the open problems in algebra by their perceived difficulty to expert algebraists. But how could we merge these two rankings together, given that comparing an algebra problem and an analysis problem is probably much more difficult and error-prone than comparing two algebra problems or two analysis problems? For example, comparing an arbitrary algebra problem and an arbitrary analysis problem requires finding someone who is fluent enough in both areas to really understand both problems! (If you object to the fact that "algebra" and "analysis" are too broad for this to be true, just subdivide the areas; the idea is still the same. We can rank problems within suitably small subdisciplines of mathematics; how do we merge those rankings into a ranking of all mathematical problems?)
2. Most people are attracted either to men or to women, not to both. (I'm assuming this for the sake of this problem.) It seems reasonably easy to rank-order the set of all men in order of attractiveness, and the set of all women in the same way; but again, how do you merge these two? How do you compare the attractiveness of a man to the attractiveness of a woman? Oddly enough, I think this one's easy to solve because of "assortative mating", which basically means "people will end up with people who are about as attractive as themselves". (Incidentally, although this seems obvious to anyone who's ever been around people, how can it be derived from first principles?) But the information we get from assortative mating is likely to be imperfect, again; there are plenty of relationships where one member of the couple is more attractive than the other.
3. Major League Baseball has two leagues. We have a mechanism for determining the "best" team in each league. The common wisdom now is that the American League is the better of the two leagues; look at this year's World Series, which the Red Sox are currently leading three games to none. (But don't look at last year's.) But that doesn't necessarily mean that, say, the eighth-best team in the AL is better than the eighth-best team in the NL. There's interleague play, but most of the time that's only a single three-game series between any pair of teams, which really doesn't give much information. (And most pairs of teams in different leagues won't play each other at all in any given season.) Let's assume, for the sake of argument, that we have to work from game results alone: which teams have won and lost against each other, and maybe the scores of the games. We can't use the fact that teams are made up of players and compare the AL and NL statistics of a player who has played in both leagues.
The common thread is that in all three cases we have a pair of lists which we believe are each correctly rank-ordered, but comparisons between the lists are difficult or scarce (we're ranking from data that has already been collected and can't go out and get more), and sometimes our comparisons might simply be wrong; in the baseball scenario, for example, the team which wins the season series between two teams is not necessarily the better one. All the analysis of sorting algorithms I've seen (which is, I'll admit, not that much) seems to assume that all comparisons are equally easy and error-free. That's a nice place to start, but it hardly seems a place to stop.
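For what it's worth, here's a toy sketch in Python of the kind of merge I have in mind. The error model (each cross-list comparison independently reports the truth with some probability p) and the repeat-and-take-a-majority-vote trick are simplifying assumptions of mine, and the function names are made up for the illustration.

import random

def noisy_compare(a, b, p=0.75):
    # Report whether a < b, but lie with probability 1 - p
    # (a crude model of an error-prone or hard comparison).
    truth = a < b
    return truth if random.random() < p else not truth

def probably_less(a, b, trials=15, p=0.75):
    # Drive the error rate down by repeating the noisy comparison
    # an odd number of times and taking the majority verdict.
    votes = sum(noisy_compare(a, b, p) for _ in range(trials))
    return votes > trials // 2

def noisy_merge(xs, ys, trials=15, p=0.75):
    # Ordinary two-list merge, except every cross-list comparison
    # goes through the majority vote; within-list order is trusted.
    out, i, j = [], 0, 0
    while i < len(xs) and j < len(ys):
        if probably_less(xs[i], ys[j], trials, p):
            out.append(xs[i])
            i += 1
        else:
            out.append(ys[j])
            j += 1
    return out + xs[i:] + ys[j:]

random.seed(0)
algebra = sorted(random.sample(range(100), 8))    # one correctly ranked list
analysis = sorted(random.sample(range(100), 8))   # the other
print(noisy_merge(algebra, analysis))

The cost of doing it this way is that each cross-list comparison becomes fifteen comparisons, which is exactly the sort of trade-off (expensive comparisons versus confidence) that the standard analyses ignore.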
28 October 2007
7 comments:
"How do you compare the attractiveness of a man to the attractiveness of a woman?"
This is, I think, a real problem in experimental psychology. I'm thinking specifically of Clark and Hatfield's classic study of "Gender Differences in Receptivity to Sexual Offers," which found, among other things, that male college students were more likely than females to agree to sleep with a "reasonably attractive" stranger. How can we be sure that the male propositioners recruited by the researchers weren't less attractive than the female ones? What would it even mean for the male propositioners to be less attractive than the female ones?
I should add that Clark and Hatfield's conclusions aren't really affected by this problem, because they had a baseline to compare to: when the same propositioners asked students to merely go out with them, about 50% of both male and female students said yes. We can therefore be sure that there is some sort of difference between men's and women's receptivity to overtly sexual offers (relative to their receptivity to merely romantic offers).
I should also add, however, that according to M. Voracek, A. Hofhansl, and M.L. Fisher in "Clark and Hatfield's evidence of women's low receptivity to male strangers' sexual offers revisited," Clark and Hatfield's experiment has never been replicated.
"It seems reasonably easy to rank-order the set of all men in order of attractiveness, and the set of all women in the same way..."
This reminds me of another interesting problem! If you asked every woman in the world to rank all the men in order of attractiveness, you would most likely end up with a different ranking for every woman. I suspect, however, that the rankings wouldn't be completely different. Even though some women would rank Harrison Ford above Johnny Depp, while others would rank Johnny Depp above Harrison Ford, the vast majority of women would rank both Harrison Ford and Johnny Depp above Frankenstein's monster (as played by Boris Karloff).
I think this kind of "fuzzy ranking" is important in physics, where it allows us to say things like, "the Moon is on the same size scale as the Earth, but both are on a far greater size scale than a bacterium."
Since this ranking is a social choice, wouldn't Arrow's theorem guarantee that there is no perfect ranking algorithm (assuming you regard Pareto efficiency, non-dictatorship and independence of irrelevant alternatives as desirable)?
Of course you could still discuss the best way to achieve an imperfect ranking in each case based on relevant sociological factors.
By the way, you have a very interesting blog. I have been lurking for a while.
You do need at least a little cross-list information.
Given that, in the context you're describing, this information will be small, you need a model - some detailed assumptions about the noise/probabilistic information you do have - in order to make progress with some form of inference.
In some cases the assumptions will be checkable within-group (e.g. in the baseball case, if you have a model for the way scores relate to ability, the model assumptions will be testable within-league).
The more cross-group information is available, the weaker assumptions you should be able to get away with and still make some reasonable kind of inferences.
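To make that concrete with one possible choice of model (my choice, not necessarily the commenter's): in a Bradley-Terry model, team i beats team j with probability a_i / (a_i + a_j), the abilities a_i can be fit from game results alone, and a handful of interleague games is what puts the two leagues on a common scale. A rough sketch in Python, with hypothetical team names and results:

from collections import defaultdict

def fit_bradley_terry(games, iters=500):
    # games is a list of (winner, loser) pairs.
    # Fit abilities a[t] so that P(i beats j) = a[i] / (a[i] + a[j]),
    # using the standard minorization-maximization update.
    teams = {t for game in games for t in game}
    wins = defaultdict(int)
    meetings = defaultdict(int)   # number of games between each pair
    for w, l in games:
        wins[w] += 1
        meetings[frozenset((w, l))] += 1
    a = {t: 1.0 for t in teams}
    for _ in range(iters):
        new = {}
        for i in teams:
            denom = sum(n / (a[i] + a[j])
                        for pair, n in meetings.items() if i in pair
                        for j in pair if j != i)
            new[i] = wins[i] / denom if denom > 0 else a[i]
        mean = sum(new.values()) / len(new)
        a = {t: v / mean for t, v in new.items()}   # fix the overall scale
    return a

# Made-up results; the last two games are the interleague ones that
# link the two leagues onto a common scale.
games = [("RedSox", "Yankees"), ("Yankees", "RedSox"), ("RedSox", "Yankees"),
         ("Rockies", "Padres"), ("Padres", "Rockies"), ("Rockies", "Padres"),
         ("RedSox", "Rockies"), ("Padres", "Yankees")]
abilities = fit_bradley_terry(games)
print(sorted(abilities.items(), key=lambda kv: -kv[1]))

The within-league part of such a model can be checked against plenty of data; the cross-league part hangs entirely on those few interleague games, which is the commenter's point.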
There's a fair amount of work in theoryCS on this topic. For example, take a look at this paper for sorting and selection problems with variable costs.
Comparing rankings is also a well-studied problem. There are different notions of the distance between rankings, and this well-regarded paper contains a review of what's known, as well as new algorithms.
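For concreteness, one standard notion of distance between rankings is the Kendall tau distance: the number of pairs of items that the two rankings order differently. A minimal sketch in Python (quadratic time, which is fine for short lists; the example names are borrowed from the earlier comment):

def kendall_tau_distance(rank_a, rank_b):
    # rank_a and rank_b are lists of the same items, best first.
    # Count the pairs of items that the two rankings order differently.
    pos_b = {item: i for i, item in enumerate(rank_b)}
    disagreements = 0
    for i in range(len(rank_a)):
        for j in range(i + 1, len(rank_a)):
            # rank_a puts rank_a[i] ahead of rank_a[j]; does rank_b agree?
            if pos_b[rank_a[i]] > pos_b[rank_a[j]]:
                disagreements += 1
    return disagreements

print(kendall_tau_distance(["Depp", "Ford", "Karloff's monster"],
                           ["Ford", "Depp", "Karloff's monster"]))   # prints 1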
"Incidentally, although this seems obvious to anyone who's ever been around people, how can it be derived from first principles?"
This isn't formal, but pick any reasonable terminating algorithm for pairing with the most attractive person willing to pair with you; assuming the distribution of attractiveness is roughly equal, the end result will be that couples are paired up with roughly equal attractiveness.
It's a fun party game to put numbers on peoples' heads (that they can't themselves see) and tell them to pair with the highest number willing to pair with them. Everyone will rush the 10s, who will pair with other 10s; then everyone will rush the 9s, etc. It works out that the pairs are roughly equal numbers.
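Out of curiosity, here's a rough simulation of that game in Python, under some assumptions of my own: everyone can see every other number, everyone offers to pair with the highest unpaired number each round, and each person accepts the best offer they receive. The partners' numbers do come out close.

import random

def pair_up(scores):
    # Each round, every unpaired person offers to pair with the
    # highest-scoring other unpaired person; offers are resolved from
    # the most sought-after person down, each accepting the best offer
    # received.  Returns the list of (score, score) pairs that form.
    unpaired = set(range(len(scores)))
    pairs = []
    while len(unpaired) > 1:
        offers = {}
        for p in unpaired:
            target = max((q for q in unpaired if q != p),
                         key=lambda q: scores[q])
            offers.setdefault(target, []).append(p)
        taken = set()
        for target in sorted(offers, key=lambda t: scores[t], reverse=True):
            if target in taken:
                continue
            candidates = [p for p in offers[target] if p not in taken]
            if not candidates:
                continue
            best = max(candidates, key=lambda p: scores[p])
            pairs.append((scores[target], scores[best]))
            taken.update({target, best})
        unpaired -= taken
    return pairs

random.seed(1)
numbers = [random.randint(1, 10) for _ in range(20)]
for a, b in pair_up(numbers):
    print(a, b)   # the two numbers in each pair tend to be close

This is only the greedy dynamics, of course; when the common ranking is strict, the same assortative pairing (top two together, next two together, and so on) is also the unique stable matching, which is one way to see why it falls out.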
Interesting game. What numbers typically end up standing off to the side, wondering why they even bother coming to these things anymore?