## 31 July 2007

### how to break up time and space

Squaring the hexagon, from Strange Maps. This map illustrates a proposal for dividing France into perfectly rectangular (well, as rectangular as something on a sphere can be -- but France is small enough that one needn't worry too much about this) regions, of which there were 80 or so. (As you may note, France is not a rectangle.)

This proposal was made around the time of the French Revolution, and it has the ring of proposals from that period; these are the same people that invented the French Republican calendar and the metric system. The metric system has turned out to be a good idea (although some people will argue that many of its units don't correspond to anything on a human scale). The Republican calendar, for those who aren't familiar with it, broke up the year into twelve thirty-day months, of three ten-day "decades" each (these are analogous to weeks); there seems to be no a priori reason why this doesn't work, except that we're used to seven-day weeks. It's rather inconvenient that seven is prime, in fact; it would be useful to have a unit of the "half-week".

If you look at a Major League Baseball schedule some time, you'll see that that's actually the fundamental unit of baseball scheduling; teams generally play two series a week, one of them running from Monday to Wednesday, Tuesday to Thursday, or Monday to Thursday, and the other from Thursday or Friday to Sunday. You might notice that Thursdays are kind of ambiguous in this scheme; sometimes Thursday is the end of a series, sometimes it's the beginning, and sometimes it's neither. Teams often have Mondays or Thursdays off. (The weirdness of Thursday in baseball scheduling is a direct consequence of the fact that seven is odd.) In fact, I'd argue that the fact that the week has seven days is a lot more inconvenient than a thirteen-month year would be; the fact that thirteen is prime, and that therefore no simple fraction of a hypothetical 13-month year is a whole number of months, is pretty much irrelevant. When was the last time you ever did something that took exactly three months (one quarter) or six months (half a year)?

Returning to geography, drawing straight lines as boundaries often seems to be a bad idea; lines that are drawn without regard for population centers inevitably, if you draw enough of them, pass through some population center and divide it up. This causes the people in charge on either side of the line to ignore the other side in government, making the region less able to function as a whole. Surprisingly, it's hard to think of population centers in the U.S. that lie along a state border that's just a line on the map; most of the big population centers near state borders are along rivers, such as Philadelphia, New York, or Washington (note that even if the District of Columbia didn't exist, Washington would be on the Maryland-Virginia border). This isn't surprising; rivers are natural dividing lines. But I suspect that the situation would be different in more densely populated France. And drawing lines on a map without regard for settlement patterns is a cause of the chaos in the Middle East.

The difference between the U. S. and Europe is that in Europe the lines have evolved along with the historical population patterns (which haven't changed that much even in centuries), whereas in the U. S. a lot of the lines between states were drawn before the states in question were settled. One can't expect the people making the maps to forecast in advance where people are going to choose to live, especially in modern times when the population centers grow up along roads (which people can build) and not rivers (which are where they are).

A lot more thoughts on how territory is often divided up -- countries into states, states into counties, etc. -- can be found in Ed Stephan's book The Division of Territory in Society, text available online. His hypothesis is that there's a relationship between the size of a state, county, etc. and its population density; smaller states within a country, counties within a state, and so on tend to be denser. This can be derived from the assumption that "social structures evolve in such a way as to minimize the time expended in their operation", although it only gives a relation between density and county area. It doesn't make forecasts about shape, and it seems to assume that densities are locally uniform when this is very far from the case.

### A note from the management

I recently switched this blog from a full-text feed to a feed which shows just the beginning of each post. The reason I did this was in order to encourage more people to come to the actual site, which I am hoping will encourage more comments and discussion. I'm open to going back to the old format, though. If you have any thoughts on this matter please let me know.

edited, 6:02 pm: I've gone back to the full-text feed.

## 30 July 2007

### seven trees in one! and how laziness can be bad.

Sum divergent series, II, from The Everything Seminar. We learn

(1 + 1 + 2 + 5 + 14 + 42 + ...) = (1-√(-3))/2

in some sense (this follows from the usual generating function for the Catalan numbers, by letting z approach 1). This is a bit surprising, because we add a bunch of positive numbers and get something which isn't even real, but weird things happen with infinity. It turns out that this is a sixth root of unity, and therefore in some sense

(1 + 1 + 2 + 5 + 14 + 42 + ...)^7 = (1 + 1 + 2 + 5 + 14 + 42 + ...)

which suggests that there ought to be a simple bijection between 7-tuples of trees (counted on the left) and trees (counted on the right). What that means is that we should be able to take seven trees and in some way make one big tree out of them, and that furthermore this scheme is reversible -- that is, given the one big tree, we can determine which seven small trees it came from. And, in fact, there is! You can read about it here. The actual theorem is that "There is a very explicit bijection between the set of all seven-tuples of trees and the set of all trees", where "very explicit" has a certain technical sense. The bijection is fairly simple (see page 3) and takes about a quarter-page to specify; proving that it is, in fact, a bijection takes a page; the rest of the paper is taken up with describing why anyone would think such a crazy thing would exist in the first place.
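As a quick numerical sanity check (this is just floating-point complex arithmetic, not the combinatorial content), one can verify that (1-√(-3))/2 really is a sixth root of unity, so that its seventh power is itself:

```python
import cmath

# The "sum" of the Catalan numbers: the value of the generating
# function C(z) = (1 - sqrt(1 - 4z)) / (2z) as z approaches 1.
x = (1 - cmath.sqrt(-3)) / 2

print(abs(x))          # modulus 1: x lies on the unit circle
print(abs(x**6 - 1))   # ~0: x is a sixth root of unity
print(abs(x**7 - x))   # ~0: hence x^7 = x, mirroring "seven trees in one"
```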

A divergent series that comes up a lot in combinatorics is

P(z) = 1 + z + 2z^2 + 6z^3 + 24z^4 + 120z^5 + ...

where the coefficients are the factorials; I don't know of any way to assign a meaningful number to this sum, although one might exist. It is the generating function of the permutations; that is, the coefficient of zn is the number of permutations of [1, 2, ..., n], or the number of ways in which we can write the numbers from 1 to n in some order. It turns out that although this series is divergent, we can calculate with this in a useful way. For example, if we want to know the number of so-called indecomposable permutations, we can write

I(z) = 1 - 1/P(z)

where the division is in the sense of a formal power series, and then I(z) turns out to be the generating function of the indecomposable permutations. (The details can be found on p. 82 of Analytic Combinatorics by Flajolet and Sedgewick; the book is still in preparation, so the page number might change.) It would be possible to do this without using divergent generating functions -- but a lot harder. This is true of a lot of results with generating functions -- they transform a lot of basic combinatorics into routine computations. The advantage of this is clear -- by making problems that used to be hard into routine exercises, it frees up people to do more work that builds on those exercises. It's often been said that mathematicians are naturally lazy, which is a statement that might seem quite strange to the layperson. But what this means is that we try to come up with ways to automate a computation so that we don't actually have to do it ourselves. The unfortunate byproduct of this is that doing the actual computation is often useful for getting a feeling for the problem. If I were actually doing some work that involved indecomposable permutations, I would almost certainly write them out for, say, n=3 and n=4; that would give me more of an intuitive sense for the problem, even if it would be a very inefficient way to work if I just wanted to know how many of them there are.
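To sketch how such a "routine computation" goes: the division 1/P(z) can be carried out term by term on truncated coefficient lists, since divergence is no obstacle to formal power series arithmetic. The coefficients of I(z) = 1 - 1/P(z) that come out count the indecomposable permutations:

```python
from math import factorial

# Coefficients of P(z) = sum of n! z^n, truncated after z^7.
N = 8
P = [factorial(n) for n in range(N)]   # 1, 1, 2, 6, 24, 120, ...

# Q = 1/P as a formal power series, solved term by term from P*Q = 1:
# the z^n coefficient of P*Q must vanish for n >= 1.
Q = [0] * N
Q[0] = 1
for n in range(1, N):
    Q[n] = -sum(P[k] * Q[n - k] for k in range(1, n + 1))

# I = 1 - Q: its coefficients count indecomposable permutations.
I = [(1 if n == 0 else 0) - Q[n] for n in range(N)]
print(I)  # [0, 1, 1, 3, 13, 71, 461, 3447]
```

Writing out the three indecomposable permutations of {1,2,3} by hand (231, 312, 321) is, as the paragraph above suggests, still the best way to see what "indecomposable" means.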

### Danica McKellar's "Math Doesn't Suck", and Erdos-Bacon numbers

The book itself appears to be a mixture of mathematics tips and motivational prose; I will refrain from commenting on the book, because I haven't read it. There are fully-worked-out solutions to the problems in the book, which might help you figure out what mathematics it covers; it appears to include the highlights of a pretty standard middle-school mathematics curriculum in a U. S. school. McKellar has focused some of her philanthropic efforts on the importance of quality education, including the Figure this! campaign; her web page also includes a section where she offers math help to people who write in with questions. (Will we ever see a mathematician who offers acting tips on their web page?)

McKellar is one of the eighteen or so people with a finite Erdos-Bacon number. A person's Erdos number is defined to be zero if they are Paul Erdos; otherwise it is the minimum of the Erdos numbers of the people with whom they have cowritten scholarly papers, plus one. More informally, it is the length of the shortest chain of collaboration that connected a person to Erdos. If a person can not be connected to Erdos by such a chain, their Erdos number is said to be infinite. Bacon numbers are defined similarly, but Erdos is replaced by Kevin Bacon and "have cowritten a scholarly paper" is replaced by "have appeared in the same movie". A person's Erdos-Bacon number is the sum of their Erdos number and their Bacon number; thus to have a finite such number a person needs to have appeared in a movie and written an academic paper. The canonical source on Erdos numbers is Jerry Grossman's Erdos number project; I'm not aware of a single canonical source for Bacon numbers.

Another person with an Erdos-Bacon number who's been in the news a lot lately is, believe it or not, Hank Aaron, he of the 755 home runs -- although this one is a bit facetious; Hank Aaron and Paul Erdos both autographed the same baseball, giving him an Erdos number of 1; Hank Aaron also appeared in the baseball movie Summer Catch as himself, thus connecting him to the acting world. It turns out his Bacon number is 2, giving him an Erdos-Bacon number of 3. (Some people claim that appearances as oneself don't count towards Bacon numbers.)

If you don't count people who appeared as extras or as themselves -- but only people who are credited as appearing as someone other than themselves -- the lowest Erdos-Bacon number is either 4 or 5, and it belongs to Dave Bayer, a mathematician who played a small role in A Beautiful Mind.

### Language Log dissects science journalism

From Language Log: Two simple numbers and Thou shalt not report odds ratios by Mark Liberman.

The first of these, from a week ago, suggests the following rule:

Today's prescription is a trivial rule of scientific rhetoric. When there's a claim that some genomic variant is associated with some phenotypic trait -- whether it's breast cancer or homosexuality or conservatism or stuttering -- we need to know four simple numbers. Specifically: (A) the number of "case subjects" in the study (people with the trait in question); (B) the number of "control subjects" in the study; (C) the proportion of the case subjects with the genomic variant in question; and (D) the proportion of the controls with the genomic variant in question.

If four numbers are too many, leave out (A) and (B), as long as they're not really small. But stick with (C) and (D) -- they're the medicine that really does the work here.

This is something that I've often worried about; in one of the examples that Liberman cites, (C) and (D) are 77% and 66%.

Also, there's a link to a New York Times article (July 19) with the headline Scientists Find Genetic Link for a Disorder (Next, Respect?). Does a disease need a genetic basis in order for people to take diagnoses of it seriously? All of someone's genes are determined before they're born; this seems to imply that things which happen during a person's life which affect their health don't matter. (Please don't get me started on people who think that homosexuality is okay if, and only if, it's genetic. And even if there is a "gay gene", it's not like everyone who has it is gay and everyone who doesn't have it isn't. If the inheritance patterns for homosexuality were that simple, we'd have figured it out already.)

But, you know, numbers scare people. If you put numbers in a newspaper article they'll throw up their hands and turn on some reality television.

At least in the first case I had realized that there was missing information. The second of these seems more insidious to me, because I'd never thought about it before, and I'm smarter than most people about these things. (You probably are, too, if you're reading this. If you don't believe me, get out of the house some time.) A recent study was reported in the popular press with phrases such as this (from the New York Times):
Doctors are only 60% as likely to order cardiac catheterization for women and blacks as for men and whites.

As it turns out, the referral rate for white men was 90.4%, and for women and blacks 84.7%. (While I'm on the subject: conflating "women" and "blacks" like this seems kind of silly. And by "men and whites" they apparently actually meant "white men".) The study reports an "odds ratio"; the odds of a white man being referred are about 9.4 to 1, and the odds of a black person or woman being referred are about 5.5 to 1. The ratio of these odds is where the 60% comes from.

The following sentence would actually be pretty close to true:
Doctors are only 60% as likely to not order cardiac catheterization for white men as for women and blacks.

The relevant percentages are 9.6% and 15.3%, which are close enough to zero that the results don't get distorted too badly by all this manipulation. When it's put that way, it's hard to understand, but if we take not ordering catheterization as some sort of negligence you can see how it would come about. Still, it's the sort of sentence with lots of quantifiers that only a mathematician could love.
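For the record, here is the arithmetic, starting from the two referral rates quoted above; note how far the odds ratio sits from the risk ratio that the headline wording suggests:

```python
# Referral rates from the study: 90.4% for white men, 84.7% for
# the "women and blacks" group.
p_white_men = 0.904
p_others = 0.847

# An odds ratio compares odds p/(1-p), not the probabilities themselves.
odds_wm = p_white_men / (1 - p_white_men)   # about 9.4 to 1
odds_ot = p_others / (1 - p_others)         # about 5.5 to 1
odds_ratio = odds_ot / odds_wm
print(round(odds_ratio, 2))                 # 0.59 -- the headline's "60%"

# The risk ratio, which is what "60% as likely" actually claims:
print(round(p_others / p_white_men, 2))     # 0.94
```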

It seems that odds ratios are often given in the medical literature due to the fact that they arise more naturally from certain statistical tests. But the media has a responsibility to translate the facts into language that the hypothetical "educated layperson" can understand. And the schools have a responsibility to create "educated laypeople" who can then read such an article and understand it, but this is not a post about education.

## 29 July 2007

From The Everything Seminar: "Sum Divergent Series, I". Matt suggests that

1 + 2 + 4 + 8 + 16 + 32 + ... = -1

and explains why. He promises more in this vein. As many of you might see, this formula comes from taking the well-known sum of a geometric series,

1 + x + x^2 + x^3 + ... = (1-x)^(-1)

and letting x=2. This is true in the 2-adic numbers. It is also true in certain sorts of computer architectures; read item 154 of HAKMEM to find out which ones. (Read the rest of HAKMEM while you're at it. Well, except for the hardware part at the end. Hardware is stupid.)
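The two's-complement reading of that identity (which, if I recall HAKMEM correctly, is the one item 154 has in mind) is easy to check: in an n-bit machine word, 1 + 2 + 4 + ... + 2^(n-1) is the all-ones word, which is exactly the representation of -1.

```python
# In n-bit two's-complement arithmetic, the word with every bit set
# represents -1; it is also the sum 1 + 2 + 4 + ... + 2^(n-1).
n = 32
total = sum(2**i for i in range(n))
print(total == 2**n - 1)                 # True: the all-ones word
print(total % 2**n == (-1) % 2**n)       # True: congruent to -1 mod 2^n
```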

Something that comes to mind when I think about series like this is that the Riemann zeta function is often defined in popular books (such as the various recent books on the Riemann hypothesis) as

ζ(s) = 1^(-s) + 2^(-s) + 3^(-s) + 4^(-s) + ...

and this series only converges when the real part of s is greater than 1. These books then go on to talk about how all the zeroes of the zeta function are either negative even integers (the "trivial zeroes") or are believed to have real part 1/2 (this last part being the Riemann hypothesis). Yet as far as I remember, they never point out that the definition above doesn't properly make sense for any of the values of s just mentioned; you have to analytically continue the function from the half-plane where it's already defined. There's a unique way to do this, so it's not a big problem, but it's a little bit annoying. Still, though, when you do that correctly ζ(-2) = 0. But if you look at that sum,

ζ(-2) "=" 1^2 + 2^2 + 3^2 + 4^2 + ...

So does this mean that the sum of all the squares is zero? Similarly, the sum of all the fourth powers should be ζ(-4) = 0. So the sum of all the squares which are not fourth powers must also be zero, and so on ad nauseam.

From The n-Category cafe: David Corfield writes about Rota's distinction between "Algebra 1" and "Algebra 2", roughly speaking algebraic geometry and number theory versus the more combinatorial parts of algebra. My first thought here is that it's a useful distinction to make. My second is that the names are unfortunate, because when I hear "Algebra 1" and "Algebra 2" I think of classes one typically takes in high school where one learns to manipulate linear, quadratic, etc. equations.

From Math Notations: percentages are confusing to students.

From Statistical Modeling, Causal Inference, and Social Science: a map of the results of some U.S. election (I'm not sure which one, but if I had to guess I'd say 2004 presidential) with red and blue dots representing Republican and Democratic voters; each dot represents a certain number of people. I like this idea; if you remember seeing the usual political maps there's a lot more red than blue, which seems wrong because the country is evenly split (as we saw in the 2000 and 2004 presidential elections). This seems to be a nice solution. I find myself fascinated by the fact that in the middle part of the country the population centers seem to fall on a square grid. I suspect this is due to the influence of the road system consisting mostly of north-south and east-west roads, with population centers appearing at the intersections of these roads; my instinct is that a triangular lattice would be more "efficient" somehow, although I have some difficulty articulating why. Someone speculates this is the work of Robert Vanderbei, whose web site is full of pretty pictures of things mathematical.

### How much is it worth to win at Jeopardy!?

Most weekday evenings at 7pm I watch the television show Jeopardy! One day I'll be on the show, if I actually get around to auditioning and then I get lucky enough to get picked. For now, I just scream "how could you not know that!" from the comfort of my living room. I bet all the contestants do that.

For those of you who aren't familiar with the game, it is a game where three players competing against each other answer trivia questions (although Jeopardy! has this silly trope where your answer has to be "in the form of a question", that's entirely irrelevant here). If they get the questions right, they gain money; if they get them wrong, they lose money. Those players who have a positive amount of money after the first sixty questions (which occur in two rounds of thirty; each round has six categories of five questions, worth varying amounts) get to participate in "Final Jeopardy!". Most players end up with a positive amount of money, since they know themselves well enough to only attempt to answer questions which they think they know the correct answer to. The players are told the category of the question; they then can wager any or all of their money that they'll get it right. Then the question is stated and they have thirty seconds to write down the answer.

Fans of the show have put together a Jeopardy! archive. There's a wagering calculator available online that takes into account common wagering strategies. But this does not take into account the following facts:

• only the player who wins the game gets to keep their money (probably an average of \$20,000 or so); the second- and third-place players get \$2,500 and \$1,000 respectively. (Incidentally, the show's host, Alex Trebek, seems to refer to in-game totals as "points", which I suspect is tied to this.)

• perhaps wagering strategy should depend on how likely one thinks one is to get the question right, and how likely one thinks one's opponents are;

• the player who wins gets to come back the next day; the others don't.

The first two here, I plan to address in a later post. It's the third one I want to talk about right now.

So first we must answer the question -- what's the probability of winning one's second game, given that you've won the first? Of winning one's third game, given that you've won the first two? It's obvious that a defending champion is probably better than the average player (because they've won at least one game), but how much better?

Fortunately, the Jeopardy! Archive can help us answer that. It would be enough to know what proportion of champions win one game, two games, three games, and so on. The archive compiles an impressive set of statistics for each season, but this is not one of them, so I have to do it myself. There's a natural cutoff point in the data -- for a long time Jeopardy! forced its champions to leave after five games, but they don't do that any more, since the beginning of the 2003-04 season. (This led to Ken Jennings' historic 74-game winning streak in 2004.) I originally planned to go back to the beginning of that season, but the statistics only go as far back as the beginning of the Ken Jennings run, near the end of that season.

Since June 2, 2004, there have been:
155 one-game winners
61 two-game winners
21 three-game winners
6 four-game winners
9 five-game winners
1 six-game winner
1 eight-game winner
1 nineteen-game winner
1 74-game winner

Now, if there was no effect like the one I just mentioned, you'd expect there to be three times as many one-game winners as two-game winners, two-game winners as three-game winners, and so on. It actually seems like if there is such an effect, it's not that strong. I attribute the big peak at five games to psychology; it's probably hard to win a sixth game because you're going into some sort of "unknown" territory. Notice that only four players -- Ken Jennings, David Madden, Tom Kavanaugh, and Kevin Marshall -- have won their sixth game.

So I'll assume that there is no "memory effect" -- that if you win today, you have a one-in-three chance of winning tomorrow. This seems believable -- the categories can be very different from day to day -- but I've never seen this analysis before. (It wouldn't surprise me if other Jeopardy! hopefuls have done it, though, because they seem to be That Sort Of People.)
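Under this memoryless model the number of games a champion wins is geometric, and the expected counts can be compared with the observed ones directly (I lump the four long runs -- six, eight, nineteen, and 74 games -- in with the total number of champions):

```python
# Observed numbers of k-game winners since June 2, 2004 (k = 1..5).
observed = [155, 61, 21, 6, 9]
champions = sum(observed) + 4   # plus the four longer runs: 256 champions

# "No memory": each champion wins any subsequent game with probability 1/3,
# so P(exactly k wins) = (1/3)^(k-1) * (2/3).
p = 1 / 3
expected = [champions * p**(k - 1) * (1 - p) for k in range(1, 6)]
print([round(e, 1) for e in expected])  # [170.7, 56.9, 19.0, 6.3, 2.1]

# Expected number of wins per champion under this model:
print(1 / (1 - p))  # 1.5 games
```

The observed counts track the expected ones reasonably well except at five games, where 9 winners show up against an expected 2.1 -- the "peak at five" discussed above.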

Thus, when wagering in Final Jeopardy!, one should wager as if the prize is not just the money you're going to win -- but one and a half times that much, since on average you can expect half a win more. The average champion is a one-and-a-half-game winner.

But there are two problems:
- how do you use that information? Does the amount of money one expects to win really affect proper wagering strategy?
- more importantly, you only get to play at Jeopardy! once. I think the rules say that; in any case, I've never heard of somebody who's played twice on the Alex Trebek version of the show. (There are a few cases of people who played on the Alex Trebek version and on some prior version.) So anything you say about "expected value" is meaningless! What's the point of an operation that talks about the average amount you expect to win if you don't get to play long enough for that average to take effect?

## 28 July 2007

### baseball commentators say silly things

Heard just now on FOX, which is airing the Braves-Diamondbacks game:

First, the TV commentator claims that the Diamondbacks are a very streaky team this year, because they've had three separate five-game losing streaks and have won their last seven.

In fact, the Diamondbacks have won 57 games out of 105, for a winning "percentage" of 0.543; thus their probability of losing five straight games is (1-57/105)^5 = 0.0200. They've played one hundred and five games so far, so there are 101 games in which they could have started a five-game losing streak; thus their expected number of five-game losing streaks is something like (0.0200)(101) = 2.02. (Yes, that's right; the figure 0.0200 is rounded.) So it's not all that surprising that they've had three such streaks.

Similarly, the expected number of seven-game winning streaks is 99(57/105)^7 = 1.38; the fact that the Diamondbacks have had one such streak is not at all surprising. (If I had to guess, I'd say that 1 is actually the most likely number of such streaks, but I'm not interested enough to do the analysis.)

Of course, not every game is independent. A more sophisticated analysis would take into account which teams were playing, and so on. An even more sophisticated analysis of streaks in baseball ought to take into account the pitching rotation; the existence of a pitching rotation reduces the likelihood of streaks. Let's say your team wins one-half of its games; then the probability of winning five straight games is 0.03125. But now say you have five starting pitchers, and your team wins in 70%, 60%, 50%, 40%, and 30% of their games respectively. If each pitcher pitches every fifth game, then the probability of winning five consecutive games is now (0.7)(0.6)(0.5)(0.4)(0.3) = 0.0252.
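The streak arithmetic above fits in a few lines, under the same naive independence assumption (figures rounded to two places):

```python
# Expected number of length-L streaks in g games, treating games as
# independent with a fixed per-game probability p: there are
# g - L + 1 windows, each a streak with probability p^L.
def expected_streaks(g, p, length):
    return (g - length + 1) * p**length

p_win = 57 / 105
print(round(expected_streaks(105, 1 - p_win, 5), 2))  # ~2.02 five-game losing streaks
print(round(expected_streaks(105, p_win, 7), 2))      # ~1.38 seven-game winning streaks

# With the five-man rotation above, a five-game winning streak needs a
# win from each starter, which lowers the probability:
print(round(0.7 * 0.6 * 0.5 * 0.4 * 0.3, 4))          # 0.0252, vs. 0.5**5 = 0.03125
```

(The window-counting here double-counts overlapping streaks, but as a back-of-the-envelope expectation that's exactly what the commentary calls for.)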

See The Hot Hand in Sports for more of this sort of analysis.

Second, the Braves are, according to the television guy, "exactly one percentage point" behind the Phillies. The Braves are 54-50 going into today's play; the Phillies are 53-49. Baseball winning "percentages" are conventionally reported to three decimal places; the Braves are at .519, the Phillies at .520. For those of you who don't know, it's conventional to say that one team is ahead of the other by "percentage points" in a situation such as this where both teams have the same difference between their number of wins and number of losses; in this case both teams have won four more games than they've lost. But what bothers me is the "exactly one" here; of course those figures are rounded. As it turns out, the Braves' winning percentage is 0.519230...; the Phillies' is 0.519608...; the difference is 0.000377..., or not even half a point. If baseball truncated winning percentages, instead of rounding them, the two teams would be "tied".
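For the record, the arithmetic:

```python
# Winning percentages from the records 54-50 (Braves) and 53-49 (Phillies).
braves = 54 / (54 + 50)
phillies = 53 / (53 + 49)

print(f"{braves:.3f}")              # 0.519, as conventionally reported
print(f"{phillies:.3f}")            # 0.520
print(round(phillies - braves, 6))  # 0.000377 -- well under one "point"
```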

The first of these things -- the streakiness comment -- is the one that bothers me more, though. The "percentage points" comment is just a matter of a convention that disagrees with the one the rest of the world makes. (Why doesn't baseball report winning percentages to just two decimal places? Because that wouldn't be enough accuracy; baseball teams play 162 games a season.) But the streakiness comment is the sort of thing that shows that people don't understand the nature of randomness; people read something into "streaks" that is really just good luck.

### probabilities in "Set for Life"

On Friday, July 20, on ABC, a program called Set For Life premiered; another episode aired last night.

It's July. It's a prime-time game show. As you may have guessed, this is the stupidest game show ever. By "stupidest" I don't mean "the show is a bad idea" -- although I think it is, and it probably won't last long, nobody ever went broke by betting on the stupidity of the American public. But what I mean is that it involves absolutely no skill.

It reminds me of Deal or No Deal, in that no skill is involved except the skill of knowing when to stop. In fact, the New York Times compared this show to Deal or No Deal back in October, writing: "On “Deal or No Deal,” Mr. Mandel does not even pretend that skill is involved. He says upfront that his game is about “giving away a ton of money,” not winning a ton of money."

Well, this is in the same vein. Here's how it works. First, there's a round that I didn't see, in which somehow it is determined how much the player will win per month; I don't know how they do this. (The Wikipedia article implies it doesn't air; people on various game show forums, like this one, seem to think that this part is probably more interesting than what actually airs. I agree, because what actually airs is less interesting than staring out my window. To be fair, staring out my window is pretty interesting.) Then, in the second round, it's determined how long the player will receive this monthly amount for. There are fifteen lights embedded in the stage; eleven are white, four are red. The player picks a light, and if it's white they go one step "up the ladder"; if it's red they go one step "down the ladder". After picking a white light, the player can elect to get out of the game and keep the money; after a red light, they are not allowed to make this choice. If they pick all four red lights, they go home with no money. The ladder contains the following time amounts:

zero, 1 month, 6 months, 1 year, 2 years, 3 years, 5 years, 10 years, 15 years, 20 years, 25 years, "set for life" (40 years).

(It's not clear to me what happens if you pick a red light on the first step. This turns out not to matter in my analysis.)

They have someone in the audience (in the first episode it was the contestant's nephew) giving "advice" on which lights to pick, and there's the pretense that some level of "skill" is involved in picking which lights are going to be white and which are going to be red, but there's actually no pattern at all. So it's really just a game of chicken. There's something of a morality play embedded in this show -- sometimes greed is good, but sometimes it can backfire and you get screwed over.

(The show's gimmick is that there's someone in an "isolation pod" who can also choose to end the game, and the person outside doesn't know when that's happened; but I'll ignore this.)

At least with Who Wants To Be A Millionaire?, which started this whole modern game-show craze, some knowledge was required. What I like about the new shows is the amount of drama that's invested in them -- each "decision" to keep going comes with dramatic music, as if it Really Matters. As if the players had any sort of control over the game. But for some reason the format of uncovering lights that have already been set works a lot better, on TV, than if there were just a random number generator simulating the light picking. For example, if there were four red lights left and seven white lights left, it would say "you win" with probability 7/11 and "you lose" with probability 4/11.

The first question that comes to mind -- what is the probability that a player wins the 40 years of monthly checks, given that they don't decide to "chicken out" at some earlier step? To win this, the player has to pick all eleven white lights and no red lights. The probability of this is 1/C(15,4) = 1/1365.

Now, what's the right "strategy" for this game, in terms of trying to maximize your expected winnings? There are, at any point after you've picked a white light, two things you can do -- stop or keep going. If you keep going, we want to know the probability of ending up at the various points on the ladder. Let's say, for example, that you've found six white lights and two red lights so far, so you're at the "2 years" position on the ladder, and five white lights and two red lights remain. You pick a light at random. The probability is 5/7 that it's white, and so now you end up with "3 years" winnings, 2/7 * 5/6 = 10/42 that you pick a red light and then a white light and end up back where you began, and 2/7 * 1/6 = 2/42 that you pick two red lights and lose everything. So the expected winnings (in years) are 3(5/7) + 2(10/42), which is greater than 2, so it's a good bet.
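The two computations above look like this in code (ladder values in years; "stay" is the current rung of the worked example):

```python
from math import comb

# Probability of clearing all eleven white lights before hitting four
# reds: one over the number of ways to place the four red lights
# among the fifteen.
print(comb(15, 4))  # 1365

# The worked example: six whites and two reds picked (the "2 years"
# rung), five whites and two reds remaining.
lose, stay, up = 0, 2, 3   # years at the three possible outcomes
ev = (5/7) * up + (2/7) * (5/6) * stay + (2/7) * (1/6) * lose
print(ev)          # about 2.62 years
print(ev > stay)   # True -- keep going
```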

This same sort of reasoning works for any combination of lights. If you've so far picked zero to seven white lights and zero red lights, it turns out to be the right move to keep going. If you've picked eight white lights and zero red lights -- thus being at the "15 years" level -- staying or going are both equally good. If you've picked nine white lights and zero red lights, always stay; there are only two white lights and four red lights left! Similarly:

• if one red light has already been picked, and eight or less white lights have been picked, it makes sense to keep going; if nine or more white lights have been picked, stop.

• if two red lights have been picked, go if nine white lights or less have been picked; stop if ten white lights have been picked.

• if three red lights have been picked, keep going if seven or less white lights have been picked; if eight or nine have been picked, staying and going are equally valuable; if ten white lights have been picked, stop.
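The whole stop-or-go table can be generated by dynamic programming over the remaining lights. A caveat: the prize ladder below is an assumption -- only the rungs the post pins down (2, 3, 15, and 40 years) are from the text, and the other values are made-up placeholders -- so the exact boundary cases may shift with the show's real values:

```python
from fractions import Fraction
from functools import lru_cache

# Hypothetical prize ladder in years, indexed by rung = (whites found)
# minus (reds found).  Rungs 4, 5, 8, and 11 (2, 3, 15, 40 years) come
# from the discussion above; the rest are invented placeholders.
LADDER = [0, Fraction(1, 12), Fraction(1, 2), 1, 2, 3, 5, 10, 15, 20, 25, 40]

@lru_cache(maxsize=None)
def value(w, r):
    """Optimal expected winnings (years) with w white, r red lights found."""
    if r == 4:
        return Fraction(0)            # fourth red light: lose everything
    stop = Fraction(LADDER[max(w - r, 0)])
    if w == 11:
        return stop                   # no white lights remain; must stop
    whites, reds = 11 - w, 4 - r
    keep = (Fraction(whites, whites + reds) * value(w + 1, r)
            + Fraction(reds, whites + reds) * value(w, r + 1))
    return max(stop, keep)

# The example state above: six whites and two reds found ("2 years").
print(value(6, 2) > LADDER[4])  # → True
```

Comparing `value(w, r)` with the stop value at each state reproduces a stop/go table of this shape, though where the ties fall depends on the assumed ladder.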

The usual caveat applies; calculating "expected value" is kind of meaningless when you only get to play once. People tend to be risk-averse with their winnings, so I'd expect to see people stopping before this strategy dictates it. For example, if two red lights and nine white lights have already been picked (leaving two of each), I don't see people taking the one-in-six risk of picking both of those red lights and losing it all.

## 27 July 2007

### checks for nothing, and why English is useful

Karl Fogel attempts to pay a bill for \$0, because of course you have to pay bills for zero; otherwise the companies that issue them keep sending them.

In this case, the bill was for the purchase of a book from the Mathematical Association of America, so he wrote a check for e^(iπ) + 1 dollars. They didn't deposit it, because "check needs to be wrote out in U. S. dollars", as they put it.

I can see a more legitimate reason for rejecting the check. Usually, when writing a check, one puts, say, "3.14" in the little box on the right, and "Three and 14/100" on the line. The reason for writing out the value of the check in both figures and words is for redundancy. (Although then why don't we write "three dollars and fourteen cents" on the line? I suppose redundancy doesn't matter quite as much when we're talking about sub-dollar amounts.)

So you might say he should have written "e to the i π plus one dollars" on the line. But even that seems a bit suspect, because e, i, and π are themselves bits of mathematical notation. It seems that he really should have written something like

"The base of the exponential function, raised to the product of the imaginary unit and the ratio of a circle's circumference to its diameter, plus one"

for the number of dollars he wanted. Of course, this is the sort of thing that makes it obvious why having a compact mathematical notation is a good idea. I am not enough of a mathematical historian to have looked at the way things used to be written, but from what I understand this is the sort of thing they would have written five centuries ago, and I can't imagine working like that.

Unfortunately, the fact that we have such a good mathematical notation creates another problem -- people think that they can just put a bunch of symbols on a page and not explain what they mean by them, and that's "mathematics". Terry Tao, at his blog, has lots of writing advice; of particular interest in this discussion is his advice to take advantage of the English language. Here he gives a couple dozen ways to say that two statements are true, which are logically equivalent but have a wide variety of connotations. To take two of his examples at random, "P(x) is true. Unfortunately, Q(y) is also true." and "P is satisfied by x. Similarly, Q is satisfied by y." might be logically equivalent but are philosophically (psychologically, emotionally, morally -- what's the right word here?) quite distinct. I think that mathematicians as a whole are not sensitive enough to the connotations of their words; this is useful when doing formal mathematics but not so useful when trying to express the results of it. Perhaps we kneel too much at the altar of Bourbaki.

### everything happens somewhere

With Tools on Web, Amateurs Reshape Mapmaking -- today's New York Times. (I think you have to register, but it's free.)

The headline basically says what the article's about, and points me to some things I didn't know about -- for example, Flickr, the photo-sharing service, now allows people to tag their photos with information about their location.

It'll be interesting to see which locations are overrepresented and which are underrepresented in the ones where people take photos. I've spent a fair bit of time Googling various intersections in Philadelphia and seeing which ones get a lot of hits; obviously intersections which are landmarks of one sort or another get a lot of hits, but even intersections of two quiet residential streets will have vastly differing numbers of hits, depending on -- it seems -- how wired the neighborhood in question is. It appears to not just be a question of socioeconomic status, but also of the age of the people living in the neighborhood; neighborhoods with lots of young adults have a higher profile on the web, which isn't surprising. Also, Philadelphia seems to be a good city in which to do this, because the streets form a grid and so most intersections are at least nominally equivalent.

My favorite among these mapmaking services is the simplest one I know of -- the gmaps pedometer, which allows you to overlay a walking route on a map and find out how long it is. Since I walk everywhere this is somewhat valuable. However, I wish that it were possible to get directions on some mapping service that didn't respect one-way streets, avoided highways, and so on. A friend of mine who just got a new apartment wrote recently:
> Our apartment is at [street address]. It's close to the T! Here are Google Maps' directions if you are a car: [link]. If you are not a car, I recommend walking from the T [...]
The Google Maps directions put her new apartment at six-tenths of a mile from the T; on foot, it looks to be more like three-tenths of a mile.

But in general, everything happens somewhere, and giving people the ability to harness that fact can only be a Good Idea.

I think there will be another revolution, though. Google, Microsoft, etc. are working on these technologies to allow people to connect online data with real-life locations. But so much of what happens right now isn't really wedded to any location, and that will only continue in the future. This blog, for example, physically exists on a server somewhere... I don't know where. Oddly enough, I am reasonably sure it is not at 365 Main, a data center in San Francisco, because they had a power failure Tuesday afternoon and Blogger was still up. A lot of heavily trafficked sites were down, though, including Craigslist, Technorati, and Livejournal. It was a bit strange to see that sites which had nothing to do with each other in the virtual world nevertheless were tied together by their location in the physical world. But I don't care where the server is. (It is probably more accurate to say that insofar as my blog has a physical location, it is my kitchen table, because that's where I do most of my writing. But it is probably even more accurate to say that my blog's physical location is wherever my brain happens to be at any given moment.) What I care about is how it relates to the other information out there on the web, which I can get from, for example, seeing the "reactions" page at Technorati, looking at the service which tracks the hits this web page gets, noticing the blogs that people who link to my blog also link to, and so on. That tells me what's close to me in this virtual space.

And I wonder if this virtual space will turn into a physical space, if we will find a way to make a picture of it that makes sense. (A primitive example of what I'm thinking of is given by A Subway Map of Web Trends 2.0, from Strange Maps. Unfortunately this map is hampered by the fact that the makers didn't do their own graphic design, but rather based it on a map of the Tokyo subway system, and there's no reason that the Web should be isomorphic to the Tokyo subway. In fact, that would mean that the Web looks a lot like the actual city of Tokyo.) So far we have lists of what's close to each other. But a list of distances is not a map. Our minds are very good at making sense of spatial information, probably for evolutionary reasons; this is probably why one of the first things we tell our students to do, when it's at all relevant, is to draw a picture. But right now we only have the means to draw primitive pictures. That will change.

## 26 July 2007

### Greater-Than Sudoku

(I apologize if this post is giving you trouble when you read it; for some reason Blogger doesn't seem to like the large numbers of inequality signs in this post.)

A sort of puzzle that I find incredibly frustrating is the "Greater-Than Sudoku".

This is like Sudoku, except that none of the numbers are given. Instead, greater-than or less-than signs indicate which of any two adjacent squares contains the greater number. (The adjacencies are only within the 3-by-3 squares that make up the larger 9-by-9 square, so there's no possibility that two adjacent squares could be equal.)

An example can be found about halfway down at Ed Pegg Jr's column, "Sudoku Variations", which appears to be part of his approximately monthly column Math Games at MAA Online. Not surprisingly, this gives a wide variety of Sudoku variations. Personally I prefer the variant called "Sums Sudoku", in which the 9-by-9 grid is broken up into regions containing a few cells and the sum of the numbers in each of these is given. I went through a whole book of those last summer. They were fun.

There's a greater-than sudoku in the July 26 issue of the Philadelphia City Paper, by Matt Jones, the author of "Psycho Sudoku", who believe it or not is not the guy behind Jonesin' crosswords, which that paper also includes. (Jonesin' crosswords are by Matt Gaffney.) You can find a whole bunch of Jones' variant Sudokus at the Boston Phoenix. These include Greater-Than Sudoku, Sum Sudoku, Stepping-Stone Sudoku, and Kaidoku, which really has nothing to do with Sudoku (it's a word puzzle) but has a similar-sounding name. The reason I like the first three of these is because they require more mathematical thinking than the standard Sudoku, which is exactly the reason that most people probably like them less. I have often seen things explaining Sudoku to people who haven't seen it yet that say "it doesn't involve any math, just logic" -- and that's true, at least with the common understanding of those words.

The problem with not being given any numbers, as in the Greater-Than Sudoku, is that there's nowhere, really, to start. The way I've usually started these puzzles is as follows: the number 9 must be in a square that contains a number greater than that in each of the adjacent squares. That allows one to rule out most squares from containing 9; if you're lucky, it rules out enough squares that, combining this with the rule that there is exactly one 9 in each row, column, and 3-by-3 square, you know where all the 9's are. Replacing "greater than" with "less than", you can determine where the 1's go. Next, the 8's can be placed -- they must be in squares that are greater than everything adjacent to them, except they could be less than an adjacent 9. This seems to lead to more possible places where 8's can go, in general, than 9's, so the "ruling out" is harder. By symmetry, one can do the same thing with the 2's. My theory is that if I could follow this far enough, I would have enough numbers that I could solve it basically like a regular Sudoku puzzle; unfortunately this doesn't seem to work. (And even when it does seem to be working, it requires scratch paper -- it gets too messy to fit all the notes I want to make in the squares -- which just feels wrong for a newspaper puzzle.)

Another method that has occurred to me is to try to determine what range the number in each square falls in. For example, consider the bottom-left 3-by-3 square in the Greater-Than Sudoku in Ed Pegg's article. It has
```
A > B < C
v   ^   v
D < E < F
v   v   v
G < H > I
```

where v and ^ are meant to be downward-pointing and upward-pointing analogs of < and >, and A, B, ..., I are 1, 2, ..., 9 in some order. We have, of course, the twelve inequalities that are given explicitly: B<A, B<C, D<A, B<E, F<C, D<E, E<F, G<D, H<E, I<F, G<H, I<H. These specify a partial order of A, ..., I (for those who don't know what this means: it just means that we know some of them are larger than others, and that if x is larger than y and y is larger than z, then x is larger than z); what we want to do is extend this to a total order. If we knew, for example, that G<D<E<F<C<B<H<I<A (incidentally, this isn't true), then we'd have G = 1, D = 2, E = 3, ..., A = 9.

Since the relation "less than" is transitive, we end up with various "chains" of inequalities, the longest of which is G<D<E<F<C. We also know that G<A (since G<D and D<A). Finally, G is less than H, and we can't compare G with B or I. So there are six numbers greater than G; thus G is at most 3. I suspect that doing this sort of reasoning for each square would help.
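This kind of reasoning can be checked by brute force over all 9! arrangements; the list below transcribes the twelve comparisons from the square above as (smaller, larger) pairs:

```python
from itertools import permutations

# The twelve inequalities, each written as (smaller, larger).
ineqs = [('B', 'A'), ('B', 'C'), ('D', 'A'), ('B', 'E'), ('F', 'C'),
         ('D', 'E'), ('E', 'F'), ('G', 'D'), ('H', 'E'), ('I', 'F'),
         ('G', 'H'), ('I', 'H')]
letters = 'ABCDEFGHI'

solutions = []
for perm in permutations(range(1, 10)):
    vals = dict(zip(letters, perm))
    if all(vals[x] < vals[y] for x, y in ineqs):
        solutions.append(vals)

print(sorted({s['G'] for s in solutions}))  # → [1, 2, 3]
print(len(solutions))  # how many fillings are consistent with these signs
```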

However, the twelve comparison signs in any 3-by-3 square are not themselves enough to specify how the numbers 1, ..., 9 are arranged. There are only 2^12 = 4096 ways we can set them up, and 9! = 362880 ways to arrange the integers 1, ..., 9 in those boxes; on average, if we arrange the < and > signs randomly, there will be 9!/2^12 = 88.59375 ways to fill in the numbers 1 through 9 in that square which are consistent with that choice of inequality signs. And it is clearly possible for there to be many fewer. For example, we can never have a square with
```
A > B
^   v
D < E
```

(the rest being irrelevant), since we have A<D<E<B<A and thus A<A. Thus, to offset this, in some cases there will be many more; we need the Sudoku constraints to help fit things together. The total number of Sudoku puzzles is widely reported to be 6,670,903,752,021,072,936,960 (see Russell and Jarvis, which also gives a nice heuristic that gets very close); this is about 2^72, so it's at least plausible that the 108 comparison signs in a 9-by-9 Greater-Than Sudoku give enough information to specify a unique solution.

### fundamentalists and π

Fundamentalist math, at ooblog, via Vlorbik on Math Ed. The article quotes a "fundamentalist math textbook", which apparently actually exists. It says things like:
> Carl Friedrich Gauss first proved the fundamental theorem of algebra. There are many fundamental theorems: of arithmetic, calculus, and so on. These are so “fundamental” that many other theorems are derived from them. In the Bible, there are also fundamentals, without which Christianity would not exist—the deity of Christ, His substitutionary atonement, and the inspiration of the Bible, to name a few.
Basically, this passage says that "things, in various intellectual frameworks, are derived from other things". What else could you expect? Standing on the shoulders of giants, and all that. Something valuable for kids to know, surely, but why try so hard? I'd rather at least see a fundamentalist approach to mathematics that was based on saying "math is beautiful, and that shows us that God is great, because God invented mathematics." I wouldn't agree with this, but I could at least sympathize with it.

The passage on "fundamental" seems, to me, like a real stretch; mathematicians are well-known for using words in different ways than the general population. (So are fundamentalist Christians, I hear, although I don't know this for sure.) But mathematicians actually use "fundamental" in much the same way that Normal People do; Wikipedia has a list of fundamental theorems. I would suggest one more addition to this list: the "fundamental theorem of combinatorics", which is that if there are m ways to do one thing, and then there are n ways to do some other thing, then there are mn ways to first do one of the first things and then one of the second things. I will not edit Wikipedia to reflect this, though, as this usage seems non-standard; but insofar as combinatorics has a fundamental theorem, this is it.

(Oh, by the way, if someone refers to the "fundamental theorem" without saying what it's the fundamental theorem of, what would you think they meant? I'm not sure; I'd think they meant the F. T. of either algebra, arithmetic, or calculus.)

The most famous "math in the Bible" is, of course, the verse that can be interpreted as saying that π equals 3, due to some circular object which is thirty cubits around and ten cubits across. (Link goes to Good Math, Bad Math; the Bible verse in question is 1 Kings 7:23.) Surprisingly, everyone seems to assume that this means the writer actually thought π = 3, and comes up with complicated explanations for this. For example, some people assume that 30 cubits is the inner circumference and 10 cubits is the outer diameter. I think I even once saw someone who seriously suggested that in the time of the Bible, circles were actually hexagons; the perimeter of a regular hexagon is exactly three times the distance between opposite corners. The obvious interpretation, though, is that either:
• The object in question wasn't perfectly round. This seems rather doubtful, though, as you just need a stake in the ground and a big long rope in order to make a circle. If they wanted to make circles, they could.
• Rounding error. What is called "30 cubits" here could be as large as 30.5 cubits; what is called "10 cubits" could actually be as small as 9.5. Thus the ratio of circumference to diameter could be as large as 30.5/9.5 = 3.21. Similarly, it could be as small as 29.5/10.5 = 2.81. So the Bible is only saying that 2.81 < π < 3.21, which is true.
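The rounding-error bounds are easy to check:

```python
import math

# Measurements reported to the nearest cubit: "30 around" could be
# anything in [29.5, 30.5], "10 across" anything in [9.5, 10.5].
lo = 29.5 / 10.5
hi = 30.5 / 9.5
print(round(lo, 2), round(hi, 2), lo < math.pi < hi)  # → 2.81 3.21 True
```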
Personally I think there are plenty of reasons to doubt the literal truth of the Bible (starting with Genesis 1 and 2, which are two creation stories that contradict each other!) -- but there's no need to nitpick by saying that they didn't know what π was.

And I'm trying to picture a world in which circles were hexagons. This seems to imply that the world is in fact a triangular lattice in the plane, which is preposterous.

## 25 July 2007

### how much should used furniture cost?

In an online forum which I frequent, someone suggested that the price of used furniture decays exponentially. So if I have a desk which I bought today for \$100, and I want to sell it a year from now, it'll fetch \$80; two years from now, \$64, and so on, assuming that it depreciates at twenty percent per year. (The "twenty percent" is a number I just made up so that the arithmetic would be easy.)

But a flaw in this argument was quickly seen. For one thing, furniture depreciates essentially the moment you take it out of the furniture-dealer's showroom. (One might make an exception for IKEA; first it depreciates, because you take it out of the store, then it appreciates because you put in some of your own labor in transforming a bewildering array of pieces of wood into something you can actually sit on, sleep in, or put your possessions in.) This was ascribed to the fact that when you go to a furniture store, you have to do a lot less work to find the particular piece that suits your needs than if you're trying to find such an item on Craigslist.

Then another model for the rate at which furniture depreciates was claimed: namely, that furniture depreciates by some fixed percentage immediately when it leaves the store, and then at a constant percentage rate per year thereafter. (The depreciation percentage upon leaving the store, at least for IKEA goods, could probably be determined by watching Craigslist; a surprisingly large amount of the IKEA furniture for sale there is said to be less than one year old. Also, a surprisingly large amount of the furniture for sale on Craigslist is IKEA; I suspect this is because the type of people who sell things at Craigslist are the same upwardly mobile, transient types that buy things at IKEA. Note that I have done absolutely no market research on this subject, unless you count "what kind of furniture do my friends have?" And I didn't even actually ask my friends, I asked myself.)
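This amended model is easy to sketch; the 30% showroom drop and 20% yearly rate below are made-up numbers (in the same spirit as the post's own twenty percent), not estimates:

```python
def resale_price(new_price, years, initial_drop=0.30, yearly_rate=0.20):
    """Amended model: an immediate depreciation when the piece leaves
    the store, then exponential decay at a constant yearly rate."""
    return new_price * (1 - initial_drop) * (1 - yearly_rate) ** years

print(round(resale_price(100, 0), 2))  # → 70.0, the moment it leaves the store
print(round(resale_price(100, 2), 2))  # → 44.8, two years later
```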

But there's a problem with this one, too. I am sitting at a nineteen-year-old kitchen table. I know it is nineteen years old because my parents bought it when they (we, since I lived with them at the time) moved into a new house in 1988. I inherited this kitchen table from them in 2005, because they moved into a house for which they wanted new furniture at the same time as I needed a table. I do not know how much they paid for it, and I do not know how much it is worth. However, I suspect that if it was worth, say, \$400 when it left the showroom in 1988, and \$100 right now, then it was probably worth less than \$200 in 1997. (Pretend that 1997 is exactly halfway between 1988 and now.) That is, I suspect it depreciated by a larger percentage in the first half of its lifetime than in the second half. In fact, I suspect that by now this table is not depreciating at all. The exponential decay also suggests that in a reasonably short amount of time my table will be worth nothing, which also seems silly.

(Incidentally, all I can find online about furniture depreciation is information for tax purposes for furniture owned by businesses; the scheme used there assumes that furniture depreciates in a straight-line fashion over the first seven years of its life. This is so far from the obvious reality that I will not even take it into account.)

Why do I say this? Well, my table looks like it's in good shape. And let's say that you know it's twenty years old. There's no reason why it shouldn't hold up for another, say, twenty years -- the fact that it's been around this long tells you that it's built well. (I know I'm invoking the Copernican principle here, which I've criticized before in reference to space exploration, but furniture isn't space exploration.) So this should make you be willing to pay more for it; if you are buying a table, the amount you are willing to pay for it is roughly how much you would pay to have a table for a year, times how many years you expect the table to last. (If you sell the table before it stops working, this still applies, because the price you can get for it is the result of the next buyer making this same calculation.) As time goes on, the table accumulates scratches and dings -- making it worth less per year -- but its life expectancy might go up. (I know this seems somewhat paradoxical; you might think that a table has, say, a 5% chance of breaking in any given year -- a table breaking might be a Poisson process. But I'm claiming that the fact that this table is still in one piece should lead you to believe that it's a Good Table. I'm saying this mostly because it seems to lead to results about the used-furniture market that feel right.) When a piece of furniture is new this first effect is the larger one; when it is old the second effect is the larger one. At the point at which the second effect is larger, that's when the price of the thing starts going up.
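One way to make the "Good Table" argument concrete is a toy two-type model (all the numbers here are invented for illustration): tables are either flimsy or sturdy, and surviving twenty years is evidence of sturdiness, so expected remaining life goes up with age.

```python
# Tables are flimsy (20% yearly failure) or sturdy (2%), equally likely
# when new; lifetimes are geometric, so mean remaining life is 1/hazard.
p_flimsy, p_sturdy = 0.5, 0.5
h_flimsy, h_sturdy = 0.20, 0.02

def expected_remaining_life(age_years):
    # Posterior weight on each type, given survival to this age.
    s_f = p_flimsy * (1 - h_flimsy) ** age_years
    s_s = p_sturdy * (1 - h_sturdy) ** age_years
    w_f, w_s = s_f / (s_f + s_s), s_s / (s_f + s_s)
    return w_f / h_flimsy + w_s / h_sturdy

print(round(expected_remaining_life(0), 1))   # → 27.5 years when new
print(round(expected_remaining_life(20), 1))  # → 49.2 years after surviving 20
```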

People also attach a certain value to "antique" furniture, which would also raise how much they're willing to pay for it; however, I don't think my table is anywhere near old enough for that to matter.

As it turns out, I live above a used furniture store. However, I suspect that the owner of that store doesn't like me much, so I don't think I'll ask him what he thinks about this.

### fractals, space-filling curves, and scientific revolutions

Mark Chu-Carroll at "Good Math, Bad Math" writes about space-filling curves. These are really counterintuitive things -- curves that eventually fill up, say, an entire square. There's a nice article about them at Wikipedia.

It won't surprise you to learn that these aren't "curves" in the sense that you might think of them; if I ask you to draw a "curve" you'll probably draw something that's what mathematicians would call "piecewise smooth". What this means, roughly, is that you can draw a piece of it without having any "kinks", then turn, then draw another such piece, and so on, doing this only a finite number of times. Space-filling curves don't have this property; they are made up of infinitely many such "pieces". Not surprisingly, they also have infinite length. These curves are made by an iterative process; in the case of the Hilbert curve:

• on the first iteration the curve has length 3/2 and each point of the square is within √2/4 of the curve;

• on the second iteration the curve has length 15/4 and each point is within √2/8 of the curve;

• on the third iteration the curve has length 63/8 and each point is within √2/16 of the curve;

• on iteration n the curve has length 2^n - 1/2^n (it is made of 4^n - 1 segments of length 2^(-n)) and each point is within √2/2^(n+1) of the curve.

The maximum distance halves and the length roughly doubles with each step; as we iterate, each point in the square gets arbitrarily close to the curve (and lies on the limiting curve), and the limiting curve is infinitely long.
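The pattern in the list above is easy to check mechanically:

```python
from fractions import Fraction

# At iteration n the Hilbert curve has 4^n - 1 segments of length 2^-n,
# so its length is (4^n - 1)/2^n = 2^n - 2^-n.
for n in range(1, 5):
    length = Fraction(4**n - 1, 2**n)
    print(n, length, f"sqrt(2)/{2 ** (n + 1)}")
```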

Andrew Gelman at "Statistical Modeling, Causal Inference, and Social Science" writes about the fractal nature of scientific revolutions, pointing to this earlier post of his. The idea is that science moves forward in what the evolutionary biologists call "punctuated equilibrium" -- at most points "not much" is getting done but occasionally big moves are made and in the end science gets done. (This is a bit unfair, though, because the scientists who are doing the "not much" are often collecting the sort of data that is exactly what the revolutionaries doing the paradigm shift will turn out to need.) If this is true, then we might say that all the science that will get done between year 0 ("now") and year 81 (which turns out to be 2088) gets done either in the first third of that period (between 0 and 27) or the last third (between 54 and 81). But then something similar happens on each of those periods -- all the science gets done between 0 and 9, 18 and 27, 54 and 63, or 72 and 81. If we repeat this, ad infinitum, we get that the set of times at which science is being done is the Cantor set, which has measure zero; furthermore the rate of scientific progress, when scientific progress is happening, must be infinite in order for any science to happen at all!
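The measure-zero claim can be seen by tracking how much "active" time survives each subdivision:

```python
from fractions import Fraction

# "Science happens" only in the first and last thirds of each interval,
# recursively; the active portion of the 81 years shrinks by a factor
# of 2/3 at every level, so it tends to zero (the Cantor set's measure).
total = Fraction(81)
for level in range(5):
    print(level, total)
    total = total * 2 / 3
```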

Of course, this is ridiculous. But it makes sense that science happens in bursts, and that each burst is made of smaller bursts, and so on; that there are periods of stasis between these bursts, but that some of these periods of stasis are more static than others; and so on. It's only the mathematician's insistence on taking the limit that makes this model not work. Furthermore, there's more than one kind of science, and it could happen that one discipline's burst is another discipline's period of stasis. And maybe a model like this is more likely to hold for the individual scientist (who has periods when they Get Things Done and periods when they don't) than for science as a whole.

But the periods when it looks like the scientist isn't doing anything might be essential. The subconscious is often doing work then. Perhaps there is something about the way our subconscious works -- in which bigger breakthroughs need longer fallow periods to precede them -- that leads to this fractal nature, with bursts upon bursts.

### Propp's self-referential aptitude test

Check out the Self-referential aptitude test by Jim Propp. The test consists of twenty questions, each with five possible answers; the answer to each question depends in various ways on the other questions! There are various routes to deriving the answers. I'd say to see if you can solve it, but that takes quite some time!

I tried to come up with the answers by first assuming that the answers were a random string of letters from A to E and then changing the answers to various questions in order to agree with each other; unfortunately this didn't work, because the way in which the answers depend on each other is too complicated. I expected this process to converge to a solution fairly quickly, but it didn't.

There is a probabilistic solution, though, via genetic algorithms; Mark Van Dine writes about it. The idea is that we can tell how many of the questions are correct, and then "mutate" the answers a bit hoping that we gradually find the right answer. This took thousands of generations, though, which is why I couldn't do it by hand.
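A toy version of this mutate-and-score search, with the test's real self-consistency score replaced by agreement with a hidden target string (so this shows only the mechanics of the search, not the hard part):

```python
import random

random.seed(0)
target = [random.choice("ABCDE") for _ in range(20)]  # stands in for the test
guess = [random.choice("ABCDE") for _ in range(20)]

def score(candidate):
    """How many of the twenty answers agree with the hidden target."""
    return sum(a == b for a, b in zip(candidate, target))

generations = 0
while score(guess) < 20:
    mutant = list(guess)
    mutant[random.randrange(20)] = random.choice("ABCDE")
    if score(mutant) > score(guess):  # keep only improving mutations
        guess = mutant
    generations += 1

print(generations, "generations to get all twenty right")
```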

I give a (deterministic) solution below. First, I want to give a couple thoughts on it. Presenting a solution like this is tricky; one wants to minimize the number of "intermediate results" where we know, say, that a question can have one of two or three answers but we don't know which it is. These sorts of results are hard to hold in one's head. (My original path to the solution had more such intermediate results; I don't claim that this way of writing the solution is optimal.) Another thing one wants to reduce is contradictions that take a long time to derive; there are a couple points in the solution where I know that a question has two possible answers, so I assume one of them and seek a contradiction. If it takes a long time to find the contradiction, then one forgets which assumptions need to be "undone" when the contradiction is finally reached. These two principles that I've given here are probably good principles for the writing of mathematical proofs, as well. It's a good idea to minimize the number of things your reader has to remember at any given time, because remembering things is hard.

It looks to me like there's no trick to finding a solution, from looking at the other solutions given at Propp's web site; the other solutions given there seem about as complicated as mine.

Anyway, here we go.

First, the answer to #12 has to be (A): the question is "the number of questions whose answer is a consonant is: even, odd, a perfect square, prime, divisible by 5". I assume that there is only one "right" answer to this question; therefore we need a number between 0 and 20 which is exactly one of those things. Clearly any number is even or odd, but not both, so we need a number which isn't a square, prime, or divisible by 5. Thus the number of questions whose answer is a consonant is 6, 8, 12, 14, or 18, and the answer is (A), for "even".
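That list of candidates can be checked by brute force:

```python
# Numbers from 0 to 20 that are neither a perfect square, nor prime,
# nor divisible by 5 -- the candidates for "number of consonant answers".
def is_square(n):
    return int(n ** 0.5) ** 2 == n

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, n))

candidates = [n for n in range(21)
              if not is_square(n) and not is_prime(n) and n % 5 != 0]
print(candidates)  # → [6, 8, 12, 14, 18] -- all even, so #12 is (A)
```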

I also let knowledge from the outside world enter; the answer to #20 is (E). #12 and #15 have the same answer, so #15 is (A). This means that #13 must be (D). And the answer to #5 must be (E), by the principle that questions have only one correct answer and (E) is always a correct answer to that question.

Next, #6 and #17 only refer to each other, so we can hope to work out the answers to them without reference to the other questions. First, neither one can have the answer (E), since "all of the above" doesn't make sense in this context. Thus neither can have answer (C), because if either 6 or 17 has answer (C) then the other must have answer (E). Reasoning similarly, neither can have answer (A). So both #6 and #17 have either (B) or (D) as the answer; they must be one of each but we can't determine the order.

Similarly, #10 and #16 only refer to each other; they're (A) and (D), respectively.

So far, then, we have the answer string
----E*---A-AD-AD*--E

where the two *s are one B and one D. Furthermore, we know that 6, 8, 12, 14 or 18 answers are consonants. So we move on to problem 8, which asks how many answers are vowels; this must be 2, 6, 8, 12, or 14. The only ones of these which are among the choices are (C) 6 or (E) 8. Looking at #7, then, we must have DC, DE, or EE for the answers to #7 and #8.

#9 can't be C (because #12 isn't C). And it can't be (E), because #13 is (D) and so the "next question with the same answer as this one" can't be #14.

#1 asks: "The first question whose answer is B, is question: (A) 1 (B) 2 (C) 3 (D) 4 (E) 5". It can't be (A) (because then the question would contradict itself) or (E) (because the answer to #5 isn't (B).) It can't be (C), because we just showed the answer to #3 isn't (B). So it must be either (B) or (D). Now, it seems that we have to make an arbitrary choice. We say #1 is (B). Thus #2 is also (B). So #7 and #8 must be the same; we see that they both have to be (E). Thus #5, #7, #8, and #20 are all (E); this is four (E)s, so #3 must be (E). But now there are five (E)'s, which isn't possible by examining the choices to #3. So our assumption that #1 had answer (B) is wrong; #1 must be (D).

Thus #2 isn't (B), meaning that #7 and #8 aren't the same. And #4 is (B). Thus the answers to #3, #4, #7, and #8 must be (D), (B), (D), (E) respectively, by looking at our previous list of possible answers to those four questions. The answers so far are

and we can at least see the solution taking shape.

Next, we look at #2 again. The answer to #10 is (A). Now, #11 can't be (A), because one of the questions preceding it has answer (B). Thus the answer to #2 isn't (E). Next, #2 can't be (C), because then #8 and #9 would be the same; but #8 is (E) and #9 can't be (E). So #2 is (A) or (D). But #2 can't be (D), because #2 refers to "the only two consecutive questions with identical answers" and #1 is already (D). So #2 is (A). This means #6 and #7 have the same answer, (D). Thus #17 is (B). The answer string so far is

There are five answers (A); we know this from #4. But there are fewer than five answers (C), since there are only five blanks left and #9 is not (C). So #18 isn't (B). Similarly, there are more than five answers (D) from the choices to #14, and exactly three answers (E) from #3. So #18 isn't (C) or (D) either; thus #18 is (A) or (E). But we've already accounted for the three answers (E) -- they're #5, #8, and #20. So #18 is (A). We now know there are the same number of answers (A) and (B), namely five of each. We've identified the five (A) answers, and two of the answers (B). The answer string so far is

and there are only two answers (B) and four blanks left. Three of the blanks have the answer (B).

Now, #11 must be (B) or (C), since there are 1 or 2 (B) among the first ten answers. Say it's (C); this means that there are two (B)s among the first ten answers, so #9 must be (B). But this means that #9 and #11 have the same answer, and we just said they didn't. So #11 is (B). This means that there is one (B) among the first ten answers, so #9 is not (B). (A) and (E) have already been ruled out for #9 (as the answers to the adjacent questions), and (C) was previously ruled out. So the answer to #9 is (D). The answer string is now

with three (B)s and two blanks; there are five answers (B), so the blanks must be (B), and we have

## 24 July 2007

### predicting baseball attendance

Earlier I wondered where baseball attendance numbers come from, and pointed to J. C. Bradbury's post about the Harry Potter effect on baseball attendance; he came up with a quick-and-dirty estimate of the number of people who didn't go to baseball games because they were reading the new Harry Potter book. This got me thinking -- how could we know how many people we expect to go to any given baseball game?

This paper by Brian Stoner gives the results of a regression analysis which forecasts the average attendance of a baseball team, over the course of a season, as a function of average ticket price, number of seats in the team's stadium, the previous year's attendance, the team payroll in the current year and number of wins in the previous year, the team's television market size, and whether or not the team has a new stadium; these turn out to explain most (98%) of the variation over the sample period. For the rest of this post, let's assume that we can predict a team's average attendance; we know it'll be, say, 32,000 per game. (Equivalently, we can predict the attendance for the entire season, which is just this per-game average multiplied by 81.)
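
I haven't reproduced Stoner's actual regression, but the general shape of such a fit is easy to sketch on synthetic data -- all the numbers below are invented for illustration, not taken from the paper:

```python
# A minimal sketch of a least-squares fit of average attendance on
# team-level features, using made-up data (not Stoner's dataset).
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-team features: ticket price ($), stadium seats (thousands),
# last year's attendance (thousands), payroll ($M), last year's wins.
X = rng.uniform([10, 35, 15, 50, 60], [40, 50, 45, 200, 100], size=(30, 5))
true_beta = np.array([-0.3, 0.4, 0.6, 0.05, 0.1])
y = X @ true_beta + 5 + rng.normal(0, 1, 30)   # avg attendance (thousands)

# Fit: add an intercept column and solve the least-squares problem.
A = np.column_stack([np.ones(len(X)), X])
beta, *_ = np.linalg.lstsq(A, y, rcond=None)

resid = y - A @ beta
r_squared = 1 - resid @ resid / ((y - y.mean()) @ (y - y.mean()))
print(r_squared)  # on clean synthetic data like this, close to 1
```

A high R² here is baked in by construction; the interesting thing about the paper's 98% is that real teams fit nearly as well.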

But what I was wondering about is not average attendance, but attendance at a specific game, or equivalently how much the attendance is expected to be above or below the average. If you look at the attendance figures for a given team, their deviation from the average attendance depends on other figures which vary from day to day. First, there are factors that probably affect any outdoor activity, not just baseball:

• day of week -- weekend baseball is more popular than weekday baseball

• time of day -- day games and night games are different, although which is more popular probably depends on day of week and the season

• weather. Pre-purchased tickets probably depend on the average weather; for example, my family has a long-standing policy of not going to day games in June through August, because there's too much risk of the weather being unbearable. I suspect fans in colder-weather markets than Philadelphia avoid games in early April. (Philly can be cold in early April; I wore a winter coat to the Phillies' second game of the season this year.) After the many games canceled due to poor weather in April (including that Indians-Mariners series in Cleveland that got snowed out!), Gregory Goodrich, a meteorologist, examined the frequency of such "miserable baseball days". Walkup tickets, by contrast, probably depend quite heavily on the actual weather. Also, weather matters a lot less in a domed stadium; only in particularly miserable weather will people not want to leave their homes at all.

I don't think these three variables could be treated independently; I'd probably prefer a day game in April (the nights can be very cold!) but a night game in July. And people probably care less about weekday versus weekend when school isn't in session.

There are then the factors that are unique to sports:

• quality of the visiting team. In general I suspect people want to see the home team play a good opponent more than a bad opponent. But at the same time people like to see the home team win. Perhaps people most want to see a visiting team with the same record as the home team, which maximizes the chances of a competitive game. (Note that I'm assuming that the quality of the home team is factored into the average attendance.)

• identity of the visiting team. When two teams in the same division play each other, I expect the attendance is higher than usual. Similarly, when teams have a long-standing rivalry more people come to games. This is distinct from the "quality of the visiting team" -- Phillies fans are more likely to come down to the ballpark to see the Mets than the Brewers or Dodgers, who have similar records as of this writing. Giants-Dodgers, Red Sox-Yankees, Cubs-Cardinals, etc. games probably sell more tickets than other intradivisional games (Giants-Diamondbacks, Red Sox-Orioles, Cubs-Astros, etc.)

• time of season. This is distinct from "weather" because while one might expect the same weather in May and September, the two are very different in the context of a baseball season; a game near the end of the season is usually seen to be "more exciting", at least if at least one of the two teams involved has the potential of making the playoffs.

• playoff importance. Games that are important in determining who ends up in the playoffs should have more attendance; for example, games where the two teams are first and second in their division or in a wild-card race.

I don't know if anyone's done this analysis (and if they have, it wouldn't surprise me if it's a baseball-team front office that isn't talking!) but I'd like to see the results.

### where do baseball attendance numbers come from?

The Numbers Guy asks: Are Sports Teams Juicing Attendance Figures? The two most popular ways of measuring attendance are paid attendance and number of people who came through the turnstile.

I'm not sure what the best way to count attendance is. Is a team more interested in how many people were actually watching them, or how many tickets they sold? If you want to gauge, say, the level of public support for the team, the first of these might be more important. If you want to know how much money you've made, the second seems more important -- except that it doesn't actually matter how many tickets were sold, but rather how much those tickets were sold for. A lot of teams have adopted a policy of having different prices on different nights; 30,000 tickets sold at an average of \$15 and 30,000 tickets sold at an average of \$25 are clearly quite different things.

But I think you'll generally see paid-attendance numbers rather than turnstile numbers because the first one is bigger. And it probably doesn't matter which one teams report so long as they all do it the same way; I doubt that different teams in the same league are going to have substantially different no-show rates. (I'd be intrigued to learn I'm wrong, though.)

On a related note, my hometown baseball team -- the Philadelphia Phillies -- have been selling out a lot of games this season. You can see the attendance figures at baseball-reference.com. All three games of the series against the St. Louis Cardinals on July 13-15 were called "sellouts"; the attendance at those games was 43,838, 45,050, and 44,872 respectively. (See the AP article if you don't believe me on the first of those three games. Also, I was there and they said it was a sellout.) The Phillies have reported attendance as high as 45,537 (Sunday, June 17, against the Detroit Tigers). Apparently "sellout crowds" can differ by as much as 4%. This at first makes me suspect that the Phillies use the number of people to come through the gates, because they're not just magically making another 1,700 seats appear some days that aren't there others. (I suppose it's possible they might be -- if so, let me know how!) The article on the game of July 13 calls it the "13th sellout of the season"; counting the two sellouts later in that series, there have been fifteen sellouts so far. It turns out that the game of Friday, July 13 is the fifteenth-most-attended game this season; the attendance at the fifteen most attended games has been, in rank order:

45,537: 6/17
45,289: 7/1
45,165: 6/29 (2)
45,153: 6/2
45,129: 5/13
45,107: 4/29
45,102: 6/16
45,050: 7/14
45,026: 5/12
45,003: 6/30
44,872: 7/15
44,742: 4/2
44,336: 4/13
44,323: 6/28
43,838: 7/13

The Phillies claim a seating capacity of 43,500; for certain games they sell standing-room tickets, and I had believed the number of these was 500 per game. Yet fourteen times this season they've broken 44,000. Where are these people?
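
The 4% spread I claimed above can be checked directly from the fifteen figures listed:

```python
# The fifteen "sellout"-range attendance figures from the list above.
attendance = [45537, 45289, 45165, 45153, 45129, 45107, 45102, 45050,
              45026, 45003, 44872, 44742, 44336, 44323, 43838]

spread = max(attendance) / min(attendance) - 1
print(f"{spread:.1%}")                      # roughly 4%
print(sum(a > 44000 for a in attendance))   # 14 games over 44,000
```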

Finally, J. C. Bradbury at Sabernomics asks how many people didn't go to baseball games because of the new Harry Potter book; he concludes about six thousand. I'm curious how he predicted what the attendance would have been in the absence of the new book; given the small size of the effect I'm not sure if his numbers are at all valuable, and he admits as much.

### housing rent maps

From Information Aesthetics (originally from boingboing): maps of San Francisco where different colors represent neighborhoods which are more or less expensive to rent in, from craigslist data. This is the work of Ethan Garner. The site appears to be overworked right now, so I can't actually check it out.

Anyway, I've been thinking that a map like this would be useful. However:

- as people have pointed out, the scale is relative, not absolute, so "yellow" on one map doesn't correspond to "yellow" on the other;
- I'd kind of like to see numbers. What I'd really want is a map that tells you "it'll probably cost you around \$X per month to live here". The screenshot on boingboing is good enough that I can tell it's not there;
- the implicit scale of .5 miles (the color for each point was determined by looking at places up to half a mile away) sounds too large, to me, at least for the city I know best (Philadelphia). There are plenty of neighborhoods I know which really shouldn't be lumped in with things which are half a mile away. Then again, there may be an issue of the sample size here; if you reduce that radius too much there's the possibility of wild fluctuations due to a few particularly expensive or particularly cheap apartments.

What I'd really like to see is something that separates out the various elements of the price of an apartment -- for example, how much more does a two-bedroom, two-bathroom apartment cost than a two-bedroom, one-bathroom? How much more does a place with central air cost than one without? How much is being a block closer to downtown, or to a subway station, worth? (This last one, I think, has probably been studied; I've heard a rule of thumb that people value their travel time at one-half of their hourly wage, so if you know where people living in a certain area tend to go, and how much money they make, you've got an answer.) I've developed various rules of thumb while looking for housing, but just when I think they work I find too many counterexamples. But I'm not sure if they're actually counterexamples or if the model is sound and some people just price weirdly, and I don't have enough data -- or enough statistical knowledge -- to go any further.
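
Separating out those elements is what economists call a hedonic regression; here's a toy sketch of the idea, on entirely invented listings (the "true" premiums are made up, and a real analysis would need real data and many more features):

```python
# Hedonic regression sketch: recover the rent premium for each feature
# from made-up apartment listings.
import numpy as np

rng = np.random.default_rng(1)
n = 200
bedrooms   = rng.integers(1, 4, n)
bathrooms  = rng.integers(1, 3, n)
central_ac = rng.integers(0, 2, n)
blocks_out = rng.integers(0, 15, n)   # blocks from the subway

# Invented "true" pricing rule plus noise.
rent = (600 + 400*bedrooms + 150*bathrooms + 75*central_ac
        - 20*blocks_out + rng.normal(0, 50, n))

A = np.column_stack([np.ones(n), bedrooms, bathrooms, central_ac, blocks_out])
beta, *_ = np.linalg.lstsq(A, rent, rcond=None)
print(np.round(beta))  # roughly recovers [600, 400, 150, 75, -20]
```

The per-block subway coefficient is the kind of number the travel-time rule of thumb would let you sanity-check.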

As for "why San Francisco?" -- I've seen a similar neighborhood map of San Francisco, which also gets its data mostly from Craigslist housing posts, and I've never seen maps like this generated from actual data for any other city. Craigslist probably has better coverage of San Francisco than of any other city. However, my instinct is that Craigslist is still a flawed source, because it overrepresents the sort of apartments in the sort of neighborhoods that young people who are on the Internet a lot like. I don't know of any better source, though.

I don't trust this "neighborhood project", though, for a very simple reason: craigslist housing posts come from landlords, and landlords will lie about properties that are near the border of two neighborhoods if one neighborhood is seen as significantly "better" than the other. (In Philadelphia, for example, I've seen places at 56th and Arch called "University City" when even the University City District doesn't get within three-quarters of a mile of there, and their definition is widely thought to be very generous.) My instinct is that the data coming from people looking for roommates is substantially different, because people are less likely to lie to people they have to live with than to people they're just going to take money from.

Also, neighborhood names change with time; Philadelphia's "Graduate Hospital" neighborhood didn't exist twenty years ago. (And it now seems like a really stupid name, because the hospital's not called that any more.) Residents are probably likely to use the name that a neighborhood had when they moved in. Unfortunately it would be nearly impossible to create a map that said "this is where various neighborhood boundaries were in 1950; this is where they are today", because getting the data for where people thought neighborhoods were in 1950 would require combing through far too many old newspapers and such.

## 23 July 2007

### life expectancies

The Numbers Behind Life Expectancy, from Carl Bialik's "The Numbers Guy" column at the Wall Street Journal.

Michael Moore said in Sicko that life expectancy is higher in Cuba than in the U.S.; CNN says it's the other way around; it turns out that so much computation goes into these calculations that there's probably a substantial amount of error. You might naively think that if life expectancy is, say, 77 years, that means that the average person born 77 years ago (in 1930) is just now getting around to dying. But the problem is that medical care isn't static, so this doesn't tell us how long people being born now should expect to live. So what's actually done is that one looks at how many people of age N in, say, 2005 survive to age N+1 (in 2006), and then these are chained together to tell us how many people would live to, say, age 80 if medical care remained as it is today and so the mortality rates remained constant. Basically, life expectancy is a moving target, because medical care changes substantially during a single person's life.
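
The "chaining" can be sketched in a few lines. The death rates below are invented for illustration (real life tables use actual vital statistics, and handle fractional years of life more carefully), but the mechanism is the same: freeze today's age-specific mortality and add up the expected years survived:

```python
# Toy period life expectancy: q[a] is the (invented) probability of dying
# between age a and a+1, held fixed at "today's" values.
max_age = 110
q = [0.005 if a < 1 else 0.0005 + (a / max_age) ** 8 for a in range(max_age)]

expectancy = 0.0
alive = 1.0            # fraction of the cohort still alive
for a in range(max_age):
    alive *= 1 - q[a]  # chain the one-year survival probabilities
    expectancy += alive  # each surviving person-year adds one
print(round(expectancy, 1))
```

Change the q values (say, because medical care improves) and the answer moves -- which is exactly why the single number "77" is a moving target.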

However, although the number "77" might not be that meaningful, I would guess that differences between those numbers for different populations which had been computed in the same way are valid to look at. A society where this number is 80 is probably healthier than one where it's 70.

But as many people point out, the Cuban statistics might not reflect what's actually going on in that country. It's difficult to know for sure.

Also, this means that the low life expectancies for countries in sub-Saharan Africa which have been affected by the AIDS epidemic probably understate how long people born there today will actually live; one hopes that the AIDS epidemic won't keep killing people at the same rates that it is now.

### proofs and deceptions by dissection

A friend of mine pointed me to this puzzle, which shows a triangle broken up into four pieces, and then the four pieces are rearranged in such a way that it appears that they now have total area one less than they did before. [edit, 5:43 pm: an anonymous commenter said that the link was broken. It's fixed now.]

What's really happening is that the "triangle" on the top and the "triangle" on the bottom are not triangles. However, it's nearly impossible to tell this by eye. The top "triangle" actually has its hypotenuse bent slightly inward; the bottom "triangle" has its hypotenuse bent slightly outward. If you superimposed the bottom "triangle" on the top one, there would be a long, thin sliver of area which would be contained in the bottom triangle but not in the top one; this turns out to have an area of exactly 1, making up for the "missing square".

One naturally expects the large "triangles" -- which appear to be right triangles with leg lengths 13 and 5 -- to be similar to the two small triangles. But the dark blue triangle has legs 8 and 3, and the green one has legs 5 and 2. It seems natural to compare the three right triangles by looking at the size of their smallest angle (since they're right triangles, this determines the shape uniquely); these are arctan(5/13), arctan(3/8), and arctan(2/5), or 21.04, 20.56, and 21.80 degrees, respectively. So the "bending" is on the order of one and a quarter degrees, which is hardly visible to the naked eye, especially since we naturally expect that a line that "looks" straight really is straight.
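
Both the angles and the area bookkeeping are quick to verify:

```python
# The angles of the three "similar-looking" right triangles, and the
# total area of the four pieces versus a true 13x5 right triangle.
import math

angles = [math.degrees(math.atan(a / b)) for a, b in [(5, 13), (3, 8), (2, 5)]]
print([round(t, 2) for t in angles])   # [21.04, 20.56, 21.8]

# Pieces: triangles of legs 8x3 and 5x2, plus two polyomino pieces of
# areas 7 and 8; they total 32, not the 32.5 of a true 13x5 triangle.
pieces = 8*3/2 + 5*2/2 + 7 + 8
print(pieces, 13*5/2)   # 32.0 32.5
```

The half-unit discrepancy on each side of the "hypotenuse" is exactly the thin sliver of area 1 split between the two figures.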

It's not a coincidence that the various numbers which are involved in this figure are Fibonacci numbers (2, 3, 5, 8, 13). The Fibonacci numbers have the property that the ratio between any two consecutive members of the sequence is very close to φ = (1+√5)/2, or about 1.618; the sequence approaches this limit quite quickly. For example, 13/8 = 1.625. The relevant ratios here are the ratios between Fibonacci numbers which are two positions apart (say, 5 and 13); these very quickly approach φ² = 2.618...
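
The speed of that convergence is easy to see numerically:

```python
# Ratios of Fibonacci numbers: consecutive terms approach phi,
# terms two apart approach phi squared.
phi = (1 + 5 ** 0.5) / 2

fib = [1, 1]
while len(fib) < 20:
    fib.append(fib[-1] + fib[-2])

print(fib[6] / fib[5], phi)        # 1.625 vs 1.618...
print(fib[6] / fib[4], phi ** 2)   # 13/5 = 2.6 vs 2.618...
print(fib[19] / fib[17])           # far out, essentially phi squared
```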

I hadn't seen this particular example before, but there are other, similar puzzles which rely on these facts about the Fibonacci numbers; see Dissection Fallacy at MathWorld, in which an 8-by-8 checkerboard is broken up into pieces which appear to be rearranged to have area 63 or 65. The underlying idea is very much the same -- that we don't see the slight "imperfections of shape".

This isn't particularly special to the Fibonacci numbers, but can actually occur whenever the slopes of two lines are rational numbers which are close to each other; see, for example, the Curry Triangle at MathWorld, which relies on the fact that 2/5 and 3/7 are approximately equal. A bunch of Java applets illustrating such "dissection fallacies" can be found at Cut The Knot, and another illustration of this particular dissection can be found at Missing square puzzle on Wikipedia.

There's also a classic proof by dissection of the Pythagorean theorem (see about a third of the way down); essentially we can divide the two squares on the legs of a right triangle into pieces and rearrange them to form a square on the hypotenuse. Pedants will complain that you can't do this without proving that when you move regions around and rotate them their areas don't change -- and I suppose this is technically true -- but I think that these sorts of proofs give a much better intuition for why the Pythagorean theorem is true than, say, Euclid's proof.

### five finger keyboards? I don't think so.

From Slashdot: Five Finger Keyboards from Trevor's Trinkets. Trevor suggests the possibility that one could have five-key "keyboards" that do everything that normal keyboards do, simply by having the possibility of pressing multiple keys at once and having every possible combination of keys mean something.

This seems like a bit of abuse of combinatorics to me. First, he points out that it would be possible to have 31 (2^5 - 1) possible "keys" by allowing any combination of "pressed" and "not pressed" for the five keys. This seems just barely feasible to me. The next suggestion is to take into account the order in which the keys are depressed, giving rise to 5 + 5*4 + 5*4*3 + 5*4*3*2 + 5*4*3*2*1 = 325 combinations. In general (although he doesn't point this out), for an n-key keyboard this would give rise to

n + n(n-1) + n(n-1)(n-2) + ... + n(n-1)(n-2)...1
= n! [1/(n-1)! + 1/(n-2)! + ... + 1/0!]

combinations, which is very nearly e(n!), since we have e = 1/0! + 1/1! + 1/2! + 1/3! + ... and the term in brackets above is this series with the very small terms omitted. (One can't always just chop off the "small terms" like this, but the terms decrease quickly enough that I can say this here.) In fact, it's exactly e(n!) rounded down to the nearest integer, minus 1. (For the record, 120e = 326.19...)
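
Both the count and the floor formula can be checked in a couple of lines:

```python
# Number of nonempty ordered selections from n keys, versus floor(e*n!) - 1.
import math

def ordered_selections(n):
    return sum(math.perm(n, k) for k in range(1, n + 1))

print(ordered_selections(5))                       # 325
print(math.floor(math.e * math.factorial(5)) - 1)  # 325
print(all(ordered_selections(n) == math.floor(math.e * math.factorial(n)) - 1
          for n in range(1, 10)))                  # True
```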

Having key combinations that do something, yet are prefixes of other valid key combinations, seems like a Bad Idea. On your computer, for example, you might press Ctrl-C to copy something; C by itself does something, but Ctrl by itself doesn't do anything. That way if your fingers slip you haven't accidentally done Something Else. There's a notion of a prefix code (or "prefix-free code") which takes this into account. Huffman codes are perhaps the best example of this, and they're quite useful for compression of data, since they allow "letters" that occur more commonly to be encoded with shorter strings. (Morse code works similarly.)
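
Here's a small illustration of the prefix property, building a Huffman code over some made-up letter frequencies -- this is the standard construction (repeatedly merge the two lightest trees), not anything specific to Trevor's keyboard:

```python
# Build a Huffman code; no codeword is a prefix of another, so a decoder
# (or a keyboard) always knows when a symbol is complete.
import heapq

freqs = {'e': 12, 't': 9, 'a': 8, 'o': 7, 'q': 1}   # made-up frequencies

heap = [(w, i, {s: ''}) for i, (s, w) in enumerate(freqs.items())]
heapq.heapify(heap)
i = len(heap)
while len(heap) > 1:
    w1, _, c1 = heapq.heappop(heap)   # merge the two lightest trees
    w2, _, c2 = heapq.heappop(heap)
    merged = {s: '0' + code for s, code in c1.items()}
    merged.update({s: '1' + code for s, code in c2.items()})
    heapq.heappush(heap, (w1 + w2, i, merged))
    i += 1

codes = heap[0][2]
print(codes)  # rarer letters get longer codewords

# Prefix property: no codeword begins another.
words = list(codes.values())
assert not any(a != b and b.startswith(a) for a in words for b in words)
```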

Then he points out that if we take the order in which the keys are released into account, we'd have 15,685 combinations; this is again true, but requires far more finger dexterity than I think we can expect the average person to have. Furthermore, this seems like an incredible tax on memory. (And remember that designing for the "average" person is actually foolish, because about half of people are below average.)

In terms of memory, Trevor suggests that menus of some sort could appear which tell people which numbers lead to which keys, for example saying "1. a-i, 2. j-r" and so on; this seems to me like it would only work if there's some natural order to the characters to be entered. This means that Trevor's suggestion that this could be used for entering, say, Chinese characters wouldn't work so well; there's no natural order to those characters, as far as I know.

There's actually a fairly old convention for the input of alphabetic data that I rather like; I remember seeing it used for, say, getting stock quotes by phone. The convention was that one had A = 21, B = 22, C = 23, D = 31, E = 32, ..., Z = 94; that is, the usual telephonic pairing of letters with numbers, with a second digit required to uniquely specify the letter. (In terms of keystrokes per letter, this has very nearly the same efficiency as the code that a lot of present-day phones use, namely A = 2, B = 22, C = 222, D = 3, E = 33, ..., Z = 9999, about two keystrokes per letter.)
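
The efficiency comparison in that parenthetical is a one-liner to check, using the standard phone layout (Q on 7, Z on 9):

```python
# Keystrokes per letter: the two-digit scheme versus multitap.
layout = {'2': 'abc', '3': 'def', '4': 'ghi', '5': 'jkl',
          '6': 'mno', '7': 'pqrs', '8': 'tuv', '9': 'wxyz'}

two_digit = 2.0   # every letter costs exactly two presses

# Multitap: the k-th letter on a key costs k presses; average over 26 letters.
presses = [i + 1 for letters in layout.values() for i in range(len(letters))]
multitap = sum(presses) / 26
print(two_digit, round(multitap, 2))  # 2.0 vs about 2.15
```

So multitap is actually slightly worse on average, before even counting the pauses it needs to distinguish "AA" from "B".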

But as one of the people who has already commented at Trevor's blog pointed out, it's probably better to have a voice-based interface than to try to develop this any further. It may be theoretically possible to have very complicated input strings, but in designing a device for actual human beings to use we must take into account that people make mistakes.