29 February 2008

Leap days

Mark Dominus has written an interesting post about leap days, of which today is one. I welcome this turn of events, because it means I don't have to! But I have a few things to add.

The reason that today is February 29 -- and not March 1, as it would be in a normal year -- is that the year is almost a quarter of a day longer than 365 days. So we add an extra day every four years. But not quite -- it's actually more like 365.24 days -- so we leave out one of these extra days every century. But no, it's really more like 365.2425 days -- so we add back in one of the days we got rid of every four centuries. (We did that in 2000, as you may remember. 2000 was a leap year.)
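Those three nested rules are easy to write down directly; a quick sketch of the standard Gregorian test (the function name is my own choice):

```python
def is_leap(year):
    """Gregorian rule: every 4th year is a leap year, except
    century years, except those divisible by 400."""
    return year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)

print([y for y in (1900, 1996, 2000, 2007, 2008) if is_leap(y)])
# 1900 is skipped (a century year), 2000 is kept (divisible by 400)
```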

The real number of days per year is something like 365.24219; thus what one wants to do is to find a rational approximation p/q to 0.24219 and add p leap years every q years. The approximation we use, 97/400, actually isn't that good for a number with such a large denominator; 8/33, a convergent of the continued fraction of 0.24219, is better. Dominus proposes having leap years in those years congruent to 4, 8, 12, ..., 32 mod 33, a scheme that was also suggested in 1996 by Simon Cassidy on Usenet, complete with the same choice 4, ..., 32 of leap years in each 33-year cycle. The reason for this coincidence is that 1984, 1988, ..., 2012 are leap years in both systems.
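The convergents are easy to compute with exact rational arithmetic; a sketch, using the standard continued-fraction recurrence (the function name and the six-term cutoff are my own choices):

```python
from fractions import Fraction

def convergents(x, n):
    """First n continued-fraction convergents of a Fraction x."""
    h0, k0, h1, k1 = 0, 1, 1, 0   # seeds for the standard recurrence
    out = []
    for _ in range(n):
        a = x.numerator // x.denominator        # next partial quotient
        h0, k0, h1, k1 = h1, k1, a * h1 + h0, a * k1 + k0
        out.append(Fraction(h1, k1))
        frac = x - a
        if frac == 0:
            break
        x = 1 / frac
    return out

cs = convergents(Fraction(24219, 100000), 6)
print(cs)  # 1/4, 7/29, 8/33, and 31/128 all appear
```

Note that 31/128 shows up as a convergent too, which is why the 128-year scheme mentioned below in the edit is so accurate.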

And although dividing by 33 is hard, as Dominus points out, it's not that hard to reduce a number modulo 33 in your head. The year abcd (where a, b, c, and d are digits) reduces modulo 99 to ab + cd; for example, 2008 reduces to 28 modulo 99. Then just subtract 33 or 66 as necessary. I'd rather have to do lots of reduction modulo 33 in my head than, say, modulo 37. (And with 33 there's probably some slick Chinese-remainder-theorem method that allows you to reduce a number mod 33 by reducing it mod 3 and 11, both of which are easy.) In any case, there's no reason we couldn't have a calendar that requires more complicated arithmetic than the current one; most people are not in the business of calculating calendars. (And calendars are calculated by computers now; Dominus points out that his new rule actually requires less computation than the old one. The 97/400 rule is an artifact of the fact that we work in decimal.)
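The digit-pair trick works because 100 ≡ 1 (mod 33), so the year 100·ab + cd is congruent to ab + cd. It can be checked exhaustively; a sketch:

```python
def mod33(year):
    """Reduce a year mod 33 the mental-arithmetic way:
    sum the leading and trailing digit pairs (reduction mod 99),
    then subtract 33 or 66 as necessary."""
    r = year // 100 + year % 100
    while r >= 33:
        r -= 33
    return r

print(mod33(2008))  # 28
assert all(mod33(y) == y % 33 for y in range(1, 10000))
```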

As is pointed out both by Cassidy and by this page, Jesus is traditionally said to have lived for 33 years; for some people that might lend additional appeal to a 33-year cycle.

Edit (8:23 am): There's also something to be said for a system that includes 31 leap years every 128 years -- have leap years in years divisible by 4, except those divisible by 128 -- this was apparently suggested by Johann Heinrich von Mädler, and would only be off by something like a quarter-second a year. But striving for such accuracy is silly, because the length of the year isn't actually constant.

27 February 2008

Number sense?

Numbers Guy, from the March 3, 2008 issue of The New Yorker, on human beings' intuitive "number sense".

Interestingly, we're quicker at comparing numbers that are further apart, which seems to imply some intuitive sense of "number line" -- and our number line perhaps has some sort of strange metric (not the usual one) in which the distance between 2 and 3 is longer than the distance between 7 and 8, since we're faster at saying 3 is larger than 2 than we are at saying 8 is larger than 7. (See, for example, yesterday's post on hyperbolic discounting which exploits a similar phenomenon with respect to timelines.)

And here's an interesting fact:
Because Chinese number words are so brief—they take less than a quarter of a second to say, on average, compared with a third of a second for English—the average Chinese speaker has a memory span of nine digits, versus seven digits for English speakers. (Speakers of the marvellously efficient Cantonese dialect, common in Hong Kong, can juggle ten digits in active memory.)

I wonder if this sort of thing is true in general -- does this correlation extend beyond just a pair of languages? Wikipedia's article on the "seven plus or minus two" phenomenon backs this up -- the actual limit is something like two seconds of speech -- although unfortunately there's no citation to follow.

(Via Seed's Daily Zeitgeist.)

26 February 2008

Combinatorial quotients

Consider a combinatorial class (in the sense of Flajolet and Sedgewick) or combinatorial species (in the sense of Bergeron, Labelle, and Leroux). There are certain generating functions associated with combinatorial classes/species (I'm conflating the two for the moment because the difference between the two theories isn't important here), and the operations of sum and product have nice "combinatorial" interpretations -- addition of generating functions corresponds to disjoint union of classes/species, and multiplication of generating functions to what I'll call here the "combinatorial product": for two species F and G, an FG-structure on the set [n] is a pair consisting of an F-structure on some subset S of [n] and a G-structure on [n]-S. Then the generating function of the FG-structures is the product of the generating function of the F-structures and that of the G-structures.
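In terms of counting sequences, the product of exponential generating functions is a binomial convolution; a sketch in Python (the species E of sets, with exactly one structure on every set, makes a handy test case -- an EE-structure on [n] is just a choice of the subset S, so there are 2^n of them):

```python
from math import comb

def species_product(f, g):
    """EGF product of counting sequences: h_n = sum_k C(n,k) f_k g_{n-k},
    counting pairs (F-structure on a k-subset, G-structure on the rest)."""
    n = min(len(f), len(g))
    return [sum(comb(m, k) * f[k] * g[m - k] for k in range(m + 1))
            for m in range(n)]

e = [1] * 6                    # the species of sets: one structure per set
print(species_product(e, e))   # [1, 2, 4, 8, 16, 32]
```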

Other natural operations on functions have combinatorial interpretations: composition of functions, taking exponentials or logarithms, differentiation, integration, etc. Even subtraction has an interpretation, although one has to be careful: this is the "virtual species". (A virtual species is a pair of ordinary species (F, G), written as F-G, where (F, G) = (H, K) if F + K = G + H. Replacing species with integers and addition with multiplication, this looks like the construction of the rational numbers from the integers.)

But is there a natural quotient operation? If there is, I can't find a mention of it in Flajolet and Sedgewick, and Marko Petkovsek (who I was just talking about this with) tells me it's not in Bergeron et al. either; this is one of the sources he's using for our current topics course in combinatorics. Let F be a species with one structure on the set of size 0, and let F_n be its restriction to size n. Let F_+ be its restriction to all positive sizes. Then
F^{-1} = (1 + F_+)^{-1}

where 1 is the multiplicative identity species (hence the name); it has one structure on a set of size 0. We can of course write this as
1 - F_+ + F_+^2 - F_+^3 + F_+^4 - \cdots

and we collect the "positive" and "negative" terms to get
F^{-1} = \sum_{k=0}^\infty F_+^{2k} - \sum_{k=0}^\infty F_+^{2k+1}

So the inverse of the species F exists, and it's a virtual species in which the positive part is even-length tuples of nonempty F-structures, and the negative part is odd-length tuples of nonempty F-structures. This doesn't quite seem like a "combinatorially natural" operation -- but on the other hand I was able to describe it in a single sentence, so it seems plausible that it could be a natural thing to occur somewhere.

Notice that this also proves that if
F(z) = \sum_{n=0}^\infty f_n {z^n \over n!}, G(z) = \sum_{n=0}^\infty g_n {z^n \over n!}

and F(z)G(z) = 1, f_0 = 1, then all the g_n are integers, since the generating function G(z) actually counts something! (Well, a virtual something -- but that's close enough. It won't surprise you to learn that for two species H and K, the generating function corresponding to the virtual species H-K is H(z) - K(z); G here is a virtual species, so the g_n are (possibly negative) integers.) Of course, there are other ways to prove this, but I think this is a cute "combinatorial proof".
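The integrality claim is easy to check numerically: extracting the z^n/n! coefficient of F(z)G(z) = 1 and using f_0 = 1 gives the recurrence g_n = -Σ_{k=1}^{n} C(n,k) f_k g_{n-k}. A sketch, with the species of sets (F(z) = e^z, so G(z) = e^{-z} and g_n = (-1)^n) as a test case:

```python
from math import comb

def egf_inverse(f, n):
    """Coefficients g_0..g_{n-1} of G with F(z)G(z) = 1, assuming f[0] == 1.
    Extracting the z^m/m! coefficient of FG = 1 gives
    g_m = -sum_{k=1}^{m} C(m,k) f_k g_{m-k}."""
    g = [1]
    for m in range(1, n):
        g.append(-sum(comb(m, k) * f[k] * g[m - k] for k in range(1, m + 1)))
    return g

print(egf_inverse([1] * 6, 6))  # [1, -1, 1, -1, 1, -1]
```

Since the recurrence involves only integer multiplications and additions, the g_n visibly stay integers -- which is the algebraic shadow of the virtual-species argument above.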

"One Giant Leap For Babykind"

One Giant Leap for Babykind, from the (Feb. 25?) New York Post.

A mother born February 29, 1980 is due to have a baby on February 29, 2008. (Her doctors have said they'll induce labor that day if it doesn't happen naturally.)

Now, one out of every 1461 days is a leap day; assuming that the events of a mother and a baby being born on that day are independent (which seems reasonable), this should be true for about one in every 1461^2 = 2,134,521 mother-baby pairs. So about 140 people in the U. S. should be born on February 29 and also have a mother born on that day.
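The arithmetic, spelled out (the 300 million population figure is my own rough 2008 estimate, not from the article):

```python
p = 1 / 1461                  # one day in every 4*365 + 1 is a leap day
pairs = p ** 2                # mother and baby independently leap-day-born
print(1 / pairs)              # about 2,134,521 mother-baby pairs per match
us_population = 300_000_000   # rough 2008 figure (my assumption)
print(us_population * pairs)  # about 140 such people
```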

I was actually reading the article expecting them to make some sort of mathematical mistake -- this feels like the sort of place where an "expert" is consulted and gives some ridiculous figure -- but they managed to restrain themselves.

25 February 2008

Hyperbolic discounting

Greg at The Everything Seminar posts about "hyperbolic discounting". Roughly speaking, theoretically one should value a payoff of an amount P of money at time t in the future the same as a payoff of Pe^{-rt} today, where r is the inflation rate. But people actually seem to value P at that time in the future like they value P/(1+ct) today, for some constant c. People probably have a decent sense of what the inflation rate is; thus Pe^{-rt} and P/(1+ct) should agree locally, so I'd guess c = r. (Set the two expressions equal to each other, so e^{rt} = 1 + ct, and take the first two terms of the Taylor series.)

Greg writes:
I think something like a logarithmic measure on actual time might give the hyperbolic discounting model.

That's true. Let's say we live at time 0; the correct (exponential discounting) value of a payoff of 1 at time t is e-rt. The value of a payoff of 1 at time T under hyperbolic discounting is 1/(1+rT). Setting these equal, we get
e^{-rt} = {1 \over 1 + rT}.
Solving for each variable in terms of the other,
t = {\log(1+rT) \over r}, T = {e^{rt}-1 \over r}


So roughly speaking, from looking at the first equation, the discounting that people actually use instinctively is obtained by taking the logarithm of the time T they're discounting over (up to some scaling, which really just sets the units of time), and then applying the correct (exponential) model. This reminds me of a logarithmic timeline, but in reverse. People see the period from, say, 16 to 32 years ago as being as long as the period from 32 to 64 years ago. This is also why I don't believe in a technological singularity even though I'd like to; the arguments often seem to be based on "look! lots has changed in the past hundred years, more than changed in the hundred years before that!" but our memories of "change" are somewhat selective.
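The change of variables above is easy to verify numerically; a small sketch (r = 0.05 is just an illustrative rate, not a number from the post):

```python
from math import exp, log

r = 0.05                 # illustrative discount rate (my assumption)

def exponential(t):
    """The "correct" discount factor for a payoff at time t."""
    return exp(-r * t)

def hyperbolic(T):
    """The discount factor people seem to use instinctively."""
    return 1 / (1 + r * T)

# Hyperbolic discounting at horizon T equals exponential discounting
# applied at the logarithmically compressed time t = log(1 + rT)/r.
for T in (1.0, 10.0, 100.0):
    t = log(1 + r * T) / r
    print(T, hyperbolic(T), exponential(t))  # last two columns agree
```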

24 February 2008

Wikipedia on 0.999...

Wikipedia has an article entitled 0.999...

The article is quite good, and includes at least sketches of the various proofs that 0.999... = 1 and some interesting historical tidbits; I won't recapitulate those here.

The talk page and the arguments page are interesting, in a different way; one gets to watch people who don't really understand why this should be true, but want to understand it. (Of course there is the occasional crackpot.) It might give you a headache, though. It's like sausage being made.

23 February 2008

links for 2008-02-23

From A neighborhood of infinity: What is topology?, attempting to provide some intuition for the standard axioms that define a topology in terms of "observable sets" (which are just open sets, with different intuition attached to them!).

Also, on a totally unrelated note: John Baez has online notes from the course he gave in Fall '03, Winter '04, Spring '04. The winter notes are particularly interesting to me, because they combine combinatorics (which I know well) and a lot of category-theoretic notions that I'm not too familiar with.