## 26 February 2009

### LaTeX equation labels

When writing a paper in LaTeX, you often want to put a numerical label on a displayed equation, say the number (1). So you write some code like
\begin{equation} \label{eq:basel-problem} \sum_{n=1}^\infty {1 \over n^2} = {\pi^2 \over 6} \end{equation}

which compiles to give the displayed equation with "(1)" set at the right margin. (Note that a bare $$ ... $$ display doesn't get a number; you need the equation environment, or one of its cousins, for that.)

Then later I can insert code like (\ref{eq:basel-problem}) and (1) appears in my document.

Now, as you may have noticed, I picked an equation that had a nice name, and I labeled it with that name. (The "eq:" in the label, of course, stands for "equation", a convention that I use to tell what sort of entity I'm referencing -- other things I use in that position are def:, thm:, prop:, cor:, lem:, and the like.)

But what do you do when the displayed equation doesn't have a nice "name" -- it's just an equation that occurs somewhere in the course of a calculation? For a while I tried to come up with a name, but I ended up with way too many generic names like "integral" and "sum" and "thing-with-binomial-coefficients". (Okay, so I'm exaggerating on the last one.) These names took time to think of but didn't make things easier on me later. So now I find myself using labels like \label{eq:feb-24-kappa} for the 10th labelled equation that I inserted on February 24. (Why do I use Greek letters? I tried using numbers, but it's too easy to get those confused with the actual numbers that are used to label equations.) But I'm wondering what sort of conventions people use for this; since it's the sort of thing that you can only see when you're looking at other people's LaTeX source, it's hard to know.
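For what it's worth, the convention looks something like this in source (the theorem and its label here are invented examples; \eqref comes from amsmath):

```latex
\begin{theorem} \label{thm:basel}
We have $\sum_{n \ge 1} n^{-2} = \pi^2/6$.
\end{theorem}

% later, in the text:
By Theorem~\ref{thm:basel} and equation~\eqref{eq:feb-24-kappa}, ...
```

The \eqref command supplies the parentheses around the equation number automatically, which saves typing "(\ref{...})" everywhere.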

Somewhere, somebody is saying that I'm using LaTeX incorrectly. It might be you!

(Yes, I'm taking a break from rewriting a paper. How did you guess?)

## 24 February 2009

### Richard Stanley tells a joke

In the graduate course Richard Stanley is currently teaching, on symmetric functions (also known as "the chapter of Stanley that I haven't really read that carefully", which is indeed the text he's using), students have two options for an end-of-term paper. They can either hand in "a treatise of at least 200 pages on some area of symmetric functions, consisting primarily of original work" which "must contain (correct) proofs of at least two important, longstanding open problems" or an eight-page expository paper.

Somehow I think nobody will choose the first option. (Although if they do, that would be an instant PhD thesis.)

Possibly also of interest: Stanley is working on a second edition of Enumerative Combinatorics, Volume 1, and a draft version of Chapter 1 is available (198-page PDF). This appears to be a substantial extension of Chapter 1 of the original.

## 23 February 2009

### Math and pancakes

In honor of Shrove Tuesday (tomorrow for me, although it's past midnight in the UK), have a meaningless formula that will tell you how to make the perfect pancake.

(It seems to me that these silly formulas usually come from the UK. Why?)

## 21 February 2009

### The large sieve and its applications

You should vote at thebookseller.com for Emmanuel Kowalski's The Large Sieve and its Applications, which has been shortlisted for the "Diagram Prize for Oddest Book Title of the Year". (Apparently "large sieve" sounds like it has something to do with cooking, if you're not a mathematician.)

## 20 February 2009

### Motion of zeroes of complex polynomials

Consider the polynomial f(z) = (z-1)(z-2)...(z-20). Clearly it has 20 roots; these are 1, 2, ..., 20.

Now consider the polynomial g(z) = -z^20. It also has 20 roots, namely the origin with multiplicity 20.

And consider h(z) = t f(z) + (1-t) g(z), as t varies from 0 to 1. (Most of the "action" happens when t is very near 0 or 1, so this probably isn't the best parametrization.) Now, as t varies, you can find the roots numerically. Imagine the roots as twenty particles moving around in the plane. What happens, basically, is that as t increases roots start by sliding along the real axis, towards each other in pairs -- this is what you expect if you just plot f(z) as a real polynomial. (Interestingly, the collision appears to be perfectly elastic.) They then bang into each other and head off in the positive and negative imaginary directions. And eventually they curve around and approach the origin, on paths spaced 18 degrees apart. (I can try to produce graphics.)
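Here's a minimal numpy sketch of the experiment (the function name and the sample values of t are mine):

```python
import numpy as np

def roots_of_h(t):
    """Roots of h(z) = t*f(z) + (1-t)*g(z), where f(z) = (z-1)...(z-20)
    and g(z) = -z^20, computed numerically."""
    f = np.poly(np.arange(1, 21))   # coefficients of (z-1)(z-2)...(z-20)
    g = np.zeros(21)
    g[0] = -1.0                     # coefficients of -z^20
    return np.roots(t * f + (1 - t) * g)

# Snapshots of the twenty "particles" as t moves from 0 to 1.
# (Caveat: at t = 1/2 the leading coefficient t - (1-t) vanishes, so the
# degree momentarily drops to 19 and one root runs off to infinity.)
for t in (0.05, 0.25, 0.75, 0.95):
    r = roots_of_h(t)
    print(t, r[np.argsort(r.real)][:2])
```

Plotting the full set of roots over a fine grid of t values is what produces the collisions and the evenly spaced spokes into the origin.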

There's nothing special about these polynomials -- that is, I suspect that something like this happens more generally. This is actually just an extension of an example in Peter Henrici's Applied and computational complex analysis (volume 1, p. 282) -- Henrici says that J. H. Wilkinson looked at the polynomial f(z) - 2^(-23) z^19 and saw that it had five pairs of complex conjugate zeroes.

But it seems like there should be some sort of general theory of the way that roots of families of polynomials "move around" in the plane. (And if there isn't, why not?) Does the situation I've described ring a bell for anybody?

### Why do the Catalan numbers grow like they do?

So "everybody knows" that the nth Catalan number is given by C_n = (1/(n+1)) (2n choose n), and furthermore that they have the asymptotic form

C_n ~ (4^n / (√π n^(3/2))) (1 - 9/(8n) + 145/(128n^2) + ...)

(Okay, I'll confess: I knew the first term, and I got Maple to calculate the others just now.)

So I found myself wondering -- why this n^(-3/2)? Let D_n = 4^(-n) C_n. Then

D_{n+1}/D_n = C_{n+1}/(4 C_n) = (2n+1)/(2n+4) = 1 - 3/(2n+4)

and so we get

log D_n = sum_{k=0}^{n-1} log(1 - 3/(2k+4));

furthermore that sum is about -(3/2) log n, for large n, and so D_n is about n^(-3/2). The replacement of 1-x with exp(-x) obviously would need to be justified (and such justification would explain the presence of the mysterious π) but I'm still amused that this simple computation got the exponent.
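A minimal numerical check of the exponent (the function name is mine; the limiting constant 1/√π comes from the known asymptotic C_n ~ 4^n/(√π n^(3/2))):

```python
from math import comb, sqrt, pi

def catalan(n):
    """The nth Catalan number, C_n = (1/(n+1)) * (2n choose n)."""
    return comb(2 * n, n) // (n + 1)

# If D_n = 4^(-n) C_n really behaves like n^(-3/2), then
# n^(3/2) * D_n should settle down to 1/sqrt(pi) ≈ 0.5642.
for n in (10, 100, 1000):
    d = catalan(n) / 4 ** n      # Python's big integers keep this exact until the division
    print(n, n ** 1.5 * d)
```

The printed values creep up toward 0.5642 from below, at a rate consistent with the 1 - 9/(8n) correction term.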

## 19 February 2009

### The mathematician and the lumberjack

You remember the whole "mathematicians have the best job in America" survey from jobsrated.com (which was based on some fairly questionable criteria)? Well, apparently Sean Hurley at NPR did a story a couple weeks ago, where he talked to a mathematician (Peter Winkler, of Dartmouth) and a lumberjack. Peter Winkler likes his job. So does the lumberjack, although the pay sucks and trees occasionally fall on you, which sucks too.

## 18 February 2009

From today's New York Times, real estate section, referring to rural Normandy:
But, on average, real estate prices here generally range from 1,500 to 2,000 euros ($1,940 to $2,585) a square foot and a typical three-bedroom house sells for 250,000 euros ($323,165), according to Manuela Marques, a broker with Objectif Pierre, a local real estate agency.
Of course, this doesn't check out, unless a typical three-bedroom house is around 150 square feet (and Normandy has suddenly turned into Manhattan). The resolution is that that's a price per square meter, which is how a French real estate broker would quote things.
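The back-of-the-envelope check, for the record:

```python
# 250,000 euros for a typical house, at the quoted 1,500-2,000 euros per unit of area
for rate in (1500, 2000):
    print(250_000 / rate)   # about 167 and 125: sensible in square meters, absurd in square feet
```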

### Pizza seminars

In Penn's math department, there is a "Pizza Seminar". It is on Fridays at noon, only graduate students in the math department are allowed to come, and there is free pizza. Each week a graduate student gives a talk that is intended to be accessible to most graduate students; sometimes they focus on some accessible piece of their research, but more often these talks are expository. (Occasionally, maybe three times a term, a professor speaks -- but still only graduate students are allowed to attend the talk.)

If you Google "pizza seminar", most of the results are similar series in math or closely allied fields (physics, computer science). Is there some reason that such a format wouldn't work well in more distant fields, or is this just historical accident?

## 17 February 2009

### Sigurdur Helgason is not dead

Sigurdur Helgason dies, says the New York Times.

That's the Icelandair executive, not the MIT mathematician. The mathematician is, as far as I know, alive.

(Also, the New York Times actually refers to "Colombia University" in their obituary.)

### Changing subfields

A statistic I overheard today, from a source that shall remain nameless (in case the source is wrong): three-fourths of mathematics graduate students leave graduate school in a different subfield of mathematics than the one they entered with.

Can anybody point me to a source?

### Pavages aléatoires par touillage de dominos

Pavages aléatoires par touillage de dominos, by Thierry de la Rue and Elise Janvresse. ("Random tilings by domino shuffling", roughly.) This expository article describes work, mostly by Jim Propp and coauthors, on the generation of random tilings. As you can guess from the title, it's in French. If you don't read French you can look at the pretty pictures.

Also, this article is on a web page whose authors have used JavaScript so that, in various places, if you want more detail you can click a link and a more detailed explanation appears in place in the article. This may be a useful presentation technique, although it's hard to judge, because this is the first time I've seen it.

### On publishing your trash can

I'm rereading de Bruijn's book Asymptotic Methods in Analysis (which, sadly, appears to be out of print again!) -- one of the great mathematical expositions, of asymptotic methods in analysis as they stood at midcentury. It's one of the most readable math texts I know.

de Bruijn writes in the preface:
Many things in this book are not presented in the shortest possible form, as an attempt has been made to reveal, to a certain extent, the motives that lead to certain methods. Naturally one cannot go too far in this respect; a mathematician cannot possibly publish his waste-paper basket.
This seems worth remembering; terseness is not always a virtue.

## 14 February 2009

### College kids are better than monkeys

This is in reference to the work of Elizabeth Brannon, who actually claims that certain monkeys have an intuitive "number sense" as good as humans'; see for example this article. Despite what those of us who teach college students may occasionally think, the students are better at mathematics than monkeys.

## 13 February 2009

### Two questions on document preparation

1. Why are the default margins in LaTeX so wide? It's kind of useful, because it means that there's a lot of space to scribble in when editing, but it seems that by default they're wider than in just about any other program.

2. Why are dissertations usually double-spaced? I associate double-spaced with draft documents, because you can write things between the lines of text. But the dissertation isn't supposed to be a draft. It's supposed to be a final document!

### Complices are made up of simplexes, or something like that

How come the plural of "simplex", in standard mathematical usage, is "simplices", but the plural of "complex" isn't "complices"?

My first thought is that it's because "complex" is also a noun in standard English, so it pluralizes like the English noun, while "simplex" isn't.

(As you may have guessed, I'm reading something that mentioned simplicial complexes.)

## 12 February 2009

### The Arbesman limit

Samuel Arbesman talks about how to get something named after yourself. Of course, he names something after himself -- the "Arbesman limit", which is the number of things that one person can have named after themselves. (Gauss, Euler, etc. provide a lower bound for this limit.)

Supposedly Banach originally named his spaces "spaces of type B" or something like that, figuring that people would see the B, assume it stood for Banach, and start calling them Banach spaces. If that's true, it worked.

## 10 February 2009

### Two quasi-mathematical Jeopardy! clues

Two clues from yesterday's episode of Jeopardy!. (I meant to post this yesterday, and in fact wrote the post yesterday, but didn't want to give out spoilers, so you're getting it now.)

1. "In math, it's the degree of correctness of a quantity or expression." Answer: "What is accuracy?" (This was in a category which carried the stipulation that every answer had to begin with A and end with Y.)

2. "While writing Principia Mathematica, this twentieth-century British thinker was a lecturer at Cambridge." Answer: "Who is Bertrand Russell?"

My objection to the first one is more that this isn't a mathematical concept; it seems to me that that use of "accuracy" is more common in, say, the experimental sciences.

And to the second one, I believe Russell was the answer given by a contestant and accepted, but Whitehead also might fit this description. (Whitehead left Cambridge just as the first volume of the Principia was published, in 1910.)

## 09 February 2009

### Permutatinos

One of my most common typos is "permutatinos" for "permutations". A friend points out that this is quite apt.

### Brian Hayes on the continental divide

Brian Hayes recently made a fascinating post about determining the location of the Continental Divide. Apparently that's what those pictures on the cover of this month's Notices were; they go with David Austin's review of Hayes' Group Theory in the Bedroom, and Other Mathematical Diversions. The book consists, apparently, of reworked versions of Hayes' essays for The Sciences and American Scientist, essays which in general I highly recommend. The post by Hayes that I linked to gives his reflections on redoing the divide map with the data that's available now, which is better than the data he had in 2000 when he wrote the original column. (Not surprisingly, the major discrepancy between the divides obtained then and now is in Wyoming, where there are some pathological areas that don't drain to either ocean.)

## 08 February 2009

You should read today's Foxtrot comic strip. (I won't tell you why, because if I told you why you wouldn't follow the link.)

## 07 February 2009

### Beijing celebrates the mean value theorem

Beijing celebrates the mean value theorem.

Philadelphia, I suppose, celebrates counting, with its numbered streets. (And perhaps Descartes and his coordinate plane, since the streets form a grid?)

### Feed change

I just transferred the RSS feed on this blog to a Google account (Feedburner, the service I'd been using, has recently been integrated into Google). There should be no problems, but if there are let me know.

## 03 February 2009

### Electoral hex redux and African colonialism

At Gil Kalai's blog I came across a map in which counties which voted Democratic in some election are colored blue, and counties which voted Republican are colored red. (It is not the 2008 presidential election map, or at least it doesn't match Mark Newman's map.)

Anyway, Kalai suggests "hex voting" -- like the game of Hex, the Republicans win if there's a continuous path in red counties from north to south, and the Democrats win if there's a continuous path of blue counties from east to west. (In the map he gives, the Republicans win, but only barely -- there are some places where the red region is only one county wide.)

It turns out that the British and French played a similar game in Africa early in the last century; here's a map of European claims to Africa in 1913, in which the British were attempting to create a continuous path of red (British) colonies from north to south, and the French were attempting to create a continuous path of blue (French) colonies from east to west. (At least, that's how the Wikipedia article on Cecil Rhodes puts it.)

### Divisibility by 7

There's a pretty well-known test for divisibility by seven that came up over coffee today. Namely, to check whether a number written in base 10 is divisible by 7, remove the last digit, then subtract twice that digit from the "rest" of the number. The input is divisible by 7 if and only if the output is; and since each step makes the number smaller, you can repeat until the answer is obvious.

For example, 4728402 is divisible by 7. Why? Remove the last digit (2) and subtract its double (4) from the "rest" of the number, giving 472840 - 4 = 472836. Repeating, we get:
47283 - 12 = 47271
4727 - 2 = 4725
472 - 10 = 462
46 - 4 = 42
4 - 4 = 0
which is divisible by 7. (The number isn't special in any way; I just pressed a bunch of number keys and multiplied by 7 to create it.)

But I'd never seen a proof that this works. Here's a quick one. We want to show that 10k + m is divisible by 7 if and only if k - 2m is.

Say 10k + m is divisible by 7. Then 3k + m is too (it differs from 10k + m by 7k), so (10k + m) - 3(3k + m) = k - 2m is.

Conversely, say k - 2m is divisible by 7. Then 3(k - 2m) + 7(k + m) = 10k + m is.
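The test is easy to mechanize; here's a quick sketch (the function name is mine):

```python
def divisible_by_7(n):
    """Repeatedly apply the remove-last-digit rule until one digit remains."""
    n = abs(n)
    while n >= 10:
        rest, last = divmod(n, 10)   # split off the last digit
        n = abs(rest - 2 * last)     # the rule; the sign doesn't affect divisibility
    return n in (0, 7)               # the only one-digit multiples of 7 (counting 0)

print(divisible_by_7(4728402))  # True, as in the worked example
```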

## 02 February 2009

### A strange hole in the Inverse Symbolic Calculator

I'd been fooling around with a problem, and I had identified the constant C = 0.0676676416183063 (yes, actually to that kind of accuracy!) as being important in it.

The Inverse Symbolic Calculator doesn't recognize C as 1/(2e^2). But it does recognize it as 1/(2 exp(√2)^√2), and it told me that 10C was (√5)^2/e^2.

I am a bit perplexed by why their tables would include these but not the more "obvious" form 1/(2e^2).
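The identification itself checks out numerically; a quick sanity check in Python (values copied from above):

```python
from math import exp, sqrt

C = 0.0676676416183063

# C agrees with 1/(2 e^2) to the full precision quoted...
print(abs(C - 1 / (2 * exp(2))))
# ...and the "10 C" form the ISC did return is the same number:
print(abs(10 * C - sqrt(5) ** 2 / exp(2)))
```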