## 01 January 2008

### Notation again

I just came across this article: Florian Cajori, History of symbols for n-factorial. Isis, Vol. 3, No. 3 (Summer, 1921), pp. 414-418. Available from JSTOR, if you have access. Cajori was the author of A History of Mathematical Notations, which is the canonical source on the subject; I will confess I have never seen a copy of his book.

I didn't realize how many historical notations there have been for the factorial. n! is of course the most common one these days. Γ(n+1) is seen sometimes, although I personally find it a bit perverse to use this notation if you know that n is a positive integer.

Supposedly Gauss used Π(n). Someone named Henry Warburton used 1^(n|1), a special case of a^(n|1) = a(a+1)···(a+(n-1)). (This is a variant of the Pochhammer symbol. It's not clear to me what the 1 in the superscript means.) Other notations include a bar over the number and writing the number inside a half-box (with lines on the left and below). Augustus de Morgan is mildly famous for not using a symbol, and once said: "Among the worst of barbarisms is that of introducing symbols which are quite new in mathematical, but perfectly understood in common, language. Writers have borrowed from the Germans the abbreviation n! to signify 1.2.3...(n - 1).n, which gives their pages the appearance of expressing surprise and admiration that 2, 3, 4, &c. should be found in mathematical results." (I'm copying this from Earliest Uses of Symbols in Mathematics, although I learned it from somebody's office door. I found the Cajori article while looking for this quote just now.)
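Warburton's product is what we'd now call a rising factorial, and the special case a = 1 recovering n! is easy to check numerically. A quick Python sketch (the function name `rising` is mine, not any standard notation):

```python
from math import factorial, prod

def rising(a, n):
    """Warburton-style a^(n|1): the product a(a+1)...(a+(n-1))."""
    return prod(a + k for k in range(n))

# The special case a = 1 gives 1 * 2 * ... * n, i.e. the factorial.
assert all(rising(1, n) == factorial(n) for n in range(10))
```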

Apparently adopting any sort of symbol was resisted by some people, at least in their more elementary writings (textbooks for undergraduates and the like), because they didn't want to overload their students with symbols. I'm not sure I agree with this for the factorial. A little thought experiment, though: why don't we have a symbol for 1 + 2 + ... + n? (A bit of reflection convinces me that the reason is that we have an explicit formula for it, namely n(n + 1)/2.) But n! probably arises more often.
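That closed form is worth a sanity check, if only because it's the kind of identity one misremembers. A one-liner in Python:

```python
# 1 + 2 + ... + n = n(n + 1)/2, checked for small n.
for n in range(1, 100):
    assert sum(range(1, n + 1)) == n * (n + 1) // 2
```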

While I'm on the subject, you should read Knuth's Two notes on notation (Amer. Math. Monthly 99 (1992), no. 5, 403--422; arXiv:math/9205211v1), which suggests the notation [P] for "1 if P is true, 0 if P is false"; this turns out to be a quite useful generalization of the Kronecker delta. It also suggests notation for the Stirling cycle and subset numbers (those are, um, the Stirling numbers of the first and second kinds, respectively? or the second and first kinds? See, those names are better.)
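The bracket notation is simple enough to sketch in a few lines of Python; the function names here are my own, not Knuth's:

```python
def iverson(p):
    """Knuth's [P]: 1 if the proposition P is true, 0 if it is false."""
    return 1 if p else 0

def kronecker_delta(i, j):
    # The Kronecker delta is just the special case [i = j].
    return iverson(i == j)

# The typical use: fold a side condition into a summand, so that
# "sum over k < 20 of [3 divides k]" counts the multiples of 3 below 20.
count = sum(iverson(k % 3 == 0) for k in range(1, 20))  # → 6
```

The point of the notation is exactly this last line: instead of restricting the index set of a sum, you multiply the summand by a bracket that vanishes when the condition fails.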

Happy New Year!