03 November 2007

Fun with gamma and zeta

I'd recently dealt with the following problem in the class I'm teaching: use double integrals to approximate the sum
\sum_{i=1}^n \sum_{j=1}^n {1 \over i+j}.

The best estimate that one can get without Euler-Maclaurin summation or other such trickery is
\int_{1/2}^{n+1/2} \int_{1/2}^{n+1/2} {1 \over x+y} \, dx \, dy

which in the case we cared about (n = 100) was about 133.71. (The limits come from what I've heard called the "continuity correction" -- basically, when approximating a Riemann sum by an integral, the points at which one needs to evaluate the integrand to get that particular sum should be in the middle of the intervals into which one partitions the domain of integration.) This integral is (2n+1) log (2n+1) - (2n+2) log (n+1); it's not too hard to write an asymptotic series for that,
(2 log 2)n - log n - 1 + log 2 - 3/(4n) + 7/(24n^2) + O(n^{-3})
if you just remember that log(1 + x^{-1}) has a Taylor series in 1/x: x^{-1} - x^{-2}/2 + x^{-3}/3 + O(x^{-4}).
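Here's a quick Python sanity check of those numbers (nothing deep, just brute force and the standard library): it computes the double sum directly for n = 100, the closed-form value of the integral, and the truncated asymptotic series.

from math import log

n = 100

# the double sum itself, computed by brute force
exact = sum(1.0 / (i + j) for i in range(1, n + 1) for j in range(1, n + 1))

# the double integral in closed form: (2n+1) log(2n+1) - (2n+2) log(n+1)
integral = (2*n + 1) * log(2*n + 1) - (2*n + 2) * log(n + 1)

# the asymptotic series for the integral, through the 1/n^2 term
series = (2*log(2))*n - log(n) - 1 + log(2) - 3/(4*n) + 7/(24*n**2)

print(exact)     # about 133.634
print(integral)  # about 133.710
print(series)    # about 133.710; differs from the integral by roughly 2e-6
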
There's no estimate of the error here, which is kind of frustrating; however, note that we can rewrite the original sum as
{1 \over 2} + {2 \over 3} + \cdots + {n-1 \over n} + {n \over n+1} + {n-1 \over n+2} + \cdots + {1 \over 2n}

since it contains one term 1/2, two terms 1/3, and so on. This is easily rewritten as
n - \left( {1 \over 2} + {1 \over 3} + \cdots + {1 \over n+1} \right) + \sum_{k=2}^n {n-k+1 \over n+k}

and now we've reduced the problem to doing two single sums; one is well known, and Euler-Maclaurin summation applies easily to the other. I won't go through the details, but the result is an asymptotic series for the sum:
(2 log 2)n - log n - 1/2 - γ + log 2 - 5/(8n) + 7/(48n^2) + O(n^{-3})

which differs from the crude estimate by (γ - 1/2) + O(n^{-1}). The Euler-Mascheroni constant γ is about 0.577, so the two differ by only about 0.077; the crude estimate is quite good, and basically shows that I wasted my time trying to come up with a better one.
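Here's a quick Python check of all of this (the value of γ below is just hard-coded from its standard decimal expansion): the single-sum rewrite agrees with the double sum, the asymptotic series reproduces the sum to within its O(n^{-3}) error, and the gap between the crude integral estimate and the true sum is indeed about γ - 1/2.

from math import log, fsum

gamma = 0.5772156649015329  # Euler-Mascheroni constant, hard-coded

n = 100

# the double sum, and the rewrite n - (1/2 + ... + 1/(n+1)) + sum_{k=2}^n (n-k+1)/(n+k)
double_sum = fsum(1.0 / (i + j) for i in range(1, n + 1) for j in range(1, n + 1))
rewrite = (n - fsum(1.0 / m for m in range(2, n + 2))
             + fsum((n - k + 1.0) / (n + k) for k in range(2, n + 1)))

# the asymptotic series for the sum, and the crude integral estimate
series = (2*log(2))*n - log(n) - 0.5 - gamma + log(2) - 5/(8*n) + 7/(48*n**2)
crude = (2*n + 1)*log(2*n + 1) - (2*n + 2)*log(n + 1)

print(double_sum - rewrite)  # zero, up to rounding
print(double_sum - series)   # tiny: the O(1/n^3) error
print(crude - double_sum)    # about 0.076
print(gamma - 0.5)           # about 0.077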

But then I got to thinking -- how do we know that γ = 0.5772156649...? For those of you who don't know, this constant is usually defined as
\gamma = \lim_{n \to \infty} \left( \sum_{k=1}^n {1 \over k} \right) - \log n.
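Numerically this limit converges quite slowly; here's a quick check in Python (standard library only, with γ hard-coded as a reference value) comparing the estimate's error to 1/(2n).

from math import log, fsum

gamma = 0.5772156649015329  # reference value

for n in (10, 100, 1000, 10000):
    estimate = fsum(1.0 / k for k in range(1, n + 1)) - log(n)
    print(n, estimate - gamma, 1 / (2 * n))  # the error tracks 1/(2n) closely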

So I suppose you could just take n really large and evaluate this; but the error in this estimate is about 1/(2n), as the quick check above suggests, so we wouldn't want to use it for computation. If we exponentiate both sides of the definition of γ, then we get
\exp(\gamma) = \lim_{n \to \infty} \left( {\prod_{k=1}^n e^{1/k} \over n} \right)

and rewriting the denominator as a telescoping product, this is
\exp(\gamma) = \lim_{n \to \infty} \left( {\prod_{k=1}^n e^{1/k} \over \prod_{k=1}^{n-1} {k+1 \over k}} \right).

Combine the two products into one; the numerator has one extra term, so we pull it out. This gives
\exp(\gamma) = \lim_{n \to \infty} \left( e^{1/n} \prod_{k=1}^{n-1} {e^{1/k} \over 1 + {1 \over k}} \right).

The exp(1/n) term goes away as n goes to infinity; we can replace n-1 by n to get
\exp(\gamma) = \lim_{n \to \infty} \left( \prod_{k=1}^n {e^{1/k} \over 1 + {1 \over k}} \right).
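Just to see this product in action, here's a small Python check of the partial products (again nothing but the standard library):

from math import exp, prod

for n in (10, 100, 1000):
    partial = prod(exp(1.0 / k) / (1 + 1.0 / k) for k in range(1, n + 1))
    print(n, partial)  # creeps up toward exp(gamma) = 1.7810724..., again quite slowly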

Taking logs gives
\gamma = \lim_{n \to \infty} \sum_{k=1}^n \left( {1 \over k} - \log \left( 1 + {1 \over k} \right) \right)

and writing the logarithm as a Taylor series in 1/k gives
\gamma = \lim_{n \to \infty} \sum_{k=1}^n \left( {1 \over 2k^2} - {1 \over 3k^3} + {1 \over 4k^4} - \cdots \right).
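As an aside, the partial sums of this series really do head toward γ, though the leftover tail is still of order 1/(2n), so it's no faster than the original definition. A quick Python check (γ hard-coded as a reference value):

from math import log, fsum

gamma = 0.5772156649015329  # reference value

for n in (10, 100, 1000):
    partial = fsum(1.0 / k - log(1 + 1.0 / k) for k in range(1, n + 1))
    print(n, partial, gamma - partial)  # the remaining tail is roughly 1/(2n)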

Then we distribute the k-sum over the Taylor series (this isn't quite kosher since there are positive and negative terms, but it works, and it can probably be justified by taking the terms of the Taylor series in pairs) to get
\gamma = \lim_{n \to \infty} \left( \sum_{k=1}^n {1 \over 2k^2} - \sum_{k=1}^n {1 \over 3k^3} + \sum_{k=1}^n {1 \over 4k^4} + \cdots \right)

which is just
\gamma = {1 \over 2} \zeta(2) - {1 \over 3} \zeta(3) + {1 \over 4} \zeta(4) - {1 \over 5} \zeta(5) + \cdots.

This sum doesn't converge all that quickly; ζ(n) is very nearly 1 for large n. But I thought it was kind of cute nonetheless. (It's mentioned in the Wikipedia article, along with a bunch of other, similar sums, some of which are a lot more useful for actual computation.)
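For the curious, here's what the partial sums of that zeta series look like numerically. This assumes the third-party mpmath library (for ζ and for a reference value of γ); it's just an illustration, not a serious way to compute γ.

# partial sums of gamma = (1/2) zeta(2) - (1/3) zeta(3) + (1/4) zeta(4) - ...
from mpmath import mp, zeta, euler

mp.dps = 30  # working precision, in decimal digits

partial = mp.mpf(0)
for m in range(2, 21):
    partial += (-1) ** m * zeta(m) / m
    if m % 5 == 0:
        print(m, partial, euler - partial)  # the error shrinks only about like 1/(2m) in size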

3 comments:

Mike said...

I kind of wish I were taking this class. What is it?

Anonymous said...

Wait, so how do we know that the E-M constant is close to.. whatever? You did some nice juggling and came up with an identity, but half the terms on the right are unknown. You'd secure your place in history to calculate even one of them.

Michael Lugo said...

Mike,

the class is multivariate calculus, and I'm the TA; we assigned the problem of approximating the Riemann sum by a double integral, which got me thinking about how we would know if the approximation was any good. The question of how good the approximation is is not formally a part of the class, but I'll probably write something about it and post it on my web site because some students might be interested.

John,

it would be useful if the values of the zeta function were known. (Or even if they were not known exactly, but we could compute them approximately and they were small, so that the series would converge quickly.) But as you've pointed out we don't know them, which is why this particular result is kind of useless. The Wikipedia article gives other series which converge much faster and don't require knowing hard-to-compute values of special functions.