I'd recently dealt with the following problem in the class I'm teaching: use double integrals to approximate the sum
    ∑_{i=1}^{n} ∑_{j=1}^{n} 1/(i+j)
The best estimate that one can get without Euler-Maclaurin summation or other such trickery is
    ∫_{1/2}^{n+1/2} ∫_{1/2}^{n+1/2} 1/(x+y) dx dy
which in the case we cared about (n = 100) was about 133.71. (The limits come from what I've heard called the "continuity correction" -- basically, when approximating a Riemann sum by an integral, the points at which one needs to evaluate the integrand to get that particular sum should be in the middle of the intervals into which one partitions the domain of integration.) This integral is (2n+1) log(2n+1) - (2n+2) log(n+1); it's not too hard to write an asymptotic series for that,
    (2 log 2)n - log n - 1 + log 2 - 3/(4n) + 7/(24n^2) + O(n^{-3})
if you just remember that log(1 + x^{-1}) has a Taylor series in 1/x: x^{-1} - x^{-2}/2 + x^{-3}/3 + O(x^{-4}).
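Spelled out a little more, the expansion comes from splitting the logarithms first:

    (2n+1) log(2n+1) - (2n+2) log(n+1)
      = (2n+1) [log 2 + log n + log(1 + 1/(2n))] - (2n+2) [log n + log(1 + 1/n)]
      = (2n+1) log 2 - log n + (2n+1) log(1 + 1/(2n)) - (2n+2) log(1 + 1/n),

and then expanding the last two logarithms with that Taylor series and collecting powers of 1/n gives the series above.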
There's no estimate of the error here, which is kind of frustrating; however, note that we can rewrite the original sum as
    1/2 + 2/3 + 3/4 + ... + n/(n+1) + (n-1)/(n+2) + ... + 1/(2n)
since it contains one term 1/2, two terms 1/3, and so on. This is easily rewritten as
    ∑_{k=2}^{n+1} (k-1)/k + ∑_{k=n+2}^{2n} (2n+1-k)/k
and now we've reduced the problem to doing two single sums; one is well-known and Euler-Maclaurin summation easily applies to the other one. I won't go through the details, but one turns out to get an asymptotic series for this sum
    (2 log 2)n - log n - 1/2 - γ + log 2 - 5/(8n) + 7/(48n^2) + O(n^{-3})
which differs from the crude estimate by (γ - 1/2) + O(n^{-1}). The Euler-Mascheroni constant γ is about 0.577, so the crude estimate is quite good, and basically shows that I wasted my time trying to come up with a better one.
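For what it's worth, here's a quick numerical sanity check of all of this at n = 100, as a little Python sketch (nothing here is needed for the argument; the series are simply truncated after the terms written out above):

    import math

    n = 100
    gamma = 0.5772156649  # the Euler-Mascheroni constant, to the digits quoted below

    # brute-force double sum of 1/(i+j) over 1 <= i, j <= n
    exact = sum(1.0 / (i + j) for i in range(1, n + 1) for j in range(1, n + 1))

    # the double-integral ("crude") estimate (2n+1) log(2n+1) - (2n+2) log(n+1)
    crude = (2*n + 1) * math.log(2*n + 1) - (2*n + 2) * math.log(n + 1)

    # the two asymptotic series, truncated after their 1/n^2 terms
    series_crude = (2*math.log(2))*n - math.log(n) - 1 + math.log(2) - 3/(4*n) + 7/(24*n**2)
    series_exact = (2*math.log(2))*n - math.log(n) - 0.5 - gamma + math.log(2) - 5/(8*n) + 7/(48*n**2)

    print(exact, series_exact)         # both about 133.6340
    print(crude, series_crude)         # both about 133.7099
    print(crude - exact, gamma - 0.5)  # about 0.0760 vs 0.0772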
But then I got to thinking -- how do we know that γ = 0.5772156649...? For those of you who don't know, this constant is usually defined as
    γ = lim_{n → ∞} (1 + 1/2 + 1/3 + ... + 1/n - log n)
So I suppose you could just take n really large and evaluate this; it turns out that the error in this estimate is about 1/(2n), so we wouldn't want to use this for computation. If we exponentiate both sides of the definition of γ, then we get
    e^γ = lim_{n → ∞} (e^{1} e^{1/2} e^{1/3} ... e^{1/n}) / n
and rewriting the denominator as a telescoping product, this is
    e^γ = lim_{n → ∞} (∏_{k=1}^{n} e^{1/k}) / (∏_{k=1}^{n-1} (k+1)/k)
Combine the two products into one; the numerator has one extra term, so we pull it out. This gives
    e^γ = lim_{n → ∞} e^{1/n} ∏_{k=1}^{n-1} e^{1/k} k/(k+1)
The exp(1/n) term goes away as n goes to infinity; we can replace n-1 by n to get
    e^γ = ∏_{k=1}^{∞} e^{1/k} (1 + 1/k)^{-1}
Taking logs gives
    γ = ∑_{k=1}^{∞} [1/k - log(1 + 1/k)]
and writing the logarithm as a Taylor series in 1/k gives
    γ = ∑_{k=1}^{∞} [1/(2k^2) - 1/(3k^3) + 1/(4k^4) - ...]
Then we distribute the k-sum over the Taylor series (this isn't quite kosher since there are positive and negative terms, but it works, and it can probably be justified by taking the terms of the Taylor series in pairs) to get
    γ = ∑_{n=2}^{∞} ((-1)^n / n) ∑_{k=1}^{∞} 1/k^n
which is just
    γ = ∑_{n=2}^{∞} (-1)^n ζ(n) / n
This sum doesn't converge all that quickly; ζ(n) is very nearly 1 for large n. But I thought it was kind of cute nonetheless. (It's mentioned in the Wikipedia article, along with a bunch of other, similar sums, some of which are a lot more useful for actual computation.)
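Just to see how slow that is (and to check the 1/(2n) claim about the naive estimate), here's a rough Python sketch, using mpmath for ζ; the cutoffs are arbitrary, and this certainly isn't a serious way to compute γ:

    from mpmath import mp, log, zeta

    mp.dps = 15
    gamma = 0.5772156649

    # the naive estimate straight from the definition: H_N - log N, whose error is about 1/(2N)
    for N in (100, 1000):
        H_N = sum(mp.mpf(1) / k for k in range(1, N + 1))
        print(N, H_N - log(N))   # about 0.5822 and 0.5777, i.e. errors of roughly 1/200 and 1/2000

    # partial sums of the zeta series; after M terms the error is comparable to the
    # first omitted term, zeta(M+1)/(M+1), which is roughly 1/M
    for M in (10, 100, 1000):
        s = sum((-1)**n * zeta(n) / n for n in range(2, M + 1))
        print(M, s, gamma)       # about 0.62, 0.582, 0.5777 against 0.5772156649...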
03 November 2007
3 comments:
I kind of wish I were taking this class. What is it?
Wait, so how do we know that the E-M constant is close to.. whatever? You did some nice juggling and came up with an identity, but half the terms on the right are unknown. You'd secure your place in history to calculate even one of them.
Mike,
the class is multivariate calculus, and I'm the TA; we assigned the problem of approximating the Riemann sum by a double integral, which got me thinking about how we would know if the approximation was any good. The question of how good the approximation is is not formally a part of the class, but I'll probably write something about it and post it on my web site because some students might be interested.
John,
it would be useful if the values of the zeta function were known. (Or even if they were not known exactly, but we could compute them approximately and they were small, so that the series would converge quickly.) But as you've pointed out we don't know them, which is why this particular result is kind of useless. The Wikipedia article gives other series which converge much faster and don't require knowing hard-to-compute values of special functions.