I'd be interested to see how that worked.
But on the other hand, I'm not sure how valuable it is. Sometimes finite numbers are large enough that one doesn't want to deal with them as finite numbers, and one wants to make infinitesimal approximations. Basically, sums are hard, and integrals are easy. This is the insight behind the Euler-Maclaurin formula, which approximates sums by integrals and then lets you know how good the approximation is. It's hard to come up with the formula for the sum of the first n 8th powers; but do you really need the exact number, or do you just need to know it's about n^9/9, which is what integration would tell you?
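A quick numerical sketch of that claim (the function names here are my own, not from any particular library): compare the exact sum 1^8 + 2^8 + ... + n^8 with the leading integral term n^9/9, and with one more Euler-Maclaurin correction, f(n)/2 = n^8/2, added in.

```python
# Sketch: approximating the sum of the first n 8th powers by the
# integral of x^8, in the spirit of the Euler-Maclaurin formula.

def sum_8th_powers(n):
    """Exact sum 1^8 + 2^8 + ... + n^8."""
    return sum(k ** 8 for k in range(1, n + 1))

def integral_approx(n):
    """Leading term: the integral of x^8 from 0 to n, i.e. n^9/9."""
    return n ** 9 / 9

def corrected_approx(n):
    """Add the next Euler-Maclaurin term, f(n)/2 = n^8/2."""
    return n ** 9 / 9 + n ** 8 / 2

n = 1000
exact = sum_8th_powers(n)
print(exact)
print(integral_approx(n))   # relative error is about 9/(2n), under half a percent here
print(corrected_approx(n))  # relative error drops to roughly 6/n^2
```

So for n = 1000, the integral alone is already within half a percent, which is often all one needs.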
A commenter at an earlier post from the same blog wrote:
Your example from Computer Science reminds me of something often forgotten or overlooked in present-day discussions of infinite structures: that some of the motivation for the study of the infinite in mathematics in the 19th and early 20th centuries came from physics where (strange as it may seem to a modern mathematician or a computer scientist) the infinite was used as an approximation to the very-large-finite.
I didn't realize that historical point. But somehow that doesn't seem strange to me. This may mean I spent too much time hanging around physicists in my formative years.
1 comment:
I'm hard-pressed to think of someone who would find it strange. We do it every time we say, "we're dividing a finite quantity among a huge number, so it's pretty much zero." We do it whenever we talk about an iteratively-defined sequence (Newton's method) converging: do this over and over again and after infinite [a very large number of] repetitions you get a fixed point.
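The commenter's Newton's-method example can be sketched in a few lines (my own toy instance, using f(x) = x^2 - 2): the fixed point is reached "after infinitely many" repetitions in principle, but a handful of iterations already lands within floating-point precision.

```python
import math

def newton_sqrt2(x0, steps):
    """Iterate the Newton step x -> x - f(x)/f'(x) for f(x) = x^2 - 2."""
    x = x0
    for _ in range(steps):
        x = x - (x * x - 2) / (2 * x)
    return x

print(newton_sqrt2(1.0, 6))  # six steps already agree with math.sqrt(2)
```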
And it shows up in practice, too. I was raised around physics to a certain extent, but before that (and before I realized it) I was raised around someone who studied things like Arrow's theorem in the situation where there are so many voters they can be treated as a continuum.