
20 January 2008

Math happened here

The story goes that Hamilton figured out the definition of quaternions while walking across Broom Bridge in Dublin.

What I didn't know is that there's a plaque there commemorating this. The text of the plaque says:
"Here as he walked by on the 16th of October 1843, Sir William Rowan Hamilton in a flash of genius discovered the fundamental formula for quaternion multiplication i2 = j2 = k2 = ijk = -1" carved (?) on a stone of this bridge."

There's also a sign commemorating ENIAC, the "first computer", across the street from my office. I didn't know it was there until about a year after I came to Penn, because it was obscured by construction. It says "ENIAC, the Electronic Numerical Integrator and Computer, was invented by J. Presper Eckert and John Mauchly. It was built here at the University of Pennsylvania in 1946. The invention of this first all-purpose digital computer signaled the birth of the Information Age." Mark Jason Dominus pointed me to a picture in the Wikipedia article.

What other signs do you know of that say, roughly, "math happened here"?

17 December 2007

What is infinity, anyway?

At this post from A Dialogue on Infinity, Alexandre Borovik writes about an experiment in which the calculus teacher at his boarding school tried to build all of calculus in terms of finite elements. The logic here is basically the following: the main use of calculus is to solve differential equations, and the differential equations basically come from approximating some discrete calculation as a continuous one. (The example that comes to mind for me is the way in which the equation of the catenary, the shape that a chain hangs in if it's only under the influence of gravity, is usually derived; see, for example, this page, which asks us to "Consider the equilibrium of the short length of chain subtending a distance dx on the x-axis.") And then if you're reduced to solving a differential equation numerically (a common state of affairs), this is basically done by the same sort of finite-element analysis -- Euler's method and the like.
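To make that last point concrete, here's a small sketch (purely illustrative, not from Borovik's post) of Euler's method applied to the catenary equation y'' = (1/a)√(1 + (y')²), whose exact solution is y = a·cosh(x/a); the step sizes and the choice a = 1 are arbitrary.

```python
import math

def euler_catenary(a=1.0, h=0.01, x_max=2.0):
    """Forward Euler for the catenary equation y'' = (1/a) * sqrt(1 + y'^2),
    started at the lowest point of the chain: y(0) = a, y'(0) = 0."""
    y, p = a, 0.0                                   # p stands for y'
    for _ in range(round(x_max / h)):
        y, p = y + h * p, p + h * math.sqrt(1.0 + p * p) / a
    return y

# The finite-step answer approaches the exact value a*cosh(x_max/a) as h shrinks.
print(euler_catenary(h=0.01), euler_catenary(h=0.001), math.cosh(2.0))
```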

I'd be interested to see how that worked.

But on the other hand, I'm not sure how valuable it is. Sometimes finite numbers are large enough that one doesn't want to deal with them as finite numbers, and one wants to make infinitesimal approximations. Basically, sums are hard, and integrals are easy. This is the insight behind the Euler-Maclaurin formula, which approximates sums by integrals and then lets you know how good the approximation is. It's hard to come up with the formula for the sum of the first n 8th powers; but do you really need the exact number, or do you just need to know it's about n⁹/9, which is what integration would tell you?
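For what it's worth, here's a quick numerical check of that claim (the choice n = 1000 is arbitrary); the next Euler-Maclaurin correction, n⁸/2, accounts for most of the remaining gap.

```python
# Sum of the first n eighth powers versus the integral approximation n^9/9,
# and with the first Euler-Maclaurin correction term n^8/2 added back in.
n = 1000
exact = sum(k**8 for k in range(1, n + 1))
integral = n**9 / 9
corrected = n**9 / 9 + n**8 / 2
print(exact, integral, corrected)
print(exact / integral)   # the ratio tends to 1 as n grows
```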

A commenter at an earlier post from the same blog wrote:
Your example from Computer Science reminds me of something often forgotten or overlooked in present-day discussions of infinite structures: that some of the motivation for the study of the infinite in mathematics in the 19th and early 20th centuries came from physics where (strange as it may seem to a modern mathematician or a computer scientist) the infinite was used as an approximation to the very-large-finite.

I didn't realize that historical point. But somehow that doesn't seem strange to me. This may mean I spent too much time hanging around physicists in my formative years.

26 November 2007

Graph paper

The history of graph paper, from Alexandre Borovik's Mathematics under the Microscope.

I often find myself thinking that graph paper is an innovation whose time has passed. Its main purpose, in my life, is to make my students' homework harder to read; there are some of them who write on graph paper despite the fact that we are very rarely asking them to graph something by hand, and the vertical lines end up distracting my eyes. And on those occasions when I do want to graph something, I use a computer.

Similarly, I can see how graph paper would be useful for numerical computation, because it makes it easier to line up the digits of various numbers one wants to add or subtract; but I use a computer for that, too.

There was a time when I was playing around with tilings of the square lattice a lot; during that period I liked graph paper. In the same vein, a few days ago Borovik illustrated multiplication of the Gaussian integers on (a simulation of) graph paper.
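For illustration (this is just the arithmetic behind such a picture, not Borovik's drawing): multiplying by a Gaussian integer rotates and scales the lattice, which is easy to check with Python's built-in complex numbers.

```python
# Multiplying Gaussian integers: (a + bi)(c + di) = (ac - bd) + (ad + bc)i.
# Multiplying the corners of the unit lattice square by 2 + i rotates the
# square by atan(1/2) and scales it by a factor of sqrt(5).
unit_square = [0 + 0j, 1 + 0j, 1 + 1j, 0 + 1j]
print([z * (2 + 1j) for z in unit_square])   # [0j, (2+1j), (1+3j), (-1+2j)]
```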

25 July 2007

Fractals, space-filling curves, and scientific revolutions

Mark Chu-Carroll at "Good Math, Bad Math" writes about space-filling curves. These are really counterintuitive things -- curves that eventually fill up, say, an entire square. There's a nice article about them at Wikipedia.

It won't surprise you to learn that these aren't "curves" in the sense that you might think of them; if I ask you to draw a "curve" you'll probably draw something that's what mathematicians would call "piecewise smooth". What this means, roughly, is that you can draw a piece of it without having any "kinks", then turn, then draw another such piece, and so on, doing this only a finite number of times. Space-filling curves don't have this property; they are made up of infinitely many such "pieces". Not surprisingly, they also have infinite length. These curves are made by an iterative process; in the case of the Hilbert curve:

  • on the first iteration the curve has length 3/2 and each point is within √2/4 of the curve;

  • on the second iteration the curve has length 15/4 and each point is within √2/8 of the curve;

  • on the third iteration the curve has length 63/8 and each point is within √2/16 of the curve;

  • on iteration n the curve has length 2ⁿ - 1/2ⁿ (it is made of 4ⁿ - 1 segments of length 2⁻ⁿ) and each point is within √2/2ⁿ⁺¹ of the curve.


The maximum distance halves and the length roughly doubles with each step; in the limit, every point of the square is arbitrarily close to the curve (and hence lies on the limiting curve), and the curve is infinitely long.
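Here's a small sketch that builds the n-th approximation from the standard Hilbert L-system (my own check, not anything from Mark Chu-Carroll's post) and confirms the lengths listed above:

```python
def hilbert_points(n):
    """Vertices of the n-th Hilbert-curve approximation, generated from the
    standard L-system: axiom A, A -> +BF-AFA-FB+, B -> -AF+BFB+FA-."""
    rules = {"A": "+BF-AFA-FB+", "B": "-AF+BFB+FA-"}
    s = "A"
    for _ in range(n):
        s = "".join(rules.get(c, c) for c in s)
    step = 2.0 ** -n                       # segments have length 2^-n
    x, y, dx, dy = 0.0, 0.0, step, 0.0     # start heading "east"
    pts = [(x, y)]
    for c in s:
        if c == "F":                       # draw one segment
            x, y = x + dx, y + dy
            pts.append((x, y))
        elif c == "+":                     # quarter turn one way
            dx, dy = -dy, dx
        elif c == "-":                     # quarter turn the other way
            dx, dy = dy, -dx
    return pts

for n in range(1, 6):
    pts = hilbert_points(n)
    length = (len(pts) - 1) * 2.0 ** -n    # 4^n - 1 segments of length 2^-n
    print(n, length, 2 ** n - 2.0 ** -n)   # matches the formula above
```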

Andrew Cook at "Statistical Modeling, Causal Inference, and Social Science" writes about the fractal nature of scientific revolutions, pointing to this earlier post of his. The idea is that science moves forward in what the evolutionary biologists call "punctuated equilibrium" -- at most points "not much" is getting done but occasionally big moves are made and in the end science gets done. (This is a bit unfair, though, because the scientists who are doing the "not much" are often collecting the sort of data that is exactly what the revolutionaries doing the paradigm shift will turn out to need.) If this is true, then we might say that all the science that will get done between year 0 ("now") and year 81 (which turns out to be 2088) gets done either in the first third of that period (between 0 and 27) or the last third (between 54 and 81). But then something similar happens in each of those periods -- all the science gets done between 0 and 9, 18 and 27, 54 and 63, or 72 and 81. If we repeat this, ad infinitum, we get that the set of times at which science is being done is the Cantor set, which has measure zero; furthermore the rate of scientific progress, when scientific progress is happening, must be infinite in order for any science to happen at all!
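Just to spell out the toy construction (nothing fancier than repeatedly keeping the outer thirds of the 81-year span):

```python
# Start with [0, 81] and repeatedly keep only the first and last thirds of
# every remaining interval.  After k steps the total "time spent doing
# science" is 81 * (2/3)^k, which goes to zero as k grows.
intervals = [(0.0, 81.0)]
for k in range(1, 5):
    intervals = [piece
                 for (a, b) in intervals
                 for piece in ((a, a + (b - a) / 3), (b - (b - a) / 3, b))]
    print(k, sum(b - a for a, b in intervals), intervals[:4])
```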

Of course, this is ridiculous. But it makes sense that science happens in bursts, and that each burst is made of smaller bursts, and so on; that there are periods of stasis between these bursts, but that some of these periods of stasis are more static than others; and so on. It's only the mathematician's insistence on taking the limit that makes this model not work. Furthermore, there's more than one kind of science, and it could happen that one discipline's burst is another discipline's period of stasis. And maybe a model like this is more likely to hold for the individual scientist (who has periods when they Get Things Done and periods when they don't) than for science as a whole.

But the periods when it looks like the scientist isn't doing anything might be essential. The subconsious is often doing work then. Perhaps there is something about the way our subconscious works -- in which bigger breakthroughs need longer fallow periods to precede them -- that leads to this fractal nature, with bursts upon bursts.