Division, of course, is basically just taking things, sorting them into piles of the same size, and then counting the piles; the result usually known as the division algorithm (which is not actually an algorithm but rather an existence theorem) makes this clear. I wonder if children learning arithmetic know this, or if they think that "division" is just some magical algorithm that takes a bunch of numbers, pushes them around, and spits out another number; it's been a very long time since I learned about division, and I've also learned not to trust my own introspection to tell me how "most people" see a given mathematical procedure. If I were "most people" I wouldn't be a mathematician.
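Taken literally, the piles picture is itself an algorithm, just a slow one: repeatedly remove a pile of size b, and count the piles. A minimal Python sketch (my own rendering, not anything from the post), which makes the existence claim a = q·b + r with 0 ≤ r < b constructive:

```python
def divide(a, b):
    """Return (quotient, remainder) for nonnegative a and positive b,
    by repeated subtraction: remove piles of size b and count them."""
    q = 0
    while a >= b:   # remove one pile of size b at a time...
        a -= b
        q += 1      # ...and count the piles
    return q, a     # what's left over is the remainder

print(divide(17, 5))  # (3, 2): three piles of 5, with 2 left over
```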

For the most part I don't do arithmetic by the methods one learns in grade school. From just *doing* a lot of arithmetic (often in the course of experimenting with some combinatorial problem) I've quasi-memorized a lot of the particular arithmetic problems that come up most often. (In my case these seem to usually be products of numbers with small prime factors.) The set of arithmetical facts that I know off the top of my head is idiosyncratic; for example, I wouldn't know 39^2 or 43^2 off the top of my head, but I can instantly say that 37^2 = 1369 because it is in the name of the 1369 Coffee House, which I frequented quite a bit during my undergrad years at MIT. But I also wouldn't multiply 37 by 37 by taking a bunch of things, arranging them in a 37-by-37 square, and counting them. For one thing, I don't have 1,369 similarly-sized things. Also, I don't have a flat surface large enough for that in my apartment. This is why we invent calculation algorithms that don't directly mirror the definitions at the lowest level.

The grade-school arithmetic algorithms come pretty close to doing that, though, if you accept that you can arrange your objects in piles of 10, 100, 1000, and so on. But these aren't the most efficient methods; for multiplication, the Schönhage–Strassen algorithm for integer multiplication has time complexity Θ(N log N log log N) for N-bit integers, compared to Θ(N^2) for the grade-school method. This paper of Martin Fürer supposedly has better asymptotic behavior, although it wouldn't surprise me to learn that it's better only for very large numbers. (Schönhage–Strassen, in turn, is only the best known method once the numbers get above 10,000 digits or so.)
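For concreteness, the Θ(N^2) grade-school method works on digit lists: every digit of one number meets every digit of the other, giving the N^2 term. A minimal Python sketch (the representation and function name are mine):

```python
def gradeschool_multiply(xs, ys):
    """Multiply two numbers given as base-10 digit lists,
    least significant digit first: the grade-school method."""
    out = [0] * (len(xs) + len(ys))
    for i, x in enumerate(xs):
        for j, y in enumerate(ys):      # N^2 digit-by-digit products
            out[i + j] += x * y
    carry = 0
    for k in range(len(out)):           # propagate carries once at the end
        carry, out[k] = divmod(out[k] + carry, 10)
    return out

# 37 * 37: digits [7, 3] times [7, 3] gives [9, 6, 3, 1], i.e. 1369
```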

## 1 comment:

Isabel,

I happen to KNOW 38^2 = 1444, 39^2 = 1521, AND 43^2 = 1849. The last value is the year of the California gold rush. If you know 38^2, you get 62^2 for free (3844). 39^2 = (40-1)^2 = 40^2 - 80 + 1. The speed at which these are recalled (actually I fumbled at 43) cannot be expressed on a blog. BUT, if you learn your squares from 1-50, you have a good chance at being able to multiply 2-digit numbers in your head. As a mathematician you might not value this, but there is a bunch of stuff out there, and a bunch of your readers read you for educational reasons. My point in learning how to square things quickly was to demonstrate that algebra (the high school sort) comes from arithmetic. All of your (x+a)(y+b) sort of facts have arithmetical consequences.
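The commenter's tricks bundle into one identity: x·y = ((x+y)/2)^2 - ((x-y)/2)^2 when x+y is even, plus the reflection (50+k)^2 = (50-k)^2 + 200k that makes 62^2 "free" once you know 38^2 and keeps every needed square inside the memorized 1-50 range. A Python sketch (the function names and structure are mine, not the commenter's) checking that squares up to 50 really do suffice for two-digit mental multiplication:

```python
SQUARES = {n: n * n for n in range(51)}  # the "memorized" table, 0..50

def square(n):
    """Square n using only the table of squares up to 50."""
    if n <= 50:
        return SQUARES[n]
    k = n - 50
    return SQUARES[50 - k] + 200 * k  # (50+k)^2 = (50-k)^2 + 200k

def multiply(x, y):
    """Multiply via the difference-of-squares identity."""
    if (x + y) % 2 == 1:
        return multiply(x, y - 1) + x  # reduce to the even-sum case
    s, d = (x + y) // 2, abs(x - y) // 2
    return square(s) - square(d)       # ((x+y)/2)^2 - ((x-y)/2)^2
```

For example, 47 × 83 becomes 65^2 - 18^2, and 65^2 reduces via the table entry 35^2 = 1225.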

You know this for sure. Do your less mathematically inclined readers?
