From today's Foxtrot, you can learn what division really is. It also illustrates quite vividly the inefficiency of unary notation.
Division, of course, is basically just taking things, sorting them into piles of the same size, and then counting the piles; the result usually known as the division algorithm (which is not actually an algorithm but an existence theorem: any integer a and positive integer b can be written uniquely as a = qb + r with 0 ≤ r < b) makes this clear. I wonder whether children learning arithmetic know this, or whether they think that "division" is just some magical algorithm that takes a bunch of numbers, pushes them around, and spits out another number; it's been a very long time since I learned about division, and I've also learned not to trust my own introspection to tell me how "most people" see a given mathematical procedure. If I were "most people" I wouldn't be a mathematician.
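The piles-and-counting picture translates directly into code. Here is a minimal Python sketch (the function name is my own invention) that divides by repeatedly removing a full pile and counting how many piles come off:

```python
def count_piles(total, pile_size):
    """Divide total by pile_size the naive way: repeatedly remove a
    full pile and count how many piles were removed."""
    if pile_size <= 0:
        raise ValueError("pile size must be a positive integer")
    piles = 0
    while total >= pile_size:
        total -= pile_size
        piles += 1
    # (quotient, remainder), with 0 <= remainder < pile_size
    return piles, total
```

So count_piles(17, 5) gives (3, 2): three piles of five with two left over. The loop runs once per pile, which is exactly the inefficiency of unary notation the comic plays on.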
For the most part I don't do arithmetic by the methods one learns in grade school. From just doing a lot of arithmetic (often in the course of experimenting with some combinatorial problem) I've quasi-memorized a lot of the particular arithmetic problems that come up most often. (In my case these seem to usually be products of numbers with small prime factors.) The set of arithmetical facts that I know off the top of my head is idiosyncratic; for example, I wouldn't know 39² or 43² off the top of my head, but I can instantly say that 37² = 1369 because it is in the name of 1369 Coffee House, which I frequented quite a bit during my undergrad years at MIT. But I also wouldn't multiply 37 by 37 by taking a bunch of things and arranging them in a 37 by 37 square and counting them. For one thing, I don't have 1,369 similarly-sized things. Also, I don't have a flat surface large enough for that in my apartment. This is why we invent calculation algorithms that don't directly mirror the definitions at the lowest level.
The grade-school arithmetic algorithms come pretty close to doing that, though, if you accept that you can arrange your objects in piles of 10, 100, 1000, and so on. But these aren't the most efficient methods; for multiplication, the Schönhage–Strassen algorithm for integer multiplication has time complexity Θ(N log N log log N) for N-bit integers, compared to Θ(N²) for the grade-school method. This paper of Martin Fürer supposedly has better asymptotic behavior, although it wouldn't surprise me to learn that it's better only for very large numbers. (Schönhage–Strassen, in turn, is only the best known method once the numbers get above 10,000 digits or so.)
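For concreteness, here is a sketch of the grade-school method in Python, working digit by digit with the least significant digit first (the function names are mine). The nested loop over the digits of both factors is where the Θ(N²) count of single-digit operations comes from:

```python
def to_digits(n):
    # least-significant digit first: 37 -> [7, 3]
    return [int(d) for d in str(n)][::-1]

def from_digits(digits):
    return sum(d * 10 ** i for i, d in enumerate(digits))

def gradeschool_multiply(a_digits, b_digits):
    """Long multiplication: multiply every digit of a by every digit
    of b, propagating carries -- Theta(N^2) single-digit operations."""
    result = [0] * (len(a_digits) + len(b_digits))
    for i, a in enumerate(a_digits):
        carry = 0
        for j, b in enumerate(b_digits):
            total = result[i + j] + a * b + carry
            result[i + j] = total % 10
            carry = total // 10
        result[i + len(b_digits)] += carry
    return result
```

And from_digits(gradeschool_multiply(to_digits(37), to_digits(37))) comes out to 1369, as promised by the coffee house.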