Problem 1.1(c). Most present-day [mid-1980s] computers only have one central processor -- to use our analogy, one clerk. This single file clerk sits there all day long working away like a fiend, taking cards in and out of the store like mad. Ultimately, the speed of the whole machine is determined by the speed at which the clerk -- that is, the central processor -- can do these operations. Let's see how we can maybe improve the machine's performance. Suppose we want to compare two n-bit numbers, where n is a large number like 1024; we want to see if they're the same... [suppose] we can hire n file clerks, or 2n or perhaps 3n; it's up to us to decide how many, but the number must be proportional to n. [How can you get the comparison time to be proportional to log n?]
The proof is by induction. The base case, when n = 1, is clear -- we can check that two 1-bit numbers are the same in one step. We want to show that if n clerks can do this in time t(n), then 2n clerks can do it in time t(n) + c, where c is a constant. But that's easy -- split each of the two n-bit numbers a and b in half: a = a1a2, b = b1b2. Two teams of n clerks check, in parallel, whether a1 = b1 and whether a2 = b2; that takes time t(n). If a1 = b1 AND a2 = b2, return TRUE; otherwise return FALSE -- that's just one more time step once the two half-comparisons are done. Unwinding the induction gives t(n) = t(1) + c log2 n, which is proportional to log n, as desired.
The picture I get in my head of this situation makes me smile. You need n clerks -- n parallel processors. The kth clerk compares the kth bit of the first number with the kth bit of the second number; ey returns a 0 if the two bits are different and a 1 if they're the same. (This is, of course, bitwise XNOR.) Then the clerks AND their answers together in pairs, and the pairs pair up, and so on; after log2 n rounds a single bit is left, which is 1 exactly when the two numbers agree.
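In code, the scheme looks something like the following Python sketch (mine, not Feynman's -- and the parallelism is only simulated, with each pass of the loop standing for one time step in which all the clerks work simultaneously):

    def parallel_equal(a_bits, b_bits):
        # One clerk per bit: clerk k writes 1 if bit k of the two numbers
        # matches and 0 if not (the bitwise-XNOR round).
        assert len(a_bits) == len(b_bits)
        answers = [1 if x == y else 0 for x, y in zip(a_bits, b_bits)]

        # Then the clerks AND their answers together in pairs, halving the
        # number of answers each round; each pass of this loop is one
        # parallel time step, and there are about log2(n) of them.
        while len(answers) > 1:
            paired = [answers[i] & answers[i + 1]
                      for i in range(0, len(answers) - 1, 2)]
            if len(answers) % 2:       # an odd clerk out just waits a round
                paired.append(answers[-1])
            answers = paired
        return bool(answers[0])

    assert parallel_equal([1, 0, 1, 1], [1, 0, 1, 1])
    assert not parallel_equal([1, 0, 1, 1], [1, 0, 0, 1])

Each pass of the loop halves the number of answers, so it runs about log2 n times -- this is exactly the t(2n) = t(n) + c recurrence from the proof.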
The next subproblem in Feynman is to add two n-bit numbers in logarithmic time by using such parallel processing. (This is harder because you have to worry about carrying!) Again, it reduces to combining information about words of one length into information about words of twice that length -- if you have a = a1a2 (where a1 is the "high", or more significant, half and a2 is the low half) and b = b1b2, what is a + b? Breaking it down into its high half and its low half,
a + b = (a1 + b1 + c)(a2 + b2)
where c denotes the carry from the addition of a2 and b2, a2 + b2 denotes addition modulo 2^k (k being the length of the summands), and writing the two halves side by side denotes concatenation. We assume a1 + b1, c, and a2 + b2 are already known. The problem is... what if a1 + b1 ends in a bunch of 1's, and c is 1? We have something like
...0111111 + 1
and we want to add the 1, so we think "okay, I'll just change the last bit of the first summand from 0 to 1"... but it's already 1... so you try changing the last two bits from 01 to 10... but they're 11, so that won't work either... so you try changing the last three bits from 011 to 100... but they're 111... and in the worst case the carry ripples through all n bits, one step at a time.
On average this isn't a problem: for random summands the expected number of 1's at the end of a1 + b1 is 1/2 + 1/4 + 1/8 + ... = 1, so the carry usually has to ripple only a step or two.
But not always. Somebody could be deliberately feeding you numbers that are hard to add -- some sort of sadistic grade-school teacher.
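The standard way to beat the sadistic teacher -- this is the classic conditional-sum (a.k.a. carry-select) trick, my sketch and not something spelled out above -- is to have the high-half clerks compute both possible answers, a1 + b1 and a1 + b1 + 1, at the same time as the low half is being added; once the carry c arrives, choosing between the two precomputed answers takes a single step, no matter how many 1's a1 + b1 ends in. A Python sketch (again simulating the parallel calls sequentially, with names of my own choosing; n is assumed to be a power of two):

    def cond_sum(a, b, n):
        # Return ((carry0, sum0), (carry1, sum1)): the n-bit sums of a and b
        # assuming a carry-in of 0 and a carry-in of 1, respectively.
        if n == 1:
            t0, t1 = a + b, a + b + 1
            return (t0 >> 1, t0 & 1), (t1 >> 1, t1 & 1)

        half = n // 2
        mask = (1 << half) - 1
        a_hi, a_lo = a >> half, a & mask
        b_hi, b_lo = b >> half, b & mask

        # These two recursive calls are independent: in the clerk picture
        # they happen simultaneously, so the depth recurrence is
        # t(n) = t(n/2) + constant.
        lo = cond_sum(a_lo, b_lo, half)   # low half, both carry-ins
        hi = cond_sum(a_hi, b_hi, half)   # high half, both carry-ins

        def glue(carry_in):
            c_lo, s_lo = lo[carry_in]
            c_hi, s_hi = hi[c_lo]         # O(1) selection -- no rippling
            return c_hi, (s_hi << half) | s_lo

        return glue(0), glue(1)

    def add(a, b, n):
        # Add two n-bit numbers, keeping the final carry as an extra bit.
        carry, total = cond_sum(a, b, n)[0]
        return (carry << n) | total

    assert add(0b0111111, 0b0000001, 8) == 0b1000000   # the bad case above
    assert add(200, 100, 8) == 300

The selection step hi[c_lo] is the whole point: it costs one time step regardless of how far the carry would ripple, so the time recurrence stays t(2n) = t(n) + constant and the total time is proportional to log n even on the teacher's worst inputs.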
edited, 10:48 am: Welcome to those of you who came here from reddit. Be aware that I am not a computer scientist -- I'm a mathematician -- so I might not really know what I'm talking about here.