Jon Bentley gives a lecture called Three Beautiful Quicksorts, as three possible answers to the question "what's the most beautiful code you've ever written?" (An hour long, but hey, I've got time to kill.)
Watch the middle third, in which some standard quicksort code is gradually transformed into code that analyzes the number of comparisons quicksort needs, and finally vanishes in a puff of mathematical smoke.
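If memory serves, where that transformation ends up is the standard recurrence for the expected number of comparisons with a uniformly random pivot. The recurrence itself is textbook material; this little Python sketch of it is my own reconstruction, not anything from the talk:

```python
def expected_comparisons(n):
    """Expected comparisons C(m) for quicksort on m distinct keys with a
    uniformly random pivot: C(m) = (m - 1) + (2 / m) * sum(C(k) for k < m)."""
    c = [0.0] * (n + 1)
    partial = 0.0  # running sum of c[0] .. c[m-1]
    for m in range(1, n + 1):
        partial += c[m - 1]
        c[m] = (m - 1) + 2.0 * partial / m
    return c[n]

print(expected_comparisons(1000))  # ~10985.9, matching the closed form
                                   # 2*(n+1)*H_n - 4*n, H_n the harmonic number
```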
I must admit, though, I'm kind of annoyed that somewhere in there he slips into the idea that an average-case analysis is the most important thing. The first moment of a distribution is not everything you need to know about it! I admit that at times I subscribe to the school of thought that says "the first two moments are everything", but that's only because most distributions are normal.
(Note to those who don't get sarcasm: I don't actually believe that most distributions are normal.)
10 July 2008
3 comments:
Wow, that was actually an interesting talk.
Thanks :)
I agree. Thanks for the link, Isabel. I picked up Bentley's book a while ago and just started working my way through it.
I haven't done a lot of prob-stats study, so I'd be pretty interested to read an elaboration on your thoughts re: the comment "The first moment of a distribution is not everything you need to know about it!" Maybe if you get bored some time ;)
Dan,
Moments are a generalization of computing the mean of a distribution. Normally, you compute the mean of a distribution with density f(x) by summing/integrating xf(x). For the rest of this, I'm going to talk about summing, with sum(xf(x)) for the mean. This generalizes trivially to integral(xf(x) dx), but the integral form is bulkier for what's below.
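To make that concrete, here's a quick Python sketch of sum(xf(x)), with a fair die standing in for f(x) (the die is just my toy example):

```python
# A fair six-sided die: f(x) = 1/6 for x = 1..6.
die = {x: 1 / 6 for x in range(1, 7)}

mean = sum(x * p for x, p in die.items())
print(mean)  # 3.5
```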
One of the two main generalizations is the "moments around c", which changes the formula from sum(xf(x)) to sum((x-c)f(x)). For the mean, this isn't interesting, because sum((x-c)f(x)) = sum(xf(x)) - sum(cf(x)) = mean - c*sum(f(x)) = mean - c (for a distribution, sum(f(x)) = 1), so it just shifts the mean over by c.
Often this generalization is made more specific: c is taken to be the mean, and the resulting moments are called "central moments".
The other common generalization is to take the part multiplied by the distribution and raise it to an integer power n. So the first central moment is sum((x-mean)*f(x)) = 0; the second central moment is sum((x-mean)^2 f(x)); the third is sum((x-mean)^3 f(x)); etc.
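In code, with the same dict-as-distribution convention as the die sketch above (again just an illustrative sketch):

```python
def central_moment(dist, n):
    """nth central moment, sum((x - mean)^n * f(x)), for a discrete
    distribution given as a dict mapping outcomes x to probabilities f(x)."""
    mean = sum(x * p for x, p in dist.items())
    return sum((x - mean) ** n * p for x, p in dist.items())

die = {x: 1 / 6 for x in range(1, 7)}
print(central_moment(die, 1))  # ~0: the first central moment always vanishes
print(central_moment(die, 2))  # 35/12 ~ 2.9167: the variance
```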
The 2nd central moment is the "variance", and its square root is the "standard deviation". The 3rd and 4th moments give the "skew" and the "kurtosis" (strictly speaking, those two names belong to the normalized versions below), and all of these are used in analyzing distributions.
One further variant of the moment is the "normalized nth moment", which replaces sum((x-mean)^n f(x)) with sum(((x-mean)/stDev)^n f(x)). This takes location and scale out of the picture and makes it easier to compare distributions to normal distributions: every distribution has a normalized 1st moment of 0 and a normalized 2nd moment of 1, and any normal distribution additionally has a normalized 3rd moment (skew) of 0 and a normalized 4th moment (kurtosis) of 3.
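And to close the loop, a sketch that checks those last two numbers against a standard normal, discretized on a fine grid so it fits the same dict convention (the grid range and step size are arbitrary choices of mine):

```python
import math

def normalized_moment(dist, n):
    """nth normalized moment, sum(((x - mean) / stdev)^n * f(x))."""
    mean = sum(x * p for x, p in dist.items())
    var = sum((x - mean) ** 2 * p for x, p in dist.items())
    stdev = math.sqrt(var)
    return sum(((x - mean) / stdev) ** n * p for x, p in dist.items())

# Standard normal density on a grid from -8 to 8, renormalized to sum to 1.
xs = [i / 100 for i in range(-800, 801)]
weights = [math.exp(-x * x / 2) for x in xs]
total = sum(weights)
normal = {x: w / total for x, w in zip(xs, weights)}

print(normalized_moment(normal, 3))  # ~0: the skew of a normal
print(normalized_moment(normal, 4))  # ~3: the kurtosis of a normal
```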