- you try to get the ball in the hole;
- scoring works by counting the number of times you hit the ball with the club; lower scores are better;
- "par" is really good, not average.
Anyway, Tiger Woods won the U.S. Open, and Ian Ayres, guest-blogging at Freakonomics, asks why golf commentators don't give the probability that a putt from distance x will go in. Commentators in just about every other sport quote such statistics; Ayres' example is free-throw percentage in basketball, mine would have been batting average in baseball.
Ayres then gives a plot showing the success rate of golf putts as a function of distance from the hole, taken from this paper. Not all that surprisingly, the probability of making a putt from distance x scales like 1/x; essentially, you can assume that the angular error in putting doesn't change as the distance of the putt increases, while the apparent size of the target shrinks with distance. Basically, from twice as far away the hole looks half as big.
It turns out that's what Andrew Gelman and Deborah Nolan thought, too. (Andrew Gelman and Deborah Nolan, "A Probability Model for Golf Putting", Teaching Statistics, Volume 24, Number 3, Autumn 2002; this is the source for Ayres' figure. Read it, it's two and a half pages.) Their actual model is a bit more complicated, because they do the trigonometry correctly, and they assume that errors in putting are normally distributed while I'm assuming they're uniform. This fixes the problem that my crude model would have: at five feet, pros make 59 percent of their putts, so a pure 1/x model would predict that at two and a half feet they make 2 × 59 = 118 percent!
The result of Gelman and Nolan is that the probability that a putt from distance x succeeds is

2Φ(arcsin((R - r)/x)/σ) - 1,
where R and r are the radii of the hole and the ball, respectively; 2R = 4.25 inches and 2r = 1.68 inches. σ is the standard deviation of the angular error of a shot (in radians), which can be estimated from empirical data to be 0.026 (about 1.5 degrees), and Φ is the standard normal cumulative distribution function.
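To make this concrete, here's a quick sketch of the formula in Python -- my own few lines, not the paper's; SciPy's norm.cdf plays the role of Φ:

```python
from math import asin
from scipy.stats import norm  # norm.cdf is the standard normal CDF

R = 4.25 / 2 / 12   # hole radius in feet (4.25-inch diameter)
r = 1.68 / 2 / 12   # ball radius in feet (1.68-inch diameter)
sigma = 0.026       # standard deviation of angular error, in radians

def p_make(x):
    """Gelman-Nolan probability of holing a putt from x feet."""
    return 2 * norm.cdf(asin((R - r) / x) / sigma) - 1

print(p_make(5))    # about 0.59, matching the five-foot figure above
```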
If x is large enough that (R-r)/x is small, we can make the small-angle approximation arcsin t ≈ t, and the formula becomes 2Φ((R-r)/(σx)) - 1. But Φ(z) is approximately linear near z = 0, with Φ(0) = 1/2 and Φ'(0) = 1/√(2π). So the probability of succeeding from distance x, for large x, is approximately

2(R-r)/(√(2π) σx) = √(2/π) · (R-r)/(σx).
R - r is 1.285 inches, or 0.10708 feet.
So we get that the probability of making a putt from distance x, in the limit of large x, is about (3.29 feet)/x, although this is really only a good approximation above x = 6 feet or so. This has the advantage of being easy to remember -- well, somewhat easy, because you still have to remember the constant. But if you measure in meters, there are 3.28 feet in a meter, so the constant is basically 1; clearly golf should be done in metric.
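Here's a quick numerical check of the full formula against the (3.29 feet)/x approximation, reusing the same constants (again, my own sketch):

```python
from math import asin, pi, sqrt
from scipy.stats import norm

R_minus_r = 2.57 / 2 / 12              # 1.285 inches, in feet
sigma = 0.026
c = sqrt(2 / pi) * R_minus_r / sigma   # the constant: about 3.29 feet

for x in (2, 4, 6, 10, 20):            # distances in feet
    exact = 2 * norm.cdf(asin(R_minus_r / x) / sigma) - 1
    print(f"{x:2d} ft: exact {exact:.3f}, approx {c/x:.3f}")
```

At two feet the "approximation" already exceeds 1, which is why I say it's only trustworthy above six feet or so; by twenty feet the two agree to within about one percent.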
Incidentally, I think it's a good idea to put citations that at least include the author and title in blog posts, even though strictly speaking they're not necessary as pointers if a link is provided. Why? Because it makes it more likely that people who Google the paper's title or authors will find the people talking about it. (Occasionally I've received comments from self-Googling authors.)
14 comments:
I've been thinking of removing mentions like that (and links as well!) because they make my blog show up higher on Google result pages.
This has the unpleasant effect of my adviser, potential employers, research partners, etc., stumbling across my blog. I'm not always too careful with what I put up, and I make mistakes that may look rather foolish to someone in the know.
That's vain, I know. But still.
Isn't a major problem with 1/x that it blows up near zero, so it can't possibly be a probability of anything?
I guess you could normalise to make the blow-up reach 1 arbitrarily close to zero (say, within half a golf-ball radius), but that seems kinda ad hoc.
Cool! Do you know if anyone has checked the statisticians' model against actual data?
thecooper: I guess you could normalise to make the blow-up reach 1 arbitrarily close to zero (say, within half a golf-ball radius), but that seems kinda ad hoc.
You are obviously not a particle physicist, my friend. :) To get calculations in quantum electrodynamics to come out right, you have to impose an arbitrary minimum possible distance between electrons. Otherwise, integrals blow up. Changing the minimum distance makes various things break, so you have to change certain internal parameters to make up for it... but physical quantities, like the "effective" charge/mass ratio of the electron, always come out the same!
The whole process is called "renormalization," and I'm told it involves a lot of extremely shady math. But it works. Although it shouldn't. :)
Aaron,
take a look at their paper; it does check the model against actual data. That's because they're statisticians. I, as a probabilist, ignored that part.
@ thecooper:
Er, wait a minute... after perusing your blog, I see that you obviously are a particle physicist. Or at least that you were at some point. Or at least that you have had to insert Feynman diagrams into a LaTeX document, which is more than I've ever had to do with Feynman diagrams!
@ Isabel:
Uch! All these people telling me to read things all the time... don't they know it's summer? :P
I should thank you for pestering me, though---it's a good paper! This reluctance to read things is beginning to bother me... I'm thinking more and more about that Nicholas Carr article...
Aaron
No, I'm not and I never was a particle physicist (or a physicist of another kind!). I just have an affinity for drawing diagrams of all sorts. (Tee hee!)
I think the problem with physicists (not just particle physicists, but in general) is that they get rid of degrees of freedom by just integrating things out, without knowing whether those integrals exist (or what the space they're integrating over looks like, or what measure they're integrating with respect to, etc.). This maybe leads to a general sloppiness when using estimates, or something, but it drives mathematicians nuts.
In a lot of cases, too, you can do what's done in PDE: back off from the singularity by an epsilon, calculate the backed-off integral, and then let epsilon go to zero. Or sometimes you can cancel the epsilon term with something else.
But of course that takes a lot of time, and it's easier just to assume minimum distances.
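As a toy version of that maneuver (my own example -- the integrand 1/√x is just something with an integrable singularity at 0), in Python with SymPy:

```python
import sympy as sp

x, eps = sp.symbols('x epsilon', positive=True)

# Back off from the singularity at 0 by epsilon...
backed_off = sp.integrate(1 / sp.sqrt(x), (x, eps, 1))  # gives 2 - 2*sqrt(epsilon)

# ...then let epsilon go to zero.
print(sp.limit(backed_off, eps, 0))  # prints 2
```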
Aaron and thecooper are both obviously not Max Planck. There are all sorts of ways to prevent an ultraviolet catastrophe.
The Phillies lost.
I think they ignore the speed of the ball. At a given angle, some putts will spin around the hole and not go in, yet a slightly slower ball will drop.
@thecooper: Isn't a major problem with 1/x that it blows up near zero, so it can't possibly be a probability of anything?
Well, in such an approximate theory, the regime x << 1 can be interpreted as a superprobabilistic regime wherein it is "super probable" that the ball will make it into the hole.
Pr[putt] = 1: "I'm totally going to make this putt."
Pr[putt] = 5: "OMG, I'm so totally going to make this putt."
Pr[putt] = 25: "This putt will succeed unless all the planets align to gravitationally suspend the ball from the hole."
Pr[putt] = 625: "A failure to putt is worthy of a Nobel prize, as it represents a violation of conservation of energy."
Michael,
you're right that they ignore the speed of the ball; this is mentioned towards the end of the paper.
The asymptotic behavior 1/x becomes obvious after a moment of reflection, because the size of the interval of angles of successful shots is also asymptotic to 1/x. The formula d(arg z) = Im(dz/z) also comes to mind.
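Spelling that out in the notation of the post: the window of angles that hole the putt has width

2 arcsin((R - r)/x) ≈ 2(R - r)/x for large x,

so any fixed, reasonably smooth distribution of angular error assigns that window probability proportional to 1/x.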