Here's a question I don't know the answer to, which Sam Shah asked: how does the TI-83 do ∫_{-z}^z exp(-x²) dx, or more generally numerical integration? Of course as z goes to ∞ this approaches √π, with very small tails. (The link goes to an old post of mine that unfortunately has broken LaTeX; you can read the alt text for the images. The idea is that in ∫_z^∞ exp(-x²) dx, the integrand can be bounded above by the exponential exp(z² - 2xz), since x² ≥ 2xz - z² for all x; integrating this, the original integral is less than exp(-z²)/(2z), and this is pretty tight. And yes, I know, I should switch to a platform with LaTeX support.)
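The bound can be sanity-checked numerically: the exact tail is (√π/2)·erfc(z), which Python's math module provides. A quick sketch (function names are mine):

```python
import math

def tail(z):
    # exact tail: the integral of exp(-x^2) from z to infinity is (sqrt(pi)/2) * erfc(z)
    return 0.5 * math.sqrt(math.pi) * math.erfc(z)

def bound(z):
    # the upper bound exp(-z^2)/(2z) from the argument above
    return math.exp(-z * z) / (2 * z)

for z in (1, 2, 3):
    print(z, tail(z), bound(z), bound(z) / tail(z))
```

The ratio of bound to tail heads toward 1 as z grows, which is what "pretty tight" means here.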
So you expect to get something near √π for any reasonably large value of z. But if z is large enough -- say 1000 -- then you get a very small value, in this case on the order of 10⁻⁷. Presumably if the range of integration is wide enough, then the integration method used by the calculator doesn't actually pick up the central region where all the action is happening.
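I don't know exactly what routine the TI-83's fnInt uses (reportedly an adaptive scheme), but the failure mode is easy to reproduce with any non-adaptive rule whose nodes all miss the central bump. A sketch with fixed Gauss-Legendre nodes:

```python
import numpy as np

def gauss_legendre(f, a, b, n):
    # n-point Gauss-Legendre quadrature, nodes mapped from [-1, 1] to [a, b]
    x, w = np.polynomial.legendre.leggauss(n)
    t = 0.5 * (b - a) * x + 0.5 * (b + a)
    return 0.5 * (b - a) * np.dot(w, f(t))

f = lambda x: np.exp(-x ** 2)
print(gauss_legendre(f, -2, 2, 20))        # about 1.764, nearly all of sqrt(pi) ~ 1.772
print(gauss_legendre(f, -1000, 1000, 20))  # essentially 0: every node misses the bump
```

On [-1000, 1000] the 20 nodes closest to the origin sit at |x| ≈ 77, where exp(-x²) has long since underflowed; an adaptive routine can fall into the same trap if its initial samples all miss the spike.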
24 February 2011
18 February 2011
Experimental mathematics for small children
A six-year-old solves the subset sum problem: a post on an operations research science fair project.
Roll five dice. Can you make a subset of the numbers appearing on them add up to 10? to 12? Laura McLay's six-year-old daughter did this experiment as a science fair project, solving the problem visually with bars of lengths 1 inch to 6 inches fitting in a 10-inch or 12-inch frame.
This is both the cutest and most awesome thing I have seen so far this morning. (But be aware that I have only been awake for half an hour.) There should be more things like it.
It's also what I'd do if I were seeing this problem for the first time. (Although, being a Grownup, I'd skip the manipulatives. Which would probably make it less fun.)
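The grown-up, manipulative-free version of the experiment is a short exhaustive enumeration: there are only 6⁵ = 7776 equally likely rolls. A sketch (the function name is mine):

```python
from itertools import product, combinations
from fractions import Fraction

def hit_rate(target, dice=5, sides=6):
    # probability that some nonempty subset of the rolled dice sums to target
    hits = 0
    for roll in product(range(1, sides + 1), repeat=dice):
        if any(sum(c) == target
               for r in range(1, dice + 1)
               for c in combinations(roll, r)):
            hits += 1
    return Fraction(hits, sides ** dice)

print(hit_rate(10), hit_rate(12))  # chance a subset of five dice hits 10, of hitting 12
```

As a sanity check, hit_rate(30) should be 1/7776 (only five sixes work), and hit_rate(1) should be 1 - (5/6)⁵, the chance of seeing at least one 1.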
13 February 2011
Which comes first, the nominal or real record high?
A quick puzzle: say that gas prices hit a record high in nominal dollars. And say they also, around the same time, hit a record high in real (inflation-adjusted) dollars. Which comes first?
Say the nominal price is f(t) and the real price is h(t) = f(t)g(t), where g is monotone decreasing. Then h'(t) = f'(t) g(t) + f(t) g'(t). So if the nominal price is at a maximum at time T, then f'(T) = 0 and so h'(T) = f(T) g'(T). f(T) is positive because it's a price and g'(T) is negative by assumption. So h'(T) is negative and at the time of the nominal high, the real price is already decreasing. The real high comes first, and there's a short period in which the real price is decreasing but the nominal price is still increasing. This makes sense - a nominally constant price is decreasing in real dollars.
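A concrete check, with a made-up nominal price peaking at t = 5 and a deflator decaying at 5% (both functions are illustrative assumptions, not data):

```python
import numpy as np

t = np.linspace(0, 10, 100001)
f = np.exp(-(t - 5) ** 2)   # hypothetical nominal price, peaking at t = 5
g = np.exp(-0.05 * t)       # deflator: monotone decreasing
h = f * g                   # real (inflation-adjusted) price

t_nominal = t[np.argmax(f)]
t_real = t[np.argmax(h)]
print(t_nominal, t_real)    # the real high comes first
```

Setting h'(t) = 0 by hand here gives t = 5 - 0.025, slightly before the nominal peak, consistent with the general argument.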
This is brought to you by an example from Geoff Nunberg's book Talking Right, on how the Right has distorted political language in the United States in an effort to marginalize the Left. At the particular point I'm commenting on, Nunberg argues that the Right likes to proclaim "record highs" in gas prices which are basically always in nominal dollars and therefore make the gas price rise look worse than it is. My argument, however, says nothing about that - taking derivatives makes this a local problem, and "record highs" (nominal or real) are global maxima.
09 February 2011
How to know if your hiring practices work?
So one of the courses I'm teaching this semester is introductory probability, with a calculus prerequisite. Not surprisingly this attracts a lot of students in various subjects that are math-heavy but that are not actually mathematics or statistics. They also tend to be juniors and seniors.
In office hours today, one of my students got to criticizing the fact that lots of consulting, finance, etc. jobs like to ask questions like "how many ping pong balls fit in a 747" in interviews, on the basis that this has nothing to do with what they'd actually have to do on the job. I pointed out that there's some difficulty in knowing whether your hiring practices are working. You'd like to compare the success of the people that you hire into your company with the success that the people that you don't hire would have had in your company. The former might be difficult to measure, but it's probably not impossible assuming your company does something quantifiable. But the latter is essentially impossible.
So what's the right way to design this sort of study? Is there any serious research on which interview techniques actually succeed in finding the best people?
06 February 2011
Correlation in betting on the NFL.
Nate Silver points out that just because the spread in today's Super Bowl is small (the Packers are something like a three-point favorite) doesn't mean that the game will necessarily be close. It just means that it's almost equally likely to be a blowout in one team's favor as in the other's.
Not surprisingly, though, the regression line for margin of victory, as predicted from point spread, is very close to having slope 1 and passing through the origin. As it should, because otherwise bettors would be able to take advantage of it! Say that 7-point favorites won, on average, by 9 points. Assume that the distribution of actual margin of victory, conditioned on point spread, is symmetrical; then half of 7-point favorites would win by 9 points or more, so more than half would win by 7 points or more, and one could make money by betting on them. On the other hand, say that 7-point favorites won, on average, by 5 points; then you could make money by betting against them.
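The arbitrage argument can be simulated in a few lines. The 13.86-point standard deviation for NFL margins is a commonly quoted figure that I'm assuming here; it isn't from Silver's post:

```python
import numpy as np

rng = np.random.default_rng(0)
SPREAD, SIGMA = 7.0, 13.86  # the posted line; assumed st. dev. of actual margins

def cover_rate(bias, n=1_000_000):
    # simulate margins for a 7-point favorite whose true mean is SPREAD + bias
    margin = rng.normal(SPREAD + bias, SIGMA, n)
    return np.mean(margin > SPREAD)  # fraction of bets on the favorite that cover

print(cover_rate(0.0))  # about 0.5: the line is fair, no edge either way
print(cover_rate(2.0))  # above 0.5: favorites really win by 9, so bet on them
```

With any nonzero bias the cover rate drifts away from one half, which is exactly the exploitable gap the regression line's slope-1 fit rules out.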
(For what it's worth, I don't have a particular interest in this game. In fact I probably won't even watch it. I have no connection to either Pittsburgh or Green Bay, and as longtime readers know, I'm a baseball fan.)
Pixar mathematics
Pixar's use of harmonic functions (by David Austin) describes mathematical techniques used by Pixar. Incidentally, apparently there exists something called Pixar University, which I learned when I went to the excellent Pixar exhibit at the Oakland Museum of California. As far as I can tell they are not hiring; it's really an internal training program, and anyway I don't know anything about animation. (The exhibit's next stop is in Hong Kong.)
The problem that the article addresses is basically this: 3-D animated characters have many "control points" on their surface. Animators would not like to have to specify how all of these move individually, so they don't; they only specify the motion of a subset of these called the "cage". How do we interpolate? When there are just three control points this is obvious -- use barycentric coordinates. This is basically exploiting the fact that there's a unique affine transformation taking a generic triangle to another generic triangle.
But what if there are more than three control points? One generalization is "mean value coordinates", which give a linear transformation, but which are problematic when the cage is not convex. Since the cage is often, say, the outline of a human being, this is a real problem! Apparently the newer technique is "harmonic coordinates" -- define the coordinates of the n boundary points of a cage to be (1,0,...,0), (0,1,...,0), ..., (0,0,...,1). Then for any point p inside the cage, let h_i(p) be linear on the edges of the cage, and harmonic (having zero Laplacian) elsewhere. Then the non-cage control point p has harmonic coordinates h_1(p), ..., h_n(p); if you move the cage points then these points keep the same harmonic coordinates but change their coordinates in 3-space. Unfortunately inverting the h-function is a bit harder, but it seems worth the trouble.
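A minimal 2-D sketch of the idea: take a square cage, set the boundary values for one vertex's coordinate as described (1 at that vertex, linear along the incident edges, 0 on the far edges), and relax toward the harmonic interior by Jacobi iteration. For a square cage the answer happens to be the bilinear function (1-x)(1-y), which gives something to check against:

```python
import numpy as np

n = 41
x = np.linspace(0, 1, n)
h = np.zeros((n, n))  # h[i, j] approximates h_1 at grid point (x[i], x[j])

# Boundary values for the harmonic coordinate of the corner (0, 0):
# 1 at that corner, linear along the two incident edges, 0 on the far edges.
h[0, :] = 1 - x
h[:, 0] = 1 - x

# Jacobi iteration: repeatedly replace each interior value by the average of
# its four neighbors; this converges to the discrete harmonic function.
for _ in range(5000):
    h[1:-1, 1:-1] = 0.25 * (h[:-2, 1:-1] + h[2:, 1:-1]
                            + h[1:-1, :-2] + h[1:-1, 2:])

exact = np.outer(1 - x, 1 - x)  # (1-x)(1-y), harmonic with these boundary values
print(np.abs(h - exact).max())  # small residual error
```

Pixar's actual solver is of course far more sophisticated (and works on irregular 3-D cages), but the zero-Laplacian-with-given-boundary-values setup is the same.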
There's a video explaining this, with lots of images!
05 February 2011
Computing distance in the Facebook graph
Is there some nice algorithm for computing the distance between two nodes in a graph, that gracefully fails when they're far apart? I'm asking this prompted by this metafilter question on how to compute the distance between two individuals in the Facebook graph; it seems to me that if someone is at distance 5 versus 6 from me, for example, I don't really care which it is.
Dijkstra's algorithm, for example, runs in time proportional to the number of edges of the graph. (Well, O(E+V log V), but I'm guessing that log V is smaller than E/V.) This is the search algorithm I learned in an algorithms class once. And most algorithms that I've seen -- of course this isn't an area I know a huge amount about -- tend to run in time at least as large as the number of edges of the graph. (Of course this makes sense because one could have to actually look at all the edges of the graph -- such are the pitfalls of worst-case analysis.)
It seems like bidirectional search would work -- if my friends of friends overlap with your friends of friends, then we're at a distance of at most 4, for example. (I didn't realize this had a name until just now; it just seems like an obvious idea to me.) But is there some reason I'm overlooking that this wouldn't be practical?
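Here's a sketch of what I have in mind: a level-synchronous bidirectional BFS that grows a ball around each endpoint and gives up once the two balls together span more than a distance cap (the function and its signature are mine, not from any library):

```python
def bounded_distance(graph, s, t, max_dist=6):
    """Distance between s and t, or None if it exceeds max_dist.

    graph: dict mapping each node to an iterable of neighbors (undirected).
    """
    if s == t:
        return 0
    dist = [{s: 0}, {t: 0}]    # distance maps for the two search balls
    frontier = [{s}, {t}]      # current BFS level on each side
    radius = [0, 0]
    while frontier[0] and frontier[1] and radius[0] + radius[1] < max_dist:
        # expand the smaller frontier (the classic bidirectional heuristic)
        side = 0 if len(frontier[0]) <= len(frontier[1]) else 1
        here, there = dist[side], dist[1 - side]
        best, nxt = None, set()
        for u in frontier[side]:
            for v in graph.get(u, ()):
                if v in there:  # the balls touch: a candidate s-t path
                    d = here[u] + 1 + there[v]
                    best = d if best is None else min(best, d)
                elif v not in here:
                    here[v] = here[u] + 1
                    nxt.add(v)
        if best is not None:  # finish the level, then report the best meeting
            return best if best <= max_dist else None
        frontier[side] = nxt
        radius[side] += 1
    return None
```

Each side only explores a ball of roughly half the cutoff radius, so on a social graph with heavy degree this is dramatically cheaper than running one BFS out to the full distance, and the cap means "far apart" fails fast instead of traversing the whole graph.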
03 February 2011
Snowdecahedron redux
A few days ago I posted about the Porter Square snowdecahedron. Here are more pictures of it and pictures of some smaller ones. The project is due to Dan Beyer. He previously made a dodecahedron out of a tree stump (apparently the trick is to make a wedge corresponding to the dihedral angle of the dodecahedron beforehand) and has proposed dodecahedra built of traffic cones as public art for construction sites.
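For the record, the dihedral angle in question is arccos(-1/√5), a bit over 116 degrees:

```python
import math

# dihedral angle of the regular dodecahedron: arccos(-1/sqrt(5))
dihedral = math.degrees(math.acos(-1 / math.sqrt(5)))
print(dihedral)  # about 116.57 degrees
```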
(I found Dan Beyer's site via metafilter; I don't recall how I found the original picture on flickr linked to in the previous post.)
02 February 2011
How to win the Ontario lottery's tic-tac-toe game
Jonah Lehrer writes for Wired on breaking a scratch-off game in the Ontario lottery. In 2003, Mohan Srivastava, a geostatistician, figured out a way to crack a tic-tac-toe game that the Ontario lottery was running at the time. In this game, you're given a set of eight three-by-three grids with numbers between one and thirty-nine on them (seventy-two numbers in total) -- these are visible to you when you buy the tickets. After buying the ticket, you then scratch off a set of "your numbers"; if three of these numbers appear in a row in one of the grids, in tic-tac-toe fashion, you win. Since there are 72 numbers on the ticket and they are between 1 and 39, there is much repetition. It turned out that if a ticket contained three non-repeated numbers in a row it was very likely to be a winner.
The article doesn't say how the tickets come to be this way, though; what sort of algorithm could produce this behavior? But for Srivastava's purpose of demonstrating that it's possible to tell winning tickets from losing tickets with high probability, this wasn't necessary. Srivastava also points out that this isn't worth it as a way to make money -- unless, hypothetically, you could get your hands on a pile of tickets, go through them at home, and return the losing ones to the store.
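The singleton heuristic is easy to express in code. This sketch (my own framing, with a made-up function name) flags a ticket when some tic-tac-toe line consists of three numbers that each appear exactly once on the whole ticket:

```python
from collections import Counter
from itertools import chain

def looks_like_winner(grids):
    """Srivastava's heuristic: a line of three 'singletons' (numbers appearing
    exactly once on the whole ticket) strongly suggests a winning ticket.

    grids: list of 3x3 grids of numbers (eight grids on a real ticket).
    """
    counts = Counter(chain.from_iterable(chain.from_iterable(grids)))

    def lines(g):
        yield from (tuple(row) for row in g)       # rows
        yield from zip(*g)                          # columns
        yield (g[0][0], g[1][1], g[2][2])           # diagonals
        yield (g[0][2], g[1][1], g[2][0])

    return any(all(counts[n] == 1 for n in line)
               for g in grids for line in lines(g))
```

Note this only uses the numbers printed on the visible grids, never the scratch-off area -- which is the whole point.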
(I learned about this from metafilter. The commenters there, a usually reliable bunch, seem to be split on whether you could return the losing tickets or not.)
A database of applied math problems
A friend of mine teaches at the British Columbia Institute of Technology. They are building a database of applied math problems, at the 11th or 12th-grade level. Their goal is to give students a better idea of "why do we need to learn this?", which is the bane of all math teachers.
They're not asking for calculus problems. But I've taught calculus and I often had the sense, while teaching the "applied" problems, that they were just straight-up asking the students to do derivatives or integrals, with some words added purely as a red herring to confuse the students. I mean, really, if a ladder leaning against a wall falls down, is there any situation in which one cares how quickly the area underneath the ladder is changing? My memory of pre-calculus classes is hazy, because I haven't taught at that level, but I do remember having a pervasive sense that the applications were contrived.