MSRI (the Mathematical Sciences Research Institute) is located at 17 Gauss Way, Berkeley, California. Here's a picture.
Of course, Gauss constructed the 17-gon with ruler and compass and was very proud of this. This article says it's not a coincidence, and so does this official MSRI document.
And rather surprisingly, that's not the only thing on Gauss Way. The Space Sciences Laboratory is at 7 Gauss Way. I'm not sure what significance 7 has, if any.
31 March 2009
What time is it?
I looked at my watch at 12:05. I wasn't sure, for a moment, whether it was 12:05 or 1:00; I had to carefully look to determine which of the two hands was the longer one.
A question for you: how many times in a given twelve-hour period could I have this problem? More rigorously, suppose I have an ordinary twelve-hour analog clock, with an hour hand and a minute hand but no second hand. Furthermore suppose I can measure the position of the hands absolutely precisely, and they're "sweep" hands (i. e. they move at a constant angular rate, without "ticks"). At how many times between (say) noon and midnight could I interchange the hands of the clock and still have the hands in a position that corresponds to some time -- but not the time that it actually is? Noon, for example, is not such a time; if I interchange the minute and hour hands at noon I get a valid position of the hands, but that's the position that corresponds to noon. (I won't give an example of a valid time because giving one would be a big hint.)
Bonus: what are these times?
Another bonus: Add a second hand; are there still times which give rise to ambiguous hand configurations? (I don't know the answer to this one.)
(No fair looking up a solution; this is actually a pretty well-known brainteaser. It's well-known enough that I probably knew it existed, somewhere in the back of my mind, before I reinvented it today.)
edit (1:14 pm): Boris points out that he wrote a very similar question as question 23 of this test (PDF).
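edit: if you'd rather check your own answer than look one up (spoiler: running this prints the count), here's an exact brute-force sketch in Python. Requiring both the original and the swapped hand positions to be valid pins the candidate times down to a finite set; the short congruence argument is in the comments, and everything is done in exact rational arithmetic.

```python
from fractions import Fraction

def hands(t):
    """Positions of the (hour, minute) hands at time t (in hours),
    each measured as an exact fraction of a full revolution."""
    return (t / 12) % 1, t % 1

# If the hands at time t are (h, m), the swapped position (m, h) is valid
# iff some time s has hour hand at m and minute hand at h.  Working mod one
# revolution, s/12 = m and s = h (mod 1) force 144t = t (mod 12), i.e.
# t = 12k/143 hours for k = 0, 1, ..., 142.
ambiguous = []
for k in range(143):
    t = Fraction(12 * k, 143)
    h, m = hands(t)
    s = (12 * m) % 12            # the time the swapped hands would display
    assert hands(s) == (m, h)    # the swapped position really is valid
    if s != t:                   # ...but it shows a *different* time
        ambiguous.append(t)

print(len(ambiguous))
```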
Fermi problems
Here's a quiz full of Fermi problems (like "how many people are airborne over the US at any moment?") and an article by Natalie Angier in today's New York Times, in which she suggests that some basic quantitative reasoning skills wouldn't kill people. The article was inspired by a recent book of such problems, Guesstimation: Solving the World's Problems on the Back of a Cocktail Napkin, by John Adam and Lawrence Weinstein. Adam also has a forthcoming book entitled A Mathematical Nature Walk which may be interesting.
27 March 2009
What is "classical"?
John Cook quotes a definition of "classical", due to Ward Cheney and Will Light in the introduction to their book on approximation theory. Basically, something is "classical" if it was known when you were a student.
The problem with this definition is that it depends on the speaker, which is really not a good property for a definition!
26 March 2009
Some facts about time to the PhD
I just wondered -- what is the typical age of a PhD recipient? A bit of Googling turned up this table from Inside Higher Ed, which conveniently sorts by discipline; it reports on an NSF brief. Mathematics and physics are tied for second lowest median age at 30.3; chemistry is the only discipline that's lower, at 29.6.
The table I linked to also gives the median time from getting the bachelor's degree to getting the PhD; by subtraction one can get some number that is a "typical" age of bachelor's degree receipt for students who eventually get a PhD. The median time from bachelor's degree to PhD in mathematics is 7.9 years. Subtraction, 30.3 - 7.9, gives 22.4 as a "typical" age (the difference of medians, which isn't really meaningful) for students getting a bachelor's degree who eventually go on to get a PhD in math. (The highest typical age at bachelor's degree is 25.3, for people getting PhD's in education.) This is the minimum among all eighteen disciplines covered here. It's hard to imagine a median much lower than that given the age at which students typically enter formal education and the number of years it takes.
I interpret this as saying that students who get PhD's in mathematics are less likely to take time away from formal education between high school and college or to take longer than the traditional four years to graduate from college. I'd be interested to see if this is because students who spend time away from formal education "lose" whatever mathematics they knew and have trouble picking it back up again; it's a popular conception that mathematics is more "hierarchical" and so this is more of a problem there than in other fields. (Not having much experience with other fields, I can't say.)
Also, chemistry has a median registered time to degree (time from entering a doctoral program to receiving the PhD) of 6.0 years; the next lowest is mathematics at 6.8. Why is chemistry such an outlier?
25 March 2009
List of free mathematics books
Possibly of interest: free mathematics books online. Many seem to be either quite recent (since, say, around 2000) or more than a few decades old, although I haven't systematically checked this statement; this isn't surprising, as the very old books tend to be in the public domain and the very new books tend to have been produced in an era when posting electronic versions is routine.
There are a couple hundred books listed here, which is not anywhere near the number of free mathematics books available (legally) online. Various other lists exist, with varying degrees of overlap. Sometimes I flirt with the idea of attempting a more complete list but I realize it would become out of date quite quickly.
20 March 2009
Billions and millions
Yes, I'm still alive. I got out of the blogging groove somehow.
Today's xkcd makes an interesting point about the difference between "billion" and "million".
And although this isn't about math, Carl Sagan's Cosmos can be watched online at hulu.com. (Thanks to Blake Stacey for the pointer.)
13 March 2009
Three songs about circles
WXPN is a radio station of the University of Pennsylvania. This statement is a bit ambiguous; they're not a "college radio station", in that they're not student-run, but rather a professionally-run, public (i. e. non-commercial, and every so often they come on the air and beg for money) radio station. WQHS is the student radio station. I've been working from home this week, since it's our Spring Break, so I've been listening a lot.
Every weekday morning at nine they have a "select-a-set": listeners call or e-mail and suggest three songs, which are perhaps somehow related. Somebody suggested the following three songs today, which got played:
Sarah McLachlan, Circle
Edie Brickell & New Bohemians, Circle
Joni Mitchell, Circle Games
Why? (Hint: the select-a-set feature does not exist on Saturdays.)
10 March 2009
Oral exams
I am finding the database of (oral) general exams taken by Princeton math grad students quite amusing, especially since there seems to be a tradition of pointing out the silly things done by one's committee. (I found it by a Google search while looking for information on one of the people listed there.)
Compare the UPenn archive, which I find terrifying, not because it's any different, but because I know a lot of the people involved. (The astute among you will note that my oral exams are not on the UPenn archive. This is because by the time I had recovered sufficiently to write them up, I forgot what I had been asked.)
09 March 2009
Knuth on solitaire
I'm browsing through Knuth's The Art of Computer Programming (Volume 1, Volume 2, Volume 3), because it's Spring Break, so I have time. I'm reading the mathematical bits, which are perhaps half the work; I'm less interested in the algorithms.
Anyway, we find on page 158 of Volume 2: "Some people spend a lot of valuable time playing card games of solitaire, and perhaps automation will make an important inroad in this area." This is part of Chapter 3, on the generation and testing of random numbers. Of course, this book was published in 1969; Windows Solitaire didn't exist then. (It's also amusing to see Knuth describing things that will be in, say, Chapter 10; he's currently working on Chapter 7, which will be the first half of Volume 4.)
08 March 2009
The Simpsons and continuous compounding
It appears the Simpsons have a mortgage that has 37% interest compounded every minute.
For the record, if the interest is compounded n times per year, then their effective annual interest rate would be (1 + 0.37/n)^n - 1. If n = 12 (monthly), this is 43.97% per year; if n = 525,600 (every minute), this is 44.7734426% annually; compare 44.7734615% = exp(0.37) - 1 for continuous compounding. In other words, compounding every minute might as well be continuous; the difference is one cent per $53,000 or so, per year.
The difference between every-minute and continuous compounding, at an interest rate of r, is the difference
exp(r) - (1 + r/n)^n
where n = 525,600; this is asymptotically r^2/(2n). (This actually isn't a great approximation here; the next few terms of the series are reasonably large.)
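The arithmetic above is easy to check numerically; a quick sketch (525,600 is the number of minutes in a year):

```python
import math

def effective_annual(r, n):
    """Effective annual rate when nominal rate r is compounded n times a year."""
    return (1 + r / n) ** n - 1

r = 0.37
monthly    = effective_annual(r, 12)       # ~0.4397
per_minute = effective_annual(r, 525_600)  # ~0.447734426
continuous = math.exp(r) - 1               # ~0.447734615

# The every-minute vs. continuous gap: about 1.9e-7 per dollar per year,
# i.e. one cent per $53,000 or so.
print(monthly, per_minute, continuous, continuous - per_minute)
```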
A commercial for math
Look, it's a commercial for math!
Okay, so it's really a commercial for IBM. But you don't know that until the very end.
06 March 2009
Best bad math joke ever
One of my favorite bad math jokes ever is now in Wikipedia, and no, I didn't add it.
Namely, exercise 6.24 of Richard Stanley's Enumerative Combinatorics, Volume 2 asks the reader to
"Explain the significance of the following sequence: un, dos, tres, quatre, cinc, sis, set, vuit, nou, deu..."
The answer is that these are the "Catalan numbers", i. e. the numbers in the Catalan language. If this seems random, note that exercise 6.21 is the famous exercise in 66 parts (169 in the extended online version, labelled (a) through (m^7)), which asks the reader to prove that 66 (or 169) different sets are counted by the Catalan numbers.
I'm telling you about this joke because the Wikipedia article on Catalan numbers begins with a link to the list of numbers in various languages.
An alternative version of this joke (American Mathematical Monthly, vol. 103 (1996), pages 538 and 577) asks you to identify the sequence "una, dues, cinc, catorze, quaranta-dues, cent trenta-dues, quatre-cent vint-i-nou,...", which are the Catalan numbers 1, 2, 5, 14, 42, 132, 429... in the Catalan language. (I'm reporting the spellings as I found them in my sources; the first series is in the masculine and the second is in the feminine, as Juan Miguel pointed out in the comments.)
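For reference, the number-sequence Catalan numbers are easy to compute; a minimal sketch:

```python
from math import comb

def catalan(n):
    """The nth Catalan number, C(2n, n)/(n + 1)."""
    return comb(2 * n, n) // (n + 1)

# The terms named in the Monthly version of the joke:
print([catalan(n) for n in range(1, 8)])  # [1, 2, 5, 14, 42, 132, 429]
```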
Conditioned to rationality
At Uncertain Principles and Unqualified Offerings there has been talk about how students in disciplines where the numbers come with units seem to be conditioned to expect numbers of order unity. (And so are professional scientists, who deal with this by defining appropriate units.)
Mathematicians, of course, don't have this luxury. But we are conditioned to think, perhaps, that rational numbers are better than irrational ones, that algebraic numbers are better than transcendental ones (except maybe π and e), and so on. In my corner of mathematics (discrete probability/analysis of algorithms/combinatorics), often sequences of integers a(1), a(2), ... arise and you want to know approximately how large the nth term is. For example, the nth Fibonacci number is approximately φ^n/√5, where φ = (1+√5)/2 is the "golden ratio". The nth Catalan number (another sequence that arises often) is approximately 4^n/√(πn^3). In general, "many" sequences turn out to satisfy something like
a(n) ~ p q^n (log n)^r n^s
where p, q, r, and s are constants. There are deep reasons for this that can't fully be explained in a blog post, but have to do with the fact that a(n) often has a generating function of a certain type. What's surprising is that while p and q are often irrational, r and s are almost never irrational, at least for sequences that arise in the "real world". Furthermore, they usually tend to be "simple" rational numbers -- 3/2, not 26/17. If you told me some sequence of numbers grows like π^n, I'd be interested. If you told me some sequence of numbers grows like n^π, I'd assume I misheard you. Of course, there's the possibility of sampling bias -- I think that the exponents tend to be rational because if they weren't rational I wouldn't know what to do! They do occur -- for example, consider the Hardy-Ramanujan asymptotic formula for the number of partitions p(n) of an integer n:
p(n) ~ exp(π√(2n/3))/(4n√3).
I know this exists, but it still just looks weird.
(This is an extended version of a comment I left at Uncertain principles.)
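As a sanity check on the shape a(n) ~ p q^n (log n)^r n^s, here's the Fibonacci case (p = 1/√5, q = φ, r = s = 0); a small sketch:

```python
import math

phi = (1 + math.sqrt(5)) / 2  # the golden ratio

def fib(n):
    """nth Fibonacci number, by iteration (F_0 = 0, F_1 = 1)."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# F_n is the nearest integer to phi^n / sqrt(5); the error shrinks
# geometrically with n.
for n in (10, 20, 50):
    approx = phi ** n / math.sqrt(5)
    print(n, fib(n), approx, fib(n) - approx)
```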
04 March 2009
A fool and his money are soon parted
Did you know that there are people who think that by reducing their income from over $250,000 to under $250,000, they can take home more money? For those of you who aren't aware of this, President Obama is planning to increase taxes on families earning more than $250,000 per year.
Of course, the way the US tax code is set up, the amount of tax you pay as a function of your taxable income is continuous, monotone increasing, and Lipschitz with parameter 1. That is, say that T(x) is the tax due if your taxable income is x. Let y > x. Then T(y) > T(x), and T(y) - T(x) < y - x. As you may note, you can derive from the second of these that
y - T(y) > x - T(x)
which tells us that if you make more money, you get to keep more of your money.
Note that T is actually not differentiable everywhere, because it's piecewise linear; T'(x) fails to exist at the income levels where the marginal rate jumps. Your "tax bracket" is in fact the amount of tax you pay on the last dollar of your income; that is, it's T'(x) where x is your income (assuming you're not sitting exactly at a breakpoint).
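Here's a sketch of why take-home pay can't drop when income rises under marginal brackets. The thresholds and rates below are made up for illustration (they are not the actual US schedule); only the marginal structure matters.

```python
# Hypothetical marginal brackets: each (lower bound, rate) pair taxes the
# income above that bound, up to the next bound.  Illustrative numbers only.
BRACKETS = [(0, 0.10), (50_000, 0.25), (250_000, 0.35)]

def tax(x):
    """Total tax T(x) on taxable income x under the brackets above."""
    total = 0.0
    for i, (lo, rate) in enumerate(BRACKETS):
        hi = BRACKETS[i + 1][0] if i + 1 < len(BRACKETS) else float("inf")
        if x > lo:
            total += rate * (min(x, hi) - lo)
    return total

# Crossing the top threshold only taxes the dollars *above* it: earning one
# extra dollar past $250,000 still leaves you 65 cents better off.
print(250_000 - tax(250_000))  # take-home at exactly the threshold
print(250_001 - tax(250_001))  # strictly larger
```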
I'm not saying that there are no situations where this sort of thing might make sense. (The tax code is complicated.) But it's certainly not as common as these people would have you believe.
Original article from ABC News; I followed a link from The New York Times via The New Republic.
Why isn't it expnormal?
We say that a random variable X has a lognormal distribution if its logarithm, Y = log X, is normally distributed. The normal distribution often occurs when a random variable comes about by combining a bunch of small independent contributions, but those contributions combine additively; when the combination is multiplicative instead, lognormals occur. For example, lognormal distributions often occur in models of financial markets.
But of course X = exp Y, so the variable we care about is the exponential of a normal. Why isn't it called expnormal?
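A quick simulation of the multiplicative story (a sketch; the 1% shock size and the sample counts are arbitrary choices): multiply many small independent factors, and the log of the product is a sum of many small terms, hence approximately normal.

```python
import math
import random

random.seed(42)

def product_of_shocks(k=1000):
    """Multiply k small independent random factors, each within 1% of 1."""
    x = 1.0
    for _ in range(k):
        x *= 1 + random.uniform(-0.01, 0.01)
    return x

# X is (approximately) lognormal: Y = log X is a sum of 1000 small
# independent terms, so it's approximately normal by the CLT.
logs = [math.log(product_of_shocks()) for _ in range(2000)]
mean = sum(logs) / len(logs)
var = sum((y - mean) ** 2 for y in logs) / len(logs)
print(mean, var)  # near 0, and near 1000 * Var(log(1 + U)) respectively
```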
03 March 2009
Square root day
Today, it appears, is "square root day", 3/3/09. 3 is, of course, the square root of 9.
From 360; it was also pointed out there that square roots, i. e. root vegetables cut into squares, do not taste as good as pi(e). So I will wait until next Saturday for my mathematical holiday needs.