I think something like a logarithmic measure on actual time might give the hyperbolic discounting model.
That's true. Let's say we live at time 0; the correct (exponential discounting) value of a payoff of 1 at time t is e^(-rt). The value of a payoff of 1 at time T under hyperbolic discounting is 1/(1+rT). Setting these equal, we get

e^(-rt) = 1/(1 + rT).

Solving for each variable in terms of the other,

t = ln(1 + rT)/r and T = (e^(rt) - 1)/r.
So roughly speaking, from looking at the first equation, the discounting that people actually use instinctively is obtained by taking the logarithm of the time T they're discounting over (up to some scaling, which really just sets the units of time), and then applying the correct (exponential) model. This reminds me of a logarithmic timeline, but in reverse. People see the period from, say, 16 to 32 years ago as being as long as the period from 32 to 64 years ago. This is also why I don't believe in a technological singularity even though I'd like to; the arguments often seem to be based on "look! lots has changed in the past hundred years, more than changed in the hundred years before that!" but our memories of "change" are somewhat selective.
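As a quick numerical sketch of this equivalence (the function names and the choice r = 0.05 are mine, purely for illustration): the mapping t = ln(1 + rT)/r compresses the hyperbolic time T logarithmically, and after that compression the two discount factors agree exactly.

```python
import math

def exponential_discount(t, r):
    """Correct (exponential) discount factor for a payoff of 1 at time t."""
    return math.exp(-r * t)

def hyperbolic_discount(T, r):
    """Hyperbolic discount factor for a payoff of 1 at time T."""
    return 1.0 / (1.0 + r * T)

def equivalent_exponential_time(T, r):
    """The exponential time t giving the same discount as hyperbolic time T:
    t = ln(1 + r*T) / r -- a logarithmic compression of T."""
    return math.log(1.0 + r * T) / r

r = 0.05  # arbitrary illustrative rate
for T in [1, 10, 100]:
    t = equivalent_exponential_time(T, r)
    # After the log-time mapping, the two discount factors coincide.
    assert abs(exponential_discount(t, r) - hyperbolic_discount(T, r)) < 1e-12
    print(f"T = {T:>3}  maps to  t = {t:.2f}")
```

Running this shows the compression at work: a payoff 100 time units away is discounted hyperbolically as if it were only about 36 exponential time units away.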
That's a great point: people think logarithmically. I'd even go farther and say that people perceive logarithmically.
We know it's true for sound, in not just one instance but two: a constant increase in perceived loudness is a constant multiple in power, and a constant increase in perceived pitch is a constant multiple in frequency.
Sounds are certainly perceived logarithmically; both the pitch scale (equal temperament) and loudness (decibels) are logarithmic scales.
And information too. Assume we have two independent events, one with probability p and the other with probability q. When we learn that the first event happened, we are surprised by an amount s(p); when we learn that the other happened, we are surprised by s(q), so the total surprise is s(p) + s(q). This should equal the surprise of the combined event, s(pq). The only function satisfying s(pq) = s(p) + s(q) is (up to a constant factor) the logarithm, so surprise is the negative logarithm of probability, and the expected surprise over a finite probability space is its entropy!
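A minimal sketch of this in Python (using base-2 logs so the units are bits; the function names are my own):

```python
import math

def surprise(p):
    """Surprisal of an event with probability p, in bits: s(p) = -log2(p)."""
    return -math.log2(p)

def entropy(probs):
    """Expected surprise (Shannon entropy, in bits) of a finite distribution."""
    return sum(p * surprise(p) for p in probs if p > 0)

# Surprise is additive over independent events: s(p*q) == s(p) + s(q).
p, q = 0.5, 0.25
assert abs(surprise(p * q) - (surprise(p) + surprise(q))) < 1e-12

print(entropy([0.5, 0.5]))  # a fair coin: 1.0 bit
print(entropy([1/8] * 8))   # a fair 8-sided die: 3.0 bits
```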
Good to see this worked out. However, I disagree that the way to equate the two models is to have them agree to first order at time zero.
In that case, the person would never over-value nearer options; instead they would always over-value the further of the two options, at an amount proportional to the distance of the choice.
I think it's more realistic to have the hyperbola dip below the exponential for a while (the 'immediate gratification' zone), and then cross it at some reasonably far-away point (the 'correct value' distance). The location of this point is probably extremely variable, and might even depend on pesky things like whether the question is phrased in 'weeks' versus 'years'.
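To see what such a crossing looks like numerically: if the hyperbolic curve uses a steeper parameter k than the exponential rate r, it starts below the exponential (the dip) and, because it decays only polynomially, crosses back above it later. A bisection sketch (the parameters k = 0.2 and r = 0.05 are arbitrary, not calibrated to any data):

```python
import math

def hyperbolic(t, k):
    return 1.0 / (1.0 + k * t)

def exponential(t, r):
    return math.exp(-r * t)

def crossover(k, r, lo=1e-6, hi=1000.0, iters=100):
    """Bisection for the time at which the hyperbolic curve, after
    dipping below the exponential, crosses back above it."""
    f = lambda t: hyperbolic(t, k) - exponential(t, r)
    assert f(lo) < 0 < f(hi), "parameters must produce a dip and a crossing"
    for _ in range(iters):
        mid = (lo + hi) / 2
        if f(mid) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# Illustrative parameters only.
t_star = crossover(k=0.2, r=0.05)
print(f"curves cross at t = {t_star:.1f}")
```

With these particular numbers the crossing lands in the mid-forties of time units, but the point of the sketch is just that the 'correct value' distance is a tunable consequence of how the two curves are parameterized.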
That's a great point, Greg. Is there a "big number" effect in play? That is, would people make more of a mistake when they choose between 16 and 24 weeks, or between 4 and 6 months? Between 50 and 100 weeks or between 1 and 2 years?
The "correct" discount rate depends on far more than inflation. It should depend on "risk", e.g. of dying and not getting any use of the money, and on what economists call the pure rate of time preference.
johann richter made exactly the point I should have liked to make: given that the risk of not surviving to realize the gain increases with length of time, it sounds like this "discounting" is just about right.
I came across picoeconomics.com and this article recently:
Uncertainty as Wealth
Nice little graph on the site illustrating the difference in discount curves.