A discrete-time Markov chain is what people usually mean when they say "Markov chain". This is a sequence of elements X_0, X_1, X_2, ... selected from some state space S = (s_1, s_2, ...), where P(X_n = s_j | X_{n-1} = s_i) = p_{ij} (the p_{ij} are called *transition probabilities*) and X_n does not depend on the history *before* X_{n-1}.

Now, from any discrete-time Markov chain it's possible to construct a *continuous-time* Markov chain. We simply assign a parameter λ(j) to each state s_j. When the discrete-time Markov chain finds itself in state s_j, we *wait* for a time which is exponentially distributed with parameter λ(j) (and therefore mean 1/λ(j)); after this time passes we proceed into another state according to the transition probabilities.
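This construction is easy to simulate directly. Here is a minimal sketch in Python (the two-state transition matrix `P` and the constant rate function are illustrative choices, not anything from the post):

```python
import random

def simulate_ctmc(P, lam, start, t_max, rng):
    """Simulate a continuous-time Markov chain built from a discrete-time
    chain with transition matrix P and holding-rate function lam.
    Returns the (time, state) pairs visited before t_max."""
    t, state = 0.0, start
    path = [(t, state)]
    while True:
        # Wait an Exp(lam(state)) time in the current state...
        t += rng.expovariate(lam(state))
        if t >= t_max:
            break
        # ...then jump according to the discrete-time transition probabilities.
        state = rng.choices(range(len(P)), weights=P[state])[0]
        path.append((t, state))
    return path

# Toy example: flip back and forth between states 0 and 1, each with rate 1.
P = [[0.0, 1.0], [1.0, 0.0]]
path = simulate_ctmc(P, lam=lambda s: 1.0, start=0, t_max=10.0,
                     rng=random.Random(0))
```

Each entry of `path` records a jump time and the state entered at that time; the waiting times are exponential, so the jump times form a Poisson-like stream whose local rate depends on the current state.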

There's a problem, though, as Resnick explains in Section 5.2 -- it can happen that the chain "blows up", i.e., it makes infinitely many transitions in finite time.

The "pure birth" process has state space {1, 2, 3, ...}; the transition probabilities are p_{n,n+1} = 1 with all others zero, and the parameters are λ(n) = λn. That is, the process waits in state *n* for a time which is exponentially distributed with mean 1/(λn) and then advances to state *n*+1. Not surprisingly, the population grows exponentially fast -- if the population is currently *n*, it grows at rate λn.
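A quick simulation makes this concrete (a sketch with λ = 1; since p_{n,n+1} = 1, the jumps themselves are deterministic and only the holding times are random):

```python
import random

def pure_birth_time(rate, n_max, rng):
    """Total time for the pure-birth process to climb from state 1 to
    state n_max, waiting an Exp(rate(n)) time in each state n."""
    t = 0.0
    for n in range(1, n_max):
        t += rng.expovariate(rate(n))  # holding time in state n
        # p_{n,n+1} = 1, so the jump itself is deterministic: n -> n + 1
    return t

# With rate(n) = n, the expected climb time to state 1000 is
# sum_{n < 1000} 1/n, about log(1000) -- reaching ever-larger states
# takes longer and longer, so there is no blow-up.
t_linear = pure_birth_time(lambda n: n, 1000, random.Random(42))
```

The expected time to reach state N grows like (1/λ) log N, which is exactly the "population grows exponentially" statement read in reverse: exponential growth in time means logarithmic time to reach a given size.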

But if you let λ(n) = λn^2, then the process "blows up" almost surely. This isn't surprising from the point of view of a differential equation -- if the population is *n*, it grows at rate λn^2. If we ignore the stochastic elements and model this as an ordinary differential equation, the population *P* satisfies dP/dt = λP^2, and solutions to this have vertical asymptotes: starting from P(0) = P_0, the solution is P(t) = P_0/(1 - λP_0 t), which escapes to infinity at the finite time t = 1/(λP_0). Something similar (I don't pretend to know all the details) happens in the stochastic case.

I then thought: when would you get a population that grows like this? Consider not a population of living beings, but a population of *ideas*. And let us assume that new ideas come into being when two old ideas combine. Then in some idealized world where all pairs of ideas are constantly interacting, one might expect that the rate at which new ideas come into existence is proportional to the number of *pairs* of ideas. Of course, this is a silly model, because ideas don't interact all by themselves, but rather in people's brains -- and the population of people grows just exponentially. Still, this feels like it could explain why certain ideas seem to "come out of nowhere" -- at first in some area of intellectual inquiry there isn't much there, because new ideas require new insights, but at some point combining *old* insights in new ways becomes a viable way to come up with new ideas.

(Of course, checking this against reality would be quite difficult. For one thing, how do you count ideas?)

## 1 comment:

For that matter, how do you separate sub-ideas? Or trace the proper genealogy of ideas? Or track their movement and interaction?

This post launched me into a science-fiction dream where, at some point in the future, computational linguistics and semantics and knowledge representation (and whatever other disciplines are necessary) have advanced to the point where we can monitor and track the population of ideas. I don't know what I would *think* of the data thus obtained, though. (And if that led to new ideas, what delightful layers of recursion would exist!)