
14 October 2008

Schwitzgebel and Cushman's "moral sense test"

Eric Schwitzgebel (a philosopher at U.C. Riverside) and Fiery Cushman (a psychologist at Harvard) have designed a "Moral Sense Test" that asks respondents for their takes on various moral dilemmas. They're looking to compare the responses of philosophers and non-philosophers, so they've asked me to post a link to their test from this blog. They say that people who have taken other versions of this test have found it interesting to ponder the moral dilemmas they ask about. The test should take about 15-20 minutes and can be found here.

The test says "Please do not discuss the questions with anybody else, or consult any texts or outside material, while you are taking the test." I suspect that some of my readers will want to comment on the test, so if you intend to take the test, please don't read the comments to this post until after you've done so.

02 December 2007

Recitation instructors, TV pundits, and Poincare

I've been reading through the archives of Eliezer Yudkowsky's Overcoming Bias, and I found an interesting post:

Focus Your Uncertainty talks about the plight of a novice TV pundit who has to guess what the market will do in advance, in order to decide how ey will allocate time to preparing remarks, so that ey can explain what happened after the fact.

I face much the same problem in teaching recitations for calculus classes. My recitations are very much driven by the students, in that I spend most of the time answering their questions. Which questions do I prepare for ahead of time, and which do I just decide I'll figure out on the fly? Here there is another wrinkle -- there are some questions that students aren't that likely to ask about, but which will take a long time to prepare.

In the case of the TV pundit, I'd almost say that having a long explanation for a certain action in the market is in itself evidence for that action being unlikely, basically by Occam's razor. This doesn't carry over to the calculus classes -- the things having long explanations (that is, the hard problems) are exactly the ones the students are most likely to ask about.
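The allocation problem can be sketched numerically. Here's a toy version with made-up probabilities and preparation costs (none of these numbers are from Yudkowsky's post); one simple heuristic is to allocate prep time in proportion to each outcome's probability times the cost of preparing its explanation:

```python
# Toy model: allocate a limited prep-time budget across possible
# market outcomes. The probabilities and prep costs are invented
# for illustration.

outcomes = {
    # outcome: (probability, hours needed to fully prepare its explanation)
    "bonds up":   (0.6, 1.0),
    "bonds down": (0.3, 2.0),
    "bonds flat": (0.1, 4.0),
}

total_hours = 3.0

# Weight each outcome by probability * cost, then scale to the budget.
weights = {k: p * cost for k, (p, cost) in outcomes.items()}
scale = total_hours / sum(weights.values())
allocation = {k: w * scale for k, w in weights.items()}

for outcome, hours in allocation.items():
    print(f"{outcome}: {hours:.2f} hours")
```

Note that under this heuristic the hard-but-unlikely outcomes still claim a real share of the budget, which matches the calculus-recitation wrinkle: low probability times high prep cost isn't negligible.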

Yudkowsky writes, in that post:
Alas, no one can possibly foresee the future. What are you to do? You certainly can't use "probabilities". We all know from school that "probabilities" are little numbers that appear next to a word problem, and there aren't any little numbers here. Worse, you feel uncertain. You don't remember feeling uncertain while you were manipulating the little numbers in word problems. College classes teaching math are nice clean places, therefore math itself can't apply to life situations that aren't nice and clean. You wouldn't want to inappropriately transfer thinking skills from one context to another. Clearly, this is not a matter for "probabilities".

That's something to keep in mind -- in working with probability one doesn't feel uncertain. Sometimes I feel the same way, and this may be because we've thoroughly axiomatized away the uncertainty. I was recently reading Poincare's book Science and Hypothesis(*), which includes a chapter on "the calculus of probabilities" -- a lot more uncertainty seems to permeate this chapter than would a similar chapter written now, because Poincare lived before the Kolmogorov axioms. But this is an interesting philosophical fact about probability -- we are saying things about uncertainty, but we know that they're true. And sometimes, as with the "probabilistic method" in combinatorics, this allows us to prove things about structures that don't involve uncertainty. I leave this to the philosophers.
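A concrete instance of the probabilistic method (this is Erdos's classical Ramsey bound, not something from Poincare): a uniformly random 2-coloring of the edges of K_6 has, on average, fewer than one monochromatic K_4, so some particular coloring must have none -- a fact with no uncertainty in it, proved by a probability computation.

```python
from math import comb

def expected_mono(n, k):
    """Expected number of monochromatic k-cliques in a uniformly
    random 2-coloring of the edges of the complete graph K_n.
    Each k-clique has comb(k, 2) edges, so it is monochromatic
    with probability 2 * (1/2)**comb(k, 2)."""
    return comb(n, k) * 2 ** (1 - comb(k, 2))

# 15/32 < 1, so some 2-coloring of K_6 has no monochromatic K_4,
# i.e. R(4,4) > 6.
print(expected_mono(6, 4))
```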

(*) I was proud of myself for picking this up at a used bookstore for $6.95 -- but Amazon's selling it for $6.69! (Of course, I ought to not be too proud, since Penn's library almost certainly has it.)

02 September 2007

Another shot at the Doomsday argument

Robin Hanson critiques the Doomsday Argument. This is an argument on the lifespan of the human species, which begins from the following principle: there is nothing special about present-day humans. "Therefore" we can consider the number of humans who have lived so far as a fraction of the number of humans who will ever live; the probability that this fraction is between p and q is q-p. I put "therefore" in quotes because the implication is tempting, but one could equally well conclude that the amount of time there have been humans, as a fraction of the amount of time there will ever be humans, has this same distribution. (Indeed, I've heard both versions of the argument.) The first version of this argument says, for example, that the probability that there will be at least sixty billion more humans is one-half; the second says that the probability that we as a species will survive for another two hundred thousand years or so is one-half. (I'm assuming there have been sixty billion people who've ever lived and that our species is 200,000 years old.)
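Both versions reduce to the same computation: if the fraction f of humans (or of time) elapsed so far is uniform on (0, 1], then the eventual total is (past amount)/f, and the median total is twice what we've seen. A sketch using the figures above:

```python
# Doomsday argument, uniform-fraction version.
# If f = (humans so far) / (humans ever) is uniform on (0, 1],
# then P(total >= past / q) = P(f <= q) = q.

past_humans = 60e9      # people who have lived so far (figure assumed above)
past_years = 200_000    # age of the species (figure assumed above)

def doomsday_quantile(past, q):
    """The total such that P(actual total >= this) = q."""
    return past / q

# Median case (q = 1/2): the total doubles what we've seen so far.
print(doomsday_quantile(past_humans, 0.5))  # 120 billion total, 60 billion to come
print(doomsday_quantile(past_years, 0.5))   # 400,000 years total, 200,000 to come
```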

And indeed there are other classes of beings that you can use as the reference class here. Living things, for example. Or vertebrates, or living cells, or humans, or even such classes as "humans who have lived after the year X", which get kind of ridiculous. That last one is particularly prone to abuse, as we can simultaneously say that humanity has a fifty percent chance of surviving past 2114 (if we take X = 1900) and past 3014 (if we take X = 1000).
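The two dates come from applying the median-doubling rule to each reference class, counting from 2007, when this was written:

```python
# Reference-class sensitivity: the same doubling rule gives
# different doomsdays. The class "humans after year X" has existed
# for (2007 - X) years, so its median total lifespan is twice that,
# ending in year X + 2 * (2007 - X).

def median_doomsday(x, now=2007):
    elapsed = now - x
    return x + 2 * elapsed

print(median_doomsday(1900))  # 2114
print(median_doomsday(1000))  # 3014
```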

The name "Doomsday argument" is rather misleading, too. "Doomsday" is usually seen as a bad thing. But what comes after humanity might be the "posthumans" that the people who believe in a technological singularity talk about; is that really doom? Hanson gives a quantitative version of this where there are several "toy universes".

I've talked before about how I'm not entirely comfortable with the "Copernican principle" from which this is derived. For some reason I am much more uncomfortable with this than I would be with the equivalent line of reasoning applied to non-human objects. If I had an urn containing balls labeled from 1 to N, and I didn't know N, and I reached in and grabbed a ball marked 100, I'd say in a heartbeat that the urn probably contained around 200 balls. But the difference is that in the Doomsday Argument we don't even know what the urn is.
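The urn intuition can be checked by simulation: if the true number of balls is N and the draw is uniform on 1..N, then doubling the observed label recovers N as a median estimate. (A minimal Monte Carlo sketch; in the actual Doomsday Argument we of course can't rerun the draw, and don't even know what the urn is.)

```python
import random

# Check the urn intuition: draw a ball uniformly from 1..N and
# estimate N as twice the observed label. The median of the
# estimates should sit near the true N, and the estimate should
# be at least the true N about half the time.

random.seed(0)
true_n = 200
trials = 100_000

estimates = sorted(2 * random.randint(1, true_n) for _ in range(trials))
median_estimate = estimates[trials // 2]
print(median_estimate)  # close to true_n = 200

at_least = sum(e >= true_n for e in estimates) / trials
print(round(at_least, 3))  # about 0.5
```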

The Doomsday argument is supposedly only provisional, until such time as we have better knowledge of how long societies tend to last. This is in my mind one of the most useful reasons for trying to find extraterrestrial intelligence; the knowledge that they do exist (or even a thorough search which doesn't turn up anything) would give us substantial information about how long we might expect to last.

When I studied biochemistry I thought something similar. Essentially all known life forms on Earth have similar biochemistry, because we all evolved from the same ancestors. So an introductory biochemistry class essentially consists of the memorization of those mechanisms. What I would have wanted to see is, say, a dozen or so independently evolved biochemistries, and then see which features of our own biochemistry are just accidents of evolution and which are essential to having complex, self-replicating systems.

15 August 2007

Are we living in a simulation?

The Simulation Argument is discussed at George Dvorsky's Sentient Developments, after being mentioned in an article yesterday by John Tierney in the New York Times. It's due to Nick Bostrom, whose original paper is available online.

The argument is as follows: "posthumans", that is, the people of the future who have much better computers than we do, will use their computers to run simulations of, well, more primitive humans. These simulations will be so detailed that they include a working virtual nervous system for all the people inside. And you have to figure that they're not going to run just one of these simulations; the future people run these simulations for fun! (This seems reasonable; I've spent way too much time playing SimCity to argue against this.) So we should expect that over the lifetime of humanity there are a very large number of such simulations being run, and that it is therefore very unlikely that we live in the "real world".

For me, this bears a superficial similarity to Pascal's Wager, although the mathematics is different; for one thing, Pascal's Wager involves an infinite payoff and there are no infinities here. But it's probably useful to think of there being effectively an infinite number of these simulations, in which case the probability that we're living in the "real world" turns out to be essentially zero. (Bostrom doesn't go this far; he says he feels there's about a 20 percent chance we're living in a computer simulation, which basically means he figures there's a 20 percent chance that civilization gets to the simulated-reality stage, if you neglect the probability that there are simulated realities but we're in the real one.) I suspect the real reason this reminds me of Pascal's Wager is because it seems natural to equate the runner of the simulation with "God".
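The counting here is simple: with one real world and N indistinguishable simulations, the chance of being in the real one is 1/(N+1), which goes to zero as N grows; Bostrom's 20 percent then comes almost entirely from the chance that the simulations get run at all. (A sketch; the one-million figure for N is an assumption for illustration.)

```python
# Probability of being in the real world, given n simulations
# indistinguishable from the inside plus one real world.
def p_real(n_sims):
    return 1 / (n_sims + 1)

for n in (1, 10, 1_000_000):
    print(n, p_real(n))

# Rough Bostrom-style estimate:
# P(simulated) = P(simulations get run) * P(in a simulation | they run).
p_sims_exist = 0.2   # chance civilization reaches the simulation stage
n = 1_000_000        # assumed number of simulations, if any are run
print(p_sims_exist * (1 - p_real(n)))  # essentially 0.2
```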

What's especially strange, at first, is the idea that the simulations could have simulations within them. This reminds me of a cosmological theory that universes "evolve" by spouting black holes; in this theory, black holes are connections to other universes, where the various physical constants are slightly different than in the parent universe; thus there's a sort of Darwinian selection for universes, where the selective pressure is towards making lots of black holes. (Then why don't we live in a universe with lots of black holes?)

As to the simulations within simulations -- if you carry this to the logical extreme, we are likely to live in some very deeply nested simulation. The problem is that infinite nesting probably isn't possible. And does the level of nesting even matter? My first instinct is to think that nested simulations would necessarily be of "lower fidelity" than first-level simulations, but since everything is digital this need not be true, as Bostrom himself points out.

However, he also points out the disturbing fact that since a posthuman society would require more computing power to simulate -- you've got to simulate what all the computers are doing quite well -- if we head towards being posthuman we might be shut off! Personally I would like to think that the ethics of these simulations require the Simulator to not just shut us off. Presumably the person running the simulation could get on some sort of loudspeaker and let us know what was going on. (Although that might raise other ethical questions -- do simulated realities have some sort of Prime Directive, where you're not supposed to interfere with them?)

The mathematics of the argument seems so simple, though, that I'm almost inclined to throw it out on those grounds alone, along with other things like Pascal's Wager and the Copernican principle. Surely proving the existence of God (and that's what this is, although people don't put it this way) can't be so easy!