Eliezer_Yudkowsky comments on The Contrarian Status Catch-22 - Less Wrong

49 Post author: Eliezer_Yudkowsky 19 December 2009 10:40PM

Comment author: PhilGoetz 21 December 2009 05:52:06AM *  1 point [-]

I haven't seen you take into account the relative costs of error of the two beliefs.

A few months ago, I asked:

Suppose Omega or one of its ilk says to you, "Here's a game we can play. I have an infinitely large deck of cards here. Half of them have a star on them, and one-tenth of them have a skull on them. Every time you draw a card with a star, I'll double your utility for the rest of your life. If you draw a card with a skull, I'll kill you."

How many cards do you draw?

I think that someone who believes in many-worlds will keep drawing cards until they die. Someone who believes in one world might not. An expected-utility maximizer would; but I'm uncomfortable about playing the lottery with the universe if it's the only one we've got.
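To make the arithmetic explicit, here is a quick sketch of my own (it assumes the remaining four-tenths of the cards are blank and that death counts as zero utility, neither of which Omega actually specified) of why a naive expected-utility maximizer never stops drawing:

```python
# Sketch of the card game's per-draw expected utility. Assumptions (mine, not
# Omega's): 0.4 of the cards are blank, and death yields zero further utility.

P_STAR, P_SKULL, P_BLANK = 0.5, 0.1, 0.4

def expected_utility(current_utility, draws_remaining):
    """Expected utility of committing in advance to `draws_remaining` draws."""
    if draws_remaining == 0:
        return current_utility
    return (P_STAR * expected_utility(2 * current_utility, draws_remaining - 1)
            + P_BLANK * expected_utility(current_utility, draws_remaining - 1)
            + P_SKULL * 0.0)  # skull: dead, no further utility

for n in (1, 5, 20):
    print(n, expected_utility(1.0, n))  # grows as 1.4 ** n, so one more draw always looks good
```

Since each draw multiplies expected utility by 0.5 * 2 + 0.4 * 1 = 1.4, the maximizer always prefers one more card, right up until the skull.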

If a rational, ethical one-world believer doesn't continue drawing cards for as long as they can, in a situation where the many-worlds believer would, then we have an asymmetry in the cost of error. Building an FAI that believes in one world, when many worlds is true, causes (possibly very great) inefficiency and repression in order to delay the destruction of all life. Building an FAI that believes in many worlds, when one world is true, results in annihilating all life in short order. This large asymmetry in costs is enough to compensate for a large asymmetry in probabilities.
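With some made-up numbers (the probabilities and relative costs below are purely illustrative assumptions, not figures from this thread), the compensation looks like this:

```python
# Illustrative numbers only: how a large asymmetry in error costs can outweigh
# a large asymmetry in probabilities when deciding what to hardcode into an FAI.

p_one_world   = 0.01   # assumed probability that one-world theory is true
p_many_worlds = 0.99   # assumed probability that many-worlds is true

cost_if_wrongly_assumed_one_world   = 1      # inefficiency and repression (relative units)
cost_if_wrongly_assumed_many_worlds = 1000   # annihilation of all life (relative units)

# Expected cost of hardcoding each belief:
expected_cost_hardcode_many = p_one_world * cost_if_wrongly_assumed_many_worlds   # 10.0
expected_cost_hardcode_one  = p_many_worlds * cost_if_wrongly_assumed_one_world   # 0.99

print(expected_cost_hardcode_many, expected_cost_hardcode_one)
# Even at 99-to-1 odds favouring many-worlds, hardcoding it carries the larger
# expected cost under these assumed relative costs.
```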

(My gut instinct is that there is no asymmetry, and that having a lot of worlds shouldn't make you any less careful with each of them. But that's just my gut instinct.)

I also think that you can't, at present, both be rational about updating in response to the beliefs of others and dismiss one-world theory as dead.

Comment author: Eliezer_Yudkowsky 21 December 2009 05:18:28PM 5 points [-]

Not only is "What do we believe?" a theoretically distinct question from "What do I do about it?", but by your logic we should also refuse to believe in spatially infinite universes and inflationary universes, since they also have lots of copies of us.

Comment author: PhilGoetz 21 December 2009 06:03:18PM *  1 point [-]

Not only is "What do we believe?" a theoretically distinct question from "What do I do about it?"

"What do we believe?" is a distinct question; and asking it is comitting an error of rationality. The limitations of our minds often force us to use "belief" as a heuristic; but we should remember that it is fundamentally an error, particularly when the consequences are large.

You don't do the expected-cost analysis when investigating a theory; you should do it before dismissing a theory. Because if someday you build an AI and hardcode in the many-worlds assumption, having dismissed the one-world hypothesis from your mind many years before and not considered it since, you will be committing a grave Bayesian error, with possibly disastrous consequences.

(My cost-of-error statements above are for you specifically. Most people aren't planning to build a singleton.)

Comment author: benelliott 10 August 2011 09:14:26PM 0 points [-]

I can't speak for Eliezer, but if I were building a singleton I probably wouldn't hard-code my own particular scientific beliefs into it, and even if I did I certainly wouldn't program any theory at 100% confidence.
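A minimal sketch of what "not at 100% confidence" might look like in practice (the credences and payoffs below are invented for illustration, not anything specified in this thread): the agent weights its actions by its credence in each theory rather than conditioning on one of them.

```python
# Illustrative sketch only: acting on a weighted mixture of theories instead
# of hardcoding either one at 100% confidence. All numbers are made up.

credence = {"many_worlds": 0.9, "one_world": 0.1}

# Payoff of each policy if a given theory is true (relative units, assumed).
payoff = {
    "cautious": {"many_worlds": 5,  "one_world": 5},
    "reckless": {"many_worlds": 10, "one_world": -1000},
}

def expected_payoff(policy):
    return sum(credence[theory] * payoff[policy][theory] for theory in credence)

for policy in payoff:
    print(policy, expected_payoff(policy))
# cautious -> 5.0, reckless -> -91.0: the agent acts cautiously even while
# assigning 90% credence to many-worlds, rather than conditioning on it.
```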