Eliezer_Yudkowsky comments on The Contrarian Status Catch-22 - Less Wrong
I haven't seen you take into account the relative costs of error of the two beliefs.
A few months ago, I asked:
I think that someone who believes in many-worlds will keep drawing cards until they die. Someone who believes in one world might not. An expected-utility maximizer would; but I'm uncomfortable about playing the lottery with the universe if it's the only one we've got.
If a rational, ethical one-world believer doesn't continue drawing cards as long as they can, in a situation where the many-worlds believer would, then we have an asymmetry in the cost of error. Building an FAI that believes in one world, when many worlds is true, causes (possibly very great) inefficiency and repression to delay the destruction of all life. Building an FAI that believes in many worlds, when one world is true, results in annihilating all life in short order. This large asymmetry in costs is enough to compensate for a large asymmetry in probabilities.
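The asymmetry argument can be put in expected-cost terms. A toy sketch, with entirely made-up numbers (the probability and both costs are hypothetical, chosen only to illustrate the structure): even if one-world is judged far less probable, the catastrophic cost of wrongly hardcoding many-worlds can dominate the expected-cost comparison.

```python
# Toy expected-cost comparison. All numbers are hypothetical illustrations,
# not claims about actual probabilities or disutilities.
p_one_world = 0.01  # assumed (small) probability that one-world is true

# cost[assumed_theory][true_theory], in arbitrary disutility units:
# wrongly assuming one-world -> inefficiency and delay (moderate cost);
# wrongly assuming many-worlds -> annihilation (enormous cost).
cost = {
    "one-world":   {"one-world": 0,       "many-worlds": 100},
    "many-worlds": {"one-world": 10**6,   "many-worlds": 0},
}

def expected_cost(assumed_theory):
    """Expected cost of hardcoding assumed_theory, averaged over which theory is true."""
    return (p_one_world * cost[assumed_theory]["one-world"]
            + (1 - p_one_world) * cost[assumed_theory]["many-worlds"])

for assumed in ("one-world", "many-worlds"):
    print(assumed, expected_cost(assumed))
```

With these illustrative numbers, hardcoding one-world costs 99 in expectation while hardcoding many-worlds costs 10,000, so the lower-probability hypothesis still wins the decision. This is the standard sense in which cost asymmetries can outweigh probability asymmetries.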
(My gut instinct is that there is no asymmetry, and that having a lot of worlds shouldn't make you any less careful with any one of them. But that's just my gut instinct.)
I also think that you can't, at present, both be rational about updating in response to the beliefs of others, and dismiss one-world theory as dead.
Not only is "What do we believe?" a theoretically distinct question from "What do I do about it?", but by your logic we should also refuse to believe in spatially infinite universes and inflationary universes, since they also have lots of copies of us.
"What do we believe?" is a distinct question; and asking it is committing an error of rationality. The limitations of our minds often force us to use "belief" as a heuristic; but we should remember that it is fundamentally an error, particularly when the consequences are large.
You don't do the expected-cost analysis when investigating a theory; you should do it before dismissing a theory. Because if, someday, you build an AI and hardcode in the many-worlds assumption, because years earlier you dismissed the one-world hypothesis from your mind and have not considered it since, you will be committing a grave Bayesian error, with possibly disastrous consequences.
(My cost-of-error statements above are for you specifically. Most people aren't planning to build a singleton.)
I can't speak for Eliezer, but if I were building a singleton I probably wouldn't hard-code my own particular scientific beliefs into it, and even if I did, I certainly wouldn't program in any theory at 100% confidence.