From Wolfgang Schwarz's Belief Dynamics Across Fission:

Fred’s home planet, Sunday, is orbited by two moons, Monday and Tuesday. Tonight, while Fred is asleep, his body will be scanned and destroyed; then a signal will be sent to both Monday and Tuesday, where he will be recreated from local matter.

A lot of ink has been spent on how to describe scenarios like this. Should we say that Fred will find himself both on Monday and on Tuesday? Which of the persons awakening on the two moons is identical to the person going to sleep on Sunday? In this paper, I want to look at a different question: what should Fred’s successors believe when they awaken on Monday and on Tuesday? More precisely, how should their beliefs be related to Fred’s beliefs before he went to sleep on Sunday?

Also see: Sleeping Beauty problem.
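For concreteness, here is a minimal toy model of the question being asked — my own sketch, not Schwarz's answer — under one candidate rule: each successor keeps Fred's credences about impersonal matters and splits self-locating credence evenly between the two moons. All numbers are made up for illustration.

```python
from fractions import Fraction

# Fred's pre-fission credences about impersonal propositions.
# (Illustrative numbers only.)
fred_credences = {
    "the transporter works as advertised": Fraction(9, 10),
    "it will rain on Sunday tomorrow":      Fraction(1, 2),
}

def successor_credences(pre_fission, locations):
    """One candidate rule: a successor inherits the original's credences
    about impersonal matters and divides self-locating credence evenly
    among the post-fission locations."""
    beliefs = dict(pre_fission)
    share = Fraction(1, len(locations))
    for loc in locations:
        beliefs[f"I am on {loc}"] = share
    return beliefs

monday_self = successor_credences(fred_credences, ["Monday", "Tuesday"])
for claim, credence in monday_self.items():
    print(f"{claim}: {credence}")
# Under this (contestable) rule, each successor gives
# "I am on Monday" and "I am on Tuesday" credence 1/2 each.
```

Whether this even splitting is the right rule is exactly what the paper is about.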

19 comments

Luke, I'm confused by your occasional links to academic papers. There are a lot of papers out there related to topics we discuss. (I once collected and skimmed or read every paper I could find on the Sleeping Beauty and Absent-Minded Driver problems, and there were something like 50 at that time.) How are you deciding which papers to link to? (Also, I'm surprised you have the time to look at papers like these, now that you're the Executive Director and no longer a researcher.)

It seems better to consolidate these posts into bibliography posts by subject, perhaps updating the old posts. The single-paper posts clog Discussion too much.

ETA: on further consideration, I hold this view less strongly.

I feel slightly more motivated to read these papers if they're presented one at a time.

I don't see a problem with someone saying (implicitly) "hey, I spotted this paper I found interesting on this topic of local interest" and don't see it as "clogging" Discussion.

There's a difference between one person posting one thing, and one person posting many over a short period. The latter person can reduce congestion, without reducing the information conveyed, by consolidating their posts.

I feel that it's better to err on the side of posting too much rather than posting too little. High posting frequency is probably the number one thing that a blog-based community needs to stay alive. (Quality is also important, but that is hardly an issue in Luke's case.)

Posting this link here instead of making a separate discussion post about it.

Bradley, Four Problems about Self-Locating Belief (2012):

In this article I defend the Doomsday Argument, the Halfer Position in Sleeping Beauty, the Fine-Tuning Argument, and the applicability of Bayesian confirmation theory to the Everett interpretation of quantum mechanics. I will argue that all four problems have the same structure, and I give a unified treatment that uses simple models of the cases and no controversial assumptions about confirmation or self-locating evidence. I will argue that the troublesome feature of all these cases is not self-location but selection effects.
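For readers unfamiliar with the term, here is a generic (non-anthropic) illustration of a selection effect in Bayesian updating — my own toy example, not taken from Bradley's paper: the same observation supports different conclusions depending on how it was selected for report.

```python
# Two hypotheses about a coin: fair vs. heads-biased.
priors = {"fair": 0.5, "biased": 0.5}
p_heads = {"fair": 0.5, "biased": 0.9}

def posterior(likelihood):
    """Bayes' rule: posterior proportional to prior times likelihood."""
    unnorm = {h: priors[h] * likelihood[h] for h in priors}
    z = sum(unnorm.values())
    return {h: p / z for h, p in unnorm.items()}

# Case 1: a single flip is shown to us regardless of outcome, and it's heads.
no_selection = posterior({h: p_heads[h] for h in priors})

# Case 2: the reporter scans 10 flips and shows us a heads if there is one.
# "We were shown a heads" is then almost guaranteed under both hypotheses,
# so the observation barely discriminates between them.
with_selection = posterior({h: 1 - (1 - p_heads[h]) ** 10 for h in priors})

print(no_selection)    # the biased hypothesis is noticeably favored
print(with_selection)  # the posterior stays close to the priors
```

Bradley's claim, as I read the abstract, is that the four famous puzzles turn on getting this kind of selection mechanism right, not on anything special about self-location.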

I'm guessing this is one of Luke's ways of drawing attention to rationality research in the academic world, and to its significance for the LW community.

I hypothesize that he sees a gap there that needs more bridging.

(just my humble estimation)

I have yet to see how any of this untestable mumbo-jumbo is related to rationality or to existential risk... The question to ask is not what the Fred clones (or Sleeping Beauties) should believe, but in what circumstances their beliefs would matter.

Human rationality and FAI design are both about producing real-world approximations to some mathematical ideal of what we mean by "rationality". Puzzles in anthropics and decision theory suggest that our mathematical idealization of rationality is wrong or at least incomplete. Some people want to get this stuff sorted out so that we can make sure we're not approximating the wrong thing.

"Untestable" is irrelevant here. Essentially, the question in many of similar setups is, "What's a belief, how does it work, what does it mean?"

This is exactly the sort of thinking an AI will go through, given test scenarios where duplicate copies are brought online at different times and places.

I don't follow... Can you give an example where beliefs would matter?

I am not sure if I am answering your question, but:

a) if an AI is trying to maximize X, and it has the option of doing Y, then it matters whether the AI believes that doing Y achieves X. For example, an asteroid is going to hit the Earth, and it is not possible to completely avoid human deaths, which the AI tries to avoid. But the AI could scan all people and recreate them on another planet -- is this the best solution (all human lives saved) or the worst one (all humans killed, with copies created later)? (A toy sketch follows this comment.)

b) it's not only about what the AI believes; human beliefs are also important, because they contribute to people's happiness, and the AI cares about human happiness. Should the AI avoid doing things that, according to its own understanding, are harmless (and even have positive side effects), but that people believe wrong them, which makes them unhappy? In the example above, will the re-created people have nightmares about being copies (and about being unprotected from murder-and-copy by the AI in case of another asteroid)?
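Here is a toy sketch of point a), with made-up numbers and a hypothetical "partial evacuation" alternative: the AI's ranking of "scan, destroy, and recreate" flips depending solely on whether it believes the recreated people are the originals.

```python
POPULATION = 7_000_000_000

def expected_deaths(action, copies_are_survivors):
    """Deaths as evaluated by an AI trying to minimize human deaths.
    `copies_are_survivors` encodes the AI's belief about whether a
    scanned-and-recreated person is the original person.
    Numbers are invented for illustration."""
    if action == "partial evacuation":
        return int(0.99 * POPULATION)     # saves only a small minority
    if action == "scan and recreate":
        return 0 if copies_are_survivors else POPULATION
    raise ValueError(action)

for belief in (True, False):
    best = min(["partial evacuation", "scan and recreate"],
               key=lambda a: expected_deaths(a, belief))
    print(f"copies count as survivors = {belief}: choose '{best}'")
# The same physical action is either the best or the worst available
# option, depending only on what the AI believes scanning amounts to.
```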

I sort of see your point now.

My guess would be that some people would shrug and go on with their (recreated) lives, some would grumble a bit first, and a tiny minority would be so traumatized by the thought that they would be unable to live with it and might even commit suicide; but on the whole, if the new life is not vastly different from the old one, it would be a non-event.

I agree with point b), more or less. Note that the AI (let's call it by its old name, God, shall we?) also has the option of not revealing what happened, if revealing it would be detrimental to the humans' happiness.

I know almost nothing but the Sleeping Beauty Problem, so this seems like a good learning opportunity.

I answer that these clones have no more reason to doubt Fred's cognitive abilities than Fred did, and thus have no new reason to re-evaluate all their beliefs. Well, unless they believed that copying people was impossible.

Well, Fred believed that he was on earth. Should the clones not re-evaluate that belief? Somewhere they seem to have acquired new information.

Well obviously they're going to need to draw new conclusions from new information. But the gist of the question is whether they should trust their original's conclusions about religion, politics, etc. And they should.

He makes the critical mistake of treating probabilities as inherent properties of the coin flip, rather than as an expression of an agent's knowledge. Specifically, the "move probability to the successor states" operation gets formulated as if the universe kept track of a special variable called "probability," rather than probability being something that people do based on information.

Other than that, pretty good, but one critical mistake is all it takes.
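To illustrate the distinction the parent comment draws (a minimal sketch with made-up states and credences, not the paper's formalism): in one picture, probability is a variable attached to world states and "moved" to successor states; in the other, it is a function of an agent's information, so the same state can get different probabilities from differently informed agents.

```python
import dataclasses

# Picture A (the one criticized above): probability as a variable the
# world carries around and "moves" to successor states at fission.
@dataclasses.dataclass
class WorldState:
    name: str
    probability: float   # a single number attached to the state itself

fission = [WorldState("awake on Monday", 0.5),
           WorldState("awake on Tuesday", 0.5)]
for s in fission:
    print(s.name, s.probability)   # each successor state "carries" 0.5

# Picture B: probability as a feature of an agent's information.
def credence(state_name, information):
    """A toy credence function: what an agent with this information
    should believe about which state it is in."""
    if information == "I was told I am on Monday":
        return 1.0 if state_name == "awake on Monday" else 0.0
    if information == "I only know fission occurred":
        return 0.5
    raise ValueError(information)

# The same state gets different probabilities from differently informed
# agents -- something Picture A has no way to represent.
print(credence("awake on Monday", "I only know fission occurred"))   # 0.5
print(credence("awake on Monday", "I was told I am on Monday"))      # 1.0
```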