shminux comments on Self-locating beliefs across identity fission - Less Wrong

Post author: lukeprog 03 March 2012 03:14AM




Comment author: shminux 03 March 2012 04:53:24AM 4 points [-]

I have yet to see how any of this untestable mumbo-jumbo relates to rationality or to existential risk... The question to ask is not what the Fred clones (or Sleeping Beauties) should believe, but in what circumstances their beliefs would matter.

Comment author: Giles 03 March 2012 06:27:36PM 4 points [-]

Human rationality and FAI design are both about producing real-world approximations to some mathematical ideal of what we mean by "rationality". Puzzles in anthropics and decision theory suggest that our mathematical idealization of rationality is wrong or at least incomplete. Some people want to get this stuff sorted out so that we can make sure we're not approximating the wrong thing.

Comment author: Vladimir_Nesov 03 March 2012 10:45:03AM *  2 points [-]

"Untestable" is irrelevant here. Essentially, the question in many similar setups is, "What's a belief, how does it work, what does it mean?"

Comment author: drethelin 03 March 2012 05:04:09AM 0 points [-]

This is exactly the sort of thinking an AI will go through, given test scenarios where duplicate copies are brought online at different times/places.

Comment author: shminux 03 March 2012 06:00:39AM 0 points [-]

I don't follow... Can you give an example where beliefs would matter?

Comment author: Viliam_Bur 03 March 2012 10:41:09AM *  1 point [-]

I am not sure if I am answering your question, but:

a) if an AI is trying to maximize X, and it has the possibility of doing Y, then it matters whether the AI believes that Y is X. For example, an asteroid is going to hit the Earth, and it is not possible to completely avoid human deaths, which the AI tries to avoid. But the AI could scan all people and recreate them on another planet -- is this the best solution (all human lives saved) or the worst one (all humans killed, with copies created later)?

b) it's not only about what the AI believes; human beliefs are also important, because they contribute to human happiness, and the AI cares about human happiness. Should the AI avoid doing things that, according to its understanding, are harmless (with some positive side effects), but which people believe wrong them, making them unhappy? In the example above, will the re-created people have nightmares about being copies (and about being unprotected from murder-and-copy by the AI in case of another asteroid)?

Comment author: shminux 03 March 2012 06:07:50PM -1 points [-]

I sort of see your point now.

My guess would be that some people would shrug and go on with their (recreated) lives, some would grumble a bit first, and a tiny minority would be so traumatized by the thought that they would be unable to go on living and might even commit suicide; but on the whole, if the new life is not vastly different from the old one, it would be a non-event.

I agree with point b), more or less. Note that the AI (let's call it by its old name, God, shall we?) also has the option of not revealing what happened, if telling the humans would be detrimental to their happiness.