CarlShulman comments on Preference For (Many) Future Worlds - Less Wrong

Post author: wedrifid 15 July 2011 11:31PM (18 points)

Comment author: CarlShulman 16 July 2011 01:39:50AM 20 points

I think this sidesteps the underlying intuitions too quickly. We have cognitive mechanisms to predict "our next experience," memories of this algorithm working well, and preferences defined in terms of "our next experience." If we become convinced by the data that this model of a unique thread of experience is false, we then face the problem of translating preferences defined in terms of that false model. We don't start with total-utilitarian-like preferences over the fates of our future copies (i.e., most people aren't eager to greatly lower their standard of living in exchange for being copied many times, with each copy also having a low standard of living), so one needs to explain why our naive intuitions should be translated into the additive framework rather than something more like averaging.
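
The contrast between the additive framework and averaging can be made concrete with a toy calculation (a hypothetical sketch with made-up welfare numbers, not from the original discussion):

```python
# Two ways of aggregating welfare across future copies. Which one we
# translate our naive intuitions into determines whether many low-welfare
# copies are preferred to one high-welfare life.

def total_utility(welfares):
    """Total-utilitarian-like aggregation: sum welfare over all copies."""
    return sum(welfares)

def average_utility(welfares):
    """Averaging aggregation: mean welfare across copies."""
    return sum(welfares) / len(welfares)

one_rich_life = [10.0]        # a single copy with a high standard of living
many_poor_lives = [2.0] * 10  # ten copies, each with a low standard of living

# Under the additive framework the many low-welfare copies win (20 > 10);
# under averaging the single high-welfare life wins (10 > 2).
print(total_utility(one_rich_life), total_utility(many_poor_lives))      # 10.0 20.0
print(average_utility(one_rich_life), average_utility(many_poor_lives))  # 10.0 2.0
```

The two rules agree for a single copy and diverge as soon as copies multiply, which is exactly why the choice of translation has to be argued for rather than assumed.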

Comment author: wedrifid 16 July 2011 03:50:41PM 10 points

> I think this sidesteps the underlying intuitions too quickly.

I think you are right. I also seem not to have conveyed quite the same position as the one I intended. That is:

  • Quantum suicide is not something that you "believe in"; rather, it is a preference that, in all worlds in which you don't win, you are killed.
  • This is a valid, coherent and not intrinsically irrational goal.
  • You don't get more "winningness" by killing yourself.
  • The Everett branches in which you are killed are just as real as the ones where you are alive. They are not trimmed from reality.

These are the points I have found myself wishing I had a post to link to when asked to explain my position. Going on to explain in detail why I have the preferences I have would open up another post or three's worth of discussion: whether existence in more branches is equivalent to having copies, and a bunch of related philosophical questions like those you allude to.

Comment author: steven0461 16 July 2011 03:41:17AM 2 points

> We have ... preferences in terms of "our next experience." If we become convinced by the data that this model of a unique thread of experience is false, we then have problems in translating preferences defined in terms of that false model.

In what sense would I want to translate these preferences? Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences in the light of its new, improved world-model? If I'm asking myself, as if for the first time, the question, "if there are going to be a lot of me-like things, how many me-like things with how good lives would be how valuable?", then the answer my brain gives is that it wants to use empathy and population ethics-type reasoning to answer that question, and that it feels no need to ever refer to "unique next experience" thinking. Is it making a mistake?

Comment author: Wei_Dai 17 July 2011 05:36:33PM 4 points

> In what sense would I want to translate these preferences?

I think in the sense that the new world-model ought to add up to normality. The move you propose probably only works (i.e., is intuitively acceptable) for someone who already has a strong intuition that they ought to apply empathy and population-ethics-type reasoning to all decisions, not just those that only affect other people. For others who don't share that intuition, switching from the "unique thread of experience" model to empathy and population-ethics-type reasoning would imply making radically different decisions, even for current real-world (i.e., not thought-experiment) decisions such as whether to donate most of their money to charity: the former says "no" while the latter says "yes", since the difference in empathy between "someone like me" and "a random human" isn't that great.

Comment author: Peter_de_Blanc 17 July 2011 01:02:32PM 2 points

> Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences

What makes you think a mind came up with them?

Comment author: steven0461 17 July 2011 04:33:21PM 0 points

I don't understand what point you're making; could you expand?

Comment author: Peter_de_Blanc 18 July 2011 02:27:30AM 2 points

You can't use the mind that came up with your preferences if no such mind exists. That's my point.

Comment author: steven0461 18 July 2011 02:50:17AM 0 points

What would have come up with them instead?

Comment author: Peter_de_Blanc 18 July 2011 05:02:39AM 5 points

Evolution.

Comment author: steven0461 19 July 2011 06:24:36PM 1 point

In the sense that evolution came up with my mind, or in some more direct sense?

Comment author: CarlShulman 16 July 2011 04:17:37AM 2 points

That's one approach to take, with various attractive features, but in that case one needs to be careful when thinking about thought experiments like those Wei Dai offers (which implicitly call on the thread model).

Comment author: Manfred 16 July 2011 11:23:19PM -1 points

Well, assuming that you generally don't want to die, quantum suicide is irrational (it violates the independence of irrelevant alternatives). The extent to which we should do irrational things because we want to is definitely something to think about, but I think it's also alright to just say "it's irrational and that's bad."