steven0461 comments on Preference For (Many) Future Worlds

Post author: wedrifid 15 July 2011 11:31PM 18 points

Comment author: steven0461 16 July 2011 03:41:17AM 2 points

We have ... preferences in terms of "our next experience." If we become convinced by the data that this model of a unique thread of experience is false, we then have problems in translating preferences defined in terms of that false model.

In what sense would I want to translate these preferences? Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences in the light of its new, improved world-model? If I'm asking myself, as if for the first time, the question "if there are going to be a lot of me-like things, how many me-like things with how good lives would be how valuable?", then the answer my brain gives is that it wants to use empathy and population-ethics-type reasoning to answer that question, and that it feels no need to ever refer to "unique next experience" thinking. Is it making a mistake?

Comment author: Wei_Dai 17 July 2011 05:36:33PM 4 points

In what sense would I want to translate these preferences?

I think in the sense that the new world-model ought to add up to normality. The move you propose probably only works (i.e., is intuitively acceptable) for someone who already has a strong intuition that they ought to apply empathy and population-ethics-type reasoning to all decisions, not just those that only affect other people. For those who don't share such an intuition, switching from "unique thread of experience" thinking to empathy and population-ethics-type reasoning would imply making radically different decisions, even for current real-world (i.e., not thought-experiment) decisions, such as whether to donate most of their money to charity: the former says "no" while the latter says "yes", since the difference in empathy level between "someone like me" and "a random human" isn't that great.

Comment author: Peter_de_Blanc 17 July 2011 01:02:32PM 2 points

Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences

What makes you think a mind came up with them?

Comment author: steven0461 17 July 2011 04:33:21PM 0 points

I don't understand what point you're making; could you expand?

Comment author: Peter_de_Blanc 18 July 2011 02:27:30AM 2 points

You can't use the mind that came up with your preferences if no such mind exists. That's my point.

Comment author: steven0461 18 July 2011 02:50:17AM 0 points

What would have come up with them instead?

Comment author: Peter_de_Blanc 18 July 2011 05:02:39AM 5 points

Evolution.

Comment author: steven0461 19 July 2011 06:24:36PM 1 point

In the sense that evolution came up with my mind, or in some more direct sense?

Comment author: CarlShulman 16 July 2011 04:17:37AM 2 points

That's one approach to take, with various attractive features, but in that case one needs to be careful when thinking about thought experiments like those Wei Dai offers (which implicitly call on the thread model).