Peter_de_Blanc comments on Preference For (Many) Future Worlds - Less Wrong

Post author: wedrifid 15 July 2011 11:31PM


Comment author: CarlShulman 16 July 2011 01:39:50AM, 20 points

I think this sidesteps the underlying intuitions too quickly. We have cognitive mechanisms to predict "our next experience," memories of this algorithm working well, and preferences in terms of "our next experience." If we become convinced by the data that this model of a unique thread of experience is false, we then have problems in translating preferences defined in terms of that false model. We don't start with total utilitarian-like preferences over the fates of our future copies (i.e., most people aren't eager to sharply lower their standard of living in order to be copied many times, with each copy also living at a low standard), and one needs to explain why our naive intuitions should be translated into the additive framework rather than something more like averaging.

Comment author: steven0461 16 July 2011 03:41:17AM, 2 points

We have ... preferences in terms of "our next experience." If we become convinced by the data that this model of a unique thread of experience is false, we then have problems in translating preferences defined in terms of that false model.

In what sense would I want to translate these preferences? Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences in the light of its new, improved world-model? If I'm asking myself, as if for the first time, the question, "if there are going to be a lot of me-like things, how many me-like things with how good lives would be how valuable?", then the answer my brain gives is that it wants to use empathy and population ethics-type reasoning to answer that question, and that it feels no need to ever refer to "unique next experience" thinking. Is it making a mistake?

Comment author: Peter_de_Blanc 17 July 2011 01:02:32PM, 2 points

Why wouldn't I just discard the preferences, and use the mind that came up with them to generate entirely new preferences

What makes you think a mind came up with them?

Comment author: steven0461 17 July 2011 04:33:21PM, 0 points

I don't understand what point you're making; could you expand on it?

Comment author: Peter_de_Blanc 18 July 2011 02:27:30AM, 2 points

You can't use the mind that came up with your preferences if no such mind exists. That's my point.

Comment author: steven0461 18 July 2011 02:50:17AM, 0 points

What would have come up with them instead?

Comment author: Peter_de_Blanc 18 July 2011 05:02:39AM, 5 points

Evolution.

Comment author: steven0461 19 July 2011 06:24:36PM, 1 point

In the sense that evolution came up with my mind, or in some more direct sense?