wedrifid comments on Cryonics without freezers: resurrection possibilities in a Big World - Less Wrong

40 Post author: Yvain 04 April 2012 10:48PM




Comment author: wedrifid 05 April 2012 06:17:18PM *  3 points [-]

Well, that was kind of a rhetorical question;

It was a question that got a straight answer, one in accord with Yvain's position. To the extent that such rhetorical questions receive answers that do not require the target of the rhetoric to change their mind, the implied argument can be considered refuted.

In practical social usage, rhetorical questions can indeed often be used as a way to make arguments that an opponent is not allowed to respond to. Here on Less Wrong we are free to reject that convention, so not only are literal answers to rhetorical questions acceptable, but arguments hidden behind rhetorical questions should be considered at least as subject to criticism as arguments made openly.

Comment author: Dmytry 05 April 2012 06:24:18PM *  0 points [-]

Well, it is the case that humans often strive to have consistent goal systems (perhaps minimizing their descriptive complexity?), so while he can just say 'because I defined my goal system to be this', he is also likely to think about it and try to come up with some kind of general principle that does not have weird discontinuities at the point where the amnesia becomes too much like death for his taste.

Edit: I think we are talking about different issues. I'm not making a point about his utility function; I'm making a point that he is expecting to become 'him, 20 points dumber and with a terrible headache', who is a rather different person, rather than to become someone in a galaxy far, far away who doesn't have the hangover and is thus more similar to him before the hangover.

Comment author: wedrifid 05 April 2012 06:29:20PM *  1 point [-]

Well, it is the case that humans often strive to have consistent goal systems (perhaps minimizing their descriptive complexity?), so while he can just say 'because I defined my goal system to be this'

Yvain does present a consistent goal system. It is one that may appear either crazy or morally abhorrent to us, but all indications are that it is entirely consistent. If you were attempting to demonstrate to Yvain an inconsistency in his value system that requires arbitrary complexity to circumvent, then you failed.

Comment author: Dmytry 05 April 2012 06:31:17PM 0 points [-]

I think you're misunderstanding me; see the edit. The point I am making is not so much about his values, but about his expectations of subjective experience.

Comment author: wedrifid 05 April 2012 06:33:35PM *  1 point [-]

The point I am making is not so much about his values, but about his expectations of subjective experience.

Yvain's expectations of subjective experience actually seem sane to me. Only his values (and hence his expected decision-making) are weird.

Comment author: Dmytry 05 April 2012 06:40:48PM *  1 point [-]

Well, my argument is that you can propose a battery of possible partial quantum suicide setups involving a machine that partially destroys you (e.g. you are anaesthetised and undergo a lobotomy with varying extents of cutting, or something of this sort, such as administration of a sublethal dose of neurotoxin). At some point there is so little of you left that you are as good as dead; at some other point there is so much of you left that you don't really expect to be quantum-saved. Either he has some strange continuous function in between, which I am very curious about, or he has a discontinuity, which is weird. (I am guessing a discontinuity, but I'd be interested to hear about the function.)