pedanterrific comments on Greg Linster on the beauty of death - Less Wrong

Post author: Jonathan_Graehl 20 October 2011 04:47AM

You are viewing a single comment's thread.

Comment author: Logos01 20 October 2011 03:13:40PM 0 points

> Surely the net value of having the option depends on the magnitude of the chance that I will, given the option, choose suicide in situations where the expected value of my remaining span of life is positive?

Well -- here's the thing. These sorts of scenarios are relatively easy to safeguard against. For example (addressing the 'wrong button' worry and also transient emotional states): require the suicide mechanism to take two thousand years from initiation to irreversibility. Another example, tailored the other way: since we're already talking about altering or substituting physiology drastically, and psychological states depend on physiological states, it stands to reason that any being capable of engineering immortality could also engineer cognitive states.

I find it hard to imagine that accidentally pressing the wrong button for two thousand years straight is even possible, given human timeframes -- especially if the suicide mechanism is tied to intentionality. Could you even imagine actively desiring to die for that long a period, short of something akin to an AIMS ("And I Must Scream") scenario?


Also, please note that I later described "AIMS" as "total anti-utility that endures for a duration outlasting the remaining lifespan of the universe". So what I'm talking about is a situation where you have a justified true belief that the remaining span of your existence will not merely lack positive value but will consist solely of negative value.

(For me, the concept of "absolute suffering" -- that is, a suffering greater than which I cannot conceive (which has echoes of the Ontological Argument, but isn't fallacious here, since I'm using it as a definition rather than as an argument for its instantiation) -- is not sufficient to induce zero value/utility, let alone negative value/utility. Suffering that "serves a purpose" is acceptable to me; so even an 'eternity' of 'absolute suffering' wouldn't necessarily be AIMS for me, under these terms.)

The point is: such a scenario -- total anti-utility from a given point forward -- has a negligible but non-zero chance of occurring. And over a period of 10^100 years, that negligible chance of any individual occurrence accumulates into a significant one. If we humans manage to manipulate M-Theory to bypass the closed-system state of the universe and thereby stave off the heat death of the universe indefinitely, then ... well, I know that my mind cannot properly grok the sort of scenario we'd then be discussing.
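To make that accumulation concrete -- a minimal sketch, where the per-period probability p and the horizon n are purely illustrative numbers, not figures from the discussion:

```latex
% Chance of at least one occurrence over n independent periods,
% each with per-period probability p:
\[
  P(\text{at least once}) = 1 - (1 - p)^n \approx 1 - e^{-np}
\]
% e.g. even p = 10^{-30} per year over n = 10^{100} years gives
% np = 10^{70}, so the accumulated probability is effectively 1.
```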

Still -- given the options of possibly fucking it up and dying early, or being unable to escape a scenario of unending anti-utility ... I choose the former. Then again, if I were given the choice of doubling my g factor and IQ (or, rather, whatever actual cognitive functions those scores are meant to model) for ten years at the price of dying after those ten ... I think, right now, I would take it.

So I guess that says something about my underlying beliefs in this discussion, as well.

Comment author: pedanterrific 20 October 2011 08:28:00PM 7 points

For me, it's the irreversibility that's the real issue. For any situation that would warrant pressing the suicide switch, would it not be preferable to press the analgesia switch?

Comment author: TheOtherDave 21 October 2011 03:23:30PM 3 points

Not necessarily. If I believed that my continued survival would cause the destruction of everything I valued, suicide would be a value-preserving option and analgesia would not be. More generally: if my values include anything beyond avoiding pain, analgesia isn't necessarily my best value-preserving option.

But, agreed, irreversibility of the sort we're discussing here is highly implausible. But we're discussing low-probability scenarios to begin with.

Comment author: pedanterrific 22 October 2011 02:58:41AM 4 points

> my continued survival would cause the destruction of everything I valued

This is a situation I hadn't thought of, and I agree that in this case, suicide would be preferable. But I hadn't got the impression that's what was being discussed - for one thing, if this were a real worry it would also argue against a two-thousand-year safety interval. I feel like the "Omega threatening to torture your loved ones to compel your suicide" scenario should be separated from the "I have no mouth and I must scream" scenario.

> More generally: if my values include anything beyond avoiding pain, analgesia isn't necessarily my best value-preserving option.

True, but the problem with pain is that its importance in your hierarchy of values tends to increase with intensity. Now I'm thinking of a sort of dead-man's switch where outside sensory information requires voluntary opting-in, and the suicide switch can only be accessed from the baseline mental state of total sensory deprivation, or an imaginary field of flowers, or whatever.
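Purely for illustration, here's a minimal sketch of that gating logic -- every name in it (MentalState, SuicideSwitch, ABORT_WINDOW_YEARS) is hypothetical, and the two-thousand-year delay is borrowed from the safeguard Logos01 proposed upthread:

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional

class MentalState(Enum):
    BASELINE = auto()   # total sensory deprivation / imaginary field of flowers
    SENSING = auto()    # voluntarily opted in to outside sensory information

ABORT_WINDOW_YEARS = 2000.0  # initiation-to-irreversibility delay from upthread

@dataclass
class SuicideSwitch:
    state: MentalState = MentalState.BASELINE
    countdown: Optional[float] = None  # None means not initiated

    def opt_in_to_senses(self) -> None:
        self.state = MentalState.SENSING

    def return_to_baseline(self) -> None:
        self.state = MentalState.BASELINE

    def initiate(self) -> None:
        # Only accessible from the baseline state, so it can never be
        # thrown in direct reaction to incoming (possibly imposed) suffering.
        if self.state is not MentalState.BASELINE:
            raise PermissionError("switch is only accessible from baseline")
        self.countdown = ABORT_WINDOW_YEARS

    def abort(self) -> None:
        # Fully reversible at any point before the window elapses.
        self.countdown = None

    def advance(self, years: float) -> bool:
        # Returns True only once the entire window has elapsed without an abort.
        if self.countdown is None:
            return False
        self.countdown -= years
        return self.countdown <= 0
```

The design point is that initiation requires a deliberate return to baseline, so it can't be triggered mid-suffering, and irreversibility requires the full two thousand years to elapse without a single change of mind.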

> But, agreed, irreversibility of the sort we're discussing here is highly implausible. But we're discussing low-probability scenarios to begin with.

I was mostly talking about the irreversibility of suicide, actually. In an AIMS scenario, where I have every reason to expect my whole future to consist of total, mind-crushing suffering, I would still prefer "spend the remaining lifetime of the universe building castles in my head, checking back in occasionally to make sure the suffering hasn't stopped" to "cease to exist, permanently".

Of course, this is all rather ignoring the unlikelihood of the existence of an entity that can impose effectively infinite, total suffering on you but can't hack your mind and remove the suicide switch.