Comment author: SilasBarta 26 January 2010 01:59:20AM 3 points [-]

That's missing the point of the dilemma. You can assume that they're not workers and that they didn't consent to any risks.

Like JGW said: workers or not, they assumed the risks inherent in being on top of a trolley track. The dude on the bridge didn't. By choosing to be on top of a track, you are choosing to take the risks. It doesn't mean (as you seem to be reading it) that you consent to dying. It means you chose a scenario with risks like errant trolleys.

This problem isn't about assumption of risk; it's about how people perceive their actions as directly causing death, or not.

Why do people talk like this? It's a bright red flag to me that, to put it politely, the discussion won't be productive.

Attention everyone: you don't get to decide what a problem is "about". You have to live with whatever logical implications follow from the problem as stated. If you want the problem to be "about" topic X, then you need to construct it so that the crucial point of dispute hinges on topic X. If you can't come up with such a scenario, you should probably reconsider the point you were trying to make about topic X.

You can certainly argue that people make their judgments about the scenario because of a golly-how-stupid cognitive bias, but you sure as heck don't get to say, "this problem is 'about' how people perceive their actions' causation, all other arguments are automatically invalid".

I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.

Comment author: Technologos 26 January 2010 02:24:25AM 0 points [-]

What if the problem was reframed such that nobody ever found out about the decision and thereby that their estimates of risk remained unchanged?

I presented a reason why intuitions treat the scenarios differently, and why the intuitions are correct in doing so. That reason is consistent with the problem as stated. Assumption of risk most certainly is a factor, and a justifiable one.

It is certainly possible that there is some underlying utilitarian rationale being used. Reframing the problem like I suggest above might provide something of a test of the reason you provided, if imperfect (can we really ignore intuitions on command?).

Comment author: JGWeissman 25 January 2010 11:20:22PM 1 point [-]

I think you missed this part:

This is not to say there aren't real moral dilemmas with the intended tradeoff. It's just that, like with the Prisoner's Dilemma, you need a more convoluted scenario to get the payoff matrix to work out as intended, at which point the situation is a lot less intuitive.

Silas is saying that the Least Convenient World to illustrate this point requires lots of caveats, and is not as simple as the scenario presented.

You can assume that they're not workers and that they didn't consent to any risks.

This is still not inconvenient enough. They are still responsible for being on the track, whether by ignorance or acceptance of the risks.

Comment author: Technologos 26 January 2010 02:20:02AM 1 point [-]

I have a different interpretation of the LCPW here, though. The LCPW is supposed to be the one that isolates the moral quantity of interest--in this case, the decision to push or not, or to switch tracks--and is specifically designed to exclude answers that consider factors (realistic or not) that sidestep the issue.

I'd say the LCPW is one in which nobody will ever hear about the decision, and thus in which any ancillary effects are neutralized.

In response to comment by sbharris on Normal Cryonics
Comment author: Blueberry 21 January 2010 11:36:05PM 6 points [-]

there’s still the problem that you have to think about your own physical mortality in a very concrete way. A way which requires choices, for hours and perhaps even days.

I'm baffled that this is the stumbling block for so many people. I can understand being worried about the cost/uncertainty trade-off, but I really don't understand why it's any less troublesome than buying life insurance, planning a funeral, picking a cemetery plot, writing a will, or planning for cremation. People make choices that involve contemplating their death all the time, and people make choices about unpleasant-sounding medical treatments all the time.

Is it less gruesome than the alternatives of skeletonizing in a flame or slowly decaying? No. But the average person manages to mostly avoid thinking about those alternatives, and the funeral industry helps them do it.

Well, maybe more people would sign up if Alcor's process didn't involve as much thinking about the alternatives? I had thought that the process was just signing papers and arranging life insurance. But if Alcor's process is turning people away, maybe that needs to change.

Maybe I'm just deluding myself: I'm not in a financial position to sign up yet, and I plan on signing up when I am. But I can't see the "creep factor" being an issue for me at all. I have no idea what that would feel like.

In response to comment by Blueberry on Normal Cryonics
Comment author: Technologos 22 January 2010 03:40:41PM 5 points [-]

buying life insurance

For what it's worth, I've heard people initially had many of the same hangups about life insurance, saying that they didn't want to gamble on death. The way that salespeople got around that was by emphasizing that the contracts would protect the family in event of the breadwinner's death, and thus making it less of a selfish thing.

I wonder if cryo needs a similar marketing parallel. "Don't you want to see your parents again?"

In response to That Magical Click
Comment author: VijayKrishnan 21 January 2010 08:36:28AM *  9 points [-]

I am puzzled by Eliezer's confidence in the rationality of signing up for cryonics given he thinks it would be characteristic of a "GODDAMNED SANE CIVILIZATION". I am even more puzzled by the commenters' overwhelming agreement with Eliezer. I am personally uncomfortable with cryonics for the two following reasons and am surprised that no one seems to bring these up.

  1. I can see it being very plausible that somewhere along the line I would be subject to immense suffering, over which death would have been a far better option, but that I would be either potentially unable to take my life due to physical constraints or would lack the courage to do so (it takes quite some courage and persistent suffering to be driven to suicide IMO). I see this as analogous to a case where I am very near death and am faced with the two following options.

(a) Have my life support system turned off and die peacefully.

(b) Keep the life support system going but subsequently give up all autonomy over my life and body and place it entirely in the hands of others who are likely not even my immediate kin. I could be made to put up with immense suffering either due to technical glitches which are very likely since this is a very nascent area, or due to willful malevolence. In this case I would very likely choose (a).

  2. Note that in addition to prolonged suffering where I am effectively incapable of pulling the plug on myself, there is also the chance that I would be an oddity as far as future generations are concerned. Perhaps I would be made a circus or museum exhibit to entertain that generation. Our race is highly speciesist, and I would not trust future generations, with their bionic implants and so on, to even consider me a member of the same species and offer me the same rights and moral consideration.

  3. Last but not least is a point I made as a comment in response to Robin Hanson's post. Robin Hanson expressed a preference for a world filled with more people with scarce per-capita resources compared to a world with fewer people with significantly better living conditions. His point was that this gives many people the opportunity to "be born" who would not have come into existence. And that this was for some reason a good thing. I suspect that Eliezer too has a similar opinion on this, and this is probably another place we widely differ.

    I couldn't care less if I weren't born. As the saying goes, I have been dead/not existed for billions of years and haven't suffered the slightest inconvenience. I see cryonics and a successful recovery as no different from dying and being re-born. Thus I assign virtually zero positives to being re-born, while I assign huge negatives to 1 and 2 above.

    We are evolutionarily driven to dislike dying and try to postpone it for as long as possible. However, I don't think we are particularly hardwired to prefer this form of weird cryonic rebirth over never waking up at all. Given that our general preference not to die has nothing fundamental about it, but is rather a case of us following our evolutionary leanings, what makes it so obvious that cryonic rebirth is a good thing? Some form of longevity research which extends our lives to, say, 200 years, without going the cryonic route with all the above risks (especially for the first few generations of cryonic guinea pigs), seems much harder to argue against.

    Unfortunately all the discussion on this forum including the writings by Eliezer seem to draw absolutely no distinction between the two scenarios:

A. Signing up for cryonics now, with all the associated risks/benefits that I just discussed.

B. Some form of payment for some experimental longevity research that you need to make upfront when you are 30. If the research succeeds and is tested safe, you can use the drugs for free and live to be 200. If not, you live your regular lifespan and merely forfeit the money that you paid to sponsor the research.

I can readily see myself choosing (B) if the rates were affordable and if the probability of success seemed reasonable to justify that rate. I find it astounding that repeated shallow arguments are made on this blog which address scenario (A) as though it were identical to scenario (B).

Comment author: Technologos 22 January 2010 03:33:20PM 4 points [-]

Could you supply a (rough) probability derivation for your concerns about dystopian futures?

I suspect the reason people aren't bringing those possibilities up is that, through a variety of elements including in particular the standard Less Wrong understanding of FAI derived from the Sequences, LWers have a fairly high conditional probability Pr(Life after cryo will be fun | anybody can and bothers to nanotechnologically reconstruct my brain) along with at least a modest probability of that condition actually occurring.

Comment author: Technologos 22 January 2010 03:00:37AM 1 point [-]

Does anyone really expect that this population would not respond to its incentives to avoid more danger? Anecdotes aside, do you expect them to join the military with the same frequency, to be firemen with the same frequency, to be doctors administering vaccinations in jungles with the same frequency?

Agreed--indeed, I suspect that one of the first steps to fundamentally altering the priorities of society may be the invention of methods to materially prolong life, such that it really does become an unspeakable tragedy to lose somebody permanently.

Comment author: wedrifid 21 January 2010 11:41:13PM *  7 points [-]

What does she say that convinces you?

  • I am wired with explosives triggered by an internal heart rate monitor.
  • My husband, right next to me, is 100 kg of raw muscle and armed.
  • I was the lead developer of an AGI that is scheduled to hit start in three weeks. I quit when I saw that the 'Friendliness' intended is actually a dystopia and my protests were suppressed. I have just cancelled my cryonics membership and the reason your cryonic revival is dependent on killing me is that I am planning to sabotage the AI.

  • A catch-all: Humans can always say with sincerity that they would never do something so immoral under any circumstances, without that necessarily changing their behaviour in the moment.

  • Awareness of the above tendency in oneself often comes with the (necessary) willingness to explicitly lie about one's values, for the same reasons one would otherwise have lied to oneself.
  • Related to the above, there is a natural instinct to speak out in outrage against anyone who doesn't condemn such immoral actions, or even against those who don't imply that the answer should be known a priori.
  • This plays a part in the votes your post has received, which is unfortunate. I thank you for making it and hope the magnified downvotes do not put you under the threshold for posting.
Comment author: Technologos 22 January 2010 02:58:34AM 2 points [-]

I was the lead developer of an AGI that is scheduled to hit start in three weeks. I quit when I saw that the 'Friendliness' intended is actually a dystopia and my protests were suppressed. I have just cancelled my cryonics membership and the reason your cryonic revival is dependent on killing me is that I am planning to sabotage the AI.

Is it weird that my first reaction is to ask her specific questions about the Sequences to test the likelihood of that statement's veracity?

Comment author: michaelkeenan 07 January 2010 08:21:26AM *  14 points [-]

I used to argue with a more strident, arrogant tone than I try to adopt now. One influence in changing my tone was Ben Franklin's autobiography:

"I wish well-meaning, sensible men would not lessen their power of doing good by a positive, assuming manner, that seldom fails to disgust, tends to create opposition, and to defeat every one of those purposes for which speech was given to us, to wit, giving or receiving information or pleasure. For, if you would inform, a positive and dogmatical manner in advancing your sentiments may provoke contradiction and prevent a candid attention."

He describes how he cultivated "the habit of expressing myself in terms of modest diffidence; never using, when I advanced any thing that may possibly be disputed, the words certainly, undoubtedly, or any others that give the air of positiveness to an opinion; but rather say, I conceive or apprehend a thing to be so and so; it appears to me, or I should think it so or so, for such and such reasons; or I imagine it to be so; or it is so, if I am not mistaken.

...

When another asserted something that I thought an error, I deny'd myself the pleasure of contradicting him abruptly, and of showing immediately some absurdity in his proposition; and in answering I began by observing that in certain cases or circumstances his opinion would be right, but in the present case there appear'd or seem'd to me some difference, etc. I soon found the advantage of this change in my manner; the conversations I engag'd in went on more pleasantly. The modest way in which I propos'd my opinions procur'd them a readier reception and less contradiction; I had less mortification when I was found to be in the wrong, and I more easily prevail'd with others to give up their mistakes and join with me when I happened to be in the right."

Another influence was Yvain's How To Not Lose An Argument. The common part of Franklin and Yvain's advice is to phrase your message in such a way that minimal status will be lost by your opponent agreeing with you. Your opponent must not see (consciously or subconsciously) your rhetoric as an attempt to gain status at zir expense.

Comment author: Technologos 22 January 2010 02:52:13AM 4 points [-]

Your opponent must not see (consciously or subconsciously) your rhetoric as an attempt to gain status at zir expense.

To quote Daniele Vare: "Diplomacy is the art of letting someone have your way."

Comment author: michaelkeenan 08 January 2010 08:58:04AM 5 points [-]

Good point. Humility and diffidence are optimal when arguing with someone who is already opposed to your position; a tone of certainty can be more effective when speaking to neutrals, especially if they won't hear another side presented to them; and rabble-rousing demagoguery gets strong believers most excited and moved to act.

I usually find myself arguing with those opposed to me, so I usually use the first mode.

Comment author: Technologos 22 January 2010 02:49:49AM 1 point [-]

Agreed, and I suspect that certainty and abrasiveness are also less problematic when truth is not being sought.

Comment author: Blueberry 21 January 2010 08:55:28PM -3 points [-]

No, the whole point is that people can be risk averse of utility. This seems to be confusing people (my original post got voted down to -2 for some reason), so I'll try spelling it out more clearly:

Choice X: gain of 1 utile. Choice Y: no gain or loss. Choice Z: gain of 2 utiles.

Choice B was a 50% chance of Y and a 50% chance of Z. To calculate the utility of choice B, we can't just take the expected value of the utility of choice B, because that doesn't include the risk. For a risk-averse person, choice B has a utility of less than 1, although the expected value of choice B is 1.

Comment author: Technologos 21 January 2010 09:10:47PM 2 points [-]

This would be entirely true if instead of utiles you had said dollars or other resources. As it is, it is false by definition: if two choices have the same expected utility (expected value of the utility function) then the chooser is indifferent between them. You are taking utility as an argument in something like a meta-utility function, which is an interesting discussion to have (which utility function we might want to have) but not the same as standard decision theory.
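The distinction between risk aversion over dollars and (definitional) indifference over utiles can be checked numerically. The following is only an illustrative sketch: the log utility function and the specific dollar amounts are my assumptions, not anything from the thread, but the utile lottery matches Blueberry's choices Y and Z above.

```python
import math

def utility(wealth):
    # A concave utility of wealth; log is a standard illustrative choice.
    return math.log(wealth)

wealth = 100.0

# Gamble over DOLLARS: 50% chance of +50, 50% chance of -50.
expected_wealth = 0.5 * (wealth + 50) + 0.5 * (wealth - 50)   # = 100
expected_utility = 0.5 * utility(wealth + 50) + 0.5 * utility(wealth - 50)

# Risk aversion over dollars: E[u(w)] < u(E[w]) for concave u (Jensen's inequality).
assert expected_utility < utility(expected_wealth)

# Over UTILES, indifference is definitional: choice B (50% of Y = 0 utiles,
# 50% of Z = 2 utiles) is worth exactly 1 expected utile, the same as choice X.
lottery_value = 0.5 * 0 + 0.5 * 2
assert lottery_value == 1.0
```

The first assertion is where "risk aversion" lives: it is a fact about the curvature of the utility function over resources. Once outcomes are already stated in utiles, there is no further curvature left to apply, which is the point of the comment above.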

Comment author: magfrump 21 January 2010 04:45:47PM 0 points [-]

I'm confused about what is uncomfortable about this, or what function of wealth you would measure utility by.

Naively it seems that logarithmic functions would be more risk averse than nth root functions which I have seen Robin Hanson use. How would a u-function be more sensitive to current wealth?

Comment author: Technologos 21 January 2010 09:01:25PM *  1 point [-]

I think the uncomfortable part is that bill's (and my) experience suggests that people are even more risk-averse than logarithmic functions would indicate.

I'd suggest that any consistent human utility function (prospect theory notwithstanding) is somewhere between log(x) and log(log(x))... If I were given the option of a 50-50 chance of either squaring my wealth or taking its square root, I would opt for the gamble.
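The square-vs-square-root gamble is a nice test case for why the answer lands between log(x) and log(log(x)): a log-utility agent strictly prefers the gamble whenever wealth exceeds 1, while a log(log(x)) agent turns out to be exactly indifferent. A quick numerical sketch (the starting wealth of 10 is an arbitrary assumption on my part):

```python
import math

def expected_value(u, wealth):
    # 50-50 gamble: wealth is either squared or square-rooted.
    return 0.5 * u(wealth ** 2) + 0.5 * u(math.sqrt(wealth))

w = 10.0  # any wealth > 1 keeps both utility functions well-defined here

# log utility: E[u] = 0.5*(2*log w) + 0.5*(0.5*log w) = 1.25*log w > log w,
# so a log-utility agent takes the gamble whenever log w > 0 (i.e. w > 1).
assert expected_value(math.log, w) > math.log(w)

# log(log) utility: E[u] = 0.5*log(2*log w) + 0.5*log(0.5*log w)
#                       = log(log w) + 0.5*(log 2 + log 0.5) = log(log w),
# so a log(log)-utility agent is exactly indifferent to this gamble.
loglog = lambda x: math.log(math.log(x))
assert abs(expected_value(loglog, w) - loglog(w)) < 1e-9
```

So anyone who accepts the gamble is revealing a utility function less concave than log(log(x)), and anyone more risk-averse than log would refuse many everyday gambles, which is how the comment above brackets human risk aversion between the two.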
