Yvain comments on Cryonics Questions - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Some of these questions, like the one about running away from a fire, ignore the role of irrational motivation.
People, when confronted with an immediate threat to their lives, gain a strong desire to protect themselves. This has nothing to do with a rational evaluation of whether or not death is better than life. Even people who genuinely want to commit suicide have this problem, which is one reason so many of them try methods that are less effective but don't activate the self-defense system (like overdosing on pills instead of shooting themselves in the head). Perhaps even a suicidal person who'd entered the burning building because they planned to jump off the roof would still try to run out of the fire. So running away from a fire, or trying to stop a man threatening you with a sword, cannot be taken as proof of a genuine desire to live, only that any desire to die one might have is not as strong as one's self-protection instincts.
It is normal for people to have different motivations in different situations. When I see and smell pizza, I get a strong desire to eat the pizza; right now, not seeing or smelling pizza, I have no particular desire to eat pizza. The argument "If your life was in immediate danger, you would want it to be preserved; therefore, right now you should seek out ways to preserve your life in the future, whether you feel like it or not" is similar to the argument "If you were in front of a sizzling piece of pizza, you would want to eat it; therefore, right now you should seek out pizza and eat it, whether you feel like it or not".
Neither argument is inevitably wrong. But first you would have to prove that the urge comes from a reflectively stable value - something you "want to want", and not just from an impulse that you "want" but don't "want to want".
The empirical reason I haven't signed up for cryonics yet is that the idea of avoiding death doesn't have any immediate motivational impact on me, and the negatives of cryonics - weirdness, costs in time and money, negative affect of being trapped in a dystopia - do have motivational impact on me. I admit this is weird and not what I would have predicted about my motivations if I were considering them in the third person, but empirically, that's how things are.
I can use my willpower to overcome an irrational motivation or lack of motivation. But I only feel the need to do that in two cases. One, where I want to help other people (eg giving to charity even when I don't feel motivated to do so). And two, when I predict I will regret my decision later (eg I may overcome akrasia to do a difficult task now when I would prefer to procrastinate). The first reason doesn't really apply here, but the second is often brought out to support cryonics signup.
Many people who signal acceptance of death appear to genuinely go peacefully and happily - that is, even at the moment of dying they don't seem motivated to avoid death. If this is standard, then I can expect to go my entire life without regretting the choice not to sign up for cryonics at any moment. After I die, I will be dead, and not regretting anything. So I expect to go all of eternity without regretting a decision not to sign up for cryonics. This leaves me little reason to overcome my inherent lack of motivation to get it.
Some have argued that, when I am dead, it will be a pity, because I would be having so much more fun if I were still alive, so I ought to be regretful even though I'm not physically capable of feeling the actual emotion. But this sounds too much like the arguments for a moral obligation to create all potential people, which lead to the Repugnant Conclusion and which I oppose in just about all other circumstances.
That's just what I've introspected as the empirical reasons I haven't signed up for cryonics. I'm still trying to decide if I should accept the argument. And I'm guessing that as I get older I might start feeling more motivation to cheat death, at which point I'd sign up. And there's a financial argument that if I'm going to sign up later, I might as well sign up now, though I haven't yet calculated the benefits.
But analogies to running away from a burning building shouldn't have anything to do with it.
[Bold added myself]
Is it accurate to say what I bolded? I know technically it's true, but only because there isn't any you to be doing the regretting. Death isn't so much a state [like how I used to picture sitting in the ground for eternity] as simple non-existence [which is much harder to grasp, at least for me]. And if you have no real issue not existing at a future point, why do you attempt to prolong your existence now? I don't mean for this to be rude; I'm just curious as to why you would want to keep yourself around now if you're not willing to stay around as long as life is still enjoyable.
On a fair note, I have not signed up for cryonics, but that's mostly because I'm a college student with a lack of serious income.
By the way, I'm not here to troll, and I do have a serious question that doesn't necessarily have to do with cryonics. The goal of SIAI (Lesswrong, etc) is to learn and possibly avoid a dystopian future. If you truly are worried about a dystopian future, then doesn't that serve as a vote of "No confidence" for these initiatives?
Admittedly, I haven't looked into your history, so that may be a "Well, duh" answer :)
I suppose it serves as a vote of less than infinite confidence. I don't know if it makes me any less confident than SIAI themselves. It's still worth helping SIAI in any way possible, but they've never claimed a 100% chance of victory.
Thank you, Yvain. I quickly realized how dumb my question was, and so I appreciate that you took the time to make me feel better. Karma for you :)
Indeed, they have been careful not to present any estimates of the chance of victory (which I think is a wise decision).
Let's say you're about to walk into a room that contains an unknown number of hostile people who possibly have guns. You don't have much of a choice about which way you're going, given that the "room" you're currently in is really more of an active garbage compactor, but you do have a lot of military-grade garbage to pick through. Do you don some armor, grab a knife, or try to assemble a working gun of your own?
Trick question. Given adequate time and resources, you do all three. In this metaphor, the room outside is the future, enemy soldiers are the prospect of a dystopia or other bad end, AGI is the gun (least likely to succeed, given how many moving parts there are and the fact that you're putting it together from garbage without real tools, but if you get it right it might solve a whole room full of problems very quickly), general sanity-improving stuff is the knife (a simple and reliable way to deal with whatever problem is right in front of you), and cryonics is the armor (so if one of those problems becomes lethally personal before you can solve it, you might be able to get back up and try again).
No. AI isn't a gun; it's a bomb. If you don't know what you're doing, or even just make a mistake, you blow yourself up. But if it works, you lob it out the door and completely solve your problem.
A poorly put together gun is perfectly capable of crippling the wielder, and most bombs light enough to throw won't reliably kill everyone in a room, especially a large room. Also, guns are harder to get right than bombs. That's why, in military history, hand grenades and land mines came first, then muskets, then rifles, instead of just better and better grenades. That's why the saying is "every Marine is a rifleman" and not "every Marine is a grenadier."
A well-made Friendly AI would translate human knowledge and intent into precise, mechanical solutions to problems. You just look through the scope and decide when to pull the trigger, then it handles the details of implementation.
Also, you seem to have lost track of the positional aspect of the metaphor. The room outside represents the future; are you planning to stay behind in the garbage compactor?
That's the iffy part.
So start with a quick sweep for functional-looking knives, followed by pieces of armor that look like they'd cover your skull or torso without falling off. No point to armor if it fails to protect you, or hampers your movements enough that you'll be taking more hits from lost capacity to dodge than the armor can soak up.
If the walls don't seem to have closed in much by the time you've got all that located and equipped, think about the junk you've already searched through. Optimistically, you may by this time have located several instances of the same model of gun with only one core problem each, in which case grab all of them and swap parts around (being careful not to drop otherwise good parts into the mud) until you've got at least one functional gun. Or, you may not have found anything that looks remotely like it could be converted into a useful approximation of a gun in the time available, in which case forget it and gather up whatever else you think could justify the effort of carrying it on your back.
Extending the metaphor, load-bearing gear is anything that lets you carry more of everything else with less discomfort. By its very nature, that kind of thing needs to be fitted individually for best results, so don't just settle for a backpack or 'supportive community' that looks nice at arm's length but aggravates your spine when you actually try it on, especially if it isn't adjustable. If you've only found one or two useful items anyway, don't even bother.
Medical supplies would be investments in maintaining your literal health as well as non-crisis-averting skills and resources, so you're less likely to burn yourself out if one of those problems gets a grazing hit in. You should be especially careful to make sure that medical supplies you're picking out of the garbage aren't contaminated somehow.
Finally, a grenade would be any sort of clever political stratagem which could avert a range of related bad ends without much further work on your part, or else blow up in your face.
For what initiatives? I don't see any initiatives. And what is the "that" which is serving as a vote? By your sentence structure, "that" must refer to "worry", but your question still doesn't make any sense.
Just to keep things in context, my main point in posting was to demonstrate the unlikelihood of being awakened in a dystopia; it's almost as if critics suddenly jump from point A to point B without a transition. While the Niven scenario you listed below seems agreeable to my position, it's actually still off; you are missing the key point behind the chain of constant care, the needed infrastructure to continue cryonics care, etc. This has nothing to do with a family reviving ancestors: if someone - anyone - is there taking the time and energy to keep on refilling your dewar with LN2, then that means someone is there wanting to revive you. Think coma patients; hospitals don't keep them around just to feed them and stare at their bodies.
Anyways, moving on to the "initiatives" comment. Given that Lesswrong tends to overlap with SIAI supporters, perhaps I should have said mission? Again, I haven't looked too much into Yvain's history. However, let's suppose for the moment that he's a strong supporter of that mission. Since we:
...I guess I was just wondering if he thought it's a grim outlook for the mission. Signing up for cryonics seems to give a "glass half full" impression. Furthermore, due to #1 and #2 above, I'll eventually be arguing why mainstreaming cryonics could significantly assist in reducing existential risk... and why it may be helpful for everyone from the LessWrong community to IEET to be a little more assertive on the issue. Of course, I'm not saying it would eliminate risk. But at the very least, mainstreaming cryonics should be more helpful with existential risk than dealing with, say, measles ;)
To be honest, that did not clear anything up. I still don't know whether to interpret your original question as:
To be honest once again, I no longer care what you meant because you have made it clear that you don't really care what the answer is. You have your own opinions on the relationship between cryonics and existential risk which you will share with us someday.
Please, when you do share, start by presenting your own opinion and arguments clearly and directly. Don't ask rhetorical questions which no one can parse. No one here will consider you a troll for speaking your mind.
I apologize for the confusion and I understand if you're frustrated; I experience that frustration quite often once I realize I'm talking past someone. For whatever it's worth, I left it open because the curious side of me didn't want to limit Yvain; that curious side wanted to hear his thoughts in general. So... I guess both #2 and #3 (I'm not sure how #1 and #4 could be deduced from my posts, but my opinion is irrelevant to this situation). Anyways, I didn't mean to push this too much, because I felt it was minor. Perhaps I should not have asked it in the first place.
Also, thank you for being honest (admittedly, I was tempted to say, "So you weren't being honest with your other posts?" but I decided to present that temptation passively inside these parentheses).
:)
Ok, we're cool. Regarding my own opinions/postings, I said I'm not signing up, but my opinions on FAI or UFAI had nothing to do with it. Well, maybe I did implicitly express skepticism that FAI will create a utopia. What the hell! I'll express that skepticism explicitly right now, since I'm thinking of it. There is nothing an FAI can do to eliminate human misery without first changing human nature. An FAI that tries to change human nature is a UFAI.
But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?
No, that is your business. But if you or the AI would like my nature changed, or the nature of all yet-to-be-born children ...
If you have moral objections to altering the nature of potential future persons that have not yet come into being, then you had better avoid becoming a teacher, or interacting at all with children, or saying or writing anything that a child might at some point encounter, or in fact communicating with any person under any circumstances whatsoever.
We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.
The phrase "the best of all possible worlds" ought to be the canonical example of the Mind Projection Fallacy.
It would be unreasonably burdensome to append "with respect to a given mind" to every statement that involves subjectivity in any way.
ETA: For comparison, imagine if you had to say "with respect to a given reference frame" every time you talked about velocity.
Jack: "I've got the Super Glue for Yvain. I'm on my way back."
Chloe: "Hurry, Jack! I've just run the numbers! All of our LN2 suppliers were taken out by the dystopia!"
Freddie Prinze Jr: "Don't worry, Chloe. I made my own LN2, and we can buy some time for Yvain. But I'm afraid the others will have to thaw out and die. Also, I am sorry for starring in Scooby Doo and getting us cancelled."
- Jack blasts through wall, shoots Freddie, and glues Yvain back together -
Jack: "Welcome, Yvain. I am an unfriendly A.I. that decided it would be worth it just to revive you and go FOOM on your sorry ass."
(Jack begins pummeling Yvain)
(room suddenly fills up with paper clips)
This is one of the worst examples that I've ever seen. Why would a paperclip maximizer want to revive someone so they could see the great paperclip transformation? Doing so uses energy that could be allocated to producing paperclips, and paperclip maximizers don't care about most human values, they care about paperclips.
That was a point I was trying to make ;)
I should have ended off with (/sarcasm)
I think the issue is that the dystopia we're talking about here isn't necessarily paperclip maximizer land, which isn't really a dystopia in the conventional sense, as human society no longer exists in such cases. What if it's I Have No Mouth And I Must Scream instead?
Yes, the paper clip reference wasn't the only point I was trying to make; it was just a (failed) cherry on top. I mainly took issue with being revived in the common dystopian vision: constant states of warfare, violence, and so on. It simply isn't possible, given that someone needs to keep refilling dewars with LN2 and so much more; in other words, the chain of care would be disrupted, and you would be dead long before they found a way to resuscitate you.
And that leaves basically only a sudden "I Have No Mouth" scenario; i.e. one day it's sunny, Alcor is fondly taking care of your dewar, and then BAM! you've been resuscitated by that A.I. I guess I just find it unlikely that such an A.I. will say: "I will find Yvain, resuscitate him, and torture him." It just seems like a waste of energy.
Upvoted for making a comment that promotes paperclips.