enoonsti comments on Cryonics Questions - Less Wrong

9 Post author: James_Miller 26 August 2010 11:19PM


Comment author: enoonsti 28 August 2010 06:39:47PM *  2 points [-]

By the way, I'm not here to troll, and I do have a serious question that doesn't necessarily have to do with cryonics. The goal of SIAI (LessWrong, etc.) is to learn about and possibly avoid a dystopian future. If you truly are worried about a dystopian future, then doesn't that serve as a vote of "No confidence" for these initiatives?

Admittedly, I haven't looked into your history, so that may be a "Well, duh" answer :)

Comment author: Yvain 29 August 2010 07:50:51AM 8 points [-]

I suppose it serves as a vote of less than infinite confidence. I don't know if it makes me any less confident than SIAI themselves. It's still worth helping SIAI in any way possible, but they've never claimed a 100% chance of victory.

Comment author: enoonsti 29 August 2010 05:43:57PM 4 points [-]

Thank you, Yvain. I quickly realized how dumb my question was, and so I appreciate that you took the time to make me feel better. Karma for you :)

Comment author: wedrifid 29 August 2010 10:08:11AM 0 points [-]

It's still worth helping SIAI in any way possible, but they've never claimed a 100% chance of victory.

Indeed, they have been careful not to present any estimates of the chance of victory (which I think is a wise decision).

Comment author: Strange7 29 November 2014 09:26:11PM 2 points [-]

Let's say you're about to walk into a room that contains an unknown number of hostile people who possibly have guns. You don't have much of a choice about which way you're going, given that the "room" you're currently in is really more of an active garbage compactor, but you do have a lot of military-grade garbage to pick through. Do you don some armor, grab a knife, or try to assemble a working gun of your own?

Trick question. Given adequate time and resources, you do all three. In this metaphor, the room outside is the future, enemy soldiers are the prospect of a dystopia or other bad end, AGI is the gun (least likely to succeed, given how many moving parts there are and the fact that you're putting it together from garbage without real tools, but if you get it right it might solve a whole room full of problems very quickly), general sanity-improving stuff is the knife (a simple and reliable way to deal with whatever problem is right in front of you), and cryonics is the armor (so if one of those problems becomes lethally personal before you can solve it, you might be able to get back up and try again).

Comment author: Capla 30 November 2014 05:53:47PM 1 point [-]

No. AI isn't a gun; it's a bomb. If you don't know what you're doing, or even just make a mistake, you blow yourself up. But if it works, you lob it out the door and completely solve your problem.

Comment author: Strange7 01 December 2014 08:38:55PM 1 point [-]

A poorly put together gun is perfectly capable of crippling the wielder, and most bombs light enough to throw won't reliably kill everyone in a room, especially a large room. Also, guns are harder to get right than bombs. That's why, in military history, hand grenades and land mines came first, then muskets, then rifles, instead of just better and better grenades. That's why the saying is "every Marine is a rifleman" and not "every Marine is a grenadier."

A well-made Friendly AI would translate human knowledge and intent into precise, mechanical solutions to problems. You just look through the scope and decide when to pull the trigger, then it handles the details of implementation.

Also, you seem to have lost track of the positional aspect of the metaphor. The room outside represents the future; are you planning to stay behind in the garbage compactor?

Comment author: Lumifer 30 November 2014 02:01:31AM 1 point [-]

Given adequate time and resources

That's the iffy part.

Comment author: Strange7 30 November 2014 07:32:32AM 1 point [-]

So start with a quick sweep for functional-looking knives, followed by pieces of armor that look like they'd cover your skull or torso without falling off. No point to armor if it fails to protect you, or hampers your movements enough that you'll be taking more hits from lost capacity to dodge than the armor can soak up.

If the walls don't seem to have closed in much by the time you've got all that located and equipped, think about the junk you've already searched through. Optimistically, you may by this time have located several instances of the same model of gun with only one core problem each, in which case grab all of them and swap parts around (being careful not to drop otherwise good parts into the mud) until you've got at least one functional gun. Or, you may not have found anything that looks remotely like it could be converted into a useful approximation of a gun in the time available, in which case forget it and gather up whatever else you think could justify the effort of carrying it on your back.

Extending the metaphor, load-bearing gear is anything that lets you carry more of everything else with less discomfort. By its very nature, that kind of thing needs to be fitted individually for best results, so don't just settle for a backpack or 'supportive community' that looks nice at arm's length but aggravates your spine when you actually try it on, especially if it isn't adjustable. If you've only found one or two useful items anyway, don't even bother.

Medical supplies would be investments in maintaining your literal health as well as non-crisis-averting skills and resources, so you're less likely to burn yourself out if one of those problems gets a grazing hit in. You should be especially careful to make sure that medical supplies you're picking out of the garbage aren't contaminated somehow.

Finally, a grenade would be any sort of clever political stratagem which could avert a range of related bad ends without much further work on your part, or else blow up in your face.

Comment author: Perplexed 28 August 2010 07:31:35PM 1 point [-]

doesn't that serve as a vote of "No confidence" for these initiatives?

For what initiatives? I don't see any initiatives. And what is the "that" which is serving as a vote? By your sentence structure, "that" must refer to "worry", but your question still doesn't make any sense.

Comment author: enoonsti 28 August 2010 08:46:56PM 1 point [-]

Just to keep things in context, my main point in posting was to demonstrate the unlikelihood of being awakened in a dystopia; it's almost as if critics suddenly jump from point A to point B without a transition. While the Niven scenario you listed below seems to be agreeable to my position, it's actually still off; you are missing the key point behind the chain of constant care, the needed infrastructure to continue cryonics care, etc. This has nothing to do with a family reviving ancestors: if someone - anyone - is there taking the time and energy to keep on refilling your dewar with LN2, then that means someone is there wanting to revive you. Think coma patients; hospitals don't keep them around just to feed them and stare at their bodies.

Anyways, moving on to the "initiatives" comment. Given that Lesswrong tends to overlap with SIAI supporters, perhaps I should have said mission? Again, I haven't looked too much into Yvain's history. However, let's suppose for the moment that he's a strong supporter of that mission. Since we:

  1. Can't live in parallel universes
  2. Live in a universe where even (seemingly) unrelated things are affected by each other.
  3. Think A.I. may be a crucial element of a bad future, due to #1 and #2.

...I guess I was just wondering if he thought it's a grim outlook for the mission. Signing up for cryonics seems to give a "glass half full" impression. Furthermore, due to #1 and #2 above, I'll eventually be arguing why mainstreaming cryonics could significantly assist in reducing existential risk... and why it may be helpful for everyone from the LessWrong community to IEET to be a little more assertive on the issue. Of course, I'm not saying eliminating risk. But at the very least, mainstreaming cryonics should be more helpful with existential risk than dealing with, say, measles ;)

Comment author: Perplexed 28 August 2010 09:29:04PM 1 point [-]

To be honest, that did not clear anything up. I still don't know whether to interpret your original question as:

  • Doesn't signing up for cryonics indicate skepticism that SIAI will succeed in creating FAI?
  • Doesn't not signing up indicate skepticism that SIAI will succeed?
  • Doesn't signing up indicate skepticism that UFAI is something to worry about?
  • Doesn't not signing up indicate skepticism regarding UFAI risk?

To be honest once again, I no longer care what you meant because you have made it clear that you don't really care what the answer is. You have your own opinions on the relationship between cryonics and existential risk which you will share with us someday.

Please, when you do share, start by presenting your own opinion and arguments clearly and directly. Don't ask rhetorical questions which no one can parse. No one here will consider you a troll for speaking your mind.

Comment author: enoonsti 28 August 2010 10:31:14PM 1 point [-]

I apologize for the confusion and I understand if you're frustrated; I experience that frustration quite often once I realize I'm talking past someone. For whatever it's worth, I left it open because the curious side of me didn't want to limit Yvain; that curious side wanted to hear his thoughts in general. So... I guess both #2 and #3 (I'm not sure how #1 and #4 could be deduced from my posts, but my opinion is irrelevant to this situation). Anyways, I didn't mean to push this too much, because I felt it was minor. Perhaps I should not have asked it in the first place.

Also, thank you for being honest (admittedly, I was tempted to say, "So you weren't being honest with your other posts?" but I decided to present that temptation passively inside these parentheses)

:)

Comment author: Perplexed 28 August 2010 11:16:30PM 1 point [-]

Ok, we're cool. Regarding my own opinions/postings, I said I'm not signing up, but my opinions on FAI or UFAI had nothing to do with it. Well, maybe I did implicitly express skepticism that FAI will create a utopia. What the hell! I'll express that skepticism explicitly right now, since I'm thinking of it. There is nothing an FAI can do to eliminate human misery without first changing human nature. An FAI that tries to change human nature is an UFAI.

Comment author: Alicorn 28 August 2010 11:39:38PM 8 points [-]

But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?

Comment author: Perplexed 29 August 2010 12:30:24AM 1 point [-]

But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?

No, that is your business. But if you or the AI would like my nature changed, or the nature of all yet-to-be-born children ...

Comment author: Pavitra 29 August 2010 12:36:38AM 2 points [-]

If you have moral objections to altering the nature of potential future persons that have not yet come into being, then you had better avoid becoming a teacher, or interacting at all with children, or saying or writing anything that a child might at some point encounter, or in fact communicating with any person under any circumstances whatsoever.

Comment author: Perplexed 29 August 2010 12:43:05AM 3 points [-]

I have no moral objection to any person of limited power doing whatever they can to influence future human nature. I do have an objection to that power being monopolized by anyone or anything. It is not so much that I consider it immoral, it is that I consider it dangerous and unfriendly. My objections are, in a sense, political rather than moral.

Comment author: Pavitra 28 August 2010 11:18:41PM -1 points [-]

We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.

Comment author: Perplexed 29 August 2010 12:33:03AM 2 points [-]

The phrase "the best of all possible worlds" ought to be the canonical example of the Mind Projection Fallacy.

Comment author: Pavitra 29 August 2010 12:41:03AM *  1 point [-]

It would be unreasonably burdensome to append "with respect to a given mind" to every statement that involves subjectivity in any way.

ETA: For comparison, imagine if you had to say "with respect to a given reference frame" every time you talked about velocity.

Comment author: Perplexed 29 August 2010 12:53:50AM *  1 point [-]

I'm not saying that you didn't express yourself precisely enough. I am saying that there is no such thing as "best, full stop." There is "best for me", there is "best for you", but there is not "best for both of us". No more than there is an objective (or intersubjective) probability that I am wearing a red shirt as I type.

Your argument above only works if "best" is interpreted as "best for every mind". If that is what you meant, then your implicit definition of FAI proves that FAI is impossible.

ETA: What given frame do you have in mind?