Perplexed comments on Cryonics Questions - Less Wrong

Post author: James_Miller 26 August 2010 11:19PM 9 points




Comment author: Perplexed 28 August 2010 09:29:04PM 1 point

To be honest, that did not clear anything up. I still don't know whether to interpret your original question as:

  • Doesn't signing up for cryonics indicate skepticism that SIAI will succeed in creating FAI?
  • Doesn't not signing up indicate skepticism that SIAI will succeed?
  • Doesn't signing up indicate skepticism that UFAI is something to worry about?
  • Doesn't not signing up indicate skepticism regarding UFAI risk?

To be honest once again, I no longer care what you meant, because you have made it clear that you don't really care what the answer is. You have your own opinions on the relationship between cryonics and existential risk, which you will share with us someday.

Please, when you do share, start by presenting your own opinion and arguments clearly and directly. Don't ask rhetorical questions which no one can parse. No one here will consider you a troll for speaking your mind.

Comment author: enoonsti 28 August 2010 10:31:14PM 1 point

I apologize for the confusion, and I understand if you're frustrated; I experience that frustration quite often once I realize I'm talking past someone. For whatever it's worth, I left it open because the curious side of me didn't want to limit Yvain; that curious side wanted to hear his thoughts in general. So... I guess both #2 and #3 (I'm not sure how #1 and #4 could be deduced from my posts, but my opinion is irrelevant to this situation). Anyway, I didn't mean to push this too much, because I felt it was minor. Perhaps I should not have asked it in the first place.

Also, thank you for being honest (admittedly, I was tempted to say, "So you weren't being honest in your other posts?" but I decided to present that temptation passively inside these parentheses).

:)

Comment author: Perplexed 28 August 2010 11:16:30PM 1 point

OK, we're cool. Regarding my own opinions/postings: I said I'm not signing up, but my opinions on FAI or UFAI had nothing to do with it. Well, maybe I did implicitly express skepticism that FAI will create a utopia. What the hell! I'll express that skepticism explicitly right now, since I'm thinking of it. There is nothing an FAI can do to eliminate human misery without first changing human nature. An FAI that tries to change human nature is a UFAI.

Comment author: Alicorn 28 August 2010 11:39:38PM 8 points

But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?

Comment author: Perplexed 29 August 2010 12:30:24AM 1 point

> But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?

No, that is your business. But if you or the AI would like my nature changed, or the nature of all yet-to-be-born children ...

Comment author: Pavitra 29 August 2010 12:36:38AM 2 points

If you have moral objections to altering the nature of potential future persons that have not yet come into being, then you had better avoid becoming a teacher, or interacting at all with children, or saying or writing anything that a child might at some point encounter, or in fact communicating with any person under any circumstances whatsoever.

Comment author: Perplexed 29 August 2010 12:43:05AM 3 points

I have no moral objection to any person of limited power doing whatever they can to influence future human nature. I do have an objection to that power being monopolized by anyone or anything. It is not so much that I consider it immoral; it is that I consider it dangerous and unfriendly. My objections are, in a sense, political rather than moral.

Comment author: Pavitra 29 August 2010 12:44:55AM 1 point

What threshold of power difference do you consider immoral? Do you have a moral objection to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?

Comment author: Perplexed 29 August 2010 01:02:07AM 0 points

Where do you imagine that I said I found something immoral? I thought I had said explicitly that morality is not involved here. Where do I mention power differences? I mentioned only the distinction between limited power and monopoly power.

When did I become the enemy?

Comment author: Pavitra 29 August 2010 03:48:59AM 2 points

Sorry, I shouldn't have said "immoral," especially considering the last sentence, in which you explicitly disclaimed moral objection. I read "unfriendly" as "unFriendly" as "incompatible with our moral value systems".

Please read my comment as follows:

> What threshold of power difference do you object to? Do you object to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?

Comment author: katydee 29 August 2010 02:35:55AM 2 points

Although you're right (except for the last sentence, which seems out of place), you didn't actually answer the question, and I suspect that's why you're being downvoted here. Sub out "immoral" in Pavitra's post for "dangerous and unfriendly" and I think you'll get the gist of it.

Comment author: Pavitra 28 August 2010 11:18:41PM -1 points

We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.

Comment author: Perplexed 29 August 2010 12:33:03AM 2 points

The phrase "the best of all possible worlds" ought to be the canonical example of the Mind Projection Fallacy.

Comment author: Pavitra 29 August 2010 12:41:03AM 1 point

It would be unreasonably burdensome to append "with respect to a given mind" to every statement that involves subjectivity in any way.

ETA: For comparison, imagine if you had to say "with respect to a given reference frame" every time you talked about velocity.

Comment author: Perplexed 29 August 2010 12:53:50AM 1 point

I'm not saying that you didn't express yourself precisely enough. I am saying that there is no such thing as "best (full stop)". There is "best for me", there is "best for you", but there is not "best for both of us". No more than there is an objective (or intersubjective) probability that I am wearing a red shirt as I type.

Your argument above only works if "best" is interpreted as "best for every mind". If that is what you meant, then your implicit definition of FAI proves that FAI is impossible.

ETA: What given frame do you have in mind?

Comment author: Pavitra 29 August 2010 03:45:20AM 1 point

The usual assumption in this context would be CEV. Are you saying you strongly expect humanity's extrapolated volition not to cohere?

Comment author: Perplexed 29 August 2010 04:13:34AM 4 points

Perhaps you should explain, by providing a link, what is meant by CEV. The only text I know of describing it is dated 2004, and ... how shall I put this ... it doesn't seem to cohere.

But, I have to say, based on what I can infer, that I see no reason to expect coherence, and the concept of "extrapolation" scares the sh.t out of me.

Comment author: timtyler 29 August 2010 07:39:58AM 1 point

"Coherence" seems a bit like the human genome project: yes, there are many individual differences, but if you throw them all away, you are still left with something.

Comment author: Pavitra 29 August 2010 04:24:53AM 0 points

I'm looking at the same document you are, and I actually agree that EV almost certainly ~C (that the extrapolated volition almost certainly won't cohere). I just wanted to make sure the assumption was explicit.