enoonsti comments on Cryonics Questions - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Just to keep things in context, my main point in posting was to demonstrate the unlikelihood of being awakened in a dystopia; it's almost as if critics suddenly jump from point A to point B without a transition. While the Niven scenario you listed below seems agreeable to my position, it's actually still off; you are missing the key point behind the chain of constant care, the infrastructure needed to continue cryonics care, and so on. This has nothing to do with a family reviving ancestors: if someone - anyone - is taking the time and energy to keep refilling your dewar with LN2, that means someone is there wanting to revive you. Think of coma patients; hospitals don't keep them around just to feed them and stare at their bodies.
Anyway, moving on to the "initiatives" comment. Given that LessWrong tends to overlap with SIAI supporters, perhaps I should have said "mission"? Again, I haven't looked too much into Yvain's history. However, let's suppose for the moment that he's a strong supporter of that mission. Since we:
...I guess I was just wondering if he thought it's a grim outlook for the mission. Signing up for cryonics seems to give a "glass half full" impression. Furthermore, due to #1 and #2 above, I'll eventually be arguing why mainstreaming cryonics could significantly assist in reducing existential risk... and why it may be helpful for everyone from the LessWrong community to the IEET to be a little more assertive on the issue. Of course, I'm not saying it would eliminate risk. But at the very least, mainstreaming cryonics should do more for existential risk than dealing with, say, measles ;)
To be honest, that did not clear anything up. I still don't know whether to interpret your original question as:
To be honest once again, I no longer care what you meant because you have made it clear that you don't really care what the answer is. You have your own opinions on the relationship between cryonics and existential risk which you will share with us someday.
Please, when you do share, start by presenting your own opinion and arguments clearly and directly. Don't ask rhetorical questions which no one can parse. No one here will consider you a troll for speaking your mind.
I apologize for the confusion, and I understand if you're frustrated; I experience that frustration quite often once I realize I'm talking past someone. For what it's worth, I left it open because the curious side of me didn't want to limit Yvain; that curious side wanted to hear his thoughts in general. So... I guess both #2 and #3 (I'm not sure how #1 and #4 could be deduced from my posts, but my opinion is irrelevant to this situation). Anyway, I didn't mean to push this too much, because I felt it was minor. Perhaps I should not have asked it in the first place.
Also, thank you for being honest (admittedly, I was tempted to say, "So you weren't being honest with your other posts?" but I decided to present that temptation passively inside these parentheses).
:)
Ok, we're cool. Regarding my own opinions/postings, I said I'm not signing up, but my opinions on FAI or UFAI had nothing to do with it. Well, maybe I did implicitly express skepticism that FAI will create a utopia. What the hell! I'll express that skepticism explicitly right now, since I'm thinking of it. There is nothing an FAI can do to eliminate human misery without first changing human nature. An FAI that tries to change human nature is a UFAI.
But I would like my nature changed in some ways. If an AI does that for me, does that make it unFriendly?
No, that is your business. But if you or the AI would like my nature changed, or the nature of all yet-to-be-born children ...
If you have moral objections to altering the nature of potential future persons that have not yet come into being, then you had better avoid becoming a teacher, or interacting at all with children, or saying or writing anything that a child might at some point encounter, or in fact communicating with any person under any circumstances whatsoever.
I have no moral objection to any person of limited power doing whatever they can to influence future human nature. I do have an objection to that power being monopolized by anyone or anything. It is not so much that I consider it immoral, it is that I consider it dangerous and unfriendly. My objections are, in a sense, political rather than moral.
What threshold of power difference do you consider immoral? Do you have a moral objection to pickup artists? Advertisers? Politicians? Attractive people? Toastmasters?
Where do you imagine that I said I found something immoral? I thought I had said explicitly that morality is not involved here. Where do I mention power differences? I mentioned only the distinction between limited power and monopoly power.
When did I become the enemy?
We may assume that an FAI will create the best of all possible worlds. Your argument seems to be that the criteria of a perfect utopia do not correspond to a possible world; very well then, an FAI will give us an outcome that is, at worst, no less desirable than any outcome achievable without one.
The phrase "the best of all possible worlds" ought to be the canonical example of the Mind Projection Fallacy.
It would be unreasonably burdensome to append "with respect to a given mind" to every statement that involves subjectivity in any way.
ETA: For comparison, imagine if you had to say "with respect to a given reference frame" every time you talked about velocity.
I'm not saying that you didn't express yourself precisely enough. I am saying that there is no such thing as "best," full stop. There is "best for me," and there is "best for you," but there is no "best for both of us" - no more than there is an objective (or intersubjective) probability that I am wearing a red shirt as I type.
Your argument above only works if "best" is interpreted as "best for every mind". If that is what you meant, then your implicit definition of FAI proves that FAI is impossible.
ETA: What given frame do you have in mind?
The usual assumption in this context would be CEV. Are you saying you strongly expect humanity's extrapolated volition not to cohere?
Perhaps you should explain, by providing a link, what is meant by CEV. The only text I know of describing it is dated 2004, and... how shall I put this... it doesn't seem to cohere.
But I have to say that, based on what I can infer, I see no reason to expect coherence, and the concept of "extrapolation" scares the sh.t out of me.