Alicorn comments on Welcome to Heaven - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Merely being unable to regret an occurrence doesn't make the occurrence coincide with one's preferences. I couldn't regret an unexpected, instantaneous death from which I was never revived, either; I emphatically don't prefer one.
But wire-heading is not death. It is the opposite - the most fulfilling experience possible, to which everything else pales in comparison.
It seems you think paternalism is okay if it is pure in intent and flawless in execution.
It has been shown that vulnerability to smoking addiction is due to a certain gene. Suppose we could create a virus that would silently spread through the human population and fix this gene in everyone, willing or not. Suppose our intent is pure, and we know that this virus would operate flawlessly, only affecting this gene and having no other effects.
Would you be in favor of releasing this virus?
..."fulfilling"? Wire-heading only fulfills "make me happy" - it doesn't fulfill any other goal that a person may have.
"Fulfilling" - in the sense of "To accomplish or carry into effect, as an intention, promise, or prophecy, a desire, prayer, or requirement, etc.; to complete by performance; to answer the requisitions of; to bring to pass, as a purpose or design; to effectuate" (Webster 1913) - is precisely what wire-heading cannot do.
Your other goals are immaterial and pointless to the outside world.
Nevertheless, suppose the FAI respects such a desire. This is questionable, because in the FAI's mind, this is tantamount to letting a depressed patient stay depressed, simply because a neurotransmitter imbalance causes them to want to stay depressed. But suppose it respects this tendency.
In that case, the cheapest way to satisfy your desire, in terms of consumption of resources, is to create a simulation where you feel like you are thinking, learning and exploring, though in reality your brain is in a vat.
You'd probably be better off just being happy and sharing in the FAI's infinite wisdom.
Would you do me a favor and refer to this hypothesized agent as a DAI (Denis Artificial Intelligence)? Such an entity is nothing I would call Friendly, and, given the widespread disagreement on what is Friendly, I believe any rhetorical candidates should be referred to by other names. In the meantime:
I reject this point. Let me give a concrete example.
Recently I have been playing a lot of Forza Motorsport 2 on the Xbox 360. I've made some gaming buddies who are more experienced in the game than I am - both better at driving in the game and better at tuning cars in the game. (Like Magic: the Gathering, Forza 2 is explicitly played on both the preparation and performance levels, although tilted more towards the latter.) I admire the skills they have developed in creating and controlling their vehicles and, wishing to admire myself in a similar fashion, wish to develop my own skills to a similar degree.
What is the DAI response to this?
An FAI-enhanced World of Warcraft?
You can still interact with others even though you're in a vat.
Though as I commented elsewhere, chances are that FAI could fabricate more engaging companions for you than mere human beings.
And chances are that all this is inferior to being the ultimate wirehead.
That could be fairly awesome.
If it comes to that, I could see making the compromise.
This relates to subjects discussed in the other thread - I'll let that conversation stand in for my reply to it.
Well...
Consider you want to explore and learn and build ad infinitum. Progress in your activities requires you to control increasing amounts of matter and consume increasing amounts of energy, until such point as you conflict with others who also want to build and explore. When that point is reached, the only way the FAI can make you all happy is to intervene while you all sleep, put you in separate vats, and from then on let each of you explore an instance of the universe that it simulates for you.
Should it let you wage Star Wars on each other instead? And how would that be different from no AI to begin with?
You seem to be engaging in all-or-nothing thinking. Because I want more X does not mean that I want to maximize X to the exclusion of all other possibilities. I want to explore and learn and build, but I also want to act fairly toward my fellow sapients/sentients. And I want to be happy, and I want my happiness to stem causally from exploring, learning, building, and fairness. And I want a thousand other things I'm not aware of.
An AI which examines my field of desires and maximizes one to the exclusion of all others is actively inimical to my current desires, and to all extrapolations of my current desires I can see.
Whoa whoa whoa wait what? No. Not under a blanket description like that, at any rate. If you want to wirehead, and that's your considered and stable desire, I say go for it. Have a blast. Just don't drag us into it.
No. I'd be in favor of making it available in a controlled non-contagious form to individuals who were interested, though.
Apologies, Alicorn - I was confusing you with Adelene. I was paying attention only to the content and not enough to who the author was.
Only the first paragraph (but wire-heading is not death) is directed at your comment. The rest is actually directed at Adelene.
My point was that you used "you won't regret it" as a point in favor of wireheading, whereas it does not serve as a point in favor of death.
Can you check the thread under this comment:
http://lesswrong.com/lw/1o9/welcome_to_heaven/1iia?context=3#comments
and let me know your response to it?
I would save the drunk friend (unless I had some kind of special knowledge, such as that the friend got drunk in order to enable him or herself to go through with a plan to indulge a considered and stable sober desire for death). In the case of the depressed friend, I'd want to refer to my best available knowledge of what that friend would have said about the situation prior to acquiring the neurotransmitter imbalance, and act accordingly.
You're twisting my words. I said that FAI paternalism would be different - which it would be, qualitatively and quantitatively. "Pure in intent and flawless in execution" are very fuzzy words, prone to being interpreted differently by different people, and only a very specific set of interpretations of those words would describe FAI.
I'm with Alicorn on this one: If it can be made into a contagious virus, it can almost certainly be made into a non-contagious one, and that would be the ethical thing to do. However, if it can't be made into a non-contagious virus, I would personally not release it, and I'm going to refrain from predicting what a FAI would do in that case; part of the point of building a FAI is to be able to give those kinds of decisions to a mind that's able to make unbiased (or much less biased, if you prefer; there's a lot of room for improvement in any case) decisions that affect groups of people too large for humans to effectively model.
I understand. That makes some sense. Though smokers' judgement is impaired by their addiction, one can imagine that at least they will have periods of sanity when they can choose to fix the addiction gene themselves.
We do appear to differ in the case where an infectious virus is the only option to help smokers fix that gene. I would release the virus in that case. I have no qualms about making that decision and absorbing the responsibility.
This seems contradictory to your earlier claims about wireheading. Say that some smokers get a lot of pleasure from smoking, and don't want to stop, and in fact would experience more pleasure in their lives if they kept the addiction. You'd release the virus?