I can conceive of three main types of meaning we can pursue in life.

1. Exploring existing complexity: the natural complexity of the universe, or complexities that others created for us to explore.

2. Creating new complexity for others and ourselves to explore.

3. Hedonic pleasure: more or less direct stimulation of our pleasure centers, with wire-heading as the ultimate form.

What I'm observing in the various FAI debates is a tendency of people to shy away from wire-heading as something the FAI should do. This reluctance is generally not substantiated or clarified with anything other than "clearly, this isn't what we want". This is not, however, clear to me at all.

The utility we get from exploration and creation lies in the enjoyable mental processes that come with these activities. Once an FAI can rewire our brains at will, we do not need to perform actual exploration or creation to experience this enjoyment. Instead, the enjoyment we get from exploration and creation becomes just another form of pleasure that can be stimulated directly.

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them. This enjoyment could be of any type - it could be explorative or creative or hedonic enjoyment as we know it. The most energy efficient way to create any kind of enjoyment, however, is to stimulate the brain-equivalent directly. Therefore, the greatest utility will be achieved by wire-heading. Everything else falls short of that.
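As a toy sketch of that arithmetic (the numbers are invented; only the comparison matters): if the felt enjoyment per being is the same either way, the cheaper-per-being policy supports more beings and therefore more total enjoyment.

```python
# Toy model of the shut-up-and-multiply argument above.
# All numbers are invented; only the relative comparison matters.

RESOURCES = 1_000_000        # total resources available to the FAI

COST_WIREHEAD = 1            # direct stimulation assumed cheap per being
COST_ENVIRONMENT = 100       # simulating a rich world to explore assumed expensive
ENJOYMENT_PER_BEING = 10     # assume the felt enjoyment is the same either way

def total_utility(cost_per_being: int) -> int:
    """Beings the budget supports, times the enjoyment each one experiences."""
    beings = RESOURCES // cost_per_being
    return beings * ENJOYMENT_PER_BEING

print(total_utility(COST_WIREHEAD))     # 10,000,000
print(total_utility(COST_ENVIRONMENT))  # 100,000
```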

What I don't quite understand is why everyone thinks that this would be such a horrible outcome. As far as I can tell, these seem to be cached emotions that are suitable for our world, but not for the world of FAI. In our world, we truly do need to constantly explore and create, or else we will suffer the consequences of not mastering our environment. In a world where FAI exists, there is no longer a point, nor even a possibility, of mastering our environment. The FAI masters our environment for us, and there is no longer a reason to avoid hedonic pleasure. It is no longer a trap.

Since the FAI can sustain us in safety until the universe goes poof, there is no reason for everyone not to experience ultimate enjoyment in the meanwhile. In fact, I can hardly tell this apart from the concept of a Christian Heaven, which appears to be a place where Christians very much want to go.

If you don't want to be "reduced" to an eternal state of bliss, that's tough luck. The alternative would be for the FAI to create an environment for you to play in, consuming precious resources that could sustain more creatures in a permanently blissful state. But don't worry; you won't need to feel bad for long. The FAI can simply modify your preferences so you want an eternally blissful state.

Welcome to Heaven.

246 comments
mkehrt

I think you are missing the point.

First, throw out the FAI part of this argument; we can consider an FAI just as a tool to help us achieve our goals. Any AI which does not do at least this is insufficiently friendly (and thus counts as a paperclipper, possibly).

Thus, the actual question is what are our goals? I don't know about you, but I value understanding and exploration. If you value pleasure, good! Have fun being a wirehead.

It comes down to the fact that a world where everyone is a wirehead is not valued by me or probably by many people. Even though this world would maximize pleasure, it wouldn't maximize the utility of the people designing the world (I think this is the util/hedon distinction, but I am not sure). If we don't value that world, why should we create it, even if we would value it after we create it?

5djadvance22
The way I see it is that there is a set of preferable reward qualia we can experience (pleasure, wonder, empathy, pride) and a set of triggers attached to them in the human mind (sexual contact, learning, bonding, accomplishing a goal). What this article says is that there is no inherent value in the triggers, just in the rewards. Why rely on plugs when you can short circuit the outlet? But that is missing an entire field of points: there are certain forms of pleasure that can only be retrieved from the correct association of triggers and rewards. Basking in the glow of wonder from cosmological inquiry and revelation is not the same without an intellect piecing together the context. You can have bliss and love and friendship all bundled up into one sensation, but without the STORY, without a timeline of events and shared experience that make up a relationship, you are missing a key part of that positive experience. tl;dr: Experiencing pure rewards without relying on triggers is a retarded (or limited) way of experiencing the pleasures of the universe.
4Pablo
Like many others below, your reply assumes that what is valuable is what we value. Yet as far as I can see, this assumption has never been defended with arguments in this forum. Moreover, the assumption seems clearly false. A person whose brain was wired differently than most people may value states of horrible agony. Yet the fact that this person valued these states would not constitute a reason for thinking them valuable. Pain is bad because of how it feels, rather than by virtue of the attitudes that people have towards painful states.
5randallsquared
Well, by definition. I think what you mean is that there are things that "ought to be" valuable which we do not actually value [enough?]. But what evidence is there that there is any "ought" above our existing goals?
0Raoul589
What evidence is there that we should value anything more than what mental states feel like from the inside? That's what the wirehead would ask. He doesn't care about goals. Let's see some evidence that our goals matter.
1jooyous
What would evidence that our goals matter look like?
0randallsquared
Just to be clear, I don't think you're disagreeing with me.
0Raoul589
We disagree if you intended to make the claim that 'our goals' are the bedrock on which we should base the notion of 'ought', since we can take the moral skepticism a step further, and ask: what evidence is there that there is any 'ought' above 'maxing out our utility functions'? A further point of clarification: It doesn't follow - by definition, as you say - that what is valuable is what we value. Would making paperclips become valuable if we created a paperclip maximiser? What about if paperclip maximisers outnumbered humans? I think benthamite is right: the assumption that 'what is valuable is what we value' tends just to be smuggled into arguments without further defense. This is the move that the wirehead rejects. Note: I took the statement 'what is valuable is what we value' to be equivalent to 'things are valuable because we value them'. The statement has another possible meaning: 'we value things because they are valuable'. I think both are incorrect for the same reason.
4randallsquared
I think I must be misunderstanding you. It's not so much that I'm saying that our goals are the bedrock, as that there's no objective bedrock to begin with. We do value things, and we can make decisions about actions in pursuit of things we value, so in that sense there's some basis for what we "ought" to do, but I'm making exactly the same point you are when you ask what evidence there is for any 'ought' above maxing out our utility functions: I know of no such evidence. We do act in pursuit of goals, and that's enough for a positivist morality, and it appears to be the closest we can get to a normative morality. You seem to say that it's not very close at all, and I agree, but I don't see a path to closer.

So, to recap, we value what we value, and there's no way I can see to argue that we ought to value something else. Two entities with incompatible goals are to some extent mutually evil, and there is no rational way out of it, because arguments about "ought" presume a given goal both can agree on.

To the paperclip maximizer, paperclips would certainly be valuable -- ultimately so. If you have some other standard, some objective measurement of value, please show me it. :)

By the way, you can't say the wirehead doesn't care about goals: part of the definition of a wirehead is that he cares most about the goal of stimulating his brain in a pleasurable way. An entity that didn't care about goals would never do anything at all.
0Raoul589
I think that you are right that we don't disagree on the 'basis of morality' issue. My claim is only that which you said above: there is no objective bedrock for morality, and there's no evidence that we ought to do anything other than max out our utility functions. I am sorry for the digression.
0Kawoomba
I agree with the rest of your comment, and depending on how you define "goal" with the quote as well. However, what about entities driven only by heuristics? Those may have developed to pursue a goal, but not necessarily so. Would you call an agent that is only heuristics-driven goal-oriented? (I have in mind simple commands along the lines of "go left when there is a light on the right"; think Braitenberg vehicles minus the evolutionary aspect.)
2randallsquared
Yes, I thought about that when writing the above, but I figured I'd fall back on the term "entity". ;) An entity would be something that could have goals (sidestepping the hard work of exactly what objects qualify).
2A1987dM
See also
2Kawoomba
Hard to be original anymore. Which is a good sign!
3nshepperd
What is valuable is what we value, because if we didn't value it, we wouldn't have invented the word "valuable" to describe it.

By analogy, suppose my favourite colour is red, but I speak a language with no term for "red". So I invent "xylbiz" to refer to red things; in our language, it is pretty much a synonym for "red". All objects that are xylbiz are my favourite colour. "By definition" to some degree, since my liking red is the origin of the definition "xylbiz = red".

But note that: things are not xylbiz because xylbiz is my favourite colour; they are xylbiz because of their physical characteristics. Nor is xylbiz my favourite colour because things are xylbiz; rather xylbiz is my favourite colour because that's how my mind is built. It would, however, be fairly accurate to say that if an object is xylbiz, it is my favourite colour, and it is my favourite colour because it is xylbiz (and because of how my mind is built). It would also be accurate to say that "xylbiz" refers to red things because red is my favourite colour, but this is a statement about words, not about redness or xylbizness.

Note that if my favourite colour changed somehow, so now I like purple and invent the word "blagg" for it, things that were previously xylbiz would not become blagg, though you would notice I stop talking about "xylbiz" (actually, being human, I would probably just redefine "xylbiz" to mean purple rather than define a new word).

By the way, the philosopher would probably ask "what evidence is there that we should value what mental states feel like from the inside?"
3Matt_Simpson
Agreed. This is one reason why I don't like to call myself a utilitarian. Too many cached thoughts/objections associated with that term that just don't apply to what we are talking about.
2Raoul589
As a wirehead advocate, I want to present my response to this as bluntly as possible, since I think my position is more generally what underlies the wirehead position, and I never see this addressed. I simply don't believe that you really value understanding and exploration. I think that your brain (mine too) simply says to you 'yay, understanding and exploration!'. What's more, the only way you even know this much, is from how you feel about exploration - on the inside - when you are considering it or engaging in it. That is, how much 'pleasure' or wirehead-subjective-experience-nice-feelings-equivalent you get from it. You say to your brain: 'so, what do you think about making scientific discoveries?' and it says right back to you: 'making discoveries? Yay!' Since literally every single thing we value just boils down to 'my brain says yay about this' anyway, why don't we just hack the brain equivalent to say 'yay!' as much as possible?
5TheOtherDave
If I were about to fall off a cliff, I would prefer that you satisfy your brain's desire to pull me back by actually pulling me back, not by hacking your brain to believe you had pulled me back while I in fact plunge to my death. And if my body needs nutrients, I would rather satisfy my hunger by actually consuming nutrients, not by hacking my brain to believe I had consumed nutrients while my cells starve and die. I suspect most people share those preferences. That pretty much summarizes my objection to wireheading in the real world. That said, if we posit a hypothetical world where my wireheading doesn't have any opportunity costs (that is, everything worth doing is going to be done as well as I can do it or better, whether I do it or not), I'm OK with wireheading. To be more precise, I share the sentiment that others have expressed that my brain says "Boo!" to wireheading even in that world. But in that world, my brain also says "Boo!" to not wireheading for most of the same reasons, so that doesn't weigh into my decision-making much, and is outweighed by my brain's "Yay!" to enjoyable experiences. Said more simply: if nothing I do can matter, then I might as well wirehead.
4lavalamp
Because my brain says 'boo' about the thought of that.
2Raoul589
It seems, then, that anti-wireheading boils down to the claim that 'wireheading, boo!'. This is not a convincing argument to people whose brains don't say to them 'wireheading, boo!'. My impression was that denisbider's top level post was a call for an anti-wireheading argument more convincing than this.
1lavalamp
I use my current value system to evaluate possible futures. The current me really doesn't like the possible future me sitting stationary in the corner of a room doing nothing, even though that version of me is experiencing lots of happiness. I guess I view wireheading as equivalent to suicide; you're entering a state in which you'll no longer affect the rest of the world, and from which you'll never emerge. No arguments will work on someone who's already wireheaded, but for someone who is considering it, hopefully they'll consider the negative effects on the rest of society. Your friends will miss you, you'll be a resource drain, etc. We already have an imperfect wireheading option; we call it drug addiction. If none of that moves you, then perhaps you should wirehead.
6TheOtherDave
Is the social-good argument your true rejection, here? Does it follow from this that if you concluded, after careful analysis, that you sitting stationary in a corner of a room experiencing various desirable experiences would be a net positive to the rest of society (your friends will be happy for you, you'll consume fewer net resources than if you were moving around, eating food, burning fossil fuels to get places, etc., etc.), then you would reluctantly choose to wirehead, and endorse others for whom the same were true to do so? Or is the social good argument just a soldier here?
8lavalamp
After some thought, I believe that the social good argument, if it somehow came out the other way, would in fact move me to reluctantly change my mind. (Your example arguments didn't do the trick, though-- to get my brain to imagine an argument that would move me, I had to imagine a world where my continued interaction with other humans in fact harms them in ways I cannot do something to avoid; something like I'm an evil person, don't wish to be evil, but it's not possible to cease being evil are all true.) I'd still at least want a minecraft version of wireheading and not a drugged out version, I think.
3TheOtherDave
Cool.
0Raoul589
You will only wirehead if that will prevent you from doing active, intentional harm to others. Why is your standard so high? TheOtherDave's speculative scenario should be sufficient to have you support wireheading, if your argument against it is social good - since in his scenario it is clearly net better to wirehead than not to.
0lavalamp
All of the things he lists are not true for me personally and I had trouble imagining worlds in which they were true of me or anyone else. (Exception being the resource argument-- I imagine e.g. welfare recipients would consume fewer resources but anyone gainfully employed AFAIK generally adds more value to the economy than they remove.)
0TheOtherDave
FWIW, I don't find it hard to imagine a world where automated tools that require fewer resources to maintain than I do are at least as good as I am at doing any job I can do.
0lavalamp
Ah, see, for me that sort of world has human level machine intelligence, which makes it really hard to make predictions about.
0TheOtherDave
Yes, agreed that automated tools with human-level intelligence are implicit in the scenario. I'm not quite sure what "predictions" you have in mind, though.
0lavalamp
That was poorly phrased, sorry. I meant it's difficult to reason about in general. Like, I expect futures with human-level machine intelligences to be really unstable and either turn into FAI heaven or uFAI hell rapidly. I also expect them to not be particularly resource constrained, such that the marginal effects of one human wireheading would be pretty much nil. But I hold all beliefs about this sort of future with very low confidence.
0TheOtherDave
Confidence isn't really the issue, here. If I want to know how important the argument from social good is to my judgments about wireheading, one approach to teasing that out is to consider a hypothetical world in which there is no net social good to my not wireheading, and see how I judge wireheading in that world. One way to visualize such a hypothetical world is to assume that automated tools capable of doing everything I can do already exist, which is to say tools at least as "smart" as I am for some rough-and-ready definition of "smart". Yes, for such a world to be at all stable, I have to assume that such tools aren't full AGIs in the sense LW uses the term -- in particular, that they can't self-improve any better than I can. Maybe that's really unlikely, but I don't find that this limits my ability to visualize it for purposes of the thought experiment. For my own part, as I said in an earlier comment, I find that the argument from social good is rather compelling to me... at least, if I posit a world in which nothing I might do improves the world in any way, I feel much more comfortable about the decision to wirehead.
0lavalamp
Agreed. If you'll reread my comment a few levels above, I mention the resource argument is an exception in that I could see situations in which it applied (I find my welfare recipient much more likely than your scenario, but either way, same argument). It's primarily the "your friends will be happy for you" bit that I couldn't imagine, but trying to imagine it made me think of worlds where I was evil. I mean, I basically have to think of scenarios where it'd really be best for everybody if I suicide. The only difference between wireheading and suicide with regards to the rest of the universe is that suicides consume even fewer resources. Currently I think suicide is a bad choice for everyone with the few obvious exceptions.
0TheOtherDave
Well, you know your friends better than I do, obviously. That said, if a friend of mine moved somewhere where I could no longer communicate with them, but I was confident that they were happy there, my inclination would be to be happy for them. Obviously that can be overridden by other factors, but again it's not difficult to imagine.
0CAE_Jones
That the social aspect is where most of the concern seems to be is interesting. I have to wonder what situation would result in wireheading being permanent (no exceptions), without some kind of contact with the outside world as an option. If the economic motivation behind technology doesn't change dramatically by the time wireheading becomes possible, it'd need to have commercial appeal. Even if a simulation tricks someone who wants to get out into believing they've gotten out, if they had a pre-existing social network that notices them not coming out of it, the backlash could still hurt the providers. I know for me personally, I have so few social ties at present that I don't see any reason not to wirehead. I can think of one person who I might be unpleasantly surprised to discover had wireheaded, but that person seems like he'd only do that if things got so incredibly bad that humanity looked something like doomed. (Where "doomed" is... pretty broadly defined, I guess.). If the option to wirehead was given to me tomorrow, though, I might ask it to wait a few months just to see if I could maintain sufficient motivation to attempt to do anything with the real world.
0lavalamp
I think the interesting discussion to be had here is to explore why my brain thinks of a wire-headed person as effectively dead, but yours thinks they've just moved to Antarctica. I think it's the permanence that makes most of the difference for me. And the fact that I can't visit them even in principle, and the fact that they won't be making any new friends. The fact that their social network will have zero links for some reason seems highly relevant.
5ArisKatsaris
We don't need to be motivated by a single purpose. The part of our brains that is morality and considers what is good for the rest of the world, the part of our brains that just finds it aesthetically displeasing to be wireheaded for whatever reason, the part of our brains that just seeks pleasure, they may all have different votes of different weights to cast.
1Kawoomba
I against my brother, my brothers and I against my cousins, then my cousins and I against strangers. Which bracket do I identify with at the point in time when being asked the question? Which perspective do I take? That's what determines the purpose. You might say - well, your own perspective. But that's the thing, my perspective depends on - other than the time of day and my current hormonal status - the way the question is framed, and which identity level I identify with most at that moment.
0Raoul589
Does it follow from that that you could consider taking the perspective of your post-wirehead self?
0Kawoomba
Consider in the sense of "what would my wire headed self do", yes. Similar to Anja's recent post. However, I'll never (can't imagine the circumstances) be in a state of mind where doing so would seem natural to me.
0TheOtherDave
Yes. But insofar as that's true, lavalamp's idea that Raoul589 should wirehead if the social-good argument doesn't move them is less clear.
2Kindly
So what would "really valuing" understanding and exploration entail, exactly?
2ArisKatsaris
Because my brain does indeed say "yay!" about stuff, but hacking my brain to constantly say "yay!" isn't one of the things my brain says "yay!" about.
[anonymous]

What I'm observing in the various FAI debates is a tendency of people to shy away from wire-heading as something the FAI should do. This reluctance is generally not substantiated or clarified with anything other than "clearly, this isn't what we want". This is not, however, clear to me at all.

I don't want that. There, did I make it clear?

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them.

Since when does shut-up-and-multiply mean "multiply utility by number of beings"?

If you don't want to be "reduced" to an eternal state of bliss, that's tough luck.

Heh heh.

1Raoul589
'I don't want that' doesn't imply 'we don't want that'. In fact, if the 'we' refers to humanity as a whole, then denisbider's position refutes the claim by definition.

denis, most utilitarians here are preference utilitarians, who believe in satisfying people's preferences, rather than maximizing happiness or pleasure.

To those who say they don't want to be wireheaded, how do you really know that, when you haven't tried wireheading? An FAI might reason the same way, and try to extrapolate what your preferences would be if you knew what it felt like to be wireheaded, in which case it might conclude that your true preferences are in favor of being wireheaded.

8Paul Crowley
But it's not because I think there's some downside to the experience that I don't want it. The experience is as good as can possibly be. I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.

The experience is as good as can possibly be.

You don't know how good "as good as can possibly be" is yet.

I want to continue to be someone who thinks things and does stuff, even at a cost in happiness.

But surely the cost in happiness that you're willing to accept isn't infinite. For example, presumably you're not willing to be tortured for a year in exchange for a year of thinking and doing stuff. Someone who has never experienced much pain might think that torture is no big deal, and accept this exchange, but he would be mistaken, right?

How do you know you're not similarly mistaken about wireheading?

How do you know you're not similarly mistaken about wireheading?

I'm a bit skeptical of how well you can use the term "mistaken" when talking about technology that would allow us to modify our minds to an arbitrary degree. One could easily fathom a mind that (say) wants to be wireheaded for as long as the wireheading goes on, but ceases to want it the moment the wireheading stops. (I.e. both prefer their current state of wireheadedness/non-wireheadedness and wouldn't want to change it.) Can we really say that one of them is "mistaken", or wouldn't it be more accurate to say that they simply have different preferences?

EDIT: Expanded this to a top-level post.

1Paul Crowley
Interesting problem! Perhaps there is a maximum utility I assign to happiness, which increasing happiness approaches asymptotically?
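One illustrative bounded form (a sketch, not a claim about my actual utility function):

$$U(h) = U_{\max}\left(1 - e^{-h/c}\right), \qquad \lim_{h \to \infty} U(h) = U_{\max},$$

so no amount of additional happiness h ever buys more than U_max utility, and the marginal utility of happiness falls off as happiness grows.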
2Wei Dai
Yes, I think that's quite possible, but I don't know whether it's actually the case or not. A big question I have is whether any of our values scales up to the size of the universe, in other words, doesn't asymptotically approach an upper bound well before we used up the resources in the universe. See also my latest post http://lesswrong.com/lw/1oj/complexity_of_value_complexity_of_outcome/ where I talk about some related ideas.
1CannibalSmith
The maximum amount of pleasure is finite too.
5byrnema
The FAI can make you feel as though you "think things and do stuff", just by changing your preferences. I don't think any reason beginning with "I want" is going to work, because your preferences aren't fixed or immutable in this hypothetical. Anyway, can you explain why you are attached to your preferences? That "it's better to value this than value that" is incoherent, and the FAI will see that. The FAI will have no objective, logical reason to distinguish between values you currently have and are attached to and values that you could have and be attached to, and might as well modify you rather than modify the universe. (Because the universe has exactly the same value either way.)
4LucasSloan
If any possible goal is considered to have the same value (by what standard?), then the "FAI" is not friendly. If preferences don't matter, then why does them not mattering matter? Why change one's utility function at all, if anything is as good as anything else?
3byrnema
Well I understand I owe money to the Singularity Institute now for speculating on what the output of the CEV would be. (Dire Warnings #3)
3timtyler
That page said: "None may argue on the SL4 mailing list about the output of CEV". A different place, with different rules.
2Kutta
I can't see how a true FAI can change my preferences if I prefer them not being changed. It does not work this way. We want to do what is right, not what would conform to our utility function if we were petunias or paperclip AIs or randomly chosen expected utility maximizers; the whole point of Friendliness is to find out and implement what we care about and not anything else. I'm not only attached to my preferences; I am in great part my preferences. I even have a preference such that I don't want my preferences to be forcibly changed. Thinking about changing meta-preferences quickly leads to a strange loop, but if I look at a specific outcome (like me being turned to orgasmium) I can still make a moral judgement and reject that outcome. The FAI has a perfectly objective, logical reason to do what's right and not else; its existence and utility function are causally traceable back to the humans that designed it. An AI that verges on nihilism and contemplates switching humanity's utility function to something else, partly because the universe has the "exactly same value" either way, is definitely NOT a Friendly AI.
1byrnema
OK, I agree with this comment and this one that if you program an FAI to satisfy our actual preferences with no compromise, then that is what it is going to do. If people have a preference for their values being satisfied in reality, rather than them just being satisfied virtually, then no wire-heading for them. However, if you do allow compromise so that the FAI should modify preferences that contradict each other, then we might be on our way to wire-heading. Eliezer observes there is a significant 'objective component to human moral intuition'. We also value truth and meaning. (This comment strikes me as relevant.) If the FAI finds that these three are incompatible, which preference should it modify? (Background for this comment in case you're not familiar with my obsession -- how could you have missed it? -- is that objective meaning, from any kind of subjective/objective angle, is incoherent.)
5Kutta
First, I just note that this is a full-blown speculation about Friendliness content which should be only done while wearing a gas mask or a clown suit, or after donating to SIAI. Quoting CEV:

"In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together; where the extrapolation converges rather than diverges, where our wishes cohere rather than interfere; extrapolated as we wish that extrapolated, interpreted as we wish that interpreted."

Also: "Do we want our coherent extrapolated volition to satisfice, or maximize? My guess is that we want our coherent extrapolated volition to satisfice - to apply emergency first aid to human civilization, but not do humanity's work on our behalf, or decide our futures for us. If so, rather than trying to guess the optimal decision of a specific individual, the CEV would pick a solution that satisficed the spread of possibilities for the extrapolated statistical aggregate of humankind."

This should address your question. CEV would not typically modify humans on contradictions. But I repeat, this is all speculation. It's not clear to me from your recent posts whether you've read the metaethics sequence and/or CEV; if you haven't, I recommend it whole-heartedly as it's the most detailed discussion of morality available. Regarding your obsession, I'm aware of it and I think I'm able to understand your history and vantage point that enable such distress to arise, although my current self finds the topic utterly trivial and essentially a non-problem.
0tut
How do you define this term?
0Kutta
"Reason" here: a normal, unexceptional instance of cause and effect. It should be understood in a prosaic way, e.g. reason in a causal sense. As for "objective", I borrowed it from the parent post to illustrate my point. To expand on "objective" a bit: everything that exists in physical reality is, and our morality is as physical and extant as a brick (via our physical brains), so what sense does it make to distinguish between "subjective" and "objective," or to refer to any phenomena as "objective" when in reality it is not a salient distinguishing feature. If anything is "objective", then I see no reason why human morality is not, that's why I included the word in my post. But probably the best would be to simply refrain from generating further confusion by the objective/subjective distinction.
2tut
Reason is not the same as cause. Cause is whatever brings something about in the physical world. Reason is a special kind of cause for intentional actions. Specifically a reason for an action is a thought which convinces the actor that the action is good. So an objective reason would need an objective basis for something being called good. I don't know of such a basis, and a bit more than a week ago half of the LW readers were beating up on Byrnema because she kept talking about objective reasons.
0Kutta
OK then, it was a misuse of the word from my part. Anyway, I'd never intend a teleological meaning for reasons discussed here before.
0Paul Crowley
Please read Not for the Sake of Happiness (Alone) which addresses this point.
4Stuart_Armstrong
Same reason I don't try heroin. Wireheading (as generally conceived) imposes a predictable change on the user's utility function; huge and irreversible. Gathering this information is not without cost.
5Wei Dai
I'm not suggesting that you try wireheading now, I'm saying that an FAI can obtain this information without a high cost, and when it does, it may turn out that you actually do prefer to be wireheaded.
4Stuart_Armstrong
That's possible (especially the non-addictive type of wire heading). Though this does touch upon issues of autonomy - I'd like the AI to run it by me, even though it will have correctly predicted that I'd accept.
[anonymous]

But I don't want to be a really big integer!

4Wei Dai
If being wireheaded is like being a really big positive integer, then being anti-wireheaded (i.e., having large amounts of pain directly injected into your brain) must be like being a really big negative integer. So I guess if you had to choose between the two, you'd be pretty much indifferent, right?
5[anonymous]
I wouldn't be indifferent. If I had to choose between being wireheaded and being antiwireheaded, I would choose the former. I don't simply assign utility = 0 to simple pleasure or pain. I just don't think that wireheading is the most fun we could be having. If you asked someone on their deathbed what the best experiences of their life were, they probably wouldn't talk about sex or heroin (yes, this might be an ineffectual status grab or selectively committing only certain types of fun to memory, but I doubt it).
6Wei Dai
This seems like a good example of logical rudeness to me. Your original comment was premised on an equivalence (which you explicitly spelled out later) between being wireheaded and being a large integer. I pointed out that accepting this premise would lead to indifference between wireheading and anti-wireheading. That was obviously meant to be a reductio ad absurdum. But you ignored the reductio and switched to talking about why wireheading is not the most fun we could be having. To be clear, I don't think wireheading is necessarily the most fun we could be having. I just think we don't know enough about the nature of pleasure, fun, and/or preference to decide that right now.
4[anonymous]
You know, you're right. That was a bit of a non sequitur. Back to the original point, I think I'm starting to change my mind about the equivalence between a wirehead and a number (insert disclaimer about how everything is a number): after all, I'd feel worse about killing one than tilting an abacus. Maybe "But I don't want to spend a lot of time doing something so simple" would work for version 3.0.
1Paul Crowley
If you were to ask me now what the best experiences of my life were, some of the sex I've had would definitely be up there, and I've had quite a variety of pleasurable experiences.
0wedrifid
You have a good point buried in there but the conclusion you suggest isn't necessarily implied.
0Paul Crowley
FWIW, I'm seeing his point better than I'm seeing yours at the moment, and I found uninverted's argument convincing until I read Wei_Dai's response. Try being more explicit?
4wedrifid
I can't get all that much more explicit. It's near the level of raw logic and the 'conclusion you suggest' is included as a direct quote in case there was any doubt. Let's see.

A: I don't want to be a really big integer!

B: If you had to choose between being a really big integer and a really big negative integer, you'd be pretty much indifferent.

B is not implied by A. I would have replaced my (grandparent) comment with the phrase 'non sequitur' except I wanted to acknowledge that Wei_Dai is almost certainly considering related issues beyond the conclusion he actually offered.
0Wei Dai
B is implied by C: "Being any integer is of no value", which I took as an unspoken assumption shared between uninverted and me (and I thought it was likely that he accepts this assumption based on A). Does that answer your criticism, or not?
3wedrifid
C seems likely to me based on A only if I assume D (uninverted is silly). That's because there are other beliefs that could make one claim A that are more coherent than C. But let's ignore this little side track and just state what we (probably) all agree on:

* Being a positive integer isn't particularly desirable.
* Wireheading, orgasmium and positive floating point numbers or representations of 3(as many carets as fit in the galaxy here)3 are considered equivalent to 'positive integer' for most intents and purposes.
* Being a negative integer is even worse than being a positive integer.
* Being an integer at all is not that great.
* Just being entropy sounds worse than just being a positive integer.
* The universe ending up the same as if you weren't in it at all sounds worse than being a positive integer. (Depending on intuitive aversion to oblivion and torment some would say worse than being any sort of integer.)
* Fun is better than orgasmic integerness.

If we disagree on these statements then that will actually be interesting. And it is quite possible that there is disagreement even on these. I've often been surprised when people have different intuitions than I expect.
-2Paul Crowley
The force of the argument "I don't want to be a really big integer" is that "being wireheaded takes away what makes me me, and so I stop being a person I can identify with and become a really big integer". If that were so, the same would apply to anti-wireheading, and Wei Dai's question would apply. If you agree that wireheading is more desirable than anti-wireheading, then this and other arguments that it's not more desirable than any other state don't directly apply.
1RobinZ
If we take the alternative reasonable interpretation "takes away almost everything that makes me me", no contradiction appears.
1Paul Crowley
Yes, that makes sense.
0wedrifid
I reject this combination of words and maintain my previous position.
3ShardPhoenix
Too late?
4[anonymous]
"But I don't want to be a structure whose chief component is a very large integer with a straightforward isomorphism to something else, namely some unspecified notion of 'happiness'" is a little too cumbersome.
-2RobertWiblin
You will be gone and something which does want to be a big integer will replace you and use your resources more effectively. Both hedonistic and preference utilitarianism demand it.
1[anonymous]
Preference utilitarianism as I understand it implies nothing more than using utility to rank universe states. That doesn't imply anything about what the most efficient use of matter is. As for hedonistic utilitarians, why would any existing mind want to build something like that or grow into something like that? Further, why would something like that be better at seizing resources?
-2RobertWiblin
I am using (total) preference utilitarianism to mean: "we should act so as to maximise the number of beings' preferences that are satisfied anywhere at any time". "As for hedonistic utilitarians, why would any existing mind want to build something like that or grow into something like that?" Because they are not selfish and they are concerned about the welfare of that being in proportion to its ability to have experiences? "Further, why would something like that be better at seizing resources?" That's a weakness, but at some point we have to start switching from maximising resource capture to using those resources to generate good preference satisfaction (or good experiences if you're a hedonist). At that point a single giant 'utility monster' seems most efficient.
1thomblake
For reference, the "utilitarians" 'round these parts tend to be neither of those.
1RobertWiblin
What are they then?
-2John_Maxwell
You are confusing a thing and its measurement.
2[anonymous]
If a video game uses an unsigned 32 bit integer for your score, then how would that integer differ from your (abstract platonic) score?
0John_Maxwell
My "abstract platonic score" is a measurement of my happiness, and my happiness is determined by the chemical processes of my brain. My video game score is a measure of my success playing a video game. If the number on the screen ceases to correlate well with the sort of success I care about, I will disregard it. I won't be particularly thrilled if my score triples for no apparent reason, and I won't be particularly thrilled if the number you are using to approximate my happiness triples either.
Tiiba

One reason I might not want to become a ball of ecstasy is that I don't really feel it's ME. I'm not even sure it's sentient, since it doesn't learn, communicate, or reason, it just enjoys.

0GodParty
Sentience is exactly just the ability to feel. If it can feel joy, it is sentient.
1hairyfigment
Yes, but for example in highway hypnosis people drive on 'boring' stretches of highway and then don't remember doing so. It seems as if they slowly lose the capacity to learn or update beliefs even slightly from this repetitive activity, and as this happens their sentience goes away. So we haven't established that the sentient ball of uniform ecstasy is actually possible. Meanwhile, a badly programmed AI might decide that a non-sentient or briefly-sentient ball still fits its programmed definition of the goal. Or it might think this about a ball that is just barely sentient.

Not for the Sake of Happiness (Alone) is a response to this suggestion.

I just so happened to read Coherent Extrapolated Volition today. Insofar as this post is supposed to be about "what an FAI should do" (rather than just about your general feeling that objections to wire-heading are irrational), it seems to me that this post all really boils down to navel-gazing once you take CEV into account. Or in other words, this post isn't really about FAI at all.

A number of people mention this one way or another, but an explicit search for "local maximum" doesn't match any specific comment - so I wanted to throw it out here.

Wireheading is very likely to put oneself in a local maximum of bliss. Though a wirehead may not care or even ponder about whether or not there exist greater maxima, it's a consideration that I'd take into account prior to wiring up.

Unless one is omniscient, the act of a permanent (-ish) state of wireheading means foregoing the possibility of discovering a greater point of wireheaded happiness.

I guess the very definition of wireheadedness bakes in the notion that you wouldn't care about that anymore - good for those taking the plunge and hooking up, I suppose. Personally, the universe would have to throw me an above-average amount of negative derivatives before I'd say enough is enough, screw potential for higher maxima, I'll take this one...

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them.

Rewriting that: if you are an altruistic, total utilitarian whose utility function includes only hedonistic pleasure and with no birth-death asymmetry, then the correct thing for the FAI to do is...

0RobertWiblin
Needn't be total - average would suggest creating one single extremely happy being - probably not human. Needn't only include hedonic pleasure - a preference utilitarian might support eliminating humans and replacing them with beings whose preferences are cheap to satisfy (hedonic pleasure being one cheap preference). Or you could want multiple kinds of pleasure, but see hedonic as always more efficient to deliver as proposed in the post.
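A toy way to see the contrast, assuming per-being utility u is increasing and concave in the resources a being receives, with total budget R split among N beings:

$$\text{average: } \max_N\, u\!\left(\tfrac{R}{N}\right) \text{ is attained at } N = 1; \qquad \text{total, e.g. } u(x) = \sqrt{x}:\; N\,u\!\left(\tfrac{R}{N}\right) = \sqrt{NR}, \text{ increasing in } N.$$

So an average utilitarian concentrates everything into one maximally happy being, while a total utilitarian multiplies cheap-to-satisfy beings.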
[anonymous]

If you're considering the ability to rewire one's utility function, why simplify the function rather than build cognitive tools to help people better satisfy the function? What you've proposed is that an AI destroys human intelligence, then pursues some abstraction of what it thinks humans wanted.

Your suggestion is that an AI might assume that the best way to reach its goal of making humans happy (maximizing their utility) is to attain the ends of humans' functions faster and better than we could, and rewire us to be satisfied. There are two problems here that I see.

First, the means are an end. Much of what we value isn't the goal we claim as our objective, but the process of striving for the goal. So here you have an AI that doesn't really understand what humans want.

Second, most humans aren't interested in every avenue of exploration, creation and pleasure. Our interests are distinct. They also do change over time (or some limited set of parameters do anyhow). We don't always notice them change, and when they do, we like to track down what decisions they made that led them to their new preferences. People value the (usually illusory) notion that they control changes to their util...

I'd like to be a wirehead, but have no particular desire to impose that condition on others.

I think people shy away from wireheading because a future full of wireheads would be very boring indeed. People like to think there's more to existence than that. They want to experience something more interesting than eternal pleasure. And that's exactly what an FAI should allow.

Boring from the perspective of any onlookers, not the wirehead.

1Vladimir_Nesov
Related post: In Praise of Boredom.
1RobinZ
It falls below Reedspacer's lower bound, for sure.
1timtyler
Could be more boring. There's more than one kind of wirehead. For example, if everyone were a heroin addict, the world might be more boring - but it would still be pretty interesting.

If all the AI cares about is the utility of each being times the number of beings, and is willing to change utility functions to get there, why should it bother with humans? Humans have all sorts of "extra" mental circuitry associated with being unhappy, which is just taking up space (or computer time in a simulator). Instead, it makes new beings, with easily satisfied utility functions and as little extra complexity as possible.

The end result is just as unFriendly, from a human perspective, as the naive "smile maximizer".

0RobertWiblin
Who cares about humans exactly? I care about utility. If the AI thinks humans aren't an efficient way of generating utility, we should be eliminated.
3gregconen
That's a defensible position, if you care about the utility of beings that don't currently exist, to the extent that you trade the utility of currently existing beings to create new, happier ones. The point is that the result of total utility maximization is unlikely to be something we'd recognize as people, even wireheads or Super Happy People.
2tut
That is nonsense. Utility is usefulness to people. If there are no humans there is no utility. An AI that could become convinced that "humans are not an efficient way to generate utility" would be what is referred to as a paperclipper. This is why I don't like the utile jargon. It makes it sound as though utility was something that could be measured independently of human emotions. Perhaps some kind of substance. But if statements about utility are not translated back to statements about human action or goals then they are completely meaningless.

Utility is usefulness to people. If there are no humans there is no utility.

Utility is goodness measured according to some standard of goodness; that standard doesn't have to reference human beings. In my most optimistic visions of a far future, human values outlive the human race.

3tut
Are we using the same definition of "human being"? We would not have to be biologically identical to what we are now in order to be people. But human values without humans also sounds meaningless to me. There are no value atoms or goodness atoms sitting around somewhere. To be good or to be valuable something must be good or valuable by the standards of some person. So there would have to be somebody around to do the valuing. But the standards don't have to be explicit or objective.
0RobertWiblin
Utility as I care about it is probably the result of information processing. Not clear why information should only be able to be processed in that way by human type minds, let alone fleshy ones.
1thomblake
Starting with the assumption of utilitarianism, I believe you're correct. I think the folks working on this stuff assign a low probability to "kill all humans" being Friendly. But I'm pretty sure people aren't supposed to speculate about the output of CEV.
1RobertWiblin
Probably the proportion of 'kill all humans' AIs that are friendly is low. But perhaps the proportion of FAIs that 'kill all humans' is large.
0gregconen
That depends on your definition of Friendly, which in turn depends on your values.
0Vladimir_Nesov
Maybe the probability you estimate for that to happen is high, but "proportion" doesn't make sense, since FAI is defined as an agent acting for a specific preference, so FAIs have to agree on what to do.
0RobertWiblin
OK, I'm new to this.

Even from a hedonistic perspective, 'Shut up and multiply' wouldn't necessarily equate to many beings experiencing pleasure.

It could come out to one superbeing experiencing maximal pleasure.

Actually, I think thinking this out (how big are entities, who are they?) will lead to good reasons why wireheading is not the answer.

Example: If I'm concerned about my personal pleasure, then maximizing the number of agents isn't a big issue. If my personal identity is less important than total pleasure maximizing, then I get killed and converted to orgasmium (be it one being or many). If my personal identity is more important... well, then we're not just multiplying hedons anymore.

This is a very relevant post for me because I've been asking these questions in one form or another for several months. A framework of objective value (FOV) seems to be precluded by physical materialism. However, without it, I cannot see any coherent difference between being happy (or satisfied) because of what is going on in a simulation and what is going on in reality. Since value (that is, our personal, subjective value) isn't tied to any actual objective good in the universe, it doesn't matter to our subjective fulfillment if the universe is modified ...

4Furcas
Yes there is. The desire to be alive, to live in the real universe, and to continue having the same preferences/values is not at all like the desire to feel like our desires have been fulfilled. Our desires are patterns encoded within our brains that correspond to a (hopefully) possible state of reality. If we were to take the two desires/patterns described above and transform them into two strings of bits, the two strings would not be equal. There is an objective difference between them, just as there is an objective difference between Windows and Mac OS. You seem to believe that because desires are something that can only exist inside a mind, therefore desires can only be about the state of one's mind. This is false; desires can be about all of reality, of which the state of one mind's is only a very small part.
2byrnema
I don't believe this, but I was concerned I would be interpreted this way. I can have a subjective desire that a cup be objectively filled. I fill it with water, and my desire is objectively satisfied. The problem I'm describing is that filling the cup is a terminal value with no objective value. I'm not going to drink it, I'm not going to admire how beautiful it is, I just want it filled because that is my desire. I think that's useless. Since all the "goodness" is in my subjective preference, I might as well desire that an imaginary cup be filled, or write a story in which an imaginary cup is filled. (You may have trouble relating to filling a cup for no reason being a terminal value, but it is a good example because terminal values are equally objectively useless.) But let's consider the example of saving a person from drowning. I understand that the typical preference is to actually save a person from drowning. However, my point is that if I am forced to acknowledge that there is no objective value in saving the person from drowning, then I must admit that my preference to save a person from drowning-actually is no better than a preference to save a person from drowning-virtually. It happens that I have the former preference, but I'm afraid it is incoherent.
4Alicorn
The preference to really save a drowning person rather than virtually is better for the person who is drowning. Of course, best would be for no one to need to be saved from drowning; then you could indulge an interest in virtually saving drowning people for fun as much as you liked without leaving anyone to really drown.
3denisbider
Actually, most games involve virtually killing, rather than virtually saving. I think that says something...
1Ghatanathoah
In most of those games the people you are killing are endangering someone. There are some games where you play a bad guy, but in the majority you're some sort of protector.
3thomblake
Caring about what's right might be as arbitrary (in some objective sense) as caring about what's prime, but we do actually happen to care about what's right.
2Blueberry
It's better, because it's what your preference actually is. There's nothing incoherent about having the preferences you have. In the end, we value some things just because we value them. An alien with different morality and different preferences might see the things we value as completely random. But they matter to us, because they matter to us.
2CronoDAS
There is one way that I know of to handle this; I don't know if you'll find it satisfactory or not, but it's the best I've found so far. You can go slightly meta and evaluate desires as means instead of as ends, and ask which desires are most useful to have. Of course, this raises the question "Useful for what?". Well, one thing desires can be useful for is fulfilling other desires. If I desire that people don't drown, which causes me to act on that desire by saving people from drowning so they can go on to fulfill whatever desires they happen to have, then my desire that people don't drown is a useful means for fulfilling other desires. Wanting to stop fake drownings isn't as useful a desire as wanting to stop actual drownings. And there does seem to be a more-or-less natural reference point against which to evaluate a set of desires: the set of all other desires that actually exist in the real world. As luck would have it, this method of evaluating desires tends to work tolerably well. For example, the desire held by Clippy, the paperclip maximizer, to maximize the number of paperclips in the universe, doesn't hold up very well under this standard; relatively few desires that actually exist get fulfilled by maximizing paperclips. A desire to make only the number of paperclips that other people want is a much better desire. (I hope that made sense.)
0byrnema
It does make sense. However, what would you make of the objection that it is semi-realist? A first-order realist position would claim that what is desired has objective value, while this represents the more subtle belief that the fulfillment of desire has objective value. I do agree -- it is very close to my own original realist position about value. I reasoned that there would be objective (real rather than illusory) value in the fulfillment of the desires of any sentient/valuing being, as some kind of property of their valuing.
1Jack
Maybe just have a rule that says:

1. Fulfill preferences when possible.
2. Change preferences when they are impossible to fulfill.
2CronoDAS
"The strength to change what I can, the ability to accept what I can't, and the wisdom to tell the difference?" Personally, I prefer the Calvin and Hobbes version: the strength to change what I can, the inability to accept what I can't, and the incapacity to tell the difference. ;)

This is one of the most horrifying things I have ever read. Most of the commenters have done a good job of poking holes in it, but I thought I'd add my take on a few things.

This reluctance is generally not substantiated or clarified with anything other than "clearly, this isn't what we want".

Some good and detailed explanations are here, here, here, here, and here.

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of b

... (read more)

If we take for granted that an AI that is friendly to all potential creatures is out of the question - that the only type of FAI we really want is one that's just friendly to us - then the following is the next issue I see.

If we all think it's so great to be autonomous, to feel like we're doing all of our own work, all of our own thinking, all of our own exploration - then why does anyone want to build an AI in the first place?

Isn't the world as it is - lacking an all-powerful AI - perfectly suited to our desires for control and autonomy?

Suppose an AI-friend... (read more)

1AdeleneDawner
If the FAI values that we value independence, and values that we value autonomy - which I think it would have to, to be considered properly Friendly - and if wireheading is a threat to our ability to maintain those values, it doesn't make sense that the FAI would make wireheading available for the asking. It makes much more sense that the FAI would actively protect us from wireheading as it would from any other existential threat, in that case. (Also, just because it would protect us from existential threats, that wouldn't imply that it would protect us from non-existential ones. Part of the idea is that it's very smart: It can figure out the balance of protecting and not-protecting that best preserves its values, and by extension ours.)
-8denisbider

It is worth noting that in Christian theology, heaven is only reached after death, and both going there early and sending people there early are explicitly forbidden.

While an infinite duration of bliss has very high utility, that utility must be finite, since infinite utility anywhere causes things to go awry when handling small probabilities of getting that utility. It is also not the only term in a human utility function; living as a non-wirehead for a while to collect other types of utilons and then getting wireheaded is better than getting wireheaded immediately. Therefore, it seems like the sensible thing for an FAI to do is to offer wireheading as an option, but not to force the issue except in cases of imminent death.
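As a minimal sketch of why the finiteness caveat matters (my illustration, not part of the original comment, with arbitrary probabilities), here is what happens if the bliss term is treated as unbounded, using Python's float('inf') as the stand-in:

```python
# Illustrative sketch (assumed numbers, not from the thread): an unbounded
# utility term makes expected-utility comparisons break down.

heaven = float('inf')          # stand-in for "infinite duration of bliss"

# Two gambles with very different chances of reaching the infinite payoff.
eu_likely = 0.9 * heaven       # expected utility: inf
eu_unlikely = 1e-12 * heaven   # expected utility: also inf

print(eu_likely, eu_unlikely)    # inf inf
print(eu_likely > eu_unlikely)   # False - the gambles cannot be ranked
print(eu_likely - eu_unlikely)   # nan   - their difference is undefined
```

Keeping the utility of eternal bliss finite, however large, is what lets gambles over it be compared at all, which is the comment's point.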

4FinalFormal2
I don't think Christians agree that the utility of heaven is finite. I think they consider it infinite; they're just not interested in thinking about the implications.

This reminds me of a thought I had recently - whether or not God exists, God is coming, as long as humans continue to make technological progress. Although we may regret it (for one brief instant) when he gets here. Of course, our God will be bound by the laws of the universe, unlike the theist God.

The Christian God is an interesting God. He's something of a utilitarian. He values joy and created humans in a joyful state. But he values freedom over joy. He wanted humans to be like himself, living in joy but having free will. Joy is beautiful to him, but... (read more)

In biology, 1 and 2 are proximate goals and 3 is an implementation detail.

[-][anonymous]00

So this begs(?) the question: Our brain's pleasure circuitry is the ultimate arbiter of whether an action is "good" [y/N]?

I would say that our pleasure centre is, like our words-feel-meaningful, our map-feels-like-territory, our world-feels-agent-driven, our consciousness-feels-special, etc., a good-enough evolutionary heuristic that helped our ancestors survive to breed us.

I am at this point tempted to shout "Stop the presses, pleasure isn't the ultimate good!" Yes, wire-heading is of course the best way to fulfil that little feeling-good part of your brain. Is it constructive? Meh.

I would trade my pleasure centre for intuitive multiplication any day.

[This comment is no longer endorsed by its author]

Point 3. doesn't seem to belong in the same category as 1. and 2.

What if we could create a wirehead that made us feel as though we were doing 1 or 2? Would that be satisfactory to more people?

0[anonymous]
Only if they find a way to turn me into this
[-][anonymous]00

Oh heaven
Heaven is a place
A place where nothing
Nothing ever happens

[-][anonymous]-10

The idea of wireheading violates my aesthetic sensibilities. I'd rather keep experiencing some suffering on my quest for increasing happiness, even if my final subjective destination were the same as that of a wirehead (which I doubt), because I consider my present path as important as the end goal. I doubt value and morality can be fully deconstructed through reason.

How is wireheading different from this http://i.imgur.com/wKpLx.jpg ? I think James Hughes makes a very good case for what is wrong with current transhumanist thought in his 'Problems of Transh... (read more)

I'll just comment on what most people are missing, because most reactions seem to be missing a similar thing.

Wei explains that most of the readership are preference utilitarians, who believe in satisfying people's preferences, not maximizing pleasure.

That's fine enough, but if you think that we should take into account the preferences of creatures that could exist, then I find it hard to imagine that a creature would prefer not to exist rather than exist in a state where it permanently experiences amazing pleasure.

Given that potential creatures outnumber exis... (read more)

5timtyler
I doubt anyone here acts in a manner remotely similar to the way utilitarianism recommends. Utilitarianism is an unbiological conception about how to behave - and consequently is extremely difficult for real organisms to adhere to. Real organisms frequently engage in activities such as nepotism. Some people pay lip service to utilitarianism because it sounds nice and signals a moral nature - but they don't actually adhere to it.
1Wei Dai
Eliezer posted an argument against taking into account the preferences of people who don't exist. I think utilitarianism, in order to be consistent, perhaps does need to take into account those preferences, but it's not clear how that would really work. What weights do you put on the utility functions of those non-existent creatures?
3denisbider
I don't find Eliezer's argument convincing. The infinite universe argument can be used as an excuse to do pretty much anything. Why not just torture and kill everyone and everything in our Hubble volume? Surely identical copies exist elsewhere. If there are infinite copies of everyone and everything, then there's no harm done. That doesn't fly. Whatever happens outside of our Hubble volume has no consequence for us, and neither adds to nor alleviates our responsibility. Infinite universe or not, we are still responsible not just for what is, but also for what could be, in the space under our influence.
[-]V_V-30

If you are a utilitarian, and you believe in shut-up-and-multiply, then the correct thing for the FAI to do is to use up all available resources so as to maximize the number of beings, and then induce a state of permanent and ultimate enjoyment in every one of them. This enjoyment could be of any type - it could be explorative or creative or hedonic enjoyment as we know it. The most energy efficient way to create any kind of enjoyment, however, is to stimulate the brain-equivalent directly. Therefore, the greatest utility will be achieved by wire-heading.

... (read more)
[+]Kevin-50