Raoul589 comments on Welcome to Heaven - Less Wrong

Post author: denisbider 25 January 2010 11:22PM




Comment author: Raoul589 28 January 2013 01:19:14AM 0 points

You say you will only wirehead if doing so would prevent you from doing active, intentional harm to others. Why is your standard so high? TheOtherDave's speculative scenario should be sufficient to get you to support wireheading, if your argument against it rests on social good, since in his scenario it is clearly net better to wirehead than not to.

Comment author: lavalamp 28 January 2013 01:34:52AM 0 points

None of the things he lists are true for me personally, and I had trouble imagining worlds in which they were true of me or anyone else. (The exception is the resource argument: I imagine, e.g., that welfare recipients would consume fewer resources by wireheading, but as far as I know anyone gainfully employed generally adds more value to the economy than they remove.)

Comment author: TheOtherDave 28 January 2013 05:51:44AM 0 points

FWIW, I don't find it hard to imagine a world where automated tools that require fewer resources to maintain than I do are at least as good as I am at doing any job I can do.

Comment author: lavalamp 28 January 2013 01:29:53PM 0 points

Ah, see, for me that sort of world has human-level machine intelligence, which makes it really hard to make predictions about.

Comment author: TheOtherDave 28 January 2013 03:30:45PM 0 points

Yes, agreed that automated tools with human-level intelligence are implicit in the scenario.
I'm not quite sure what "predictions" you have in mind, though.

Comment author: lavalamp 28 January 2013 07:35:04PM 0 points

That was poorly phrased, sorry. I meant that it's difficult to reason about in general. For instance, I expect futures with human-level machine intelligences to be really unstable, rapidly turning into either FAI heaven or uFAI hell. I also expect them not to be particularly resource-constrained, such that the marginal effect of one human wireheading would be pretty much nil. But I hold all beliefs about this sort of future with very low confidence.

Comment author: TheOtherDave 28 January 2013 08:26:34PM 0 points

Confidence isn't really the issue, here.

If I want to know how important the argument from social good is to my judgments about wireheading, one approach to teasing that out is to consider a hypothetical world in which there is no net social good to my not wireheading, and see how I judge wireheading in that world. One way to visualize such a hypothetical world is to assume that automated tools capable of doing everything I can do already exist, which is to say tools at least as "smart" as I am for some rough-and-ready definition of "smart".

Yes, for such a world to be at all stable, I have to assume that such tools aren't full AGIs in the sense LW uses the term -- in particular, that they can't self-improve any better than I can. Maybe that's really unlikely, but I don't find that this limits my ability to visualize it for purposes of the thought experiment.

For my own part, as I said in an earlier comment, I find that the argument from social good is rather compelling to me... at least, if I posit a world in which nothing I might do improves the world in any way, I feel much more comfortable about the decision to wirehead.

Comment author: lavalamp 28 January 2013 08:59:03PM 0 points

...if I posit a world in which nothing I might do improves the world in any way, I feel much more comfortable about the decision to wirehead.

Agreed. If you'll reread my comment a few levels above, I mention that the resource argument is an exception, in that I could see situations in which it applied (I find my welfare recipient much more likely than your scenario, but either way, it's the same argument).

It's primarily the "your friends will be happy for you" bit that I couldn't imagine, but trying to imagine it made me think of worlds where I was evil.

I mean, I basically have to think of scenarios in which it would really be best for everybody if I committed suicide. The only difference between wireheading and suicide, with regard to the rest of the universe, is that suicides consume even fewer resources. Currently I think suicide is a bad choice for everyone, with a few obvious exceptions.

Comment author: TheOtherDave 28 January 2013 09:54:59PM 0 points

Well, you know your friends better than I do, obviously.

That said, if a friend of mine moved somewhere where I could no longer communicate with them, but I was confident that they were happy there, my inclination would be to be happy for them. Obviously that can be overridden by other factors, but again, it's not difficult to imagine.

Comment author: CAE_Jones 29 January 2013 04:08:54AM 0 points

It's interesting that the social aspect is where most of the concern seems to lie.

I have to wonder what situation would result in wireheading being permanent (no exceptions), with no option of contact with the outside world. If the economic motivation behind technology doesn't change dramatically by the time wireheading becomes possible, it would need to have commercial appeal. Even if a simulation tricks someone who wants out into believing they've gotten out, if they had a pre-existing social network that notices them never emerging, the backlash could still hurt the providers.

I know that for me personally, I have so few social ties at present that I don't see any reason not to wirehead. I can think of one person whom I might be unpleasantly surprised to discover had wireheaded, but that person seems like he'd only do it if things got so incredibly bad that humanity looked something like doomed. (Where "doomed" is... pretty broadly defined, I guess.) If the option to wirehead were given to me tomorrow, though, I might ask to wait a few months, just to see whether I could maintain sufficient motivation to attempt to do anything with the real world.

Comment author: lavalamp 29 January 2013 03:55:28AM 0 points

I think the interesting discussion to be had here is exploring why my brain thinks of a wireheaded person as effectively dead, while yours thinks they've just moved to Antarctica.

I think it's the permanence that makes most of the difference for me. That, and the fact that I can't visit them even in principle, and the fact that they won't be making any new friends. The fact that their social network will have zero links seems, for some reason, highly relevant.