
Alicorn comments on Why No Wireheading? - Less Wrong Discussion

16 [deleted] 18 June 2011 11:33PM




Comment author: Alicorn 19 June 2011 07:52:22PM 5 points [-]

> How could you tell the difference? Let's say I claim to have built an MBLS that doesn't contain any sentients whatsoever and invite you to test it for an hour. (I guarantee you it won't rewire any preferences or memories; no cheating here.) Do you expect to not be happy? I have taken great care that emotions like loneliness or guilt won't arise and that you will have plenty of fun. What would be missing?

I'd probably test such a thing for an hour, actually, and for all I know it would be so overwhelmingly awesome that I would choose to stay. But I expect that, assuming my preferences and memories remained intact, I would rather be out among real people. My desire to be among real people is related to, but not dependent on, my tendency towards loneliness, and guilt hadn't even occurred to me. (I suppose I'd think I was being a bit of a jerk if I abandoned everybody without saying goodbye, but presumably I could explain what I was doing first.) I want to interact with, say, my sister, not just with an algorithm that pretends to be her and elicits similar feelings without actually having my sister on the other end.

> Why would you want that? To me, that sounds like deliberately crippling a good solution. What good does it do to be in a low mood when something bad happens? I'd assume that this isn't an easy question to answer and I'm not calling you out on it, but "I want to be able to feel something bad" sounds positively deranged.

In a sense, emotions can be accurate in something like the way beliefs can. I would react similarly badly to the idea of having pleasant, inaccurate beliefs. It would be a mistake (given my preferences about the world) to feel equally happy when someone I care about has died (or something else bad has happened) as when someone I care about gets married (or something else good happens).

> (I can see uses with regards to honest signaling, but then a constant high set-point and a better ability to lie would be preferable.)

Lying is wrong.

> You already have a very unreliable and sparse memory.

I know. It is one of the many terrible things about reality. I hate it.

> I can only think of the intuition "the only way to access some of the good things that happened to me, right now, is through my memory, so if I lost it, those good things would be gone". Orgasmium is always amazing.

Memories are a way to access reality-tracking information. As I said, remembering stuff is not consistently pleasant, but that's not what it's about.

> How can you value something that doesn't have a causal connection to you?

Counterfactually.

> How do you know that? I'm not trying to play the postmodernism card "How do we know anything?", I'm genuinely curious how you arrived at this conclusion.

Well, I wrote everything above in my comment, then noticed that there was this pattern, and didn't immediately come up with a counterexample to it.

I think it's fine if you want to wirehead. I do not advocate interfering with your interest in doing so. But I still don't want it.