
Dorikka comments on BOOK DRAFT: 'Ethics and Superintelligence' (part 1) - Less Wrong Discussion

Post author: lukeprog 13 February 2011 10:09AM




Comment author: Dorikka 16 February 2011 02:20:56AM 3 points

If you want to, say, stop people from starving to death, would you be satisfied with being plopped onto a holodeck with images of non-starving people? If so, then your stop-people-from-starving-to-death desire is not a desire to optimize reality into a smaller set of possible world-states, but simply a desire to have a set of sensations such that you believe starvation does not exist. The two are very different things.

If you don't understand what I'm saying, the first two paragraphs of this comment might explain it better.

Comment author: nazgulnarsil 16 February 2011 02:25:44AM 0 points

Thanks for clarifying. I guess I'm evil. It's a good thing to know about oneself.

Comment author: Dorikka 16 February 2011 02:30:23AM 0 points

Uh, that was a joke, right?

Comment author: nazgulnarsil 16 February 2011 06:19:09AM 0 points

No.

Comment author: Dorikka 16 February 2011 11:53:37PM 0 points

What definition of evil are you using? I'm having trouble understanding why (how?) you would declare yourself evil, especially evil_nazgulnarsil.

Comment author: nazgulnarsil 17 February 2011 06:06:07AM 4 points

I don't care about suffering independent of my sensory perception of it causing me distress.

Comment author: Dorikka 17 February 2011 03:31:49PM 0 points

Oh. In that case, it might be more precise to say that your utility function does not assign positive or negative utility to the suffering of others (if I'm interpreting your statement correctly). However, I'm curious about whether this statement holds true for you at extremes, so here's a hypothetical.

I'm going to assume that you like ice cream. If you don't like any sort of ice cream, substitute a certain quantity of your favorite cookie. If you could get a scoop of ice cream (or a cookie) at the cost of a million babies' thumbs being cut off, would you take the ice cream/cookie?

If not, then you assign non-zero utility to others' suffering, so it might be true that you care very little, but it's not true that you don't care at all.

Comment author: nazgulnarsil 18 February 2011 07:33:48AM 5 points

I think you misunderstand slightly. Sensory experience includes having the idea communicated to me that my action is causing suffering. I assign negative utility to others' suffering in real life because the thought of such suffering is unpleasant.

Comment author: Dorikka 19 February 2011 01:50:20AM 0 points

Alright. Would you take the offer if Omega promised to remove your memories of the agreement (a million babies' thumbs cut off for a scoop of ice cream) right after you made it, so you could enjoy your ice cream without guilt?

Comment author: nazgulnarsil 19 February 2011 03:31:59AM 1 point

No; at the time of the decision I have the sensory experience of having been the cause of suffering.

I don't feel responsibility toward those who suffer, in the sense that I would choose to holodeck myself rather than stay in reality and try to fix problems. This does not mean that I will cause suffering on purpose.

A better hypothetical dilemma might be one where I could ONLY get access to the holodeck if I caused others to suffer (Cypher from The Matrix).