buybuydandavis comments on Decision Theory FAQ - Less Wrong

52 Post author: lukeprog 28 February 2013 02:15PM


Comment author: buybuydandavis 18 March 2013 04:14:44AM -1 points [-]

What Eliezer is talking about (a superintelligence paperclip maximiser) does not have a pleasure-pain axis.

Why does that matter for the argument?

As long as Clippy is in fact optimizing paperclips, what does it matter what/if he feels while he does it?

Pearce seems to be making a claim that Clippy can't predict creatures with pain/pleasure if he doesn't feel them himself.

Maybe Clippy needs pleasure/pain to be able to predict creatures with pleasure/pain. I doubt it, but fine, grant the point. He can still be a paperclip maximizer regardless.

Comment author: wedrifid 18 March 2013 04:53:37AM *  0 points [-]

Why does that matter for the argument?

I fail to comprehend the cause for your confusion. I suggest reading the context again.