timtyler comments on A Less Wrong singularity article? - Less Wrong

Post author: Kaj_Sotala, 17 November 2009 02:15PM


Comment author: Eliezer_Yudkowsky, 18 November 2009 07:09:15PM

Let there be a mildly insane (after the fashion of a human) paperclipper named Clippy.

Clippy does A. Clippy would do B if it were a sane but bounded rationalist, C if it were an unbounded rationalist, and D if it had perfect veridical knowledge. That is, D is the actual paperclip-maximizing action; C is the theoretically optimal action given all of Clippy's knowledge; and B is as close to C as a bounded rationalist can realistically come.

Is B, C, or D what Clippy Should(Clippy) do? This is a reason to prefer "would-want". Though I suppose a similar question applies to humans. Still, what Clippy should do is give up paperclips and become an FAI. There's no chance of arguing Clippy into that, because Clippy doesn't respond to what we consider a moral argument. So what's the point of talking about what Clippy should do, since Clippy's not going to do it? (Nor is it going to do B, C, or D, just A.)

PS: I'm also happy to talk about what it is rational for Clippy to do, referring to B.

Comment author: timtyler, 21 November 2009 08:40:14AM

Re: "I suppose a similar question applies to humans."

Indeed: the objection applies equally to any agent, including humans.

It doesn't seem to follow that the term "should" is inappropriate. If this objection rules out "should" for Clippy, then the same argument rules it out in the human context as well.