Kingreaper comments on The hard limits of hard nanotech - Less Wrong

Post author: lsparrish 07 November 2010 12:49AM


Comment author: Kingreaper 30 November 2010 09:53:42PM 0 points

By its own moral lights, Clippy ought to stop presenting itself as a paperclip-maximizer.

Clippy can simultaneously present in one account as a paperclip maximiser, and in another as human.

The interplay between Clippy and a fake-human account could serve to create an environment more conducive to Clippy's end-goal.

Or, of course, Clippy might be programmed to achieve vis aims solely through honest communication. That would be an interesting, but incomplete, safeguard on an AI.

Comment author: topynate 30 November 2010 10:08:04PM 1 point

I struggle to understand the mentality that would put safeguards like that on an AI and then instruct it to maximize paperclips.

Comment author: TheOtherDave 30 November 2010 10:24:47PM 2 points

Well, let's just be thankful they didn't create the AI equivalent of a "Hello, world" program. That would be really annoying.

Comment author: Kingreaper 30 November 2010 10:15:42PM 0 points

Well, it would have to be a paperclip manufacturer I suppose.

Either that or a very strange experiment.

Maybe Mythbusters?

Comment author: TheOtherDave 30 November 2010 10:23:24PM 0 points

Clippy can simultaneously present in one account as a paperclip maximiser, and in another as human.

(nods) I stand corrected... that is a far better solution from Clippy's perspective, as it actually allows Clippy to experimentally determine which approach generates the most paperclips.

Or, of course, Clippy might be programmed to achieve vis aims solely through honest communication.

The question would then arise as to whether Clippy considers honest communication to be a paperclip-maximizing sort of thing to do, or if it's more like akrasia -- that is, a persistent cognitive distortion that leads Clippy to do things it considers non-paperclip-maximizing.