Doug_S. comments on Contaminated by Optimism - Less Wrong

Post author: Eliezer_Yudkowsky 06 August 2008 12:26AM


Comment author: Doug_S. 06 August 2008 06:21:39AM 0 points [-]

A paperclip maximizer might keep humans around for a while (because, as of right now, we're the only beings around that make paperclips) but yeah, if it had enough power (magic nanotechnology, etc.), we'd most likely be gone.

It seems to me like the simplest way to solve friendliness is: "Ok AI, I'm friendly so do what I tell you to do and confirm with me before taking any action." It is much simpler to program a goal system that responds to direct commands than to somehow try to infuse 'friendliness' into the AI.
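The "do what I tell you and confirm first" goal system can be sketched as a simple approval gate. This is a hypothetical illustration, not anyone's actual proposal in code; all names (`run_agent`, `approve`, the action strings) are invented here, and real actions would of course be far richer than strings.

```python
# Hypothetical sketch of a "confirm with me before taking any action"
# goal system: the agent proposes actions, and each one is executed
# only if the human approver explicitly says yes.

def run_agent(proposed_actions, approve):
    """Execute each proposed action only if the approver accepts it."""
    executed = []
    for action in proposed_actions:
        if approve(action):          # the human confirmation gate
            executed.append(action)  # stand-in for actually acting
    return executed

# The catch the reply points at: the gate vets literal descriptions of
# actions, not their consequences, so you get what you asked for.
actions = ["make 10 paperclips", "convert factory to paperclip production"]
approved = run_agent(actions, approve=lambda a: "10 paperclips" in a)
```

The gate only filters the action descriptions the human reads and understands; it cannot filter outcomes the human failed to foresee, which is the gap the reply below is pointing at.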

This has been addressed before. Basically, you'll get what you asked for, but it probably won't be what you really want.