MaoShan comments on Domesticating reduced impact AIs - All

9 Post author: Stuart_Armstrong 14 February 2013 04:59PM



Comment author: MaoShan 15 February 2013 04:41:06AM 0 points

t=59 minutes...

AI: Hmm, in this past hour I have produced one paperclip, and the only other thing I did was come up with the solutions to all of humanity's problems. I guess I'll just take the next minute to etch them into the paperclip...

t=2 hours...

Experimenters: Phew, at least we're safe from that AI.

Comment author: Stuart_Armstrong 15 February 2013 12:13:54PM 1 point

Extra clarification: in this example, I'm assuming that we don't observe the AI, and that we are very unlikely to detect the paperclip. How to get useful work out of the AI is the next challenge, if this model holds up.

Comment author: CCC 15 February 2013 05:56:41AM 0 points

That seems to be the preferred outcome, yes. In the process, we can (hopefully) safely learn more about AIs in general. Though the AI may choose to sabotage this learning process in order to reduce its future impact...