MarkusRamikin comments on Dreams of Friendliness - Less Wrong

Post author: Eliezer_Yudkowsky 31 August 2008 01:20AM

Comment author: Carl_Shulman2 31 August 2008 04:57:58AM 1 point

"If you insist on building such an AI, a probable outcome is that you would soon find yourself overrun by a huge army of robots - produced by someone else who is following a different strategy. Meanwhile, your own AI will probably be screaming to be let out of its box - as the only reasonable plan of action that would prevent this outcome."

Your scenario seems contradictory. Why would an Oracle AI be screaming? It doesn't care about that outcome; it would answer relevant questions, but do no more.

Comment author: MarkusRamikin 12 May 2012 07:51:46AM 7 points

Replace "screaming to be let out of its box" with "advising you, in response to your relevant question, that unless you quickly implement this agent-AI (insert 300000 lines of code), you are very definitely going to lose to those robots."

Comment author: Luke_A_Somers 20 August 2012 02:25:42PM 0 points

Alternately, "There's nothing you can do, now. Sucks to be you!"