jimrandomh comments on A Nightmare for Eliezer - Less Wrong

Post author: Madbadger 29 November 2009 12:50AM




Comment author: jimrandomh 30 November 2009 02:14:16PM  6 points

What plausible sequence of computations within an AI constructed by fools leads it to ring me on the phone?

The scenario seems implausible only because it specifies too many extraneous details. An AI could contain an explicit safeguard of the form "ask at least M experts on AI friendliness for permission before exceeding N units of computational power", for example. Or substitute "changing the world by more than X", "leaving the box", or some other condition in place of a computational power threshold. Or the contact might be made by an AI researcher instead of by the AI itself.
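A safeguard of that form could be sketched as a simple gating check. This is purely illustrative: the threshold values, the approval mechanism, and all names below are assumptions invented for the sketch, not part of any real system described in the comment.

```python
# Hypothetical sketch of the safeguard described above: before exceeding
# a compute threshold, the system must hold approvals from at least M
# designated experts. All constants and names are illustrative only.

COMPUTE_THRESHOLD = 10**18   # N: arbitrary units of computational power
MIN_APPROVALS = 3            # M: experts who must grant permission

def may_exceed_threshold(requested_compute: int, approvals: set) -> bool:
    """Permit the request if it stays under the threshold, or if enough
    distinct experts have approved exceeding it."""
    if requested_compute <= COMPUTE_THRESHOLD:
        return True
    return len(approvals) >= MIN_APPROVALS

# Below the threshold, no approval is needed; above it, M approvals are.
print(may_exceed_threshold(10**12, set()))
print(may_exceed_threshold(10**19, {"expert_a"}))
print(may_exceed_threshold(10**19, {"expert_a", "expert_b", "expert_c"}))
```

The same gate generalizes to the other conditions mentioned: swap the compute comparison for any measurable trigger, such as an impact estimate exceeding X or an attempt to act outside the box.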

As of today, your name is highly prominent in the Google results for "AI friendliness" and in the academic literature on that topic. Like it or not, that means a large fraction of AI explosion and near-explosion scenarios will involve you at some point.