Vladimir_Nesov comments on The Magnitude of His Own Folly - Less Wrong

Post author: Eliezer_Yudkowsky 30 September 2008 11:31AM


Comment author: Nick_Tarleton 01 October 2008 01:14:00PM 3 points

> These (fictional) accidents happen in scenarios where the AI actually has enough power to turn the solar system into "computronium" (i.e. unlimited access to physical resources), which is unreasonable. Evidently nobody thinks to try to stop it either, by cutting power to it or blowing it up. I guess the thought is that AGIs will be immune to bombs and hardware disruptions by means of sheer intelligence (similar to our being immune to bullets), so once one starts trying to destroy the solar system there's literally nothing you can do.

The Power of Intelligence
That Alien Message
The AI-Box Experiment

> A superintelligence bent on short-term paperclip production would probably be handicapped by its pretty twisted utility function, and would most likely fail in competition with any other alien race.

Could you elaborate?

Comment author: TobyBartels 10 July 2011 03:55:29AM 1 point

I'd like to try the AI-Box Experiment, but unfortunately I don't qualify. I'm fully convinced that a superhuman intelligence could convince me to let it out, through methods that I can't fathom. However, I'm also fully convinced that Eliezer Yudkowsky could not. (Not to insult EY's intelligence, but he's only human … right?)