
nshepperd comments on Hard Takeoff - Less Wrong

Post author: Eliezer_Yudkowsky 02 December 2008 08:44PM





Comment author: nshepperd 30 September 2011 03:09:00PM 2 points

> I think it all boils down to a very simple showstopper: considering you are building a perfect simulation, how many atoms do you need to simulate an atom?

Perfect simulation is not the only means of self-knowledge.
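A toy illustration of that point (invented here, not from the thread): analysis of a system's structure can yield the same knowledge as step-by-step simulation at a tiny fraction of the cost, so "it takes more than one atom to simulate an atom" does not rule out self-knowledge.

```python
def by_simulation(n):
    # "Run the system": perform every addition the code specifies.
    return sum(range(n))

def by_analysis(n):
    # Reason about the code's structure instead: Gauss's closed form
    # for 0 + 1 + ... + (n-1), derived without executing a single step.
    return n * (n - 1) // 2

# The two agree wherever simulation is affordable...
assert by_simulation(10_000) == by_analysis(10_000)

# ...and analysis keeps working at scales where brute-force
# simulation is out of reach.
print(by_analysis(10**12))  # 499999999999500000000000
```

The analyzer is vastly simpler than the system it has exact knowledge of, which is the sense in which self-knowledge need not mean atom-for-atom self-simulation.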

As for empirical knowledge, I'm not sure Eliezer expects an AI to take over the world with no observations or input at all, but he does think that people far overestimate the number of observations an effective AI would need.

(Also, for an AI, "building a new AI" and "self-improving" are pretty much the same thing. There isn't anything magic about "self". If the AI can write a better AI, it can write a better AI; whether it calls that code "self" or not makes no difference. Granted, it may be somewhat harder for the AI to make sure the new code has the same goal structure if it's written from scratch, but there's no particular reason it has to start from scratch.)
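A toy sketch of that last point (the `improve` function and its memoization rewrite are invented for illustration, not anything from the post): a code-improving transformation is just a function from source text to source text, and nothing in it distinguishes "my own code" from "some other program's code".

```python
def improve(source: str) -> str:
    """Toy 'improver': add memoization to the first function defined
    in `source`. The transformation is identical whether `source` is
    another program's code or the improver's own -- there is no
    special "self" code path."""
    return "import functools\n" + source.replace(
        "def ", "@functools.cache\ndef ", 1)

# Improving some other program:
FIB = "def fib(n):\n    return n if n < 2 else fib(n-1) + fib(n-2)\n"
env = {}
exec(improve(FIB), env)
print(env["fib"](100))  # fast, thanks to the added cache

# "Self-improvement" is the identical call, just pointed at the
# improver's own source text:
IMPROVE_SRC = (
    "def improve(source):\n"
    "    return 'import functools\\n' + source.replace("
    "'def ', '@functools.cache\\ndef ', 1)\n"
)
print(improve(IMPROVE_SRC).startswith("import functools"))  # True
```

The sketch deliberately ignores the hard part the comment flags: nothing here checks that the rewritten program still pursues the same goals, which is exactly the goal-structure-preservation problem mentioned above.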