
gjm comments on The Kolmogorov complexity of a superintelligence

Post author: Thomas | 26 June 2011 12:11PM | 2 points




Comment author: gjm | 27 June 2011 02:55:27PM | 2 points

It seems likely that the answer depends on how rapidly and how reliably you want your "seed" to be able to turn into an actual working superintelligence. For instance:

Something along the lines of AIXI might have a very short program indeed, but would not do anything useful unless you gave it more computing power and time than the entire universe can offer. (As I understand it, it is an open question whether that amount is actually finite. Let's suppose it is.) A toy sketch of this point follows the second example below.

A lots-of-humans simulator might well produce a superintelligence, but its inhabitants might instead wipe themselves out in a (simulated) nuclear war, or simply turn out not to be smart enough to build a working AI, or something.
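
To make the first point concrete, here is a minimal sketch (an editorial illustration, not part of the original comment) of the brute-force "shortest program that explains the data" structure behind Solomonoff-induction/AIXI-style prediction. The search loop is only a few lines long, but the number of candidate programs doubles with every extra bit of allowed program length. The toy "universal machine" below, which simply repeats its program's bits, is an assumption made so the example terminates; swap in a real universal machine (with a step limit, as in AIXItl) and the same few lines demand more compute than the universe offers.

```python
# Toy sketch: a brute-force "shortest program that explains the data" predictor.
# The description is tiny; the cost doubles with every extra bit of program length.
# The "universal machine" here is deliberately trivial (it repeats its program's
# bits) so the example actually runs; a real universal machine would make the
# identical loop structure computationally infeasible.

from itertools import product

def run_toy_machine(program, n_bits):
    """Toy 'universal' machine: output the program's bits, cycled forever."""
    return [program[i % len(program)] for i in range(n_bits)]

def shortest_explanation(observed, max_len=12):
    """Return the shortest program whose output matches the observation."""
    for length in range(1, max_len + 1):                 # programs in order of length
        for program in product((0, 1), repeat=length):   # 2**length candidates per length
            if run_toy_machine(program, len(observed)) == list(observed):
                return program
    return None

def predict_next_bit(observed):
    """Predict by running the shortest matching program one step further."""
    program = shortest_explanation(observed)
    return None if program is None else program[len(observed) % len(program)]

if __name__ == "__main__":
    data = [0, 1, 1, 0, 1, 1, 0, 1]        # the pattern "011" repeating
    print(shortest_explanation(data))      # -> (0, 1, 1)
    print(predict_next_bit(data))          # -> 1
```

Even in this trivial setting, allowing programs up to length L means on the order of 2^(L+1) candidate runs; that exponential blow-up is what turns a very short description into a computation the universe cannot afford.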