
Wei_Dai comments on The Kolmogorov complexity of a superintelligence - Less Wrong Discussion

Post author: Thomas 26 June 2011 12:11PM




Comment author: Wei_Dai 26 June 2011 09:57:09PM

I agree with wedrifid here. We don't seem to have a valid argument showing that "an upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains". I would like to point out that if the K-complexity of friendly superintelligence is greater than that, then there is no way for us to build a friendly superintelligence except by luck (i.e., most Everett branches are doomed to fail to build a friendly superintelligence) or by somehow exploiting uncomputable physics.
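[The implicit bound behind this dilemma can be sketched as follows; this is a standard algorithmic-information argument, not spelled out in the comment, and the symbols B, F, and r are labels introduced here for illustration.]

```latex
% Let B be a full description of the builders (the human brains and
% whatever deterministic process they carry out), and let F be the
% friendly superintelligence they produce. If F is the output of a
% deterministic computation from B, then
\[
  K(F) \;\le\; K(B) + O(1).
\]
% So if K(F) exceeds the total information content of the builders,
% the gap can only be closed by random input bits r:
\[
  K(F) \;\le\; K(B) + \ell(r) + O(1),
\]
% which succeeds only on the branches where r happens to encode the
% missing information ("luck"), or by access to uncomputable physics,
% in which case the bound above no longer applies.
```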