Wei_Dai comments on The Kolmogorov complexity of a superintelligence - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (30)
I agree with wedrifid here. We don't seem to have a valid argument showing that "an upper bound on the complexity of a friendly superintelligence would be the total information content of all human brains". I would also point out that if the K-complexity of a friendly superintelligence is greater than that, then there is no way for us to build one except by luck (i.e., most Everett branches are doomed to fail to build a friendly superintelligence) or by somehow exploiting uncomputable physics.
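To make the gap in the disputed bound explicit, here is a sketch using the standard chain-rule inequality for Kolmogorov complexity (the symbols $F$ and $B$ are my own labels, not from the original argument):

```latex
% Let F be a description of a friendly superintelligence,
% and B the combined information content of all human brains.
% The chain rule for prefix Kolmogorov complexity gives:
K(F) \le K(B) + K(F \mid B) + O(\log K(F, B))
```

So the claimed upper bound $K(F) \lesssim K(B)$ follows only if the conditional term $K(F \mid B)$ is small, i.e., only if some short program could extract a friendly superintelligence from the brain data. That conditional smallness is precisely the step no one has argued for.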