
cousin_it comments on The Kolmogorov complexity of a superintelligence - Less Wrong Discussion

Post author: Thomas 26 June 2011 12:11PM




Comment author: cousin_it 26 June 2011 10:11:35PM  0 points

Technically you can cheat by using the information in human brains to create upload-based superintelligences along the lines of Stuart's proposal, making them do the research for you, and so on. So it seems likely to me that the upper bound should hold... but I appreciate your point and agree that my comment was wrong as stated.

Comment author: wedrifid 27 June 2011 12:57:37AM  2 points

When talking about upper bounds, we cannot afford to cheat by saying humans can probably figure it out. That isn't an upper bound; it is an optimistic prediction about human potential. Moreover, we still need a definition of Friendliness built in, so we can tell whether the thing the human researchers come up with is Friendliness or some other thing with that name. (Even an extremely 'meta' reference to a definition is fine, but it still requires more bits to point to the part of the humans that is able to define Friendliness.)
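The bound being debated here can be sketched as a chain of Kolmogorov-complexity inequalities. This is my notation, not from the thread: $S$ is a superintelligence, $H$ is the information in human brains, $F$ is a definition of Friendliness, and $p_{\text{run}}$ is the fixed program that runs the uploads.

```latex
% The "cheating" upper bound from the parent comment: encode the humans,
% then a fixed program runs the uploads and they do the research. This is
% a valid bound only if p_run is an actual program, not an optimistic
% prediction about what humans would accomplish:
K(S) \le K(H) + |p_{\text{run}}| + O(1)

% wedrifid's objection adds a term: to certify that the output is Friendly
% rather than something else with that name, the description must also
% include the bits needed to state F, or to point at the part of the
% humans that can define it:
K(S_{\text{Friendly}}) \le K(H) + |p_{\text{run}}| + K(F \mid H) + O(1)
```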

Upper bounds are hard. But yes, I know your understanding of the area is solid, and your ancestor comment serves as the definitive answer to the question of the post. I disagree only with the statement as made.