hairyfigment comments on Does Kolmogorov complexity imply a bound on self-improving AI? - Less Wrong

Post author: contravariant, 14 February 2016 08:38AM


Comment author: hairyfigment, 15 February 2016 11:32:59PM, 1 point

I hate to say it, but this seems like an empty triviality. People have already mentioned Eliezer's old post on the subject. People have also touched on the fact that a human brain seems more complex (read: inefficient and arbitrary) than I would expect a good self-improving AGI to be. This at least suggests that the OP does little to illuminate the problem.

If we want to talk about technicalities of dubious relevance, I don't think the definition of Kolmogorov complexity logically implies what you need it to mean. The Turing machine does not need to list "all possible strings" in order to evade the bound; technically it just has to output something other than the solution in addition to the string S. This may turn out to matter somehow for the credibility of option #2, e.g. by allowing empirical tests.
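To make the loophole concrete, here is a minimal sketch (the string S and length are hypothetical placeholders, not from the OP): Kolmogorov complexity bounds the shortest program whose output is *exactly* S, but a constant-size program that enumerates every bitstring of a given length produces S somewhere in its output, no matter how incompressible S is. So a bound on programs that output S alone says nothing about programs that output S plus other strings.

```python
from itertools import product

def enumerate_bitstrings(n):
    """Yield every bitstring of length n.

    The description length of this program is tiny and fixed
    (plus O(log n) bits to specify n), regardless of which
    individual strings appear in its output.
    """
    for bits in product("01", repeat=n):
        yield "".join(bits)

# Hypothetical "algorithmically complex" string for illustration.
S = "10110100"

# S appears in the enumeration even though the enumerator's
# description is far shorter than a program printing S exactly.
assert S in set(enumerate_bitstrings(len(S)))
```

The same point holds without full enumeration: any program whose output merely *contains* S, alongside anything else, already falls outside the class that Kolmogorov complexity constrains.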