Wei_Dai comments on Does Solomonoff always win? - Less Wrong

Post author: cousin_it 23 February 2011 08:42PM


Comment author: Wei_Dai 23 February 2011 09:53:23PM 2 points

To recap my position on this: intuitively it seems obvious that we don't want to assign zero probability to the universe being uncomputable, but there are purported arguments showing that even if the universe is uncomputable, an AI programmed with the Solomonoff prior will do nearly as well as any human. I think there are valid counterarguments to all such arguments so far, so it seems reasonable to stick with the intuitive position until we have better arguments one way or another.
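(For readers unfamiliar with the object under discussion: the standard formulation of the Solomonoff prior, not something stated in the comment itself, assigns to a finite string x the weight

M(x) = \sum_{p \,:\, U(p) = x*} 2^{-|p|}

where U is a universal prefix Turing machine and the sum ranges over programs p whose output begins with x. Since all probability mass comes from computable programs, an uncomputable sequence receives no mass of its own, which is the sense in which this prior "assigns zero probability to the universe being uncomputable.")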

Regarding my version of Berry's Paradox, I really don't understand what it shows, myself. But it seems clearly related to the fact that mathematical truth cannot be defined in any formal language. There's a cluster of related problems here that are all perhaps waiting for some single major insight.

Comment author: cousin_it 23 February 2011 10:23:28PM 0 points

Yes, this is my position as well.

Mathematical truth is something of a mystery; I wrote a post about that some time ago, which I remember you liked. There hasn't been any progress since then, so I'll just leave the link here.