PhilGoetz comments on Metaphilosophical Mysteries - Less Wrong

35 Post author: Wei_Dai 27 July 2010 12:55AM


Comment author: PhilGoetz 27 July 2010 07:59:02PM  4 points

For example, if our universe does in fact contain halting problem oracles, the Bayesian superintelligence with a TM-based universal prior will never be able to believe that.

I think this problem would vanish if you spelled out what "believe" means. The Bayesian superintelligence would quickly learn to trust the opinion of the halting problem oracle; therefore, it would "believe" it.
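The updating process described here can be sketched with a toy model (my construction, not from the thread): a Bayesian agent tracks the purported oracle's reliability on answers it can independently verify (e.g. programs it later observes halting after the oracle said "halts"), using a simple Beta-Bernoulli model. The function name and the Beta(1, 1) prior are assumptions for illustration.

```python
from fractions import Fraction

def posterior_mean_accuracy(successes, failures):
    """Posterior mean of the oracle's accuracy under a Beta(1, 1) prior,
    after observing `successes` verified-correct and `failures` verified-wrong
    answers: mean of Beta(1 + successes, 1 + failures)."""
    return Fraction(1 + successes, 2 + successes + failures)

# Before any evidence, the agent is indifferent: expected accuracy 1/2.
prior_trust = posterior_mean_accuracy(0, 0)

# After 100 verified-correct answers and no errors, expected accuracy
# exceeds 0.99 -- the agent "trusts" the oracle in the predictive sense,
# even if no computable hypothesis in its prior reproduces the oracle.
trust = posterior_mean_accuracy(100, 0)
```

This captures only the predictive sense of "believe": the agent comes to expect the oracle's answers to be correct, which is separate from whether its TM-based universal prior can represent the hypothesis "this device is a genuine halting oracle."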

Comment author: timtyler 30 July 2010 05:02:29PM  -2 points

I am having trouble thinking of a sensible definition of "believe" under which the superintelligence would fail to believe what its evidence tells it is true. It would be especially obvious if the machine were very small. The superintelligence would just use Occam's razor - and figure it out.

Of course, one could imagine a particularly stupid agent, too daft to do this - but then it would hardly be much of a superintelligence.