Unknowns comments on Self-modification is the correct justification for updateless decision theory - Less Wrong

Post author: Benja 11 April 2010 04:39PM




Comment author: JGWeissman 11 April 2010 05:29:49PM 1 point

Ok, the intuition pump is problematic: not only do you know what the first digit of pi is, it is also easy for the AI to calculate.

Perhaps I wasn't clear. I meant that Omega does not actually tell you what logical proposition it used. The phrase "some logical proposition" is literally what Omega says, it is not a placeholder for something more specific. All you have to go on is that of the things that Omega believes with probability .5, on average half of them are actually true.

Can you imagine a least convenient possible world in which there is a logical fact for Omega to use that you know the answer to, but that is not trivial for the AI to calculate? Would you agree that it makes sense to enter it into the AI's prior?

No. A properly designed AGI should be able to figure out any logical fact that I know.

My point was that ...

My point was that one particular argument you made does not actually support your point.

Comment author: Unknowns 11 April 2010 06:30:02PM 1 point

I've given such a logical fact before.

"After thinking about it for a sufficiently long time, the AI at some time or other will judge this statement to be false."

This might very well be a logical fact, because its truth or falsehood can be determined from the AI's programming, which is quite logically determinate. But it is quite difficult for the AI to discover the truth of the matter.
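The diagonal structure of that statement can be made concrete in a toy sketch (all names here are hypothetical, not from the original discussion): whatever fixed verdict the "AI" returns on the self-referential statement, the statement's actual truth value is the opposite, so the judge cannot answer correctly.

```python
# Toy demonstration of the self-referential statement's diagonal trap.
# 'judge' stands in for the AI's evaluation procedure; SELF_REFERENTIAL
# stands in for "the AI will judge this statement to be false".

SELF_REFERENTIAL = "judge will judge this statement to be false"

def judge(statement: str) -> bool:
    """Return the toy AI's verdict: True means 'the statement is true'."""
    if statement == SELF_REFERENTIAL:
        return False  # try flipping this to True: the conclusion is the same
    return True

verdict = judge(SELF_REFERENTIAL)

# The statement asserts that the verdict is False, so its actual truth
# value is (verdict is False). The judge would be correct only if
# verdict == (verdict is False), which no boolean satisfies.
actual_truth = (verdict is False)
assert verdict != actual_truth  # the judge is necessarily wrong
```

Either choice of verdict trips the assertion's contrapositive: returning False makes the statement true, and returning True makes it false, which is the point of the comment above.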