
Tyrrell_McAllister comments on Bayesian probability as an approximate theory of uncertainty? - Less Wrong Discussion

16 points · Post author: cousin_it · 26 September 2013 09:16AM




Comment author: Tyrrell_McAllister · 26 September 2013 08:59:31PM · 3 points

I may be missing your point. As you've written about before, things go haywire when the agent knows too much about its own decisions in advance. Hence hacks like "playing chicken with the universe".
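The "playing chicken with the universe" hack can be sketched roughly as follows. This is a hypothetical toy sketch, not the actual construction from the posts under discussion: `proves` stands in for a proof search in the agent's formal theory, and the action names and utilities are made up for illustration.

```python
def decide(actions, utility, proves):
    """Toy proof-based agent with a 'chicken rule'."""
    # Chicken rule: if the agent can prove it will NOT take some
    # action, it takes that action anyway.  A sound theory can then
    # never prove such a statement without becoming inconsistent,
    # so the agent stays uncertain about its own output.
    for a in actions:
        if proves(f"output != {a!r}"):
            return a
    # Fallback: ordinary utility maximization.
    return max(actions, key=utility)

# With a sound (here: trivially silent) proof oracle, the chicken
# branch never fires and the agent just maximizes utility.
result = decide(["a", "b"], {"a": 1, "b": 2}.get, lambda s: False)
# result == "b"
```

The point of the hack is visible in the first loop: the agent's behavior is rigged so that advance knowledge of its own decision would make its proof system unsound, which is why such knowledge cannot arise.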

So the agent can't know too much about its own decisions in advance. But is this an example of indexical uncertainty? Or is it (as it seems to me) an example of a kind of logical uncertainty that an agent apparently needs to have? That is, an agent needs to be sufficiently uncertain, or to have uncertainty of some particular kind, about the output of the algorithm that the agent is. But uncertainty about the output of an algorithm requires only logical uncertainty.