Lumifer comments on JFK was not assassinated: prior probability zero events - Less Wrong

20 Post author: Stuart_Armstrong 27 April 2016 11:47AM


Comment author: Lumifer 28 April 2016 04:08:07PM 0 points [-]

Technically, no - an expected utility maximiser doesn't even have a self model.

Why not? Is there something that prevents it from having a self model?

Comment author: Stuart_Armstrong 28 April 2016 04:18:10PM 0 points [-]

You're right, it could, and that's not even the issue here. The issue is that it only has one tool to change beliefs - Bayesian updating - and that tool has no impact with a prior of zero.

Comment author: Lumifer 28 April 2016 04:33:46PM *  0 points [-]

The issue is that it only has one tool to change beliefs - Bayesian updating

That idea has issues. Where is the agent getting its priors? Does it have the ability to acquire new priors, or can it only chain forward from pre-existing priors? And if the latter, is there an ur-prior, the root of the whole prior hierarchy?

How will it deal with an Outside Context Problem?

Comment author: Stuart_Armstrong 29 April 2016 10:45:58AM 0 points [-]

Does it have the ability to acquire new priors [...]?

It might, but that would be a different design. Not that that's a bad thing, necessarily, but that's not what is normally meant by priors.

Comment author: Lumifer 29 April 2016 02:35:20PM 2 points [-]

Priors are a local term. Often enough, a prior was the posterior of the previous iteration.
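[The posterior-becomes-prior chaining Lumifer describes can be shown in a few lines. A minimal sketch; the two-hypothesis setup (a coin biased 0.8 towards heads vs. a fair coin) and the function name are illustrative, not anyone's actual agent design:]

```python
# Two-hypothesis Bayesian update: H = "coin lands heads with p=0.8"
# vs. not-H = "coin is fair". After each observation, the posterior
# on H becomes the prior for the next update.
def update(prior, likelihood_h, likelihood_not_h):
    """One step of Bayes' rule for P(H | one observation)."""
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

p = 0.5  # initial prior on H
for _ in range(3):           # observe three heads in a row
    p = update(p, 0.8, 0.5)  # yesterday's posterior is today's prior
# p has risen from 0.5 to about 0.80
```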

Comment author: Stuart_Armstrong 29 April 2016 04:49:13PM 1 point [-]

But if the probability ever goes to zero, it stays there.
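[The absorbing-state behaviour of a zero prior is easy to see with the same toy update rule. A minimal sketch; the numbers are illustrative:]

```python
# If the prior on a hypothesis is exactly 0, Bayes' rule can never
# revive it: the numerator is 0 * likelihood = 0, whatever the evidence.
def update(prior, likelihood_h, likelihood_not_h):
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

p = 0.0                        # hypothesis already ruled out
for _ in range(100):
    p = update(p, 0.99, 0.01)  # evidence strongly favouring H
# p is still exactly 0.0 after 100 rounds of strong evidence
```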

Comment author: Lumifer 29 April 2016 07:00:36PM *  1 point [-]

Some people say that zero is not a probability :-)

But yes, if you have completely ruled out Z as impossible, you will not consider it any more and it will be discarded forever.

Unless the agent can backtrack and undo the inference chain to fix its mistakes (which is how humans operate, and which would be a highly useful feature for a fallible Bayesian agent, in particular one that cannot guarantee that the list of priors it is considering is complete).
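[One pragmatic alternative to full backtracking, echoing the "zero is not a probability" quip above, is Cromwell's rule: never let a credence reach exactly 0 or 1, so strong enough evidence can always revive a hypothesis. A minimal sketch; the floor value and function name are illustrative assumptions:]

```python
# Clamp credences away from the absorbing states 0 and 1, so a
# hypothesis mistakenly "ruled out" can still be recovered by evidence.
EPS = 1e-9  # illustrative floor; a real agent would choose this carefully

def clamped_update(prior, likelihood_h, likelihood_not_h):
    prior = min(max(prior, EPS), 1 - EPS)  # floor/ceiling the prior
    numerator = prior * likelihood_h
    return numerator / (numerator + (1 - prior) * likelihood_not_h)

p = 0.0                                # mistakenly ruled out
for _ in range(60):
    p = clamped_update(p, 0.99, 0.01)  # strong evidence favouring H
# unlike the plain update rule, p climbs back towards 1
```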