malcolmocean comments on JFK was not assassinated: prior probability zero events - Less Wrong

Post author: Stuart_Armstrong 27 April 2016 11:47AM


Comment author: malcolmocean 27 April 2016 03:50:44PM 0 points

Fascinating.

Is there any problem that might occur from an agent failing to do enough investigation? (Possibly ever, possibly just before taking some action that ends up being important)

Comment author: Stuart_Armstrong 27 April 2016 07:01:41PM 1 point

It's when it's done a moderate amount of investigation that the error is highest. Disbelieving JFK's assassination makes little difference to most people. If you investigate a little, you start believing in ultra-efficient government conspiracies. If you investigate a lot, you start believing in general miracles. If you do a massive investigation, you start believing in one specific miracle.

Basically there's a problem when JFK's assassination is relevant to your prediction, but you don't have many other relevant samples.

Comment author: MrMind 28 April 2016 03:00:07PM 0 points

If you do a massive investigation, you start believing in one specific miracle.

It will never question its own sanity?

Comment author: Stuart_Armstrong 28 April 2016 03:56:38PM 0 points

Technically, no - an expected utility maximiser doesn't even have a self model. But in practice it might behave in ways that really look like it's questioning its own sanity; I'm not entirely sure.

Comment author: Lumifer 28 April 2016 04:08:07PM 0 points

Technically, no - an expected utility maximiser doesn't even have a self model.

Why not? Is there something that prevents it from having a self model?

Comment author: Stuart_Armstrong 28 April 2016 04:18:10PM 0 points

You're right, it could, and that's not even the issue here. The issue is that it only has one tool to change beliefs - Bayesian updating - and that tool has no impact when the prior is zero.
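The point about a zero prior follows directly from Bayes' rule: the posterior is proportional to likelihood times prior, so a prior of exactly zero forces a posterior of exactly zero, however strong the evidence. A minimal sketch (the function name and the numbers are illustrative, not from the thread):

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """One step of Bayes' rule: returns P(H | E) from P(H) and the likelihoods."""
    num = p_e_given_h * prior
    den = num + p_e_given_not_h * (1 - prior)
    if den == 0:
        return prior  # evidence impossible under the whole model; update undefined
    return num / den

# A nonzero prior moves with the evidence:
print(bayes_update(0.5, 0.9, 0.1))    # → 0.9
# A zero prior is unmoved, however lopsided the likelihoods:
print(bayes_update(0.0, 0.99, 0.01))  # → 0.0
```

Nothing in the rule lets the zero-prior agent recover; the evidence term multiplies the prior, and anything times zero is zero.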

Comment author: Lumifer 28 April 2016 04:33:46PM 0 points

The issue is that it only has one tool to change beliefs - Bayesian updating

That idea has issues. Where is the agent getting its priors? Does it have the ability to acquire new priors, or can it only chain forward from pre-existing priors? And if so, is there an ur-prior, the root of the whole prior hierarchy?

How will it deal with an Outside Context Problem?

Comment author: Stuart_Armstrong 29 April 2016 10:45:58AM 0 points

Does it have the ability to acquire new priors [...]?

It might, but that would be a different design. Not that that's a bad thing, necessarily, but that's not what is normally meant by priors.

Comment author: Lumifer 29 April 2016 02:35:20PM 2 points

"Prior" is a local term. Often enough, a prior was a posterior in the previous iteration.

Comment author: Stuart_Armstrong 29 April 2016 04:49:13PM 1 point

But if the probability ever goes to zero, it stays there.
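The two closing points fit together in one sketch: chain updates so that each posterior becomes the next prior, and watch what happens when one observation is impossible under the hypothesis. The belief climbs with supporting evidence, drops to exactly zero at the impossible observation, and no later evidence (however strong) can revive it. The numbers here are illustrative, not from the thread:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """One step of Bayes' rule: returns P(H | E)."""
    num = p_e_given_h * prior
    den = num + p_e_given_not_h * (1 - prior)
    return num / den if den else prior  # den == 0: evidence impossible under model

belief = 0.2  # hypothetical starting prior for H
# (P(E|H), P(E|not-H)) for four pieces of evidence; the third observation
# has probability zero under H, so it drives the posterior to exactly zero.
evidence = [(0.9, 0.1), (0.9, 0.1), (0.0, 1.0), (0.99, 0.01)]
history = []
for p_h, p_not_h in evidence:
    belief = bayes_update(belief, p_h, p_not_h)  # posterior becomes next prior
    history.append(belief)

print(history)  # rises above 0.9, hits 0.0, then stays at 0.0
```

Zero is an absorbing state of this chain: every later update multiplies the prior, so once the belief reaches zero it stays there, which is exactly why "acquiring new priors" would have to be a different mechanism from Bayesian updating.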