
V_V comments on Bayes Academy: Development report 1 - Less Wrong Discussion

47 Post author: Kaj_Sotala 19 November 2014 10:35PM




Comment author: V_V 20 November 2014 01:35:59PM 7 points

Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 0%.

Priors don't get updated, posteriors do. Moreover, if the posterior probability becomes 0, then you will be unable to recognize monsters afterwards, and you will not be able to further update your model for this variable. It looks like you are overupdating.
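The "stuck at zero" problem V_V describes falls straight out of Bayes' rule: the posterior is proportional to the prior, so a prior of 0 yields a posterior of 0 regardless of the evidence. A minimal sketch (the function name and numbers are illustrative, not from the game):

```python
# Bayes' rule for a binary hypothesis H given evidence E:
#   P(H|E) = P(E|H) P(H) / (P(E|H) P(H) + P(E|~H) P(~H))
def bayes_update(prior, likelihood_h, likelihood_not_h):
    numerator = likelihood_h * prior
    denominator = numerator + likelihood_not_h * (1 - prior)
    return numerator / denominator

# A nonzero prior responds to evidence as expected:
p = bayes_update(0.2, 0.9, 0.1)   # strong evidence for a monster
print(p)                          # rises above 0.2

# But a probability of exactly 0 is frozen forever:
p_zero = bayes_update(0.0, 0.99, 0.01)
print(p_zero)  # 0.0 -- no evidence, however strong, can move it
```

This is the standard argument for never assigning probability 0 or 1 to an empirical proposition (sometimes called Cromwell's rule).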

Comment author: Kaj_Sotala 20 November 2014 03:12:44PM 3 points

Thanks for the comments!

Priors don't get updated, posteriors do.

That's technically true, though it felt to me like such a common abuse of terminology that it could be allowed to slide. That said, if I just said "the probability of the variable", that would avoid the problem. (That probability may still be listed as a "prior variable" the next time it's used in a calculation... but then it's a prior for that calculation, so that's probably okay.)

Moreover, if the posterior probability becomes 0, then you will be unable to recognize monsters afterwards, and you will not be able to further update your model for this variable.

That's true, too. I was thinking that the belief networks aren't supposed to literally represent the protagonist's complete set of beliefs about the world, just some set of explicitly-held hypotheses, and she's still capable of realizing that something she assigned a 0% probability actually happened. After all, the boy could have been looking in her direction because of something that was neither her response nor a monster - say, a beautiful bird. That possibility wasn't even assigned a 0% probability; it wasn't represented in the model in the first place. But it's not like she'd have been incapable of realizing that possibility, had it been pointed out to her - she just didn't think of it.
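The distinction Kaj is drawing - a hypothesis assigned 0% versus a hypothesis absent from the model entirely - can be sketched as follows (a hypothetical illustration; the hypothesis names and numbers are mine, not the game's):

```python
# An explicitly-held hypothesis set over why the boy is looking her way.
# 'bird' isn't assigned 0% -- it simply isn't in the model at all.
beliefs = {"monster": 0.5, "response_to_her": 0.5}

print("bird" in beliefs)  # False: unrepresented, not zero-probability

# Realizing the overlooked possibility means *extending* the model and
# renormalizing, which is a different operation from updating an
# existing probability (and works even if some entry had hit 0).
beliefs["bird"] = 0.2
total = sum(beliefs.values())
beliefs = {h: p / total for h, p in beliefs.items()}
```

The point: "she didn't think of it" corresponds to adding a new node, not to conditioning on evidence within the existing network.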

Comment author: abramdemski 22 November 2014 06:20:01AM 2 points

While I was reading it I got the impression that it was pointing at common mistakes, not just demonstrating correct behavior -- so the protagonist first sets the probability to zero based on naive trust (and because the player is not yet ready to handle an explicit model of the correctness of statements), but this gets corrected later in a realistic way.

If the game made a point of this sort of thing, it would give the (good!) impression that all examples in the game are approximations which need to be refined quite a bit to account for real-life details.

In hindsight, I see it's not doing this effectively. Perhaps when she finds out the kid was wrong she's like "Whoops! We just gave a probability of zero to something which then immediately happened!! That's just about as wrong as you can possibly get. We'd better account for that in our model." Or, something to that effect.