
Lumifer comments on Top-Down and Bottom-Up Logical Probabilities - Less Wrong Discussion

2 Post author: Manfred 22 July 2014 08:53AM


Comment author: Lumifer 22 July 2014 03:57:24PM 1 point

Badly in need of tl;dr

Comment author: Manfred 22 July 2014 06:45:31PM 1 point

tl;dr Some methods of assigning logical probability seem to be convergent. That's neat.

Comment author: Skeptityke 22 July 2014 06:29:55PM 0 points

I disagree strongly, but here is a prototype of one anyway.

There are top-down and bottom-up approaches to logical probabilities. Top-down approaches typically involve distributions selected to fit certain properties; while elegant and easy to apply math to, they are often quite uncomputable. Bottom-up approaches take an agent with some information and ask what it should do to assign probabilities or find out more, leading to a "hackier" probability distribution, but one that tends to be easier to compute. Interestingly enough, given limited computing resources, these two sorts of distributions have striking similarities. They both involve a starting probability distribution modified by iterated consistency checks.

Did I get it mostly right?
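[Editor's note: to make the "starting distribution modified by iterated consistency checks" idea concrete, here is a toy sketch. The sentences, the particular coherence constraints, and the repair rule are all invented for illustration; none of them come from the original post.]

```python
# Toy sketch of a "bottom-up" logical probability assignment:
# start with a crude (possibly incoherent) prior over sentences,
# then repeatedly patch it to satisfy a few coherence constraints.

def iterate_consistency(probs, negations, implications, rounds=100):
    """Repeatedly nudge a probability table toward coherence.

    probs:        dict mapping sentence name -> probability in [0, 1]
    negations:    list of (phi, not_phi) pairs; coherence wants them to sum to 1
    implications: list of (phi, psi) pairs meaning phi -> psi;
                  coherence wants P(psi) >= P(phi)
    """
    p = dict(probs)
    for _ in range(rounds):
        # Check 1: P(phi) + P(not phi) should equal 1; renormalize each pair.
        for phi, neg in negations:
            total = p[phi] + p[neg]
            if total > 0:
                p[phi], p[neg] = p[phi] / total, p[neg] / total
        # Check 2: if phi implies psi, P(psi) must be at least P(phi);
        # on a violation, split the difference between the two sentences.
        for phi, psi in implications:
            if p[psi] < p[phi]:
                mid = (p[phi] + p[psi]) / 2
                p[phi] = p[psi] = mid
    return p

# Start from a deliberately incoherent prior and let the checks run.
prior = {"A": 0.8, "not_A": 0.5, "A_and_B": 0.9}
result = iterate_consistency(
    prior,
    negations=[("A", "not_A")],
    implications=[("A_and_B", "A")],  # A & B implies A
)
```

After enough rounds the table settles into a fixed point where both checks pass, which is the sense in which the final distribution is the prior "modified by iterated consistency checks."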

Comment author: Manfred 22 July 2014 11:24:32PM 0 points

They both involve a starting probability distribution modified by iterated consistency checks.

This part is a bit misleading, because there's nothing special about having a starting distribution and updating it (though that's definitely a bottom-up trait). It's also okay to create the logical probability function all at once, or through Monte Carlo sampling, or other weird stuff we haven't thought up yet.