
Romashka comments on Open thread, Feb. 01 - Feb. 07, 2016 - Less Wrong Discussion

3 Post author: MrMind 01 February 2016 08:24AM




Comment author: Romashka 01 February 2016 07:56:24PM 2 points [-]

Are there any exercises, similar to calibration questions, where people are 1) asked a question, 2) given some information relevant to the answer, and then required to state how that information changed the probabilities they assign? I mean, if a brain 'does something similar to a Bayesian calculation', then that should be measurable, and maybe trainable, even in 'vaguely stated', word-only problems. And if it turns out to be easier in some domains than others, it would be interesting to know why.
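The exercise described above has a natural scoring rule: compare the posterior a person states after seeing the evidence with the posterior Bayes' rule prescribes. A minimal sketch, with hypothetical numbers chosen purely for illustration (the function names and the scoring-by-absolute-gap choice are assumptions, not anything from the comment):

```python
# Sketch of a calibration exercise: state a prior, reveal evidence with
# known likelihoods, then score how far a stated posterior falls from the
# normative Bayesian one. Numbers below are made up for illustration.

def bayes_posterior(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) via Bayes' rule."""
    numerator = prior * p_e_given_h
    denominator = numerator + (1 - prior) * p_e_given_not_h
    return numerator / denominator

def calibration_error(stated_posterior, prior, p_e_given_h, p_e_given_not_h):
    """Absolute gap between a stated posterior and the Bayesian answer."""
    return abs(stated_posterior - bayes_posterior(prior, p_e_given_h, p_e_given_not_h))

# Example: prior of 30%, evidence three times as likely under H as under not-H.
normative = bayes_posterior(0.30, 0.60, 0.20)
print(round(normative, 4))                                    # 0.5625
print(round(calibration_error(0.50, 0.30, 0.60, 0.20), 4))    # 0.0625
```

A trainable version would repeat this over many questions and track whether the gap shrinks, and whether it shrinks faster in some domains than others.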

Comment author: RomeoStevens 03 February 2016 01:55:01AM 1 point [-]

Fermi estimates, and generating inside-view models before constructing an outside-view one to compare the results, are both kind of in this direction, I think.