Romashka comments on Open thread, Feb. 01 - Feb. 07, 2016 - Less Wrong Discussion
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Comments (177)
Are there any exercises similar to calibration questions where people are 1) asked a question, 2) given some information relevant to the answer, and then required to state how that information changed the probabilities they assign? I mean, if a brain 'does something similar to a Bayesian calculation', then that should be measurable, and maybe trainable even in 'vaguely stated', word-only problems. And if it turns out to be easier in some domains than others, it would be interesting to know why.
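One way such an exercise could be scored, as a sketch: ask for a prior, show the evidence, ask for a posterior, and compare the stated posterior to the one implied by the odds form of Bayes' rule for some assumed likelihood ratio. The numbers below are illustrative, not from any actual calibration study:

```python
def bayes_posterior(prior, likelihood_ratio):
    """Posterior after one piece of evidence, via the odds form of Bayes' rule:
    posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

def update_error(prior, likelihood_ratio, stated_posterior):
    """How far a person's stated update deviates from the Bayes-consistent one
    (positive = overshooting, negative = undershooting)."""
    return stated_posterior - bayes_posterior(prior, likelihood_ratio)

# Hypothetical trial: prior of 0.30, evidence assumed 4x likelier if the
# hypothesis is true, and the person states a posterior of 0.80.
print(bayes_posterior(0.30, 4.0))       # Bayes-consistent posterior, ~0.63
print(update_error(0.30, 4.0, 0.80))    # ~+0.17, i.e. overshooting
```

Aggregating `update_error` across many questions would give something measurable per person and per domain, which is roughly what the question asks for.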
Fermi estimates, and generating inside-view models before constructing an outside-view one to compare the results, are both kind of in this direction, I think.
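For concreteness, a toy Fermi estimate in the classic piano-tuners style, with made-up round numbers (every figure here is an assumption for illustration, not data):

```python
# Toy Fermi estimate: how many piano tuners in a hypothetical city?
# All inputs are rough, order-of-magnitude guesses.
population = 3_000_000
people_per_household = 2
pianos_per_household = 1 / 20          # one household in twenty owns a piano
tunings_per_piano_per_year = 1
tunings_per_tuner_per_year = 1_000     # ~4 tunings/day * 250 working days

households = population / people_per_household
pianos = households * pianos_per_household
tuners = pianos * tunings_per_piano_per_year / tunings_per_tuner_per_year
print(round(tuners))  # ~75
```

The inside-view/outside-view comparison would then be checking this bottom-up figure against, say, a directory count of actual tuners, and asking where the model's assumptions went wrong.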