Cyan comments on Intuitive supergoal uncertainty - Less Wrong

Post author: JustinShovelain 04 December 2009 05:21AM


Comment author: Cyan 04 December 2009 09:00:34PM

The material I have in mind is Chapter 18 of PT:LOS (Jaynes's Probability Theory: The Logic of Science). You can see the section headings on page 8 (numbered vii because the title page is unnumbered) here. One of the section titles is "Outer and Inner Robots"; when rhollerith says 72%, he's giving the outer robot answer. To give an account of how unstable your probability estimates are, you need to give the inner robot answer.

What does it tell us about that? Doesn't it depend on which piece of evidence we're talking about?

When we receive new evidence, we assign a likelihood function over the probability. (We take the perspective of the inner robot reasoning about what the outer robot will say.) The width of the interval for the probability tells us how narrow the likelihood function has to be to shift the center of that interval by a non-negligible amount.
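One way to make this concrete (a sketch of my own, not notation from the chapter): model the inner robot's state as a Beta distribution over the propensity, so the outer robot reports only its mean. Two states can give the same outer-robot answer of 72% yet react very differently to one new observation, which is exactly the instability the interval width is tracking.

```python
def outer_answer(a, b):
    """Outer robot: collapse the Beta(a, b) state to a single number, its mean."""
    return a / (a + b)

def update(a, b, success):
    """Inner robot: conjugate Beta update on one Bernoulli observation."""
    return (a + 1, b) if success else (a, b + 1)

# Two inner-robot states with the same outer answer of 0.72
# (parameter values are illustrative, not from the post):
narrow = (72.0, 28.0)   # much prior evidence -> stable estimate
wide   = (1.8, 0.7)     # little prior evidence -> unstable estimate

for label, (a, b) in [("narrow", narrow), ("wide", wide)]:
    before = outer_answer(a, b)
    after = outer_answer(*update(a, b, success=False))
    # narrow: 0.72 -> ~0.713; wide: 0.72 -> ~0.514
    print(label, round(before, 3), "->", round(after, 3))
```

The narrow state barely moves after a disconfirming observation; the wide state swings by twenty percentage points, even though both started at the same 72%.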

Do you have to specify a prior over which variables you are likely to observe next?

No.

Comment author: Eliezer_Yudkowsky 05 December 2009 08:42:18AM

That is a strange little chapter, but I should note that if you talk about the probability that you will make some future probability estimate, then the distribution of a future probability estimate does make a good way of talking about the instability of a state of knowledge, as opposed to the notion of talking about the probability of a current probability estimate, which sounds much more like you're doing something wrong.