Comment author:ialdabaoth
24 May 2013 05:29:18AM
1 point

Thank you! This helps me home in on a point I am sorely confused about, which BrienneStrohl just illustrated nicely:

You're stating that B9's prior that "the ball is blue" is 'very low', as opposed to {Null / NaN}. And that likewise, my prior that "zorgumphs are wogle" is 'very low', as opposed to {Null / NaN}.

Does this mean that my belief system actually contains an uncountable infinitude of priors, one for each possible framing of each possible cluster of facts?

Or, to put my first question more succinctly: what priors should I assign to potential facts to which my current gestalt assigns no semantic meaning whatsoever?

"The ball is blue" only gets assigned a probability by your prior when "blue" is interpreted, not as a word that you don't understand, but as a causal hypothesis about previously unknown laws of physics allowing light to have two numbers assigned to it that you didn't previously know about, plus the one number you do know about. It's like imagining that there's a fifth force appearing in quark-quark interactions a la the "Alderson Drive". You don't need to have seen the fifth force for the hypothesis to be meaningful, so long as the hypothesis specifies how the causal force interacts with you.

If you restrain yourself to only finite sets of physical laws of this sort, your prior will be over countably many causal models.

Causal models are countable? Are irrational constants not part of causal models?

There are only so many distinct states of experience, so yes, causal models are countable. The set of all causal models is a set of functions that map K n-valued past experiential states into L n-valued future experiential states.

This is a monstrously huge set of functions, but still a countable one, so long as K and L are at most countably infinite.
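The counting claim can be checked concretely for tiny state sets (the sizes below are arbitrary): the functions from a K-element set of past states to an L-element set of future states number exactly L^K, so the set is finite whenever K and L are, and a union over growing finite sizes stays countable.

```python
from itertools import product

def all_models(past_states, future_states):
    """Every function mapping each past state to some future state."""
    for outputs in product(future_states, repeat=len(past_states)):
        yield dict(zip(past_states, outputs))

past = ["s0", "s1", "s2"]   # K = 3 past experiential states
future = ["t0", "t1"]       # L = 2 future experiential states
models = list(all_models(past, future))
print(len(models))          # L**K = 2**3 = 8
```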

Note that this assumes that states of experience with zero discernible difference between them are the same thing - eg, if you come up with the same predictions using the first million digits of sqrt(2) and the irrational number sqrt(2), then they're the same model.
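A minimal illustration of that identification (the precision and probe values are invented): two "models" that represent sqrt(2) differently but emit identical finite-precision predictions are, by this criterion, the same model.

```python
import math

PRECISION = 6  # digits an agent can actually discern (arbitrary choice)

def model_exact(x):
    return round(x * math.sqrt(2), PRECISION)

def model_truncated(x):
    sqrt2_approx = 1.41421356237  # finitely many digits of sqrt(2)
    return round(x * sqrt2_approx, PRECISION)

# No discernible difference on any probe => the same model,
# by the zero-discernible-difference criterion above.
probes = [0.5, 1.0, 2.0, 3.25]
print(all(model_exact(x) == model_truncated(x) for x in probes))  # True
```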

But the set of causal models is not the set of experience mappings. The model where things disappear after they cross the cosmological horizon is a different model than standard physics, even though they predict the same experiences. We can differentiate between them because Occam's Razor favors one over the other, and our experiences give us ample cause to trust Occam's Razor.

At first glance, it seems this gives us enough to diagonalize models: one meter outside the horizon is different from model one, two meters is different from model two...
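The diagonal construction being gestured at is just Cantor's argument dressed in horizon language; it can be sketched abstractly (the boolean "models" below are stand-ins invented for illustration): given any enumerated list of models, build a new one that disagrees with model i about what happens i meters past the horizon.

```python
# Sketch: suppose each model is characterized by a yes/no answer
# "do objects persist at d meters past the horizon?" for each d.
# The diagonal model differs from the i-th model at distance i,
# so it cannot appear anywhere in the enumeration.
def diagonal(enumeration):
    """enumeration[i][d] = model i's answer at distance d (booleans)."""
    return [not enumeration[i][i] for i in range(len(enumeration))]

listed = [
    [True, True, True],     # model 0: everything persists
    [False, False, False],  # model 1: nothing persists
    [True, False, True],    # model 2: mixed
]
d = diagonal(listed)
print(all(d[i] != listed[i][i] for i in range(3)))  # True
```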

There might be a way to constrain this based on the models we can assign different probabilities to, given our knowledge and experience, which might get it down to a countable number, but how to do it is not obvious to me.

Er, now I see that Eliezer's post is discussing finite sets of physical laws, which rules out the cosmological-horizon diagonalization. But I think this causal-models-as-function-mappings picture fails in another way: we can't predict the n in n-valued future experiential states. Before the camera was switched, B9 would assign low probability to these high-n-valued experiences. If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision. Since it can't put a bound on the number of values in the L states, the set is uncountable and so is the set of functions.

> we can't predict the n in n-valued future experiential states.

What? Of course we can. It's simplest to see with a computer program: suppose you have M bits of state data; then there are 2^M possible states of experience. What I mean by n-valued is that there is a certain discrete set of possible experiences.
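That count is easy to verify for a small M (the value 3 below is arbitrary):

```python
from itertools import product

M = 3  # bits of state data
states = list(product([0, 1], repeat=M))
print(len(states))  # 2**M = 8 possible states of experience
```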

> If B9 can get a camera that allows it to perceive color, it could also get an attachment that allows it to calculate the permittivity constant to arbitrary precision.

Arbitrary, yes. Unbounded, no. It's still bounded by the amount of physical memory it can use to represent state.

In order to bound the states at a number n, it would need to assign probability zero to ever getting an upgrade allowing it to access log2(n) bits of memory. I don't know how this zero-probability assignment could be justified for any n: there's a non-zero probability that one's model of physics is completely wrong, and once that's gone, there's not much left to make something impossible.
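The memory bound being appealed to is the standard information-theoretic one (sketched here with arbitrary values of n): distinguishing n states requires at least ceil(log2 n) bits of state, so bounding experience at n states amounts to assigning probability zero to ever controlling that much memory.

```python
import math

def bits_needed(n):
    """Minimum bits of state required to distinguish n states."""
    return math.ceil(math.log2(n))

print(bits_needed(8))     # 3
print(bits_needed(1000))  # 10
```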

Comment author:Vaniver
24 May 2013 07:26:10PM
2 points

"The ball is blue" only gets assigned a probability by your prior when "blue" is interpreted, not as a word that you don't understand, but as a causal hypothesis about previously unknown laws of physics allowing light to have two numbers assigned to it that you didn't previously know about, plus the one number you do know about.

Note that a conversant AI will likely have a causal model of conversations, and so there are two distinct things going on here: both "what are my beliefs about words that I don't understand used in a sentence" and "what are my beliefs about physics I don't understand yet." This split is a potential source of confusion, and the conversational model is one reason why the betting argument for quantifying uncertainties meets serious resistance.

To me the conversational part of this seems far less complicated/interesting than the unknown-causal-models part. If I have any 'philosophical' confusion about how to treat unknown strings of English letters, it is not obvious to me what it is.

Comment author:Kawoomba
24 May 2013 02:03:47PM
2 points

You can reserve some slice of your probability space for "here be dragons": the 1 - P("my current gestalt is correct") remainder. Your countably many priors may fight over that real estate.

Also, if you demand that your models be computable (a good assumption, because if they aren't, we're eff'ed anyways), there'll never be an uncountable infinitude of priors.
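The standard backing for this point: computable models correspond to programs, programs are finite strings over a finite alphabet, and finite strings can be enumerated, so there are only countably many. A sketch of such an enumeration, using the usual {0, 1} alphabet:

```python
from itertools import count, product

def enumerate_programs():
    """Yield every finite bitstring: length 0, then 1, then 2, ..."""
    yield ""
    for length in count(1):
        for bits in product("01", repeat=length):
            yield "".join(bits)

# Any fixed program appears at some finite index in this listing,
# which is exactly what countability means.
gen = enumerate_programs()
first_eight = [next(gen) for _ in range(8)]
print(first_eight)  # ['', '0', '1', '00', '01', '10', '11', '000']
```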
