Alicorn comments on Open Thread: March 2010 - Less Wrong
You are viewing a comment permalink. View the original post to see all comments and the full post content.
Yes, but in this situation you have so little information that .5 doesn't seem remotely cautious enough. You might as well ask the members of Strigli as they land on Earth what their probability is that the Red Sox will win at a spelling bee next year - does it look obvious that they shouldn't say 50% in that case? .5 isn't the right prior - some eensy prior that any given possibly-made-up alien thing will happen, adjusted up slightly to account for the fact that they did choose this question to ask over others, seems better to me.
Unless there's some reason that they'd suspect it's more likely for us to ask them a trick question whose answer is "No" than one whose answer is "Yes" (although it is probably easier to create trick questions whose answer is "No", and the Striglian could take that into account), 50% isn't a bad probability to assign if asked a completely foreign Yes-No question.
Basically, I think that this and the other problems of this nature discussed on LW are instances of the same phenomenon: when the space of possibilities (for alien culture, Omega's decision algorithm, etc) grows so large and so convoluted as to be utterly intractable for us, our posterior probabilities should be basically our ignorance priors all over again.
It seems to me that even if you know that there is a Doldun game, played by exactly two teams, of which one is Strigli, which game exactly one team will entirely win, 50% is as high as you should go. If you don't have that much precise information, then 50% is an extremely generous upper bound for how likely you should consider a Strigli win. The space of all meaningful false propositions is hugely larger than the space of all meaningful true propositions. For every proposition that is true, you can also contradict it directly, and then present a long list of indirectly contradictory statements. For example: it is true that I am sitting on a blue couch. It is false that I am not on a blue couch - and also false that I am on a red couch, false that I am trapped in carbonite, false that I am beneath the Great Barrier Reef, false that I'm in the Sea of Tranquility, false that I'm equidistant between the Sun and the star Polaris, false that... Basically, most statements you can make about my location are false, and therefore the correct answer to most yes-or-no questions you could ask about my location is "no".
Basically, your prior should be that everything is almost certainly false!
The odds of a random sentence being true are low, but the odds of the alien choosing to give you a true sentence are higher.
A random alien?
No, just a random alien that (1) I encountered and (2) asked me a question.
The two conditions above restrict enormously the general class of “possible” random aliens. Every condition that restricts possibilities brings information, though I can't see a way of properly encoding this information as a prior about the answer to said question.
[ETA:] Note that I don't necessarily accept cousin_it's assertion, I just state my interpretation of it.
Well, let's say I ask you whether all "fnynznaqre"s are "nzcuvovna"s. Prior to using rot13 on this question (and hypothesizing that we hadn't had this particular conversation beforehand), would your prior really be as low as your previous comment implies?
(Of course, it should probably still be under 50% for the reference class we're discussing, but not nearly that far under.)
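For anyone who'd rather not decode by hand: rot13 is its own inverse, and Python ships a built-in codec for it, so the two words above can be unscrambled in a couple of lines (spoiler warning, naturally).

```python
import codecs

# rot13 is a self-inverse substitution cipher; Python's codecs module
# supports it directly via the "rot13" codec.
decoded = [codecs.decode(w, "rot13") for w in ("fnynznaqre", "nzcuvovna")]
print(decoded)
```
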
Given that you chose this question to ask, and that I know you are a human, then screening off this conversation I find myself hovering at around 25% that all "fnynznaqre"s are "nzcuvovna"s. We're talking about aliens. Come on, now that it's occurred to you, wouldn't you ask an E.T. if it thinks the Red Sox have a shot at the spelling bee?
Yes, but I might as easily choose a question whose answer was "Yes" if I thought that a trick question might be too predictable of a strategy.
1/4 seems reasonable to me, given human psychology. If you expand the reference class to all alien species, though, I can't see why the likelihood of "Yes" should go down; that would generally require more information, not less, about what sort of questions the other is liable to ask.
Okay, if you have some reason to believe that the question was chosen to have a specific answer, instead of being chosen directly from questionspace, then you can revise up. I didn't see a reason to think this was going on when the aliens were asking the question, though.
Hmm. As you point out, questionspace is biased towards "No" when represented in human formalisms (if weighting by length, it's biased by nearly the length of the "not" symbol), and it would seem weird if it weren't so in an alien representation. Perhaps that's a reason to revise down and not up when taking information off the table. But it doesn't seem like it should be more than (say) a decibel's worth of evidence for "No".
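To put a number on "a decibel's worth of evidence" (using Jaynes's convention that evidence in decibels is 10·log10 of the likelihood ratio): one decibel is a likelihood ratio of about 1.26, which barely moves a 50% prior. A quick sketch:

```python
# One decibel of evidence corresponds to a likelihood ratio of 10**(1/10).
ratio = 10 ** (1 / 10)               # ~1.259

# Start from even odds (1:1, i.e. P("Yes") = 0.5) and apply one decibel
# of evidence favouring "No" by dividing the odds on "Yes".
prior_odds = 1.0
posterior_odds = prior_odds / ratio   # ~0.794

# Convert odds back to a probability.
p_yes = posterior_odds / (1 + posterior_odds)
print(round(p_yes, 3))               # ~0.443, only a small shift from 0.5
```

So a decibel against "No" takes you from 50% down to roughly 44%, consistent with treating the bias as weak evidence.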
ETA: I think we each just acknowledged that the other has a point. On the Internet, no less!
Isn't it awesome when that happens? :D
I think one important thing to keep in mind when assigning prior probabilities to yes/no questions is that the probabilities you assign should at least satisfy the axioms of probability. For example, you should definitely not end up assigning equal probabilities to the following three events -
I am not sure if your scheme ensures that this does not happen.
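(The specific three events aren't preserved in this excerpt, but the consistency point can be illustrated with hypothetical outcomes of my own invention: for mutually exclusive, exhaustive events, "0.5 for everything" violates the axioms because the probabilities must sum to 1.)

```python
# Hypothetical example (these outcome labels are illustrative, not from
# the original comment): three mutually exclusive, exhaustive outcomes.
# Assigning 0.5 to each looks individually "cautious" but is inconsistent.
naive = {"strigli_wins": 0.5, "strigli_loses": 0.5, "game_never_played": 0.5}
print(sum(naive.values()))   # 1.5 -- violates the axioms of probability

# Any consistent assignment over an exhaustive partition must sum to 1.
consistent = {"strigli_wins": 0.4, "strigli_loses": 0.4, "game_never_played": 0.2}
assert abs(sum(consistent.values()) - 1.0) < 1e-9
```
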
Also, to me, Bayesianism sounds like an iterative way of forming consistent beliefs, where in each step you gather some evidence and update your probability estimates for the truth or falsity of various hypotheses accordingly. But I don't understand how exactly to start. Or in other words, consider the very first iteration of this whole process, where you do not have any evidence whatsoever. What probabilities do you assign to the truth or falsity of different hypotheses?
One way I can imagine is to assign all of them a probability inversely proportional to their Kolmogorov complexities. The good thing about Kolmogorov complexity is that it satisfies the axioms of probability. But I have only seen it defined for strings and such. I don't know how to define Kolmogorov complexity of complicated things like hypotheses. Also, even if there is a way to define it, I can't completely convince myself that it gives a correct prior probability.
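Kolmogorov complexity itself is uncomputable, but the flavour of a complexity-penalising prior can be sketched with a computable proxy: weight each hypothesis (encoded here as a plain description string, an assumption for illustration) by 2 to the minus its length, then normalise over the finite hypothesis set under consideration.

```python
def simplicity_weight(description: str) -> float:
    """Proxy prior weight: shorter descriptions get exponentially more mass.

    This stands in for 2**(-K(h)), with description length replacing the
    (uncomputable) Kolmogorov complexity K.
    """
    return 2.0 ** (-len(description))

# A toy, finite hypothesis set; real Solomonoff-style priors range over
# all programs, which is exactly what makes them uncomputable.
hypotheses = ["0", "01", "0110", "01101001"]
weights = {h: simplicity_weight(h) for h in hypotheses}
total = sum(weights.values())

# Normalising over the finite set yields a proper probability distribution
# in which the shortest hypothesis dominates.
prior = {h: w / total for h, w in weights.items()}
print(prior)
```

This only gestures at the idea; it dodges exactly the hard part the comment raises, namely how to encode an arbitrary hypothesis as a string in a non-arbitrary way.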
But it is true that you are not on a red couch.
Negation is a one-to-one map between true and false propositions.
Since you can understand the alien's question except for the nouns, presumably you'd be able to tell if there was a "not" in there?
Yes, you have made a convincing argument, I think, that given that a proposition does not involve negation, as in the alien's question, it is more likely to be false than true. (At least, if you have a prior for being presented with questions that penalize complexity. The sizes of the spaces of true and false propositions, however, are the same countable infinity.) (Sometimes I see claims in isolation, and so miss that a slightly modified claim is more correct and still supports the same larger claim.)
ETA: We should also note the absence of any disjunctions. It is also true that Alicorn is sitting on a blue couch or a red couch. (Well, maybe not, some time has passed since she reported sitting on a blue couch. But that's not the point.)
This effect may be screened off if, for example, you have a prior that the aliens first choose whether the answer should be yes or no, and then choose a question to match the answer.
That the aliens chose to translate their word as the English 'game' says, I think, a lot.
"Game" is one of the most notorious words in the language for the virtual impossibility of providing a unified definition absent counterexamples.
A family resemblance is still a resemblance.
Could you include a source for this quote, please?
Googling it would've told you that it's from Wittgenstein's Philosophical Investigations.
Simply Googling it would not have signaled any disappointment radical_negative_one may have had that you did not include a citation (preferably with a relevant link) as is normal when making a quote like that.
/me bats the social signal into JGWeissman's court
Omitting the citation, which wasn't really needed, sends the message that I don't wish to stand on Wittgenstein's authority but think the sentiment stands on its own.
Then use your own words. Wittgenstein's are barely readable.
If it doesn't stand on its own, you shouldn't quote it at all - the purpose of the citation is to allow interested parties to investigate the original source, not to help you convince.
Voted up, but I would say the purpose is to do both, to help convince and help further investigation, and more, such as to give credit to the source. Citations benefit the reader, the quoter, and the source.
I definitely agree that willingness to forgo your own benefit as the quoter does not justify ignoring the benefits to the others involved.
"A game is a voluntary attempt to overcome unnecessary obstacles."
This is, perhaps, a necessary condition but not a sufficient one. It is true of almost all hobbies, but I wouldn't classify hobbies such as computer programming or learning to play the piano as games.
I wouldn't class most hobbies as attempts to overcome unnecessary obstacles either -- certainly not playing a musical instrument, where the difficulties are all necessary ones. I might count bird-watching, of the sort where the twitcher's goal is to get as many "ticks" (sightings of different species) as possible, as falling within the definition, but for that very reason I'd regard it as being a game.
One could argue that compulsory games at school are a counterexample to the "voluntary" part. On the other hand, Láadan has a word "rashida": "a non-game, a cruel "playing" that is a game only for the dominant "player" with the power to force others to participate [ra=non- + shida=game]". In the light of that concept, perhaps these are not really games for the children forced to participate.
But whatever nits one can pick in Bernard Suits' definition, I still think it makes a pretty good counter to Wittgenstein's claims about the concept.
Oh, right. Reading "unnecessary" as "artificial", the definition is indeed as good as they come. My first interpretation was somewhat different and, in retrospect, not very coherent.
Hm. For actual aliens I don't think even that's justified, without either knowing more about their psychology, or having some sort of equally problematic prior regarding the psychology of aliens.
I was conditioning on the probability that the question is in fact meaningful to the aliens (more like "Will the Red Sox win the spelling bee?" than like "Does the present king of France's beard undertake differential diagnosis of the psychiatric maladies of silk orchids with the help of a burrowing hybrid car?"). If you assume they're just stringing words together, then there's not obviously a proposition you can even assign probability to.
Hey, maybe they're Zen aliens who always greet strangers by asking meaningless questions.
More sensibly, it seems to me roughly equally plausible that they might ask a meaningful question because the correct answer is negative, which would imply adjusting the prior downward; and unknown alien psychology makes me doubtful of making a sensible guess based on context.