Imagine that you and I are sitting at a table. Hidden in my lap, I have a jar of beans. We are going to play the traditional game wherein you try to guess the number of beans in the jar. However, you don’t get to see the jar. The rule is that I remove beans from the jar and place them on the table for as long as I like, and then at an arbitrary point ask you how many beans there are in total. That’s all you get to see.
One by one, I remove a dozen beans. As I place the twelfth bean on the table in front of you, I ask: “So how many beans are there total, including those left in the jar?”
“I have no idea,” you reasonably reply.
“Alright, well let’s try to narrow it down,” I say helpfully. “What is the greatest number of beans I could possibly have in total?”
You reason thusly: “Well, given the Copernican principle, this twelfth bean is equally likely to fall anywhere in the full sequence of beans, however many there turn out to be. Thus, for example, all else held equal, there is a 50% chance that it falls within the last 50% of beans removed from the jar – or the first 50%, for that matter.
“But, obviously, it further follows that there is a 70% chance that it will be in the final 70% of beans, a 95% chance that it will be within the last 95% of beans you might remove, and so on. In that last scenario – if the 11 previous beans represent only 5% of the total – there should be at most 11×20 = 220 total beans. Thus, I can be 95% confident that there are no more than 220 beans. Of course, by this reasoning the upper bound grows without limit as I demand ever more confidence (say I wanted to be 99.99% confident?), but 95% confidence is good enough for me! So I’ll take 220 as my upper bound…”
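The arithmetic behind that guess can be sketched in a few lines (a minimal illustration; the function name and the 95% default are mine, not part of the game):

```python
def copernican_upper_bound(beans_seen, confidence=0.95):
    """Gott-style bound: assume the current bean falls, with probability
    `confidence`, within the final `confidence` fraction of all beans.
    Then the beans already on the table are at least a (1 - confidence)
    fraction of the total, so N <= beans_seen / (1 - confidence)."""
    return beans_seen / (1 - confidence)

copernican_upper_bound(11)  # the 11 previous beans -> roughly 220
```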
You are wrong. I have over 1,000 beans left in the jar.
Or: you are (technically) right. There is only one bean left in the jar.
Or: any other possibility.
Either way, it seems obvious that your reasoning is completely disconnected from the actual number of beans left in the jar. Given the evidence you’ve actually seen, it intuitively seems that the total could just as well be any number (12 or greater).
Where did you go wrong?
The proper Bayesian response to evidence is to pick a particular hypothesis – say, “there are fewer than 220 beans,” which is the hypothesis you just pegged at 95% confidence – and then see whether the given evidence (“he stopped at the 12th bean”) updates you towards or away from it.[1]
It seems clear that this kind of update is not what you have done in reasoning about the beans. Rather, you picked a hypothesis that was merely compatible with the evidence – “there are fewer than 220 beans.” You then computed a strange quantity: the fraction of possible worlds wherein the evidence could possibly appear[2], out of possible worlds where the hypothesis is true (i.e. worlds with at least 12 beans, out of worlds with fewer than 220). You then conflated this with the actual posterior probability of the hypothesis.
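To see how sharply those two quantities can diverge, here is a minimal sketch (the names and the uniform-prior cap of 1,000 are my own assumptions; the flat likelihood encodes the arbitrary stopping rule):

```python
def posterior_over_totals(prior, n_seen=12):
    """Posterior over the total bean count N, given only 'he stopped at
    bean n_seen' under an arbitrary stopping rule: every N >= n_seen is
    equally compatible with the evidence, so the likelihood is flat and
    the posterior is just the prior renormalized over N >= n_seen."""
    compatible = {N: p for N, p in prior.items() if N >= n_seen}
    total = sum(compatible.values())
    return {N: p / total for N, p in compatible.items()}

# Uniform prior over N in 1..1000 -- an assumed cap; footnote [1]'s point
# is that the answer hinges entirely on this choice.
prior = {N: 1 / 1000 for N in range(1, 1001)}
post = posterior_over_totals(prior)
p_under_220 = sum(p for N, p in post.items() if N < 220)
# p_under_220 comes out around 0.21 here -- nowhere near the 95% that
# the compatible-worlds reasoning produced.
```

Because the likelihood is flat, the stopping evidence does essentially no work: the posterior is the prior, which is exactly the sense in which the guesser's 95% figure is disconnected from the jar.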
It seems to me that the Doomsday Argument is exactly analogous to this situation, except that the guesser is a sentient 12th bean itself (i.e. a human somewhere in the timeline).
I am not at all confident that I haven’t overlooked some obvious feature of the original argument. Please rebut.
[1] I’ve just tried to do this, but I’m rubbish at math, especially when it includes tricky (to me) things like ranges and summations. (Doesn’t the result depend entirely on your prior probability that there are (0, 220] beans, which would depend on your implicit upper bound for beans to begin with, if you assume there can’t be infinite beans in my jar?)
[2] Not worlds where it actually does appear – remember, I could have stopped on any bean. This chunk of possible worlds includes, e.g., the world where I went all the way to bean 219.
Surveyed.
An odd technique, which I'll rate at +5: whilst already locked into some mundane but necessary task (e.g. grocery shopping, dishes, wading through work e-mails), consciously forcing my brain to complete the "Man, I wish I could be doing [blank] instead" template with some other mundane task that I would normally procrastinate on - then immediately switching to that other task when the first task is done.
For example: "These dishes are taking so long - I really wish I could be... [hijack the train of thought by picking something else on my to-do list]... doing research for that article." I'll then make my brain, while still doing dishes, concretely imagine working on the article - what I'll search for online, in what order I'll attack the sections, even how I'll format it, etc. - in the ordinary way I would normally fantasize about playing a game, watching a TV show, or some other fun task naturally preferable to doing dishes.
By the time the dishes are done, I literally can't wait to jump into the article. After all, these dishes have been keeping me from it for so long!
I have had a shocking amount of personal success with this.
This is a really excellent technique in a lot of contexts.
I offer a word of caution about actually using it with theists, even those less Biblically literate than Yvain's friend: the catch-all excuse that many (not all) theists make for Biblical atrocities is precisely that they were commanded by God, and thus on some version of Divine Command Theory are rendered okay - not that the atrocities are in some observable way actually less bad than those committed by other groups or religions.
Thanks! At the risk of falling prey to the planning fallacy, I should have some draft-worthy stuff next month.
I'm kind of thrilled to find this discussion occurring. I've just managed to actually start writing my long-planned, akrasia-blocked series of rationalist adventures for kids (say, smart-7-year-olds through 12-year-olds). It's a fantasy-adventure, a little bit zany, a little bit dark, and will be intended to promote basic virtues like curiosity, empiricism, changing your mind, and admitting when you don't know.
If and when I have drafts of a few stories, would there be interest in me writing a post explaining the project in more depth and requesting criticism/feedback?
I will note that though consequentialism is a fine ideal theory, at some point you really do have to implement a procedure, which means in practice, all consequentialists will be deontologists.
Agreed. This is usually called “rule utilitarianism” – the idea that, in practice, it actually conserves utils to just make a set of basic rules and follow them, rather than recalculating from scratch the utility of any given action each time you make a decision. Like, “don’t murder” is a pretty safe one, because in the vast majority of situations taking a life will have negative utility. However, it’s still worth distinguishing this sharply from deontology, because if you ever did calculate and find a situation in which your rule resulted in lesser utility – like pushing the fat man in front of the train – you’d break the rule. The rule is an efficiency approximation rather than a fundamental posit.
For much the same reasons that people can be mistaken about their own desires, people can be mistaken about what they would actually consider awesome if they were to accurately model all the facts. E.g. people who post flashing images to epilepsy boards or suicide pictures to battered parents are either 1) failing to truly envision the potential results of their actions – and consequently overvaluing the immediate minor awesomeness of the irony of the post or whatever against the distant, unseen, major anti-awesomeness of seizures/suicides – or 2) actual socio- or psychopaths. Given the infrequency of real sociopathy, it’s safe to assume a lot of the former happens, especially in the impersonal, empathy-sapping environment of the Internet.
I’m Taylor Smith. I’ve been lurking since early 2011. I recently finished a bachelor’s in philosophy but got sort of fed up with it near the end. Discovering the article on belief in belief is what first hooked me on LessWrong, as I’d already had to independently invent this idea to explain a lot of the silly things people around me seemed to be espousing without it actually affecting their behavior. I then devoured the Sequences. Finding LessWrong was like finding all the students and teachers I had hoped to have in the course of a philosophy degree, all in one place. It was like a light switching on. And it made me realize how little I’d actually learned thus far. I’m so grateful for this place.
Now I’m an artist – a writer and a musician.
A frequently-confirmed observation of mine is that art – be it a great sci-fi novel, a protest song, an anti-war film – works as a hack to change the minds of people who are resistant or unaccustomed to pure rational argument. This is especially true of ethical issues; works that go for the emotional gut-punch somehow make people change their minds. (I think there are a lot of overlapping reasons for this phenomenon, but one certainly is that a well-told story or convincing song provides an opportunity for empathy. It can also help people envision the real consequences of a mind-change in an environment of relative emotional safety.) This, even though the mere fact that someone who holds position X made a good piece of art about X doesn’t actually offer much real evidence for the truth of X. Thus, a perilous power. The negative word for the extreme end of this phenomenon is “propaganda.” Conversely, when folks end up agreeing with whatever a work of art brought them to believe, they praise it as “insightful” or some such. You can sort of understand why Plato was worried about having poets – those irrational, un-philosophic things – in his ideal city, swaying his people’s emotions and beliefs.
If I’m going to help save the world, though, I think I do it best through a) giving money to the efficient altruists and the smart people and b) trying to spread true ideas by being a really successful and popular creator.
But that means I have to be pretty damn certain what the true ideas are first, or I’m just spouting pretty, and pretty useless, nonsense.
So thank you, LessWrongers, for all caring about truth together.
Good point; you're right that his reasoning would be correct if he knew that, e.g., I had used a random number generator to pick a number between 1 and (total # of beans) and resolved to ask him, only upon placing that numbered bean, to guess the upper bound on the total.
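That version of the game is easy to check by simulation (a sketch under my own assumptions: true totals drawn uniformly up to 1,000, and the questioning bean chosen uniformly at random as described):

```python
import random

random.seed(0)
trials, hits = 100_000, 0
for _ in range(trials):
    N = random.randint(1, 1000)  # true total, unknown to the guesser
    k = random.randint(1, N)     # bean picked by the random number generator
    # Copernican 95% bound made on bean k: total <= k / 0.05, i.e. 20 * k
    if N <= 20 * k:
        hits += 1
coverage = hits / trials
# coverage comes out close to 0.95: the bound is calibrated
# when the asking-bean really is chosen uniformly at random.
```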
Perhaps to make the bean game more similar to the original problem, I ought to ask for a guess at the total after every bean is placed, since every bean represents an observer who could be fretting about the Doomsday Argument.
Analogously, it would be misleading to imagine that You the Observer were placed in the human timeline at a single randomly-chosen point by, say, Omega, since every bean (or human) is in fact an observer.
Unfortunately I'm getting muddled and am not clear what consequences this has. Thoughts?