Comment author: MugaSofer 06 September 2014 05:59:53PM *  2 points

The trouble is, anthropic evidence works. I wish it didn't, because I wish the nuclear arms race hadn't come so close to killing us (and it may well have killed others), and had instead been prevented by some sort of hard-to-observe cooperation.

But it works. Witness the Sleeping Beauty Problem, for example. Or the Sailor's Child, a modified Sleeping Beauty that I could go outside and play a version of right now if I wished.

The winning solution, that gives the right answer, is to use "anthropic" evidence.

If this confuses you, then I (seriously) suggest you re-examine your understanding of how to perform anthropic calculations.


In fact, what you are describing is not "anthropic" evidence, but just ordinary evidence.

I (think I) know that George VI had five siblings (because you told me so). That observation is more likely in a world where he did have five siblings (because I guessed your line of argument pretty early in the post, so I know you have no reason to trick me). Therefore, updating on this observation, it is probable that George VI had five siblings.

Is this an explanation? Sort of.

There might be some special reason why George VI had only five siblings - maybe his parents decided to stop after five, say.

More likely, the true "explanation" is that he just happened to have five siblings, randomly. It wasn't unusually probable, it just happened by chance that it was that number.

And if that is the true explanation, then that is what I desire to believe.

Comment author: KnaveOfAllTrades 06 September 2014 06:48:09PM *  1 point

I don't understand the first part of your comment. Different anthropic principles give different answers to e.g. Sleeping Beauty, and the type of dissolution that seems most promising for that problem doesn't feel like what I'd call 'using anthropic evidence'. (The post I just linked to in particular seems like a conceptual precursor to updateless thinking, which seems to me like the obviously correct perfect-logically-omniscient-reasoner solution to anthropics.)

Can you give a concrete example of what you see as an example of where anthropic reasoning wins (or would win if we performed a simple experiment)? If anything, experiments seem like they would highlight ambiguities that naïve anthropic reasoning misses; if I try to write 'halfer' and 'thirder' computer programs for Sleeping Beauty to see which wins more, I run into the problem of defining the payoffs and thereby rederive the dissolution ata gave in the linked post.
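To make the payoff ambiguity concrete, here is a minimal sketch (my own illustration, not anyone's canonical formulation) of how the 'halfer' and 'thirder' answers each fall out of a different scoring rule for the same experiment:

```python
import random

def simulate(trials=100_000, per_awakening=True, seed=0):
    """Score the guess 'heads' in Sleeping Beauty under two payoff rules.

    Heads -> Beauty is woken once; tails -> she is woken twice.
    per_awakening=True: one unit at stake on each awakening.
    per_awakening=False: one unit at stake per experiment.
    Returns the fraction of stakes on which 'heads' was correct.
    """
    rng = random.Random(seed)
    heads_score = 0
    total = 0  # total units at stake under the chosen rule
    for _ in range(trials):
        heads = rng.random() < 0.5
        awakenings = 1 if heads else 2
        if per_awakening:
            total += awakenings
            if heads:
                heads_score += awakenings  # correct on each awakening
        else:
            total += 1
            if heads:
                heads_score += 1
    return heads_score / total

# Per-awakening stakes: 'heads' is correct on ~1/3 of awakenings (thirder).
# Per-experiment stakes: 'heads' is correct in ~1/2 of experiments (halfer).
```

Neither rule is privileged by the problem statement itself, which is exactly the sense in which writing the programs rederives the dissolution rather than settling the dispute.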

Comment author: Stefan_Schubert 02 September 2014 11:01:05AM *  10 points

Good post. Jon Elster (whose works I much recommend; he has one book precisely on Sour Grapes) studies proverbs in his Alchemies of the Mind: Rationality and the Emotions. He notes that there are many contrary proverbs (i.e. pairs of the form “Every S is P” and “No S is P”), such as "out of sight, out of mind" versus "absence makes the heart grow fonder", and "opposites attract" versus "like attracts like". Elster argues, if I remember correctly, that these denote different mechanisms. On this analysis, there would be one mechanism that goes from absence via, say, more loneliness to more love, whereas another goes from absence via greater possibilities of meeting someone else to less love. Which one is strongest in any individual case depends on various other factors.

If you're interested in this, I'd recommend reading those parts of Elster's book. In any case, I think that there is a lot to your analysis. Many of these proverbs are essentially devices to stop thinking (there is a LW term for this, right?). Rather than trying to weigh pros and cons people make themselves and others stop thinking by dropping a proverb. Many of them rhyme as well, which increases their effect.

Comment author: KnaveOfAllTrades 03 September 2014 01:51:37AM 2 points

Ah, that's good to know. Thanks for the suggestion!

Overly convenient clusters, or: Beware sour grapes

22 KnaveOfAllTrades 02 September 2014 04:04AM

Related to: Policy Debates Should Not Appear One-Sided

There is a well-known fable which runs thus:

“Driven by hunger, a fox tried to reach some grapes hanging high on the vine but was unable to, although he leaped with all his strength. As he went away, the fox remarked 'Oh, you aren't even ripe yet! I don't need any sour grapes.' People who speak disparagingly of things that they cannot attain would do well to apply this story to themselves.”

This gives rise to the common expression ‘sour grapes’, referring to a situation in which one incorrectly claims to not care about something to save face or feel better after being unable to get it.

This seems to be related to a general phenomenon, in which motivated cognition leads one to flinch away from the prospect of an action that is inconvenient or painful in the short term by concluding that a less-painful option strictly dominates the more-painful one.

In the fox’s case, the allegedly-dominating option is believing (or professing) that he did not want the grapes. This spares him the pain of feeling impotent in the face of his initial failure, or the embarrassment of others thinking him to have failed. If he can’t get the grapes anyway, then he might as well erase the fact that he ever wanted them, right? The problem is that considering this line of reasoning will make it more tempting to conclude that the option really was dominating—that he really couldn’t have gotten the grapes. But maybe he could’ve gotten the grapes with a bit more work—by getting a ladder, or making a hook, or Doing More Squats in order to Improve His Vert.

The fable of the fox and the grapes doesn’t feel like a perfect fit, though, because the fox doesn’t engage in any conscious deliberation before giving up on sour grapes; the whole thing takes place subconsciously. Here are some other examples that more closely illustrate the idea of conscious rationalization by use of overly convenient partitions:

The Seating Fallacy:

“Be who you are and say what you feel, because those who mind don't matter and those who matter don't mind.”

This advice is neither good in full generality nor bad in full generality. Clearly there are some situations where a person is worrying too much about other people judging them, or is anxious about inconveniencing others without taking their own preferences into account. But there are also clearly situations (like dealing with an unpleasant, incompetent boss) where fully exposing oneself or saying whatever comes into one’s head is unstrategic and can be outright disastrous. Without taking into account the specifics of the recipient’s situation, the advice is of limited use.

It is convenient to absolve oneself of blame by writing off anybody who challenges one’s first impulse as someone who ‘doesn’t matter’; it means that if something goes wrong, one can avoid the painful task of analysing and modifying one’s behaviour.

In particular, we have the following corollary:

The Fundamental Fallacy of Dating:

“Be yourself and don’t hide who you are. Be up-front about what you want. If it puts your date off, then they wouldn’t have been good for you anyway, and you’ve dodged a bullet!”

In the short term it is convenient not to have to filter or reflect on what one says (face-to-face) or writes (online dating). In the longer term, having no filter is not a smart way to approach dating. As the biases and heuristics program has shown, people are often mistaken about what they would prefer under reflection, and are often inefficient and irrational in pursuing what they want. There are complicated courtship conventions governing timelines for revealing information about oneself and negotiating preferences, which have evolved to work around these irrationalities to the benefit of both parties. In particular, people are dynamically inconsistent, and willing to compromise a lot more later on in a courtship than they thought they would earlier on; it is often a favour to both of you to respect established boundaries around revealing information and getting ahead of the current stage of the relationship.

For those who have not much practised the skill of avoiding triggering Too Much Information reactions, it can feel painful and disingenuous to even try changing their behaviour, and they rationalise it via the Fundamental Fallacy. At any given moment, changing this behaviour is painful and causes a flinch reaction, even though the value of information of trying a different approach might be very high, and might cause less pain (e.g. through reduced loneliness) in the long term.

We also have:

PR rationalization and incrimination:

“There’s already enough ammunition out there if anybody wants to assassinate my character, launch a smear campaign, or perform a hatchet job. Nothing I say at this point could make it worse, so there’s no reason to censor myself.”

This is an overly convenient excuse. It does not take into account, for example, that new statements provide a new opportunity for one to come to the attention of quote miners in the first place, or that different statements might make it more or less easy to seed a smear campaign; ammunition can vary in type and accessibility, so that adding more can increase the convenience of a hatchet job. It might turn out, after weighing the costs and benefits, that speaking honestly is the right decision. But one can’t know that on the strength of a convenient deontological argument that doesn’t consider those costs. Similarly:

“I’ve already pirated so much stuff I’d be screwed if I got caught. Maybe it was unwise and impulsive at first, but by now I’m past the point of no return.”

This again fails to take into account the increased risk of one’s deeds coming to attention; if most prosecutions are triggered by (even if not purely about) offences committed shortly before the prosecution, and you expect to pirate long into the future, then your position now is the same as when you first pirated; if it was unwise then, it’s unwise now.
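The marginal-risk point can be put as a toy model (the numbers here are entirely made up for illustration): if each act of piracy independently carries some small probability of triggering a prosecution, then the next act adds risk regardless of how many acts preceded it.

```python
# Toy model with made-up numbers: each act of piracy independently
# carries probability p of triggering a prosecution.
p = 0.001

def prob_caught(n_offences):
    """Probability of at least one prosecution after n_offences acts."""
    return 1 - (1 - p) ** n_offences

# The risk added by one more act is positive whether it is the first
# act or the 1001st; past offences are sunk, not protective.
marginal_first = prob_caught(1) - prob_caught(0)
marginal_1001st = prob_caught(1001) - prob_caught(1000)
```

The point of the sketch is only that 'I'm past the point of no return' confuses cumulative risk already incurred with the marginal risk of the next act.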

~~~~

The common fallacy in all these cases is that one looks at only the extreme possibilities, and throws out the inconvenient, ambiguous cases. This results in a disconnected space of possibilities that is engineered to allow one to prove a convenient conclusion. For example, the Seating Fallacy throws out the possibility that there are people who mind but also matter; the Fundamental Fallacy of Dating prematurely rules out people who are dynamically inconsistent or are imperfect introspectors, or who have uncertainty over preferences; PR rationalization fails to consider marginal effects and quantify risks in favour of a lossy binary approach.

What are other examples of situations where people (or Less Wrongers specifically) might fall prey to this failure mode?

Comment author: Khoth 26 August 2014 03:30:13PM *  2 points

It's not enough to say "the act of smoking". What's the causal pathway that leads from the lesion to the act of smoking?

Anyway, the smoking lesion problem isn't confusing. It has a clear answer (smoking doesn't cause cancer), and it's only interesting because it can trip up attempts at mathematising decision theory.

Comment author: KnaveOfAllTrades 26 August 2014 04:05:25PM *  2 points

It's not enough to say "the act of smoking". What's the causal pathway that leads from the lesion to the act of smoking?

Exactly, that's part of the problem. You have a bunch of frequencies based on various reference classes, without further information, and you have to figure out how the agent should act on that very limited information, which does not include explicit, detailed causal models. Not all possible worlds are even purely causal, so your point about causal pathways is at best an incomplete solution. That's the hard edge of the problem, and even if the correct answer turns out to be 'it depends' or 'the question doesn't make sense' or involves a dissolution of reference classes or whatever, one paragraph isn't going to provide a solution and cut through the confusions behind the question.

It seems like your argument proves too much because it would dismiss taking Newcomb's problem seriously. 'It's not enough to say the act of two-boxing...' I don't think your attitude would have been productive for the progression of decision theory if people had applied it to other problems that are more mainstream.

It has a clear answer (smoking doesn't cause cancer), and it's only interesting because it can trip up attempts at mathematising decision theory.

That's exactly the point Wei Dai is making in the post I linked!! Decision theory problems aren't necessarily hard to find the correct specific answers to if we imagine ourselves in the situation. The point is that they are litmus tests for decision theories, and they make us draw up more robust general decision processes or illuminate our own decisions.

If you had said

If so, I don't think it can be meaningfully answered, as any answer you come up with while thinking about it on the internet doesn't apply to the smoker, as they're using a different decision-making process.

in response to Newcomb's problem, then most people here would see this as a flinch away from getting your hands dirty engaging with the problem. Maybe you're right and this is a matter whereof we cannot speak, but simply stating that is not useful to those who do not already believe that, and given the world we live in, can come across as a way of bragging about your non-confusion or lowering their 'status' by making it look like they're confused about an easily settled issue, even if that's not what you're (consciously) doing.

If you told a group building robot soccer players that beating their opponents is easy and a five-year-old could do it, or if you told them that they're wasting their time since the robots are using a different soccer-playing process, then that would not be very helpful in actually figuring out how to make/write better soccer-playing robots!

Comment author: Khoth 25 August 2014 08:38:31PM 3 points

If it's not the urge, what is it? The decision algorithm? If so, I don't think it can be meaningfully answered, as any answer you come up with while thinking about it on the internet doesn't apply to the smoker, as they're using a different decision-making process.

Comment author: KnaveOfAllTrades 26 August 2014 02:25:58PM *  1 point

I don't think you're taking the thought experiment seriously enough and are prematurely considering it (dis)solved by giving a Clever solution. E.g.

If it's not the urge, what is it?

Obvious alternative that occurred to me in <5 seconds: It's not the urge, it's the actual act of smoking, or knowing one has smoked. Even if these turn out not quite to work, you don't show any sign of having even thought of them, which I would not expect if you were seriously engaging with the problem looking for a reduction that does not leave us feeling confused.

Edit: In fact, James already effectively said 'the act of smoking' in the comment to which you were replying!

[The problem] becomes interesting if, even after accounting for the urge to smoke, whether you actually smoke provides information on whether you are likely to get lung cancer.

Comment author: buybuydandavis 25 August 2014 09:12:36PM 3 points

and since my brain makes a rational estimate of the probability of my getting lung cancer when deciding how much anxiety to dump on me

Generally, this is false.

Comment author: KnaveOfAllTrades 26 August 2014 01:57:38PM *  4 points

If I took the time to write a comment laying out a decision theoretic problem and received a response like this (and saw it so upvoted), I would be pretty annoyed and suspect that maybe (though not definitely) the respondent was fighting the hypothetical, and that their flippant remark might change the tone of the conversation enough to discourage others engaging with my query.

I've been frustrated enough times by people nitpicking or derailing my attempts to introduce a hypothetical (even if only with not-supposed-to-be-derailing throwaway jokes) that by this point I would guess it's actually rude, in most cases, to respond like this unless you're really, really sure that your nitpick of a premise significantly affects the hypothetical, or that you've got a really good joke. In Should World, people would evaluate the seriousness of a thought experiment on its merits and not by the immediate non-serious responses to it, but experience tells me that's not a property of the world we actually live in.

If I'm interpreting your comment correctly, you're either stating that it's not the case that people's brains make rational probability estimates (which everybody on friggin' LessWrong will already know!), or denying a very specific, intentionally artificial statement about the relation between credences and anxiety that was constructed for a decision theory thought experiment. In either case I'm not sure what the benefits of your comment are.

Am I missing something that you and the upvoters saw in your comment?

Edit: Okay, it occurs to me that maybe you were making an extremely tongue-in-cheek, understated rejection of the premise for comical effect--'Haha, the thought experiments we use are far divorced from the actual vagaries of human thought'. The fact that I found this so hard to get suggests to me that others probably didn't arrive at the intended interpretation of your comment, which still leaves potential for it to have the negative effects I mentioned above. (E.g. maybe someone got your joke immediately, had a hearty laugh, and upvoted, but then the other upvoters thought they were upvoting the literal interpretation of your post.)

Comment author: sixes_and_sevens 25 August 2014 07:58:29PM *  18 points

A few months ago I started using the Ultimate Geography Anki deck after performing quite abysmally on some silly geography quiz that was doing the rounds on Facebook. I now know where all the damn countries are, like an informed citizen of the world. This has proven itself very useful in a variety of ways, not least of which is in reading other material with a geographical backdrop. For example, the chapter in Guns, Germs and Steel on Africa is much more readable if you know where all the African countries are in relation to one another.

(In the process of doing this, coupled with an international event in Sweden, I've learned that the Scandinavian education systems are much, much better than that of the UK at teaching children about the rest of the world.)

The geography deck was particularly easy to slip into because it developed an area I already (weakly) knew about. I'm looking for some new Anki content of a similar nature: a cross-domain-application body of knowledge I probably sort-of know a little bit already, that I can comprehensively improve upon.

Suggestions and anecdotes of similar experiences welcome.

Comment author: KnaveOfAllTrades 26 August 2014 01:10:00PM *  6 points

Yep, I find the world a much less confusing place since I learned capitals and locations on the map. I had (and to some extent still do have) a mental block on geography which was ameliorated by it.

Rundown of positive and negative results:

In a similar but lesser way, I found learning English counties (and to an even lesser extent, Scottish counties) made UK geography a bit less intimidating. I used this deck because it's the only one on the Anki website I found that worked on my old-ass phone; it has a few howlers and throws some cities in there to fuck with you, but I learned to love it.

I suspect that learning the dates of monarchs and Prime Ministers (e.g. of England/UK) would have a similar benefit in contextualising and de-intimidating historical facts, but I never finished those decks and haven't touched them in a while, so never reached the critical mass of knowledge that allowed me to have a good handle on periods of British history. I found it pretty difficult to (for example) keep track of six different Georges and map each to dates, so slow progress put me off. Let me know if you're interested and want to set up a pact, e.g. 'We'll both do at least ten cards from each deck a day and report back to the other regularly' or something. In fact that offer probably stands for any readers.

I installed some decks for learning definitions in areas of math that I didn't know, but found memorising decontextualised definitions hard enough that I wasn't motivated to do it, given everything else I was doing and Anki-ing at the time. I still think repeated exposure to definitions might be a useful developmental strategy for math that nobody seems to be using deliberately and systematically, but I'm not sure Anki is the right way to do it. Or, if it is, that shooting so far ahead of my current knowledge was the best way to go about it. The same went for a LaTeX deck I got, having pretty much never used LaTeX, and not practising it while learning the deck.

Canadian provinces/territories I have not yet found useful beyond feeling good for ticking off learning the deck, which was enough for me since I did them in a session or two.

Languages Spoken in Each Country of the World (I was trying to do not just country-->languages but country-->languages with proportions of the population speaking each language) was so difficult and unrewarding in the short term that I lost motivation extremely quickly (this was months ago). The mental association between 'Berber' and 'North Africa' has come up a surprising number of times, though, most recently last night.

Periodic table (symbol<-->name, name<-->number) took lots of time and hasn't been very useful for me personally (I pretty much just learned it in preparation for a quiz). Learning just which elements are in which groups/sections of the periodic table might be more useful and a lot quicker (since by far the main difficulty was name<-->number).

I relatively often find myself wanting demographic and economic data, e.g. populations of countries, populations of major world cities, populations of UK places, GDPs. Ideally I'd not do this just for major places, since I want to get a good intuitive sense of these figures from very large or major places on down to tiny places.

Similarly, if one has a hobby horse, decks in that area could be useful. Examples off the top of my head (not necessarily my hobby horses): Memorising the results from the LessWrong surveys. Memorising the results from the PhilPapers survey. Memorising data about resource costs of meat production vs. other food production. Memorising failed AGI timeline predictions. Etc.

I found starting to learn Booker Prize winners on Memrise has let me have a few 'Ah, I recognise that name and literature seems less opaque to me, yay!' moments, but there's probably higher-priority decks for you to learn unless that's more your area.

Comment author: Nate_Gabriel 22 August 2014 04:04:52AM 4 points

I still don't think George VI having more siblings is an observer-killing event.

Since we now know that George VI didn’t have more siblings, we obtain

Probability(You exist [and know that George VI had exactly five siblings] | George VI had more than five siblings) = 0

I assume you mean "know" the usual way. Not hundred percent certainty, just that I saw it on Wikipedia and now it's a fact I'm aware of. Then P(I exist with this mind state | George VI had more than five siblings) isn't zero, it's some number based on my prior for Wikipedia being wrong.

So my mind state is more likely in a five-sibling world than a six-sibling one, but using it as anthropic evidence would just be double-counting whatever evidence left me with that mind state in the first place.
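For what it's worth, the size of that ordinary (non-anthropic) update can be sketched with a toy Bayes calculation; the prior and the error rate below are made-up numbers, chosen only for illustration:

```python
def posterior_five(prior_five=0.2, wiki_error=0.01):
    """P(George VI had five siblings | Wikipedia says five), under a
    made-up prior and a made-up Wikipedia error rate, generously
    treating any erroneous article as one that would say 'five'."""
    p_says_five_if_true = 1 - wiki_error
    p_says_five_if_false = wiki_error  # a generous upper bound
    numerator = p_says_five_if_true * prior_five
    return numerator / (numerator + p_says_five_if_false * (1 - prior_five))
```

With these numbers the posterior comes out around 0.96: the Wikipedia observation does all its work as ordinary evidence, leaving nothing over for an anthropic step to explain.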

Comment author: KnaveOfAllTrades 22 August 2014 07:43:54AM 2 points

So my mind state is more likely in a five-sibling world than a six-sibling one, but using it as anthropic evidence would just be double-counting whatever evidence left me with that mind state in the first place.

Yep; in which case the anthropic evidence isn't doing any useful explanatory work, and the thesis 'Anthropics doesn't explain X' holds.

Comment author: RichardKennaway 20 August 2014 09:43:29PM 4 points

Rightly or wrongly, I don't pay much attention to anthropics, but here's another argument to throw into the pot to rebut the argument of part III:

Nuclear exchange (of the sort assumed) results in fewer observers around for me to be any of them. Whereas, more siblings for George VI leaves just as many observers around.

Except I don't think this works. The answer to the question "why did X happen" should not depend on who is asking. Martian historians observing the Earth and asking "how did they avoid blowing themselves up?" are not in a position to answer the question anthropically(1), without going all the way to the absurdity of answering every question "why X?" with "otherwise, you would not be asking why X".

(1) Or whatever the word should be. Perhaps just "anthropically".

Comment author: KnaveOfAllTrades 20 August 2014 10:05:27PM *  1 point

Yes! There are a lot of ways to remove the original observer from the question.

The example I thought of (but ended up not including): if all one's credence were on models that could be simulated (possibly to arbitrary precision/accuracy, even if perfect simulation were not quite possible), and one could specify a prior over initial conditions at the start of the Cold War, then one could simulate each set of initial conditions forward and run an analysis over the sets of initial conditions to see whether any actionable causal factors showed up leading to the presence or absence of a nuclear exchange.

A problem with this is that whether one would expect such a set of simulations to show a nuclear exchange to be the usual outcome or not is pretty much the same as one's prior for a nuclear exchange in the non-simulated Cold War, by conservation of expected evidence. But maybe it suffices to at least show that the selection effect is irrelevant to the causal factors we're interested in. Certainly it gives a way to ask such questions that has a better chance of circumventing anthropic explanations in which one might not be interested.

Comment author: DanielLC 20 August 2014 08:14:35PM *  15 points

Alice notices that George VI had five siblings. She asks Bob why that is. After all, it's so much more likely for him to have a number of siblings other than five. Bob tells her that it's a silly question. The only reason she picked out five is that that's how many siblings he had. If he'd had six siblings, she (or rather someone else, because it's not going to be the same people) would be asking why he had six siblings. There's no coincidence.

Alice notices that Earth survived the cold war. She asks Bob why that is. After all, it's so much more likely for Earth not to survive. Bob tells her that it's a silly question. The only reason she picked out Earth is that it's her home planet, which is because it survived the cold war. If Earth died and, say, Pandora survived, she (or rather someone else, because it's not going to be the same people) would be asking why Pandora survived the cold war. There's no coincidence.

Comment author: KnaveOfAllTrades 20 August 2014 09:08:14PM *  5 points

Is this in support of or in opposition to the thesis of the post? Or am I being presumptuous to suppose that it is either?
