1) If something very bad is about to happen to you, what's your credence that you're in a rescue sim and have nothing to fear?
I'd give that some credence, though note that we're talking about subjective anticipation, which is a piece of humanly-compelling nonsense.
However, if you approach them with a serious deal where some bias identified in the lab would lead them to accept unfavorable terms with real consequences, they won't trust their unreliable judgments; instead they'll ask for third-party advice and see what the normal and usual way to handle such a situation is. If no such guidance is available, they'll fall back on the status quo heuristic. People hate to admit their intellectual limitations explicitly, but they're good at recognizing them instinctively before they get themselves into trouble by relying on their faulty reasoning too much. This is why, for all the systematic biases discussed here, it's extremely hard to actually exploit them in practice to make money.
Yeah... that sounds right. Also, suppose that you have an irrational stock price. One or two contrarians can't make much more than double their stake money out of it, because if they go leveraged, the market might get more irrational before it gets less irrational, and wipe out their position.
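To make that concrete, here's a toy calculation; the stake, leverage, and price moves are all numbers I've made up for illustration, not anything from the comment above. The point is the asymmetry: an unleveraged contrarian's profit is capped near his stake, while leverage lets an interim irrational move wipe him out before the correction ever arrives.

```python
# Toy numbers (my own, purely illustrative): why a contrarian short is
# capped on the upside but can be wiped out by leverage on the downside.

stake = 100.0      # dollars committed against the mispriced stock
leverage = 5.0     # hypothetical leverage multiplier
exposure = stake * leverage

# Unleveraged short: even if the stock goes to zero you gain at most
# 100% of your stake, i.e. roughly "double your money" overall.
max_unleveraged_profit = stake

# Leveraged short: the price only has to rise by stake/exposure,
# before the mispricing corrects, to erase the whole stake.
wipeout_rise = stake / exposure

print(f"max unleveraged profit: ${max_unleveraged_profit:.0f}")
print(f"at {leverage:.0f}x leverage, a {wipeout_rise:.0%} further rise wipes you out")
```

On these made-up numbers the leveraged short survives only a 20% adverse move before the position is gone, which is exactly the wipe-out risk described above.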
> People hate to admit their intellectual limitations explicitly, but they're good at recognizing them instinctively before they get themselves into trouble by relying on their faulty reasoning too much.
Yeah... this is what Bryan Caplan says in The Myth of the Rational Voter.
There is a point I am trying to make with this: the human race is a collective whose individual parts pretend to care about the whole but actually don't, and we (mostly) do this in an insidious way, i.e. with lots of biased thinking. In fact most people even have themselves fooled, and this is an illusion they're not keen on being disabused of.
The results... well, we'll see.
Look, maybe it does sound kooky, but people who really genuinely cared might at least invest more time in finding out how good its pedigree was. On the other hand, people who just wanted an excuse to ignore it would say "it's kooky, I'm going to ignore it".
But one could look at other cases, for example, direct donation of money to the future (Robin has done this).
Or the relative lack of attention to more scientifically respectable existential risks, or even to existential risks in general (human extinction risk, etc.).
As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.
Then you grow up a bit more, get a bit wiser, and realize that the mother of all bad incentives, the worst injustice, and the greatest meta-cause of waste... is that people who point out such problems get punished, especially including those who point out this very problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.
Discuss.
Think about it in evolutionary terms. Roughly speaking, attempting to kill someone is risky. An attractive female body is pretty much a guaranteed win for the genes concerned, so it's pointless to take risks. [Note: I just made this up, it might be wrong, but definitely look for an evo-psych explanation.]
This explanation also accounts for the lower violent crime rate amongst women, since women are, from a gene's point of view, a low-risk strategy, whereas violence is a risky business: you might win, but then again, you might die.
It would also predict, other things equal, lower crime rates amongst physically attractive men.
I had heard about the case casually on the news a few months ago. It was obvious to me that Amanda Knox was innocent. My probability estimate of guilt was around 1%. This makes me one of the few people in reasonably good agreement with Eli's conclusion.
I know almost nothing of the facts of the case.
I only saw a photo of Amanda Knox's face. Girls with cute smiles like that don't brutally murder people. I was horrified to see that among 300 posts on Less Wrong, only two mentioned this, and it was to urge people to ignore the photos. Are they all too PC or something? Have they never read Ekman, or at least Gladwell? Perhaps Less Wrong commenters are distrustful of their instincts to the point of throwing out the baby with the bathwater.
http://www.amandadefensefund.org/Family_Photos.html
Perhaps it is confusing to people that the actual killer is probably a scary-looking black guy with a sunken brow. Obviously most scary-looking black guys with sunken brows never kill anyone, so that guy's appearance is only very weak evidence of his guilt. But wholesome-looking, apple-cheeked college girls pretty much never brutally murder people. So that is strong evidence of her innocence.
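A toy Bayes calculation shows why the two observations carry such different weight; every number below is my own illustrative guess, nothing comes from the actual case. What matters is the likelihood ratio of each observation, not how scary or wholesome anyone looks in the abstract.

```python
# Toy Bayes update in odds form, with invented illustrative numbers
# (none of these frequencies come from the actual case).

def posterior_odds(prior_odds, likelihood_ratio):
    """Bayes' rule in odds form: posterior odds = prior odds * LR."""
    return prior_odds * likelihood_ratio

# "Scary-looking": suppose 60% of brutal murderers look scary versus
# 10% of innocent people. LR = 6: real but weak evidence, easily
# swamped by the tiny base rate of murderers in the population.
lr_scary = 0.60 / 0.10

# "Wholesome-looking college girl": suppose 1 in 10,000 brutal
# murderers fit that description versus 1 in 100 innocents.
# LR = 0.01: a hundredfold update toward innocence.
lr_wholesome = 0.0001 / 0.01

prior_odds_guilt = 0.5  # purely illustrative prior odds of guilt

print(posterior_odds(prior_odds_guilt, lr_scary))      # 3.0
print(posterior_odds(prior_odds_guilt, lr_wholesome))  # 0.005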
Yes, but you can manipulate whether the world getting saved had anything to do with you, and you can influence what kind of world you survive into.
If you make a low-probability, high-reward bet and really commit to donating the money to an X-risks organization, you may find yourself winning that bet more often than you would probabilistically expect.
In general, QI (quantum immortality) means that you care about the nature of your survival, but not about whether you survive.
I was on Robert Wright's side towards the end of this debate when he claimed that there was a higher optimization process that created natural selection for a purpose.
The purpose of natural selection, of the fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you.)
The optimization process that optimized all these things is called anthropics. Its principle of operation is absurdly simple: you can't find yourself in a part of the universe that can't create you.
When Robert Wright looks at evolution and sees purpose in the existence of the process of evolution itself (and the particular way it happened to play out, including increasing complexity), he is seeing the evidence for anthropics and big worlds.
Once you take away all the meta-purpose that is caused by anthropics, then I really do think there is no more purpose left. Eli should re-do the debate with this insight on the table.
(note 1) Including that evolution on Earth happened to create intelligence, which seems to be a highly unlikely outcome of a generic biochemical replicator process on a generic planet; we know this because Earth managed to have life for 4 billion years -- half of its total viability as a place for life -- without intelligence emerging, and said intelligence seemed to depend in an essential way on a random asteroid impact at approximately the right moment.