
Comment author: SforSingularity 07 August 2010 10:31:41PM *  0 points [-]

I was on Robert Wright's side towards the end of this debate when he claimed that there was a higher optimization process that created natural selection for a purpose.

The purpose of natural selection, of the fine-tuning of physical constants in our universe, and of countless other detailed coincidences (1) was to create me. (Or, for the readers of this comment, to create you.)

The optimization process that optimized all these things is called anthropics. Its principle of operation is absurdly simple: you can't find yourself in a part of the universe that can't create you.

When Robert Wright looks at evolution and sees purpose in the existence of the process of evolution itself (and the particular way it happened to play out, including increasing complexity), he is seeing the evidence for anthropics and big worlds.

Once you take away all the meta-purpose that is caused by anthropics, then I really do think there is no more purpose left. Eli should re-do the debate with this insight on the table.

(1) Including the fact that evolution on earth happened to create intelligence, which seems to be a highly unlikely outcome of a generic biochemical replicator process on a generic planet; we know this because earth managed to have life for 4 billion years -- half of its total viability as a place for life -- without intelligence emerging, and said intelligence seemed to depend in an essential way on a random asteroid impact at approximately the right moment.
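
To make the selection effect concrete, here is a minimal toy simulation of "you can't find yourself in a part of the universe that can't create you." The constant and the observer-permitting window are invented for illustration, not claims about actual physics:

```python
import random

# Toy model of observer selection: sample many universes with a random
# "physical constant", let only a narrow range of them produce observers,
# and note that every observer finds the constant inside that range.

def sample_universe():
    constant = random.uniform(0.0, 1.0)          # stand-in for a fine-tuned constant
    produces_observers = 0.49 < constant < 0.51  # assumed narrow life-permitting window
    return constant, produces_observers

universes = [sample_universe() for _ in range(1_000_000)]
observed = [c for c, ok in universes if ok]      # only these universes contain anyone to ask

print(f"fraction of universes with observers: {len(observed)/len(universes):.3%}")
print(f"range seen by observers: {min(observed):.3f} to {max(observed):.3f}")
# Every observer sees a constant in (0.49, 0.51) even though most universes don't.
# Nothing tuned anything; the sample of observations is just conditioned on observers existing.
```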

Comment author: cousin_it 25 July 2010 10:09:22AM *  12 points [-]

In the comment section of Roko's banned post, PeerInfinity mentioned "rescue simulations". I'm not going to post the context here because I respect Eliezer's dictatorial right to stop that discussion, but here's another disturbing thought.

An FAI created in the future may take into account our crazy desire that all the suffering in the history of the world had never happened. Barring time machines, it cannot reach into the past and undo the suffering (and we know that hasn't happened anyway), but acausal control allows it to do the next best thing: create large numbers of history sims where bad things get averted. This raises two questions: 1) if something very bad is about to happen to you, what's your credence that you're in a rescue sim and have nothing to fear? 2) if something very bad has already happened to you, does this constitute evidence that we will never build an FAI?

(If this isn't clear: just like PlaidX's post, my comment is intended as a reductio ad absurdum of any fears/hopes concerning future superintelligences. I'd still appreciate any serious answers though.)
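
For question 1, a back-of-the-envelope calculation shows how the answer is driven almost entirely by the assumed number of sims. Every number below is invented for illustration, not an estimate of anything:

```python
# Toy credence calculation for question 1: count expected "copies" of an observer
# who is about to experience something very bad, across real history and rescue sims.

p_fai = 0.1              # assumed prior that an FAI gets built and runs rescue sims
sims_per_history = 1000  # assumed number of rescue sims of this moment per real history

real_copies = 1
sim_copies = p_fai * sims_per_history  # expected sim copies, weighted by the prior

p_in_rescue_sim = sim_copies / (sim_copies + real_copies)
print(f"credence of being in a rescue sim: {p_in_rescue_sim:.3f}")
# With these made-up numbers the credence is ~0.99; with sims_per_history = 1 it drops to ~0.09.
# The conclusion rests entirely on the assumed sim count, which is part of the reductio.
```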

Comment author: SforSingularity 26 July 2010 09:57:09PM 4 points [-]

1) if something very bad is about to happen to you, what's your credence that you're in a rescue sim and have nothing to fear?

I'd give that some credence, though note that we're talking about subjective anticipation, which is a piece of humanly-compelling nonsense.

Comment author: Vladimir_M 15 July 2010 05:51:49PM *  8 points [-]

Well, if someone knows about systematic biases that don't go away with incentivization, they're probably too busy making money off that insight to comment here!

In practice, when the stakes are high, it is not so much that people start thinking more accurately -- though this will happen to some extent, and for some people dramatically so -- but rather that they become more cautious.

If you take ordinary folks into the lab and ask them questions they don't care about, it's easy to get them to commit all sorts of logical errors. However, if you approach them with a serious deal where some bias identified in the lab would lead them to accept unfavorable terms with real consequences, they won't trust their unreliable judgments, and instead they'll ask for third-party advice and see what the normal and usual way to handle such a situation is. If no such guidance is available, they'll fall back on the status quo heuristic. People hate to admit their intellectual limitations explicitly, but they're good at recognizing them instinctively before they get themselves into trouble by relying on their faulty reasoning too much.

This is why for all the systematic biases discussed here, it's extremely hard to actually exploit these biases in practice to make money. It also explains how market bubbles and Ponzi schemes can lead to such awful collective insanity: as the snowball keeps rolling and growing, people see others massively falling for the scam, and conclude that it must be a safe and sound option if all these other normal and respectable folks are doing it. The caution/normality/status quo heuristics break down in this situation.

Comment author: SforSingularity 15 July 2010 09:13:43PM 0 points [-]

However, if you approach them with a serious deal where some bias identified in the lab would lead them to accept unfavorable terms with real consequences, they won't trust their unreliable judgments, and instead they'll ask for third-party advice and see what the normal and usual way to handle such a situation is. If no such guidance is available, they'll fall back on the status quo heuristic. People hate to admit their intellectual limitations explicitly, but they're good at recognizing them instinctively before they get themselves into trouble by relying on their faulty reasoning too much. This is why for all the systematic biases discussed here, it's extremely hard to actually exploit these biases in practice to make money.

Yeah... that sounds right. Also, suppose that you have an irrational stock price. One or two contrarians can't make much more than double their stake money out of it, because if they go leveraged, the market might get more irrational before it gets less irrational and wipe out their position.
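
A toy illustration of that wipe-out risk; the prices, leverage, and margin rule are arbitrary:

```python
# A contrarian shorts an overpriced stock at 10x leverage. The price eventually
# falls to fair value, but rises ~15% first. All numbers are invented.

capital = 100.0
leverage = 10
entry_price = 100.0                          # believed fair value: 60
position_value = capital * leverage

price_path = [100, 105, 112, 115, 90, 60]    # gets more irrational before it gets less

for price in price_path:
    pnl = position_value * (entry_price - price) / entry_price  # profit on the short so far
    equity = capital + pnl
    print(f"price {price:>3}: equity {equity:7.2f}")
    if equity <= 0:
        print("margin wiped out -- the contrarian never collects on being right")
        break
# An unleveraged short would have survived the run-up and made about 40% when the
# price reached 60, consistent with "can't make much more than double their stake".
```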

Comment author: SforSingularity 15 July 2010 09:11:21PM 2 points [-]

People hate to admit their intellectual limitations explicitly, but they're good at recognizing them instinctively before they get themselves into trouble by relying on their faulty reasoning too much.

Yeah... this is what Bryan Caplan says in The Myth of the Rational Voter.

Comment author: Jack 09 June 2010 06:09:09AM 27 points [-]

Almost everyone else thought they were really weird when they started to try to act on these beliefs.

This is a terribly counter-productive attitude to have. I don't think trying to save the world is what people found weird. Lots of people, especially young people, have aspirations of saving the world. People think the Singularity Institute is weird because SIAI's chosen method of saving the world is really unconventional, not marketable, and pattern matches with bizarre sci-fi fantasies (and some of the promoters of these fantasies are actually connected to the institute). If you think the pool of potential donors consists entirely of hypocrites, you make it really difficult to bring them in.

Comment author: SforSingularity 14 July 2010 11:06:33PM *  12 points [-]

There is a point I am trying to make with this: the human race is a collective where the individual parts pretend to care about the whole, but actually don't care, and we (mostly) do this the insidious way, i.e. using lots of biased thinking. In fact most people even have themselves fooled, and this is an illusion that they're not keen on being disabused of.

The results... well, we'll see.

Comment author: SforSingularity 14 July 2010 10:42:31PM 5 points [-]

Look, maybe it does sound kooky, but people who really genuinely cared might at least invest more time in finding out how good its pedigree was. On the other hand, people who just wanted an excuse to ignore it would say "it's kooky, I'm going to ignore it".

But one could look at other cases, for example direct donation of money to the future (Robin has done this).

Or the relative lack of attention to more scientifically respectable existential risks, or even existential risks in general. (Human extinction risk, etc).

Comment author: SforSingularity 03 April 2010 03:51:10PM *  4 points [-]

As you grow up, you start to see that the world is full of waste, injustice and bad incentives. You try frantically to tell people about this, and it always seems to go badly for you.

Then you grow up a bit more, get a bit wiser, and realize that the mother-of-all-bad-incentives, the worst injustice, and the greatest meta-cause of waste ... is that people who point out such problems get punished, especially (and including) those who point out this very problem. If you are wise, you then become an initiate of the secret conspiracy of the successful.

Discuss.

Comment author: Alicorn 27 December 2009 08:26:31PM 5 points [-]

Via what mechanism do wholesome appearance and apple-cheekedness correlate with a disinclination to commit murder? For example, does a murderous disposition drain the blood from one's face? Or does having a cute smile prevent people from treating the person in such a way as to engender a murderous disposition from without? I wouldn't be exactly astonished to find a real, strong correlation between looking creepy and being dangerous. But I'd like to know how it works.

Comment author: SforSingularity 01 January 2010 02:28:44PM *  6 points [-]

Think about it in evolutionary terms. Roughly speaking, taking the action of attempting to kill someone is risky. An attractive female body is pretty much a guaranteed win for the genes concerned, so it's pointless taking risks. [Note: I just made this up, it might be wrong, but definitely look for an evo-psych explanation]

This explanation also accounts for the lower violent crime rate amongst women, since women are, from a gene's point of view, a low risk strategy, whereas violence is a risky business: you might win, but then again, you might die.

It would also predict, other things equal, lower crime rates amongst physically attractive men.

Comment author: SforSingularity 27 December 2009 08:05:16PM 1 point [-]

I had heard about the case casually on the news a few months ago. It was obvious to me that Amanda Knox was innocent. My probability estimate of guilt was around 1%. This makes me one of the few people in reasonably good agreement with Eli's conclusion.

I know almost nothing of the facts of the case.

I only saw a photo of Amanda Knox's face. Girls with cute smiles like that don't brutally murder people. I was horrified to see that among 300 posts on Less Wrong, only two mentioned this, and only to urge people to ignore the photos. Are they all too PC or something? Have they never read Ekman, or at least Gladwell? Perhaps Less Wrong commenters are distrustful of their instincts to the point of throwing out the baby with the bathwater.

http://www.amandadefensefund.org/Family_Photos.html

Perhaps it is confusing to people that the actual killer is probably a scary-looking black guy with a sunken brow. Obviously most scary-looking black guys with sunken brows never kill anyone. So that guy's appearance is only very weak evidence of his guilt. But wholesome-looking, apple-cheeked college girls pretty much never brutally murder people. So that is strong evidence of her innocence.
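
One way to make that argument explicit is as a likelihood-ratio update. The numbers below are invented to show the shape of the reasoning, not estimates of the actual case:

```python
# Toy Bayesian update on appearance evidence. All inputs are assumptions.

prior_guilt = 0.5            # assume the other evidence alone leaves you at 50/50

# Likelihoods of observing "wholesome-looking college girl with a warm smile":
p_obs_given_guilty   = 0.01  # assumed: such people "pretty much never" commit brutal murders
p_obs_given_innocent = 0.50  # assumed: unremarkable among innocent people

posterior = (prior_guilt * p_obs_given_guilty) / (
    prior_guilt * p_obs_given_guilty + (1 - prior_guilt) * p_obs_given_innocent
)
print(f"posterior probability of guilt: {posterior:.3f}")   # ~0.02 with these inputs
# The strength of the conclusion rests entirely on how small p_obs_given_guilty really is,
# which is exactly what the appearance-based argument asserts.
```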

Comment author: wedrifid 25 November 2009 07:23:18AM *  2 points [-]

Another advantage of buying triple rollover tickets is that if you adhere to quantum immortality plus the belief that uFAI reliably kills the world, then you'll win the lottery in all the worlds that you care about.

If you had such an attitude then the lottery would be irrelevant. You wouldn't care what the 'world-saving probability' is, so you wouldn't need to manipulate it.

Comment author: SforSingularity 26 November 2009 02:54:40AM 0 points [-]

Yes, but you can manipulate whether the world's getting saved has anything to do with you, and you can influence what kind of world you survive into.

If you make a low-probability, high-reward bet and really commit to donating the money to an X-risks organization, you may find yourself winning that bet more often than you would probabilistically expect.

In general, QI means that you care about the nature of your survival, but not whether you survive.
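
A sketch of the conditional-probability claim behind "winning more often than you would expect", with every number invented for illustration:

```python
# Toy calculation: does conditioning on survival make it more likely that you won the bet?

p_win = 1e-7                 # assumed chance of winning the triple rollover
p_survive_if_win = 0.20      # assumed: winnings donated to x-risk work raise survival odds
p_survive_if_lose = 0.10     # assumed baseline survival odds

p_win_given_survive = (p_win * p_survive_if_win) / (
    p_win * p_survive_if_win + (1 - p_win) * p_survive_if_lose
)
print(f"P(won | survived) / P(won) = {p_win_given_survive / p_win:.2f}")
# With these numbers the boost is only ~2x: conditioning on survival favors the win,
# but only in proportion to how much the donation actually changes the survival odds.
```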
