Only You Can Prevent Your Mind From Getting Killed By Politics

39 ChrisHallquist 26 October 2013 01:59PM

Follow-up to: "Politics is the mind-killer" is the mind-killer, Trusting Expert Consensus

Gratuitous political digs are to be avoided. Indeed, I edited my post on voting to keep it from sounding any more partisan than necessary. But the fact that writers shouldn't gratuitously mind-kill their readers doesn't mean that, when they do, the readers' reaction is rational. The rules for readers are different from the rules for writers. And it especially doesn't mean that when a writer talks about a "political" topic for a reason, readers can use "politics!" as an excuse for attacking a statement of fact that makes them uncomfortable.


An angle of attack on Open Problem #1

30 Benja 18 August 2012 12:08PM

There is a problem with the proof here and I have to think about whether I can fix it. Thanks to vi21maobk9vp for pointing me in the right direction! I have posted a new and hopefully correct proof attempt. Thanks again to vi21maobk9vp!

In his talk on open problems in Friendly AI, Eliezer's first question is how, given Löb's theorem, an AI can replace itself with a better expected utility maximizer that believes in as much mathematics as the original AI. I know exactly one trick for that sort of problem, so I decided to try that on a toy variant. To my surprise, it more or less just worked. Therefore:

Professor Quirrell proposes a game. You start with a score of one. Professor Quirrell moves first, by choosing a computer program and showing you its source code. You then have three options: Take your winnings; double down; or self-destruct.

If you take your winnings, the game ends, and your score is converted to Quirrell points.

If you self-destruct, the game ends, your score is lost, you'll be sent to bed without dinner, you'll lose 150 House points, Rita Skeeter will write a feature alleging that you're a Death Eater, and Professor Quirrell will publicly critique your performance. You are advised not to pick this option.

If you double down, your score doubles, and you advance to the next round. Professor Quirrell again moves first by choosing a computer program. Then, it's your turn—except that this time, you don't get to choose your move yourself: instead, it'll be chosen by Professor Quirrell's program from the previous round.

Professor Quirrell will endeavor to present an educational sequence of programs.
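The rules above amount to a simple game loop. Here is a minimal Python sketch of just the mechanics, under my own naming conventions (`play`, `masters`, and the chooser functions are illustrative, not from the post):

```python
def play(masters, player):
    """Run one play of Professor Quirrell's game.

    `masters` is the sequence of (program, source_code) pairs Quirrell
    presents, one per round. A chooser maps the source code it is shown
    to a move. `player` chooses in round 1; after each "double", the
    program Quirrell presented that round chooses for you instead.
    """
    score = 1
    chooser = player
    for program, source in masters:
        move = chooser(source)
        if move == "take":
            return score           # score converted to Quirrell points
        if move == "self-destruct":
            return None            # score lost (you were advised not to)
        score *= 2                 # double down; advance to the next round
        chooser = program          # this round's program moves for you next
    return score

# Example: the player doubles once, then Quirrell's first program
# (which always takes) cashes out, for a final score of 2.
masters = [(lambda src: "take", "source of P1"),
           (lambda src: "take", "source of P2")]
print(play(masters, lambda src: "double"))
```

The point of the structure is visible in the last line of the loop: once you double down, your future is decided by code Quirrell already showed you, so your round-1 decision must be made by inspecting that source.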


Solving the two envelopes problem

32 rstarkov 09 August 2012 01:42PM

Suppose you are presented with a game. You are given a red and a blue envelope with some money in each. You are allowed to ask an independent party to open both envelopes, and tell you the ratio of blue:red amounts (but not the actual amounts). If you do, the game master replaces the envelopes, and the amounts inside are chosen by him using the same algorithm as before.

You ask the independent observer to check the amounts a million times, and find that half the time the ratio is 2 (blue has twice as much as red), and half the time it's 0.5 (red has twice as much as blue). At this point, the game master discloses that in fact, the way he chooses the amounts mathematically guarantees that these probabilities hold.

Which envelope should you pick to maximize your expected wealth?

It may seem surprising, but with this set-up, the game master can choose to make either red or blue have a higher expected amount of money in it, or make the two the same. Asking the independent party as described above will not help you establish which is which. This is the surprising part and is, in my opinion, the crux of the two envelopes problem.
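A quick simulation makes this concrete. The two hypothetical game-master algorithms below (the names are mine, not from the post) produce identical ratio statistics: blue:red is 2 half the time and 0.5 half the time. Yet under the first, blue has the higher expected value, and under the second, red does:

```python
import random

def master_blue_favored():
    # Fix red, derive blue: E[blue] = 1.25 * E[red].
    red = random.uniform(1, 100)
    blue = red * 2 if random.random() < 0.5 else red / 2
    return red, blue

def master_red_favored():
    # Fix blue, derive red: E[red] = 1.25 * E[blue].
    blue = random.uniform(1, 100)
    red = blue * 2 if random.random() < 0.5 else blue / 2
    return red, blue

def simulate(master, n=200_000):
    red_total = blue_total = ratio_is_2 = 0
    for _ in range(n):
        red, blue = master()
        red_total += red
        blue_total += blue
        ratio_is_2 += blue / red > 1     # the ratio is either 2 or 0.5
    return red_total / n, blue_total / n, ratio_is_2 / n

for master in (master_blue_favored, master_red_favored):
    avg_red, avg_blue, frac = simulate(master)
    print(f"{master.__name__}: E[red] ~ {avg_red:.1f}, "
          f"E[blue] ~ {avg_blue:.1f}, P(ratio=2) ~ {frac:.2f}")
```

Both masters report P(ratio=2) near 0.5, but the average amounts diverge in opposite directions, so the independent observer's ratio counts cannot tell you which envelope to pick.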


Self-skepticism: the first principle of rationality

36 aaronsw 06 August 2012 12:51AM

When Richard Feynman started investigating irrationality in the 1970s, he quickly began to realize the problem wasn't limited to the obvious irrationalists.

Uri Geller claimed he could bend keys with his mind. But was he really any different from the academics who insisted their special techniques could teach children to read? Both failed the crucial scientific test of skeptical experiment: Geller's keys failed to bend in Feynman's hands; outside tests showed the new techniques only caused reading scores to go down.

What mattered was not how smart the people were, or whether they wore lab coats or used long words, but whether they followed what he concluded was the crucial principle of truly scientific thought: "a kind of utter honesty--a kind of leaning over backwards" to prove yourself wrong. In a word: self-skepticism.

As Feynman wrote, "The first principle is that you must not fool yourself -- and you are the easiest person to fool." Our beliefs always seem correct to us -- after all, that's why they're our beliefs -- so we have to work extra-hard to try to prove them wrong. This means constantly looking for ways to test them against reality and to think of reasons our tests might be insufficient.

When I think of the most rational people I know, it's this quality of theirs that's most pronounced. They are constantly trying to prove themselves wrong -- they attack their beliefs with everything they can find and when they run out of weapons they go out and search for more. The result is that by the time I come around, they not only acknowledge all my criticisms but propose several more I hadn't even thought of.

And when I think of the least rational people I know, what's striking is how they do the exact opposite: instead of viciously attacking their beliefs, they try desperately to defend them. They too have responses to all my critiques, but instead of acknowledging and agreeing, they viciously attack my critique so it never touches their precious belief.

Since these two can be hard to distinguish, it's best to look at some examples. The Cochrane Collaboration argues that support from hospital nurses may be helpful in getting people to quit smoking. How do they know that? you might ask. Well, they found this was the result from doing a meta-analysis of 31 different studies. But maybe they chose a biased selection of studies? Well, they systematically searched "MEDLINE, EMBASE and PsycINFO [along with] hand searching of specialist journals, conference proceedings, and reference lists of previous trials and overviews." But did the studies they pick suffer from selection bias? Well, they searched for that -- along with three other kinds of systematic bias. And so on. But even after all this careful work, they are still only confident enough to conclude that "the results … support a modest but positive effect … with caution … these meta-analysis findings need to be interpreted carefully in light of the methodological limitations".

Compare this to the Heritage Foundation's argument for the bipartisan Wyden–Ryan premium support plan. Their report also discusses lots of objections to the proposal, but confidently knocks down each one: "this analysis relies on two highly implausible assumptions ... All these predictions were dead wrong. ... this perspective completely ignores the history of Medicare" Their conclusion is similarly confident: "The arguments used by opponents of premium support are weak and flawed." Apparently there's just not a single reason to be cautious about their enormous government policy proposal!

Now, of course, the Cochrane authors might be secretly quite confident and the Heritage Foundation might be wringing their hands with self-skepticism behind the scenes. But let's imagine for a moment that these aren't just reports intended to persuade others of a belief, but instead accurate portrayals of how these two different groups approached the question. Now ask: which style of thinking is more likely to lead the authors to the right answer? Which attitude seems more like Richard Feynman? Which seems more like Uri Geller?

Decision Theories: A Less Wrong Primer

69 orthonormal 13 March 2012 11:31PM

Alpha-beta pruning (from Wikipedia)

Summary: If you've been wondering why people keep going on about decision theory on Less Wrong, I wrote you this post as an answer. I explain what decision theories are, show how Causal Decision Theory works and where it seems to give the wrong answers, introduce (very briefly) some candidates for a more advanced decision theory, and touch on the (possible) connection between decision theory and ethics.


An Intuitive Explanation of Solomonoff Induction

53 Alex_Altair 11 July 2012 08:05AM

This is the completed article that Luke wrote the first half of. My thanks go to the following for reading, editing, and commenting: Luke Muehlhauser, Louie Helm, Benjamin Noble, and Francelle Wax.

People disagree about things. Some say that television makes you dumber; others say it makes you smarter. Some scientists believe life must exist elsewhere in the universe; others believe it must not. Some say that complicated financial derivatives are essential to a modern competitive economy; others think a nation's economy will do better without them. It's hard to know what is true.

And it's hard to know how to figure out what is true. Some argue that you should assume the things you are most certain about and then deduce all other beliefs from your original beliefs. Others think you should accept at face value the most intuitive explanations of personal experience. Still others think you should generally agree with the scientific consensus until it is disproved.

Wouldn't it be nice if determining what is true were like baking a cake? What if there were a recipe for finding out what is true? All you'd have to do is follow the written directions exactly, and after the last instruction you'd inevitably find yourself with some sweet, tasty truth!

In this tutorial, we'll explain the closest thing we’ve found so far to a recipe for finding truth: Solomonoff induction.

There are some qualifications to make. To describe just one: roughly speaking, you don't have time to follow the recipe. To find the truth to even a simple question using this recipe would require you to follow one step after another until long after the heat death of the universe, and you can't do that.

But we can find shortcuts. Suppose you know that the exact recipe for baking a cake asks you to count out one molecule of H2O at a time until you have exactly 0.5 cups of water. If you did that, you might not finish the cake before the heat death of the universe. But you could approximate that part of the recipe by measuring out something very close to 0.5 cups of water, and you'd probably still end up with a pretty good cake.

Similarly, once we know the exact recipe for finding truth, we can try to approximate it in a way that allows us to finish all the steps sometime before the sun burns out.

This tutorial explains that best-we've-got-so-far recipe for finding truth, Solomonoff induction. Don’t worry, we won’t be using any equations, just qualitative descriptions.

Like Eliezer Yudkowsky's Intuitive Explanation of Bayes' Theorem and Luke Muehlhauser's Crash Course in the Neuroscience of Human Motivation, this tutorial is long. You may not have time to read it; that's fine. But if you do read it, we recommend that you read it in sections.
