A proposed inefficiency in the Bitcoin markets
[Beliefs about order of magnitude of Bitcoin's future value] --> [Beliefs about Bitcoin's future price] --> [Trading decisions]
To Learn Critical Thinking, Study Critical Thinking
Critical thinking courses may increase students’ rationality, especially if they do argument mapping.
The following excerpts are from “Does philosophy improve critical thinking skills?”, Ortiz 2007.
1 Excerpts
This thesis makes a first attempt to subject the assumption that studying [Anglo-American analytic] philosophy improves critical thinking skills to rigorous investigation.
…Thus the second task, in Chapter 3, is to articulate and critically examine the standard arguments that are raised in support of the assumption (or rather, would be raised if philosophers were in the habit of providing support for the assumption). These arguments are found to be too weak to establish the truth of the assumption. The failure of the standard arguments leaves open the question of whether the assumption is in fact true. The thesis argues at this point that, since the assumption is making an empirical assertion, it should be investigated using standard empirical techniques as developed in the social sciences.

In Chapter 4, I conduct an informal review of the empirical literature. The review finds that evidence from the existing empirical literature is inconclusive.

Chapter 5 presents the empirical core of the thesis. I use the technique of meta-analysis to integrate data from a large number of empirical studies. This meta-analysis gives us the best yet fix on the extent to which critical thinking skills improve over a semester of studying philosophy, general university study, and studying critical thinking. The meta-analysis results indicate that students do improve while studying philosophy, and apparently more so than general university students, though we cannot be very confident that this difference is not just the result of random variation. More importantly, studying philosophy is less effective than studying critical thinking, regardless of whether one is being taught in a philosophy department or in some other department. Finally, studying philosophy is much less effective than studying critical thinking using techniques known to be particularly effective such as LAMP.
Exploring the Idea Space Efficiently
Simon is writing a calculus textbook. Since there are a lot of textbooks on the market, he wants to make his distinctive by including a lot of original examples. To do this, he decides to first check what sorts of examples are in some of the other books, and then make sure to avoid those. Unfortunately, after skimming through several other books, he finds himself completely unable to think of original examples—his mind keeps returning to the examples he's just read instead of coming up with new ones.
What he's experiencing here is another aspect of priming or anchoring. The way it appears to happen in my brain is that it anchors on the examples it has already seen and explores the idea-space from there, moving from an idea only to ideas that are closely related to it (similar to a depth-first search).
At first, this search strategy might not seem so bad—in fact, it's ideal if there is one best solution and the closer you get to it the better. For example, if you were shooting arrows at a target, all you'd need to consider is how close to the center you can hit. Where we run into problems, however, is trying to come up with multiple solutions (such as multiple examples of the applications of calculus), or trying to come up with the best solution when there are many plausible solutions. In these cases, our brain's default search algorithm will often grab the first idea it can think of and try to refine it, even if what we really need is a completely different idea.
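The contrast can be made concrete with a toy model. Everything below is invented for illustration: a one-dimensional "idea space" with several local peaks, where anchored refinement (small improving moves from the first idea) gets stuck at the nearest peak, while restarting from scattered starting points tends to find a better one.

```python
import random

random.seed(1)

def quality(idea):
    # Invented "idea quality" landscape: height of the nearest tent-shaped
    # peak, giving many separate local optima.
    peaks = {5: 3, 25: 7, 45: 4, 65: 10, 85: 6}   # position -> peak height
    return max(h - abs(idea - p) for p, h in peaks.items())

def hill_climb(start, steps=100):
    """Refine the current idea by small moves, keeping only improvements
    (the anchored, depth-first-like strategy)."""
    x = start
    for _ in range(steps):
        step = random.choice([-1, 1])
        if quality(x + step) > quality(x):
            x += step
    return x

# Anchor on the first idea (position 0): we get stuck at the nearby peak.
anchored = hill_climb(0)

# Restart from ten scattered ideas and keep the best result instead.
best_restart = max((hill_climb(random.randint(0, 100)) for _ in range(10)),
                   key=quality)

print(quality(anchored), quality(best_restart))
```

The refinement step is the same in both runs; only the choice of starting points differs, and that alone is what separates the outcomes.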
The Bias You Didn't Expect
There are few places where society values rational, objective decision making as much as it values it in judges. While there is a rather cynical discipline called legal realism that says the law is really based on quirks of individual psychology, "what the judge had for breakfast," there's a broad social belief that the decisions of judges are unbiased. And where they aren't unbiased, they're biased for Big, Important, Bad reasons, like racism or classism or politics.
It turns out that legal realism is totally wrong. It's not what the judge had for breakfast. It's how recently the judge had breakfast. A new study (media coverage) on Israeli judges shows that, when making parole decisions, they grant parole about 65% of the time just after meal breaks, with the rate falling almost all the way down to 0% right before breaks and at the end of the day (i.e. as far from the last break as possible). There's a relatively linear decline between the two points.
Dead men tell tales: falling out of love with SIA
SIA is the Self Indication Assumption, an anthropic theory about how we should reason about the universe given that we exist. I used to love it; the argument that I've found most convincing about SIA was the one I presented in this post. Recently, I've been falling out of love with SIA, and moving more towards a UDT version of anthropics (objective probabilities, and the total impact of all decisions of your type, including those made by all copies of you and by other agents with the same decision process). So it's time I revisit my old post, and find the hole.
The argument rested on the plausible sounding assumption that creating extra copies and killing them is no different from if they hadn't existed in the first place. More precisely, it rested on the assumption that if I was told "You are not one of the agents I am about to talk about. Extra copies were created to be destroyed," it was exactly the same as hearing "Extra copies were created to be destroyed. And you're not one of them."
But I realised that from the UDT/TDT perspective, there is a great difference between the two situations, if I have the time to update decisions in the course of the sentence. Consider the following three scenarios:
- Scenario 1 (SIA):
Two agents are created, then one is destroyed with 50% probability. Each living agent is entirely selfish, with utility linear in money, and the dead agent gets nothing. Every survivor will be presented with the same bet. Then you should take the SIA 2:1 odds that you are in the world with two agents. This is the scenario I was assuming.
- Scenario 2 (SSA):
Two agents are created, then one is destroyed with 50% probability. Each living agent is entirely selfish, with utility linear in money, and the dead agent is altruistic towards the survivor. This is similar to my initial intuition in this post. Note that all agents have the same utility function: "as long as I live, I care about myself, but after I die, I'll care about the other guy", so you can't distinguish them based on their utility. As before, every survivor will be presented with the same bet.
Here, once you have been told the scenario, but before knowing whether anyone has been killed, you should pre-commit to taking 1:1 odds that you are in the world with two agents. And in UDT/TDT precommitting is the same as making the decision.
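The gap between the two scenarios' odds can be checked by sampling. The sketch below is my own, assuming a fair 50% destruction coin: SIA reasons as a randomly chosen *survivor* (so two-survivor worlds are sampled twice as often, giving 2:1 odds), while the precommitment view of Scenario 2 weighs *worlds*, giving 1:1 odds.

```python
import random

random.seed(0)
N = 100_000

survivor_world = []   # world label for each sampled survivor (SIA view)
world_labels = []     # world label for each sampled world (precommitment view)

for _ in range(N):
    two_agent_world = random.random() < 0.5   # 50% chance nobody is destroyed
    world_labels.append(two_agent_world)
    survivors = 2 if two_agent_world else 1
    # A two-survivor world contributes two survivor-samples, one each.
    survivor_world.extend([two_agent_world] * survivors)

p_sia = sum(survivor_world) / len(survivor_world)   # approaches 2/3 -> 2:1 odds
p_world = sum(world_labels) / len(world_labels)     # approaches 1/2 -> 1:1 odds
print(p_sia, p_world)
```

The two probabilities differ only in what counts as one sample, which is exactly the point of contrast between the scenarios.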
Statistical Prediction Rules Out-Perform Expert Human Judgments
A parole board considers the release of a prisoner: Will he be violent again? A hiring officer considers a job candidate: Will she be a valuable asset to the company? A young couple considers marriage: Will they have a happy marriage?
The cached wisdom for making such high-stakes predictions is to have experts gather as much evidence as possible, weigh this evidence, and make a judgment. But 60 years of research has shown that in hundreds of cases, a simple formula called a statistical prediction rule (SPR) makes better predictions than leading experts do. Or, more exactly:
When based on the same evidence, the predictions of SPRs are at least as reliable as, and are typically more reliable than, the predictions of human experts for problems of social prediction.1
For example, one SPR developed in 1995 predicts the price of mature Bordeaux red wines at auction better than expert wine tasters do. Reaction from the wine-tasting industry to such wine-predicting SPRs has been "somewhere between violent and hysterical."
How does the SPR work? This particular SPR is called a proper linear model, which has the form:
P = w1(c1) + w2(c2) + w3(c3) + ... + wn(cn)
The model calculates the summed result P, which aims to predict a target property such as wine price, on the basis of a series of cues. Above, cn is the value of the nth cue, and wn is the weight assigned to the nth cue.2
In the wine-predicting SPR, c1 reflects the age of the vintage, and other cues reflect relevant climatic features where the grapes were grown. The weights for the cues were assigned on the basis of a comparison of these cues to a large set of data on past market prices for mature Bordeaux wines.3
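To make the formula concrete, here is a minimal sketch of evaluating a proper linear model. The cue values and weights are invented for illustration; they are not the actual weights of the Bordeaux model.

```python
# Hypothetical weights w1..w3, assumed to have been fit in advance from
# historical price data (the "proper" in proper linear model).
weights = [0.5, 1.2, -0.8]

# Hypothetical cue values c1..c3 for one vintage, e.g. age, winter
# rainfall, and growing-season temperature.
cues = [30.0, 600.0, 17.1]

# P = w1*c1 + w2*c2 + ... + wn*cn
P = sum(w * c for w, c in zip(weights, cues))
print(P)
```

Once the weights are fixed, the prediction is just this weighted sum; all of the modeling effort goes into choosing cues and fitting the weights against past outcomes.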
There are other ways to construct SPRs, but rather than survey these details, I will instead survey the incredible success of SPRs.
How are critical thinking skills acquired? Five perspectives
Link to source: http://timvangelder.com/2010/10/20/how-are-critical-thinking-skills-acquired-five-perspectives/
Previous LW discussion of argument mapping: Argument Maps Improve Critical Thinking, Debate tools: an experience report
How are critical thinking skills acquired? Five perspectives: Tim van Gelder discusses acquisition of critical thinking skills, suggesting several theories of skill acquisition that don't work, and one with which he and hundreds of his students have had significant success.
In our work in the Reason Project at the University of Melbourne we refined the Practice perspective into what we called the Quality (or Deliberate) Practice Hypothesis. This was based on the foundational work of Ericsson and others who have shown that skill acquisition in general depends on extensive quality practice. We conjectured that this would also be true of critical thinking; i.e. critical thinking skills would be (best) acquired by doing lots and lots of good-quality practice on a wide range of real (or realistic) critical thinking problems. To improve the quality of practice we developed a training program based around the use of argument mapping, resulting in what has been called the LAMP (Lots of Argument Mapping) approach. In a series of rigorous (or rather, as-rigorous-as-possible-under-the-circumstances) studies involving pre-, post- and follow-up testing using a variety of tests, and setting our results in the context of a meta-analysis of hundreds of other studies of critical thinking gains, we were able to establish that critical thinking skills gains could be dramatically accelerated, with students reliably improving 7-8 times faster, over one semester, than they would otherwise have done just as university students. (For some of the detail on the Quality Practice hypothesis and our studies, see this paper, and this chapter.)
LW has been introduced to argument mapping before.
SIA won't doom you
Katja Grace has just presented an ingenious model, claiming that SIA combined with the great filter generates its own variant of the doomsday argument. Robin echoed this on Overcoming Bias. We met soon after Katja had come up with the model, and I signed up to it, saying that I could see no flaw in the argument.
Unfortunately, I erred. The argument does not work in the form presented.
First of all, there is the issue of time dependence. We are not just a human level civilization drifting through the void in blissful ignorance about our position in the universe. We know (approximately) the age of our galaxy, and the time elapsed since the big bang.
How is this relevant? It is relevant because all arguments about the great filter are time-dependent. Imagine we had just reached consciousness and human-level civilization, by some fluke, two thousand years after the creation of our galaxy, by an evolutionary process that took two thousand years. We see no aliens around us. In this situation, we have no reason to suspect any great filter; if we asked ourselves "are we likely to be the first civilization to reach this stage?" then the answer is probably yes. No evidence for a filter.
Imagine, instead, that we had reached consciousness a trillion years into the life of our galaxy, again via an evolutionary process that took two thousand years, and we see no aliens or traces of aliens. Then the evidence for a filter is overwhelming; something must have stopped all those previous likely civilizations from emerging into the galactic plane.
So neither of these civilizations can be included in our reference class (indeed, the second one can only exist if we ourselves are filtered!). So the correct reference class to use is not "the class of all potential civilizations in our galaxy that have reached our level of technological advancement and seen no aliens", but "the class of all potential civilizations in our galaxy that have reached our level of technological advancement at around the same time as us and seen no aliens". Indeed, SIA, once we update on the present, cannot tell us anything about the future.
But there's more.
Necessary, But Not Sufficient
There seems to be something odd about how people reason in relation to themselves, compared to the way they examine problems in other domains.
In mechanical domains, we seem to have little problem with the idea that things can be "necessary, but not sufficient". For example, if your car fails to start, you will likely know that several things are necessary for the car to start, but not sufficient for it to do so. It has to have fuel, ignition, and compression, and oxygen... each of which in turn has further necessary conditions, such as an operating fuel pump, electricity for the spark plugs, electricity for the starter, and so on.
And usually, we don't go around claiming that "fuel" is a magic bullet for fixing the problem of car-not-startia, or argue that if we increase the amount of electricity in the system, the car will necessarily run faster or better.
For some reason, however, we don't seem to apply this sort of necessary-but-not-sufficient thinking to systems above a certain level of complexity... such as ourselves.
When I wrote my previous post about the akrasia hypothesis, I mentioned that there was something bothering me about the way people seemed to be reasoning about akrasia and other complex problems. And recently, with taw's post about blood sugar and akrasia, I've realized that the specific thing bothering me is the absence of causal-chain reasoning there.
The Presumptuous Philosopher's Presumptuous Friend
One day, you and the presumptuous philosopher are walking along, arguing about the size of the universe, when suddenly Omega jumps out from behind a bush and knocks you both out with a crowbar. While you're unconscious, she builds two hotels, one with a million rooms, and one with just one room. Then she makes a million copies of both of you, sticks them all in rooms, and destroys the originals.
You wake up in a hotel room, in bed with the presumptuous philosopher, with a note on the table from Omega, explaining what she's done.
"Which hotel are we in, I wonder?" you ask.
"The big one, obviously," says the presumptuous philosopher. "Because of anthropic reasoning and all that. Million to one odds."
"Rubbish!" you scream. "Rubbish and poppycock! We're just as likely to be in any hotel Omega builds, regardless of the number of observers in that hotel."
"Unless there are no observers, I assume you mean," says the presumptuous philosopher.
"Right, that's a special case where the number of observers in the hotel matters. But except for that it's totally irrelevant!"
"In that case," says the presumptuous philosopher, "I'll make a deal with you. We'll go outside and check, and if we're at the small hotel I'll give you ten bucks. If we're at the big hotel, I'll just smile smugly."
"Hah!" you say. "You just lost an expected five bucks, sucker!"
You run out of the room to find yourself in a huge, ten-thousand-story atrium, filled with throngs of yourselves and smug-looking presumptuous philosophers.
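The deal's numbers can be checked on the back of an envelope. The sketch below assumes my reading of Omega's setup: one copy-pair ends up in the one-room hotel and the remaining pairs fill the million-room hotel.

```python
# One copy-pair in the small hotel, 999,999 pairs in the big one (assumed).
pairs_total = 1_000_000

p_small_sia = 1 / pairs_total   # SIA: reason as a randomly chosen copy
p_small_equal = 1 / 2           # "each hotel equally likely" view

payout_if_small = 10            # the philosopher's $10 offer

ev_sia = p_small_sia * payout_if_small       # philosopher's expected loss per copy
ev_equal = p_small_equal * payout_if_small   # your claimed expected gain

print(ev_sia, ev_equal)
```

Under anthropic counting the philosopher risks about a thousandth of a cent per copy, while the "equal hotels" view prices the same deal at five dollars, which is why he offers it so cheerfully.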