How are critical thinking skills acquired? Five perspectives
Link to source: http://timvangelder.com/2010/10/20/how-are-critical-thinking-skills-acquired-five-perspectives/
Previous LW discussion of argument mapping: Argument Maps Improve Critical Thinking, Debate tools: an experience report
How are critical thinking skills acquired? Five perspectives: Tim van Gelder discusses acquisition of critical thinking skills, suggesting several theories of skill acquisition that don't work, and one with which he and hundreds of his students have had significant success.
In our work in the Reason Project at the University of Melbourne we refined the Practice perspective into what we called the Quality (or Deliberate) Practice Hypothesis. This was based on the foundational work of Ericsson and others who have shown that skill acquisition in general depends on extensive quality practice. We conjectured that this would also be true of critical thinking; i.e. critical thinking skills would be (best) acquired by doing lots and lots of good-quality practice on a wide range of real (or realistic) critical thinking problems. To improve the quality of practice we developed a training program based around the use of argument mapping, resulting in what has been called the LAMP (Lots of Argument Mapping) approach. In a series of rigorous (or rather, as-rigorous-as-possible-under-the-circumstances) studies involving pre-, post- and follow-up testing using a variety of tests, and setting our results in the context of a meta-analysis of hundreds of other studies of critical thinking gains, we were able to establish that critical thinking skills gains could be dramatically accelerated, with students reliably improving 7-8 times faster, over one semester, than they would otherwise have done just as university students. (For some of the detail on the Quality Practice hypothesis and our studies, see this paper, and this chapter.)
LW has been introduced to argument mapping before.
Intelligence Amplification Open Thread
A place to discuss potentially promising methods of intelligence amplification in the broad sense of general methods, tools, diets, regimens, or substances that boost cognition (memory, creativity, focus, etc.): anything from SuperMemo to Piracetam to regular exercise to eating lots of animal fat to binaural beats, whether it works or not. Where's the highest expected value? What's easiest to make part of your daily routine? Hopefully discussion here will lead to concise top level posts describing what works for a more self-improvement-savvy Less Wrong.
Lists of potential interventions are great, but even better would be a thorough analysis of a single intervention: costs, benefits, ease, et cetera. This way the comment threads will be more structured and organized. Less Wrong is pretty confused about IA, so even if you're not an expert, a quick analysis or link to a metastudy about e.g. exercise could be very helpful.
Added: Adam Atlas is now hosting an IA wiki: BetterBrains! Bookmark it, add to it, make it awesome.
Taking Ideas Seriously
I, the author, no longer endorse this post.
Abstrummary: I describe a central technique of epistemic rationality that bears directly on instrumental rationality, and that I do not believe has been explicitly discussed on Less Wrong before. The technique is rather simple: it is the practice of taking ideas seriously. I also present the rather simple metaphor of an 'interconnected web of belief nodes' (like a Bayesian network) to describe what it means to take an idea seriously: it is to update a belief and then accurately and completely propagate that belief update through the entire web of beliefs in which it is embedded. I then give a few examples of ideas to take seriously, followed by reasons to take ideas seriously and what bad things happen if you don't (or society doesn't). I end with a few questions for Less Wrong.
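The 'web of belief nodes' metaphor can be sketched in a few lines. The node names, dependency graph, and update rule below are invented for illustration; this is a toy picture of the metaphor, not the post's actual model:

```python
# Toy sketch: update one belief, then propagate the change to every
# belief downstream of it. Graph and update rule are illustrative only.
beliefs = {"A": 0.2, "B": 0.1, "C": 0.5}
# B depends on A; C depends on B.
parents = {"B": "A", "C": "B"}

def propagate(node, new_value, beliefs, parents):
    """Set a belief, then push the update to all beliefs that depend on it."""
    beliefs[node] = new_value
    for child, parent in parents.items():
        if parent == node:
            # Toy rule: a dependent belief simply tracks its parent's value.
            propagate(child, new_value, beliefs, parents)

propagate("A", 0.9, beliefs, parents)
print(beliefs)  # -> {'A': 0.9, 'B': 0.9, 'C': 0.9}
```

The point of the metaphor is the recursion: a "complete" update doesn't stop at the node you changed.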
Five-minute rationality techniques
Less Wrong tends toward long articles with a lot of background material. That's great, but the vast majority of people will never read them. What would be useful for raising the sanity waterline in the general population is a collection of simple-but-useful rationality techniques that you might be able to teach to a reasonably smart person in five minutes or less per technique.
Carl Sagan had a slogan: "Extraordinary claims require extraordinary evidence." He would say this phrase and then explain how, when someone claims something extraordinary (i.e. something for which we have a very low probability estimate), they need correspondingly stronger evidence than if they'd made a higher-likelihood claim, like "I had a sandwich for lunch." We can talk about this very precisely, in terms of Bayesian updating and conditional probability, but Sagan was able to get a lot of this across to random laypeople in about a minute. Maybe two minutes.
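Sagan's slogan translates directly into Bayes' theorem in odds form: posterior odds = prior odds × likelihood ratio. A minimal sketch, with illustrative numbers rather than anything from the post:

```python
# Bayesian updating in odds form. The priors and the likelihood ratio
# (Bayes factor) below are illustrative assumptions.

def posterior_probability(prior, likelihood_ratio):
    """Update a prior probability given a Bayes factor P(E|H)/P(E|~H)."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

# Mundane claim ("I had a sandwich for lunch"): modest evidence suffices.
print(posterior_probability(0.5, 10))   # -> ~0.91

# Extraordinary claim (prior one in a million): the same evidence barely moves it.
print(posterior_probability(1e-6, 10))  # -> ~0.00001
```

The same strength of evidence that makes a mundane claim near-certain leaves an extraordinary claim near-impossible, which is the slogan in quantitative form.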
What techniques for rationality can be explained to a normal person in under five minutes? I'm looking for small and simple memes that will make people more rational, on average. Here are some candidates, to get the discussion started:
Candidate 1 (suggested by DuncanS): Unlikely events happen all the time. Someone gets in a car-crash and barely misses being impaled by a metal pole, and people say it's a million-to-one miracle -- but events occur all the time that are just as unlikely. If you look at how many highly unlikely things could happen, and how many chances they have to happen, then it's obvious that we're going to see "miraculous" coincidences, purely by chance. Similarly, with millions of people dying of cancer each year, there are going to be lots of people making highly unlikely miracle recoveries. If they didn't, that would be surprising.
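The arithmetic behind Candidate 1 is one line: the chance that at least one of n independent rare events occurs is 1 - (1 - p)^n. A quick sketch with illustrative numbers:

```python
# Back-of-the-envelope for "unlikely events happen all the time".
# The per-event probability and number of chances are illustrative.

def p_at_least_one(p_single, n_chances):
    """Probability that at least one of n independent rare events occurs."""
    return 1 - (1 - p_single) ** n_chances

# One chance at a million-to-one event: effectively never.
print(p_at_least_one(1e-6, 1))           # -> ~0.000001
# Ten million independent chances: near certainty.
print(p_at_least_one(1e-6, 10_000_000))  # -> ~0.99995
```

With millions of people each exposed to many chance events per day, "miraculous" coincidences are the expected outcome, not the surprising one.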
Candidate 2: Admitting that you were wrong is a way of winning an argument. (The other person wins, too.) There's a saying that "It takes a big man to admit he's wrong," and when people say this, they don't seem to realize that it's a huge problem! It shouldn't be hard to admit that you were wrong about something! It shouldn't feel like defeat; it should feel like success. When you lose an argument with someone, it should be time for high fives and mutual jubilation, not shame and anger. The hard part of retraining yourself to think this way is just realizing that feeling good about conceding an argument is even an option.
Candidate 3: Everything that has an effect in the real world is part of the domain of science (and, more broadly, rationality). A lot of people have the truly bizarre idea that some theories are special, immune to whatever standards of evidence they may apply to any other theory. My favorite example is people who believe that prayers for healing actually make people who are prayed for more likely to recover, but that this cannot be scientifically tested. This is an obvious contradiction: they're claiming a measurable effect on the world and then pretending that it can't possibly be measured. I think that if you pointed out a few examples of this kind of special pleading to people, they might start to realize when they're doing it.
Anti-candidate: "Just because something feels good doesn't make it true." I call this an anti-candidate because, while it's true, it's seldom helpful. People trot out this line as an argument against other people's ideas, but rarely apply it to their own. I want memes that will make people actually be more rational, instead of just feeling that way.
This was adapted from an earlier discussion in an Open Thread. One suggestion, based on the comments there: if you're not sure whether something can be explained quickly, just go for it! Write a one-paragraph explanation, and try to keep the inferential distances short. It's good practice, and if we can come up with some really catchy ones, it might be a good addition to the wiki. Or we could use them as rationalist propaganda, somehow. There are a lot of great ideas on Less Wrong that I think can and should spread beyond the usual LW demographic.
How to always have interesting conversations
One of the things that makes Michael Vassar an interesting person to be around is that he has an opinion about everything. If you locked him up in an empty room with grey walls, it would probably take the man about thirty seconds before he'd start analyzing the historical influence of the Enlightenment on the tradition of locking people up in empty rooms with grey walls.
Likewise, at the recent LW meetup, I noticed that I was naturally drawn to the people who most easily ended up talking about interesting things. I spent a while just listening to HughRistik's theories on the differences between men and women, for instance. There were a few occasions when I engaged in some small talk with new people, but those conversations rarely lasted long, as I failed to lead them into territory where one of us would have plenty of opinions.
I have two major deficiencies in trying to mimic this behavior. First, I'm by nature more of a listener than a speaker: I usually prefer to let other people talk so that I can just soak up the information being offered. Second, my native mode of thought is closer to text than speech. At best, I can generate thoughts as fast as I can type, but in speech I often have difficulty formulating my thoughts into coherent sentences fast enough, and I frequently hesitate.
Both of these problems are solvable by having a sufficiently well built-up store of cached thoughts, so that I don't need to generate everything in real time. On the occasions when a conversation happens to drift into a topic I'm sufficiently familiar with, I'm often able to overcome these limitations and contribute meaningfully to the discussion. This implies two things. First, I need to generate cached thoughts on more subjects than I currently have. Second, I need to be able to more reliably steer conversations toward subjects that I actually do have cached thoughts about.
Akrasia Tactics Review
I recently had occasion to review some of the akrasia tricks I've found on Less Wrong, and it occurred to me that there are probably quite a lot of others who've tried them as well. Perhaps it's a good idea to organize the experiences of a couple dozen procrastinating rationalists?
Therefore, I'll aggregate any such data you provide in the comments, according to the following scheme:
- Note which trick you've tried. If it's something that's not yet on the list below, please provide a link and I'll add it; if there's not a link for it anywhere, you can describe it in your comment and I'll link that.
- Give your experience with it a score from -10 to +10 (0 if it didn't change the status quo, 10 if it ended your akrasia problems forever with no side effects, negative scores if it actually made your life worse, -10 if it nearly killed you); if you don't do so, I'll suggest a score for you based on what else you say.
- Describe your experience with it, including any significant side effects.
Every so often, I'll combine all the data back into the main post, listing average scores, sample size and common effects for each technique. Ready?
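The aggregation step described above might look something like this; the technique names and scores are made up for illustration:

```python
# Sketch of the proposed scheme: group reported scores by technique,
# then report the mean score and sample size for each. Data is invented.
from statistics import mean

reports = [
    ("pomodoro", 4), ("pomodoro", -2), ("pomodoro", 7),
    ("precommitment", 3), ("precommitment", 5),
]

summary = {}
for technique, score in reports:
    summary.setdefault(technique, []).append(score)

for technique, scores in sorted(summary.items()):
    print(f"{technique}: mean {mean(scores):+.1f}, n={len(scores)}")
```

Reporting the sample size alongside the mean matters here: a +8 average from two commenters is much weaker evidence than a +4 average from twenty.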
Tips and Tricks for Answering Hard Questions
I've collected some tips and tricks for answering hard questions, some of which may be original, and others I may have read somewhere and forgotten the source of. Please feel free to contribute more tips and tricks, or additional links to the sources or fuller explanations.
Don't stop at the first good answer. We know that human curiosity can be prematurely satiated. Sometimes we can quickly recognize a flaw in an answer that initially seemed good, but sometimes we can't, so we should keep looking for flaws and/or better answers.
Explore multiple approaches simultaneously. A hard question probably has multiple approaches that are roughly equally promising, otherwise it wouldn't be a hard question (well, unless it has no promising approaches). If there are several people attempting to answer it, they should explore different approaches. If you're trying to answer it alone, it makes sense to switch approaches (and look for new approaches) once in a while.
Trust your intuitions, but don't waste too much time arguing for them. If several people are attempting to answer the same question and they have different intuitions about how best to approach it, it seems efficient for each to rely on his or her intuition to choose the approach to explore. It only makes sense to spend a lot of time arguing for your own intuition if you have some reason to believe that other people's intuitions are much worse than yours.
Go meta. Instead of attacking the question directly, ask "How should I answer a question like this?" It seems that when people are faced with a question, even one that has stumped great minds for ages, many just jump in and try to attack it with whatever intellectual tools they have at hand. For really hard questions, we may need to look for, or build, new tools.
Dissolve the question. Sometimes, the question is meaningless and asking it is just a cognitive error. If you can detect and correct the error then the question may just go away.
Sleep on it. I find that I tend to have a greater than average number of insights in the period just after I wake up and before I get out of bed. Our brains seem to continue to work while we're asleep, and it may help to prime them by reviewing the problem before going to sleep. (I think Eliezer wrote a post or comment to this effect, but I can't find it now.)
Be ready to recognize a good answer when you see it. The history of science shows that human knowledge does make progress, but sometimes only by an older generation dying off or retiring. It seems that we often can't recognize a good answer even when it's staring us in the face. I wish I knew more about what factors affect this ability, but one thing that might help is to avoid acquiring a high social status, or the mental state of having high social status. (See also, How To Actually Change Your Mind.)
Applied Picoeconomics
Related to: Akrasia, Hyperbolic Discounting, and Picoeconomics, Fix It And Tell Us What You Did
A while back, ciphergoth posted an article on "picoeconomics", the theory that akrasia could be partially modeled as bargaining between present and future selves. I think the model is incomplete, because it doesn't explain how the analogy is instantiated in the real world, and I'd like to investigate that further sometime - but it's a good first-order approximation.
For those of you too lazy to read the article (come on! It has pictures of naked people! Well, one naked person. Suspended from a graph of a hyperbolic curve) Ainslie argues that "intertemporal bargaining" is one way to overcome preference reversal. For example, an alcoholic has two conflicting preferences: right now, he would rather drink than not drink, but next year he would rather be the sort of person who never drinks than remain an alcoholic. But because his brain uses hyperbolic discounting, a process that pays more attention to his current utility than his future utility, he's going to hit the whiskey.
This sticks him in a sorites paradox. Honestly, it's not going to make much of a difference if he has one more drink, so why not hit the whiskey? Ainslie's answer is that he should set a hard-and-fast rule: "I will never drink alcohol". Following this rule will cure his alcoholism and help him achieve his dreams. He now has a very high preference for following the rule; a preference hopefully stronger than his current preference for whiskey.
Ainslie's other point is that this rule needs to really be hard-and-fast. If his rule is "I will drink less whiskey", then that leaves it open for him to say "Well, I'll drink some whiskey now, and none later; that counts as 'less'", and then the whole problem comes back just as bad as before. Likewise, if he says "It's my birthday, I'll let myself break the rule just this once," then soon he's likely to be saying "It's the Sunday before Cinco de Mayo, this calls for a celebration!" Ainslie has some much more formal and convincing ways of framing this, which is why you should read the article instead of just trusting this summary.
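Ainslie's preference reversal falls straight out of the hyperbolic discount curve, V = A / (1 + kD) for a reward of size A at delay D. A minimal sketch, with the discount rate and reward sizes as illustrative assumptions:

```python
# Hyperbolic discounting sketch. The rewards, delays, and k below are
# invented for illustration; only the V = A / (1 + k*D) form is Ainslie's.

def discounted_value(amount, delay, k=1.0):
    """Hyperbolically discounted present value of a reward `delay` days away."""
    return amount / (1 + k * delay)

# Small-soon reward: one drink's pleasure, 1 day away.
# Large-late reward: sobriety's payoff, 10 days away.

# Viewed from a month out (add 30 days to both delays), sobriety wins:
print(discounted_value(1.0, 31) > discounted_value(5.0, 40))  # -> False
# Up close, the immediate drink dominates -- the preference reverses:
print(discounted_value(1.0, 1) > discounted_value(5.0, 10))   # -> True
```

An exponential discounter would never show this flip; the crossover as the small reward draws near is the signature of the hyperbolic curve, and it's exactly the gap that Ainslie's hard-and-fast rules are meant to bridge.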
The stuff by Ainslie I read (I didn't spring for any of his dead-tree books) didn't offer any specific pointers for increasing your willpower, but it's pretty easy to read between the lines and figure out what applied picoeconomics ought to look like. In the interest of testing a scientific theory, not to mention the ongoing effort to take control of my own life, I've been testing picoeconomic techniques for the last two months.
Saturation, Distillation, Improvisation: A Story About Procedural Knowledge And Cookies
Most propositional knowledge (knowledge of facts) is pretty easy to come by (at least in principle). There is only one capital of Venezuela, and if you wish to learn the capital of Venezuela, Wikipedia will cooperatively inform you that it is Caracas. For propositional knowledge that Wikipedia knoweth not, there is the scientific method. Procedural knowledge - the knowledge of how to do something - is a different animal entirely. This is true not only with regard to the question of whether Wikipedia will be helpful, but also in the brain architecture at work: anterograde amnesiacs can often pick up new procedural skills while remaining unable to learn new propositional information.
One complication in learning new procedures is that there are usually dozens, if not hundreds, of ways to do something. Little details - the sorts of things that sink into the subconscious with practice but are crucial to know for a beginner - are frequently omitted in casual descriptions. Often, it can be very difficult to break into a new procedurally-oriented field of knowledge because so much background information is required. While there may be acknowledged masters of the procedure, it is rarely the case that their methods are ideal for every situation and potential user, because the success of a procedure depends on a vast array of circumstantial factors.
I propose below a general strategy for acquiring new procedural knowledge. First, saturate by getting a diverse set of instructions from different sources. Then, distill by identifying what all or most of them have in common. Finally, improvise within the remaining search space to find something that works reliably for you and your circumstances.
The strategy is not fully general: I expect it would only work properly for procedures that are widely attempted and shared; that you can afford to try multiple times; that have at least partially independent steps so you can mix and match; and that are in fields you have at least a passing familiarity with. The sort of procedural knowledge that I seek with the most regularity is how to make new kinds of food, so I will illustrate my strategy with a description of how I used it to learn to make meringues. If you find cookies a dreadfully boring subject of discourse, you may not wish to read the rest of this post.
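The distillation step can be sketched as counting which steps a majority of sources share. The recipe steps below are invented for illustration, not taken from the post:

```python
# Sketch of "distill": given several sets of instructions gathered during
# saturation, keep the steps most sources agree on. Steps are invented.
from collections import Counter

recipes = [
    {"whip egg whites", "add sugar gradually", "bake low and slow"},
    {"whip egg whites", "add sugar gradually", "add cream of tartar"},
    {"whip egg whites", "add sugar all at once", "bake low and slow"},
]

counts = Counter(step for recipe in recipes for step in recipe)
threshold = len(recipes) / 2
core = {step for step, n in counts.items() if n > threshold}
print(sorted(core))  # the majority-shared steps: the non-negotiable core
```

Whatever survives the majority filter is the likely-essential core; the steps that don't survive mark out the search space left over for the improvisation phase.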
Bad reasons for a rationalist to lose
Reply to: Practical Advice Backed By Deep Theories
Inspired by what looks like a very damaging reticence to embrace and share brain hacks that might only work for some of us, but are not backed by Deep Theories. In support of tinkering with brain hacks and self experimentation where deep science and large trials are not available.
Eliezer has suggested that, before he will try a new anti-akrasia brain hack:
[…] the advice I need is from someone who reads up on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up. And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas.
This doesn't look to me like an expected utility calculation, and I think it should. It looks like an attempt to justify why he can't be expected to win yet. It just may be deeply wrongheaded.
I submit that we don't "need" (emphasis in original) this stuff, it'd just be super cool if we could get it. We don't need to know that the next brain hack we try will work, and we don't need to know that it's general enough that it'll work for anyone who tries it; we just need the expected utility of a trial to be higher than that of the other things we could be spending that time on.
So… this isn't other-optimizing, it's a discussion of how to make decisions under uncertainty. What do all of us need to make a rational decision about which brain hacks to try?
- We need a goal: Eliezer has suggested "I want to hear how I can overcome akrasia - how I can have more willpower, or get more done with less mental pain". I'd fold cost in with something like "to reduce the personal costs of akrasia by more than the investment in trying and implementing brain hacks against it, plus the expected profit on other activities I could undertake with that time".
- We need some likelihood estimates:
- Chance of a random brain hack working on first trial: ?, second trial: ?, third: ?
- Chance of a random brain hack working on subsequent trials (after the third - the noise of mood, wakefulness, etc. is large, so subsequent trials surely have non-zero chance of working, but that chance will probably diminish): →0
- Chance of a popular brain hack working on first (second, third) trial: ? (GTD is lauded by many, many people; your brother-in-law's homebrew brain hack is less well tried)
- Chance that a brain hack that would work in the first three trials would seem deeply compelling on first being exposed to it: ?
(Can these books be judged by their covers? How does this chance vary with the type of exposure? What would you need to do to understand enough about a hack that would work to increase its chance of seeming deeply compelling on first exposure?)
- Chance that a brain hack that would not work in the first three trials would seem deeply compelling on first being exposed to it: ? (false positives)
- Chance of a brain hack recommended by someone in your circle working on first (second, third) trial: ?
- Chance that someone else will read up "on a whole lot of experimental psychology dealing with willpower, mental conflicts, ego depletion, preference reversals, hyperbolic discounting, the breakdown of the self, picoeconomics, etcetera, and who, in the process of overcoming their own akrasia, manages to understand what they did in truly general terms - thanks to experiments that give them a vocabulary of cognitive phenomena that actually exist, as opposed to phenomena they just made up. And moreover, someone who can explain what they did to someone else, thanks again to the experimental and theoretical vocabulary that lets them point to replicable experiments that ground the ideas in very concrete results, or mathematically clear ideas", all soon: ? (pretty small?)
- What else do we need to know?
- We need some time/cost estimates (these will vary greatly by proposed brain hack):
- Time required to stage a personal experiment on the hack: ?
- Time to review and understand the hack in sufficient detail to estimate the time required to stage a personal experiment: ?
- What else do we need?
… and, what don't we need?
- A way to reject the placebo effect - if it wins, use it. If it wins for you but wouldn't win for someone else, then they have a problem. We may choose to spend some effort helping others benefit from this hack, but that seems to be a different task - it's irrelevant to our goal.
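Once the estimates above are filled in, the expected-utility framing argued for here reduces to a one-line calculation. The numbers below are placeholders standing in for the "?" entries, not real estimates:

```python
# Sketch of the expected-utility decision rule for trying a brain hack.
# All probabilities, benefits, and costs are placeholder assumptions.

def expected_utility(p_works, benefit_hours, trial_cost_hours):
    """Expected net gain (in hours) from staging one trial of a brain hack."""
    return p_works * benefit_hours - trial_cost_hours

# Even a 10% chance of a hack that saves 100 hours justifies a 5-hour trial:
print(expected_utility(0.10, 100, 5))  # positive: worth trying
# ...while a 1% chance does not:
print(expected_utility(0.01, 100, 5))  # negative: skip it
```

Note that nothing in this calculation requires knowing the hack will work, or that it generalizes to others; a modest success probability against a cheap trial is enough, which is the post's point against demanding Deep Theories first.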
How should we decide how much time to spend gathering data and generating estimates on matters such as this? How much is Eliezer setting himself up to lose, and how much am I missing the point?