Quantum Computing Since Democritus got me thinking that we may want a more riveting title for The Sequences, 2006–2009 ebook we're preparing for release (like the FtIE ebook). Maybe it could be something like [Really Catchy Title]: The Less Wrong Sequences, 2006–2009.
The reason for "2006–2009" is that Highly Advanced Epistemology 101 for Beginners will be its own ebook, and future Yudkowskian LW sequences (if there are any) won't be included either.
- The Craft of Rationality: The Less Wrong Sequences, 2006–2009
- The Art of Rationality: The Less Wrong Sequences, 2006–2009
- Becoming Less Wrong: The Sequences, 2006–2009
I've been trying to get clear on something you might call "estimate stability." Steven Kaas recently posted my question to StackExchange, but we might as well post it here as well:
I'm trying to reason about something I call "estimate stability," and I'm hoping you can tell me whether there’s some relevant technical language...
What do I mean by "estimate stability"? Consider these three different propositions:
- 1. We’re 50% sure that a coin (known to be fair) will land on heads.
- 2. We’re 50% sure that Matt will show up at the party.
- 3. We’re 50% sure that Strong AI will be invented by 2080.
These estimates feel different. One reason they feel different is that the estimates have different degrees of "stability." In case (1) we don't expect to gain information that will change our probability estimate. But for cases (2) and (3), we may well come upon some information that causes us to adjust the estimate either up or down.
So estimate (1) is more "stable," but I'm not sure how this should be quantified. Should I think of it in terms of running a Monte Carlo simulation of what future evidence might be, and looking at something like the variance of the distribution of the resulting estimates? What happens when it’s a whole probability distribution, e.g. for the time Strong AI is invented? (Do you calculate the stability of the probability density for every year, then average the results?)
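To make the Monte Carlo idea concrete, here is a minimal sketch (my own illustration, not part of the original question) that treats the estimate as the mean of a Beta distribution, simulates hypothetical future observations from the posterior predictive, and measures how much the updated estimates spread out:

```python
import random

def estimate_spread(alpha, beta, n_future=20, n_sims=5000):
    """Monte Carlo sketch of 'estimate stability' for a Beta-Bernoulli model.
    The current estimate is the posterior mean alpha/(alpha+beta). We simulate
    n_future hypothetical observations from the posterior predictive, update,
    and return the variance of the resulting updated estimates."""
    updated = []
    for _ in range(n_sims):
        a, b = alpha, beta
        for _ in range(n_future):
            if random.random() < a / (a + b):  # posterior predictive for the next observation
                a += 1
            else:
                b += 1
        updated.append(a / (a + b))
    mean = sum(updated) / n_sims
    return sum((x - mean) ** 2 for x in updated) / n_sims

# A coin known to be fair behaves like Beta(1000, 1000): new flips barely move the 50%.
print(estimate_spread(1000, 1000))  # tiny variance -> stable estimate
# "Will Matt show up?" with little prior data behaves like Beta(1, 1): evidence moves it a lot.
print(estimate_spread(1, 1))        # much larger variance -> unstable estimate
```

On this toy model, "stability" is just how little the estimate is expected to move under the evidence we anticipate; the Strong AI case would need a model over timelines rather than a single Bernoulli parameter.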
Here are some other considerations that would be useful to relate more formally to considerations of estimate stability:
- If we’re estimating some variable, having a narrow probability distribution (prior to future evidence with respect to which we’re trying to assess the stability) corresponds to having a lot of data. New data, in that case, would make less of a contribution in terms of changing the mean and reducing the variance.
- There are differences in model uncertainty between the three cases. I know what model to use when predicting a coin flip. My method of predicting whether Matt will show up at a party is shakier, but I have some idea of what I’m doing. With the Strong AI case, I don’t really have any good idea of what I’m doing. Presumably model uncertainty is related to estimate stability, because the more model uncertainty we have, the more we can change our estimate by reducing our model uncertainty.
- Another difference between the three cases is the degree to which our actions allow us to improve our estimates, increasing their stability. For example, we can reduce the uncertainty and increase the stability of our estimate about Matt by calling him, but we don’t really have any good ways to get better estimates of Strong AI timelines (other than by waiting).
- Value-of-information affects how we should deal with delay. Estimates that are unstable in the face of evidence we expect to get in the future seem to imply higher VoI. This creates a reason to accept delays in our actions. Or if we can easily gather information that will make our estimates more accurate and stable, that means we have more reason to pay the cost of gathering that information. If we expect to forget information, or expect our future selves not to take information into account, dynamic inconsistency becomes important. This is another reason why estimates might be unstable. One possible strategy here is to precommit to have our estimates regress to the mean.
Thanks for any thoughts!
Just before the Trinity test, Enrico Fermi decided he wanted a rough estimate of the blast's power before the diagnostic data came in. So he dropped some pieces of paper from his hand as the blast wave passed him, and used this to estimate that the blast was equivalent to 10 kilotons of TNT. His guess was remarkably accurate for having so little data: the true answer turned out to be 20 kilotons of TNT.
Fermi had a knack for making roughly accurate estimates with very little data, which is why such estimates are known today as Fermi estimates.
Why bother with Fermi estimates, if your estimates are likely to be off by a factor of 2 or even 10? Often, getting an estimate within a factor of 10 or 20 is enough to make a decision. So Fermi estimates can save you a lot of time, especially as you gain more practice at making them.
These first two sections are adapted from Guesstimation 2.0.
Dare to be imprecise. Round things off enough to do the calculations in your head. I call this the spherical cow principle, after a joke about how physicists oversimplify things to make calculations feasible:
Milk production at a dairy farm was low, so the farmer asked a local university for help. A multidisciplinary team of professors was assembled, headed by a theoretical physicist. After two weeks of observation and analysis, the physicist told the farmer, "I have the solution, but it only works in the case of spherical cows in a vacuum."
By the spherical cow principle, there are 300 days in a year, people are six feet (or 2 meters) tall, the circumference of the Earth is 20,000 mi (or 40,000 km), and cows are spheres of meat and bone 4 feet (or 1 meter) in diameter.
Decompose the problem. Sometimes you can give an estimate in one step, within a factor of 10. (How much does a new compact car cost? $20,000.) But in most cases, you'll need to break the problem into several pieces, estimate each of them, and then recombine them. I'll give several examples below.
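As a taste of what a decomposition looks like, here is a small sketch using the classic piano-tuners question rather than anything from the text above; every number is a rough guess of my own, purely for illustration:

```python
# "How many piano tuners work in Chicago?" -- all inputs are order-of-magnitude guesses.
population        = 3e6      # people in Chicago (spherical-cow rounding)
pianos_per_person = 1 / 100  # roughly one piano per hundred people
tunings_per_piano = 1        # each piano tuned about once a year
tunings_per_tuner = 2 * 300  # 2 tunings a day, 300 working days a year

tuners = population * pianos_per_person * tunings_per_piano / tunings_per_tuner
print(round(tuners))         # ~50 -- good enough for a factor-of-10 estimate
```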
Estimate by bounding. Sometimes it is easier to give lower and upper bounds than to give a point estimate. How much time per day does the average 15-year-old watch TV? I don't spend any time with 15-year-olds, so I haven't a clue. It could be 30 minutes, or 3 hours, or 5 hours, but I'm pretty confident it's more than 2 minutes and less than 7 hours (400 minutes, by the spherical cow principle).
Can we convert those bounds into an estimate? You bet. But we don't do it by taking the average. That would give us (2 mins + 400 mins)/2 = 201 mins, which is within a factor of 2 of our upper bound, but a factor of 100 greater than our lower bound. Since our goal is to estimate the answer within a factor of 10, we'll probably be way off.
Instead, we take the geometric mean — the square root of the product of our upper and lower bounds. But square roots often require a calculator, so instead we'll take the approximate geometric mean (AGM). To do that, we average the coefficients and exponents of our upper and lower bounds.
So what is the AGM of 2 and 400? Well, 2 is 2×10^0, and 400 is 4×10^2. The average of the coefficients (2 and 4) is 3; the average of the exponents (0 and 2) is 1. So, the AGM of 2 and 400 is 3×10^1, or 30. The precise geometric mean of 2 and 400 turns out to be 28.28. Not bad.
What if the sum of the exponents is an odd number? Then we round the resulting exponent down, and multiply the final answer by three. So suppose my lower and upper bounds for how much TV the average 15-year-old watches had been 20 mins and 400 mins. Now we calculate the AGM like this: 20 is 2×10^1, and 400 is still 4×10^2. The average of the coefficients (2 and 4) is 3; the average of the exponents (1 and 2) is 1.5. So we round the exponent down to 1, and we multiply the final result by three: 3 × (3×10^1) = 90 mins. The precise geometric mean of 20 and 400 is 89.44. Again, not bad.
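Here is the same procedure in code — a small sketch of the rule as described above, not anything from the original post:

```python
import math

def approx_geometric_mean(lower, upper):
    """Average the coefficients and exponents of the bounds; if the exponent
    sum is odd, round the exponent down and multiply the result by 3."""
    def split(x):  # write x as coefficient * 10**exponent
        exp = math.floor(math.log10(x))
        return x / 10 ** exp, exp
    c1, e1 = split(lower)
    c2, e2 = split(upper)
    coeff = (c1 + c2) / 2
    exp_sum = e1 + e2
    if exp_sum % 2 == 0:
        return coeff * 10 ** (exp_sum // 2)
    return 3 * coeff * 10 ** (exp_sum // 2)  # odd sum: round down, multiply by 3

print(approx_geometric_mean(2, 400))   # 30.0  (true geometric mean: ~28.3)
print(approx_geometric_mean(20, 400))  # 90.0  (true geometric mean: ~89.4)
```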
Sanity-check your answer. You should always sanity-check your final estimate by comparing it to some reasonable analogue. You'll see examples of this below.
Use Google as needed. You can often quickly find the exact quantity you're trying to estimate on Google, or at least some piece of the problem. In those cases, it's probably not worth trying to estimate it without Google.
Like Eliezer, I "do my best thinking into a keyboard." It starts with a burning itch to figure something out. I collect ideas and arguments and evidence and sources. I arrange them, tweak them, criticize them. I explain it all in my own words so I can understand it better. By then it is nearly something that others would want to read, so I clean it up and publish, say, How to Beat Procrastination. I write essays in the original sense of the word: "attempts."
This time, I'm trying to figure out something we might call "tacit rationality" (cf. tacit knowledge).
I tried and failed to write a good post about tacit rationality, so I wrote a bad post instead — one that is basically a patchwork of somewhat-related musings on explicit and tacit rationality. Therefore I'm posting this article to LW Discussion. I hope the ensuing discussion ends up leading somewhere with more clarity and usefulness.
Three methods for training rationality
Which of these three options do you think will train rationality (i.e. systematized winning, or "winning-rationality") most effectively?
- 1. Spend one year reading and re-reading The Sequences, studying the math and cognitive science of rationality, and discussing rationality online and at Less Wrong meetups.
- 2. Attend a CFAR workshop, then spend the next year practicing those skills and other rationality habits every week.
- 3. Run a startup or small business for one year.
Option 1 seems to be pretty effective at training people to talk intelligently about rationality (let's call that "talking-rationality"), and it seems to inoculate people against some common philosophical mistakes.
We don't yet have any examples of someone doing Option 2 (the first CFAR workshop was May 2012), but I'd expect Option 2 — if actually executed — to result in more winning-rationality than Option 1, and also a modicum of talking-rationality.
What about Option 3? Unlike Option 1 (and, to a lesser degree, Option 2), I'd expect it to train almost no ability to talk intelligently about rationality. But I would expect it to result in relatively good winning-rationality, due to its tight feedback loops.
Talking-rationality and winning-rationality can come apart
I've come to believe... that the best way to succeed is to discover what you love and then find a way to offer it to others in the form of service, working hard, and also allowing the energy of the universe to lead you.
Oprah isn't known for being a rational thinker. She is a known peddler of pseudoscience, and she attributes her success (in part) to allowing "the energy of the universe" to lead her.
Yet she must be doing something right. Oprah is a true rags-to-riches story. Born in Mississippi to an unwed teenage housemaid, she was so poor she wore dresses made of potato sacks. She was molested by a cousin, an uncle, and a family friend. She became pregnant at age 14.
But in high school she became an honors student, won oratory contests and a beauty pageant, and was hired by a local radio station to report the news. She became the youngest-ever news anchor at Nashville's WLAC-TV, then hosted several shows in Baltimore, then moved to Chicago and within months her own talk show shot from last place to first place in the ratings there. Shortly afterward her show went national. She also produced and starred in several TV shows, was nominated for an Oscar for her role in a Steven Spielberg movie, launched her own TV cable network and her own magazine (the "most successful startup ever in the [magazine] industry" according to Fortune), and became the world's first female black billionaire.
I'd like to suggest that Oprah's climb probably didn't come merely through inborn talent, hard work, and luck. To get from potato sack dresses to the Forbes billionaire list, Oprah had to make thousands of pretty good decisions. She had to make pretty accurate guesses about the likely consequences of various actions she could take. When she was wrong, she had to correct course fairly quickly. In short, she had to be fairly rational, at least in some domains of her life.
Similarly, I know plenty of business managers and entrepreneurs who have a steady track record of good decisions and wise judgments, and yet they are religious, or they commit basic errors in logic and probability when they talk about non-business subjects.
What's going on here? My guess is that successful entrepreneurs and business managers and other people must have pretty good tacit rationality, even if they aren't very proficient with the "rationality" concepts that Less Wrongers tend to discuss on a daily basis. Stated another way, successful businesspeople make fairly rational decisions and judgments, even though they may confabulate rather silly explanations for their success, and even though they don't understand the math or science of rationality well.
LWers can probably outperform Mark Zuckerberg on the CRT and the Berlin Numeracy Test, but Zuckerberg is laughing at them from atop a huge pile of utility.
Explicit and tacit rationality
Patri Friedman, in Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality, reminded us that skill acquisition comes from deliberate practice, and reading LW is a "shiny distraction," not deliberate practice. He said a real rationality practice would look more like... well, what Patri describes is basically CFAR, though CFAR didn't exist at the time.
In response, and again long before CFAR existed, Anna Salamon wrote Goals for which Less Wrong does (and doesn't) help. Summary: Some domains provide rich, cheap feedback, so you don't need much LW-style rationality to become successful in those domains. But many of us have goals in domains that don't offer rapid feedback: e.g. whether to buy cryonics, which 40-year investments are safe, which metaethics to endorse. For this kind of thing you need LW-style rationality. (We could also state this as: "Domains with rapid feedback train tacit rationality with respect to those domains, but for domains without rapid feedback you've got to do the best you can with LW-style 'explicit rationality'.")
The good news is that you should be able to combine explicit and tacit rationality. Explicit rationality can help you realize that you should force tight feedback loops into whichever domains you want to succeed in, so that you can develop good intuitions about how to succeed in those domains. (See also: Lean Startup or Lean Nonprofit methods.)
Explicit rationality could also help you realize that the cognitive biases most-discussed in the literature aren't necessarily the ones you should focus on ameliorating, as Aaron Swartz wrote:
Cognitive biases cause people to make choices that are most obviously irrational, but not most importantly irrational... Since cognitive biases are the primary focus of research into rationality, rationality tests mostly measure how good you are at avoiding them... LW readers tend to be fairly good at avoiding cognitive biases... But there's a whole series of much more important irrationalities that LWers suffer from. (Let's call them "practical biases" as opposed to "cognitive biases," even though both are ultimately practical and cognitive.)
...Rationality, properly understood, is in fact a predictor of success. Perhaps if LWers used success as their metric (as opposed to getting better at avoiding obvious mistakes), they might focus on their most important irrationalities (instead of their most obvious ones), which would lead them to be more rational and more successful.
Final scattered thoughts
- If someone is consistently winning, and not just because they have tons of wealth or fame, then maybe you should conclude they have pretty good tacit rationality even if their explicit rationality is terrible.
- The positive effects of tight feedback loops might trump the effects of explicit rationality training.
- Still, I suspect explicit rationality plus tight feedback loops could lead to the best results of all.
- I really hope we can develop a real rationality dojo.
- If you're reading this post, you're probably spending too much time reading Less Wrong, and too little time hacking your motivation system, learning social skills, and learning how to inject tight feedback loops into everything you can.
The chapter on judgment under uncertainty in the (excellent) new Oxford Handbook of Cognitive Psychology has a handy little section on recent critiques of the "heuristics and biases" tradition. It also discusses problems with the somewhat-competing "fast and frugal heuristics" school of thought, but for now let me just quote the section on heuristics and biases (pp. 608-609):
The heuristics and biases program has been highly influential; however, some have argued that in recent years the influence, at least in psychology, has waned (McKenzie, 2005). This waning has been due in part to pointed critiques of the approach (e.g., Gigerenzer, 1996). This critique comprises two main arguments: (1) that by focusing mainly on coherence standards [e.g. their rationality given the subject's other beliefs, as contrasted with correspondence standards having to do with the real-world accuracy of a subject's beliefs] the approach ignores the role played by the environment or the context in which a judgment is made; and (2) that the explanations of phenomena via one-word labels such as availability, anchoring, and representativeness are vague, insufficient, and say nothing about the processes underlying judgment (see Kahneman, 2003; Kahneman & Tversky, 1996 for responses to this critique).
The accuracy of some of the heuristics proposed by Tversky and Kahneman can be compared to correspondence criteria (availability and anchoring). Thus, arguing that the tradition only uses the “narrow norms” (Gigerenzer, 1996) of coherence criteria is not strictly accurate (cf. Dunwoody, 2009). Nonetheless, responses in famous examples like the Linda problem can be reinterpreted as sensible rather than erroneous if one uses conversational or pragmatic norms rather than those derived from probability theory (Hilton, 1995). For example, Hertwig, Benz and Krauss (2008) asked participants which of the following two statements is more probable:
[X] The percentage of adolescent smokers in Germany decreases at least 15% from current levels by September 1, 2003.
[X&Y] The tobacco tax in Germany is increased by 5 cents per cigarette and the percentage of adolescent smokers in Germany decreases at least 15% from current levels by September 1, 2003.
According to the conjunction rule, [X&Y cannot be more probable than X] and yet the majority of participants ranked the statements in that order. However, when subsequently asked to rank order four statements in order of how well each one described their understanding of X&Y, there was an overwhelming tendency to rank statements like “X and therefore Y” or “X and X is the cause for Y” higher than the simple conjunction “X and Y.” Moreover, the minority of participants who did not commit the conjunction fallacy in the first judgment showed internal coherence by ranking “X and Y” as best describing their understanding in the second judgment. These results suggest that people adopt a causal understanding of the statements, in essence ranking the probability of X given Y as more probable than X occurring alone. If so, then arguably the conjunction “error” is no longer incorrect. (See Moro, 2009 for extensive discussion of the reasons underlying the conjunction fallacy, including why “misunderstanding” cannot explain all instances of the fallacy.)
The “vagueness” argument can be illustrated by considering two related phenomena: the gambler’s fallacy and the hot-hand (Gigerenzer & Brighton, 2009). The gambler’s fallacy is the tendency for people to predict the opposite outcome after a run of the same outcome (e.g., predicting heads after a run of tails when flipping a fair coin); the hot-hand, in contrast, is the tendency to predict a run will continue (e.g., a player making a shot in basketball after a succession of baskets; Gilovich, Vallone, & Tversky, 1985). Ayton and Fischer (2004) pointed out that although these two behaviors are opposite - ending or continuing runs - they have both been explained via the label “representativeness.” In both cases a faulty concept of randomness leads people to expect short sections of a sequence to be “representative” of their generating process. In the case of the coin, people believe (erroneously) that long runs should not occur, so the opposite outcome is predicted; for the player, the presence of long runs rules out a random process so a continuation is predicted (Gilovich et al., 1985). The “representativeness” explanation is therefore incomplete without specifying a priori which of the opposing prior expectations will result. More important, representativeness alone does not explain why people have the misconception that random sequences should exhibit local representativeness when in reality they do not (Ayton & Fischer, 2004).
My thanks to MIRI intern Stephen Barnes for transcribing this text.
Co-authored with crazy88. Please let us know when you find mistakes, and we'll fix them. Last updated 03-27-2013.
- 1. What is decision theory?
- 2. Is the rational decision always the right decision?
- 3. How can I better understand a decision problem?
- 4. How can I measure an agent's preferences?
- 5. What do decision theorists mean by "risk," "ignorance," and "uncertainty"?
- 6. How should I make decisions under ignorance?
- 7. Can decisions under ignorance be transformed into decisions under uncertainty?
- 8. How should I make decisions under uncertainty?
- 9. Does axiomatic decision theory offer any action guidance?
- 10. How does probability theory play a role in decision theory?
- 11. What about "Newcomb's problem" and alternative decision algorithms?
Decision theory, also known as rational choice theory, concerns the study of preferences, uncertainties, and other issues related to making "optimal" or "rational" choices. It has been discussed by economists, psychologists, philosophers, mathematicians, statisticians, and computer scientists.
We can divide decision theory into three parts (Grant & Zandt 2009; Baron 2008). Normative decision theory studies what an ideal agent (a perfectly rational agent, with infinite computing power, etc.) would choose. Descriptive decision theory studies how non-ideal agents (e.g. humans) actually choose. Prescriptive decision theory studies how non-ideal agents can improve their decision-making (relative to the normative model) despite their imperfections.
For example, one's normative model might be expected utility theory, which says that a rational agent chooses the action with the highest expected utility. Replicated results in psychology describe humans repeatedly failing to maximize expected utility in particular, predictable ways: for example, they make some choices based not on potential future benefits but on irrelevant past efforts (the "sunk cost fallacy"). To help people avoid this error, some theorists prescribe some basic training in microeconomics, which has been shown to reduce the likelihood that humans will commit the sunk cost fallacy (Larrick et al. 1990). Thus, through a coordination of normative, descriptive, and prescriptive research we can help agents to succeed in life by acting more in accordance with the normative model than they otherwise would.
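For concreteness, here is a toy sketch of the normative model (the numbers are my own illustrative assumptions, not from the cited sources): expected utility theory just says to weight each outcome's utility by its probability and pick the action with the largest sum.

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one action."""
    return sum(p * u for p, u in outcomes)

# Hypothetical decision: carry an umbrella given a 30% chance of rain.
actions = {
    "take umbrella":  [(0.3, 5),   (0.7, 8)],   # (rain, no rain)
    "leave umbrella": [(0.3, -10), (0.7, 10)],
}
for name, outcomes in actions.items():
    print(name, expected_utility(outcomes))
# take umbrella: 7.1 > leave umbrella: 4.0, so the normative model says take it.
```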
Two related fields beyond the scope of this FAQ are game theory and social choice theory. Game theory is the study of conflict and cooperation among multiple decision makers, and is thus sometimes called "interactive decision theory." Social choice theory is the study of making a collective decision by combining the preferences of multiple decision makers in various ways.
This FAQ draws heavily from two textbooks on decision theory: Resnik (1987) and Peterson (2009). It also draws from more recent results in decision theory, published in journals such as Synthese and Theory and Decision.
This is the introduction to a paper I started writing long ago, but have since given up on. The paper was going to be an overview of methods for improving human rationality through cognitive change. Since it contains lots of handy references on rationality, I figured I'd publish it, in case it's helpful to others.
During the last half-century, cognitive scientists have catalogued dozens of common errors in human judgment and decision-making (Griffin et al. 2012; Gilovich et al. 2002). Stanovich (1999) provides a sobering introduction:
For example, people assess probabilities incorrectly, they display confirmation bias, they test hypotheses inefficiently, they violate the axioms of utility theory, they do not properly calibrate degrees of belief, they overproject their own opinions onto others, they allow prior knowledge to become implicated in deductive reasoning, they systematically underweight information about nonoccurrence when evaluating covariation, and they display numerous other information-processing biases...
The good news is that researchers have also begun to understand the cognitive mechanisms which produce these errors (Kahneman 2011; Stanovich 2010), they have found several "debiasing" techniques that groups or individuals may use to partially avoid or correct these errors (Larrick 2004), and they have discovered that environmental factors can be used to help people to exhibit fewer errors (Thaler and Sunstein 2009; Trout 2009).
This "heuristics and biases" research program teaches us many lessons that, if put into practice, could improve human welfare. Debiasing techniques that improve human rationality may be able to decrease rates of violence caused by ideological extremism (Lilienfeld et al. 2009). Knowledge of human bias can help executives make more profitable decisions (Kahneman et al. 2011). Scientists with improved judgment and decision-making skills ("rationality skills") may be more apt to avoid experimenter bias (Sackett 1979). Understanding the nature of human reasoning can also improve the practice of philosophy (Knobe et al. 2012; Talbot 2009; Bishop and Trout 2004; Muehlhauser 2012), which has too often made false assumptions about how the mind reasons (Weinberg et al. 2001; Lakoff and Johnson 1999; De Paul and Ramsey 1999). Finally, improved rationality could help decision makers to choose better policies, especially in domains likely by their very nature to trigger biased thinking, such as investing (Burnham 2008), military command (Lang 2011; Williams 2010; Janser 2007), intelligence analysis (Heuer 1999), or the study of global catastrophic risks (Yudkowsky 2008a).
But is it possible to improve human rationality? The answer, it seems, is "Yes." Lovallo and Sibony (2010) showed that when organizations worked to reduce the effect of bias on their investment decisions, they achieved returns up to 7 percentage points higher. Multiple studies suggest that a simple instruction to "think about alternative hypotheses" can counteract overconfidence, confirmation bias, and anchoring effects, leading to more accurate judgments (Mussweiler et al. 2000; Koehler 1994; Koriat et al. 1980). Merely warning people about biases can decrease their prevalence, at least with regard to framing effects (Cheng and Wu 2010), hindsight bias (Hasher et al. 1981; Reimers and Butler 1992), and the outcome effect (Clarkson et al. 2002). Several other methods have been shown to ameliorate the effects of common human biases (Larrick 2004). Judgment and decision-making appear to be skills that can be learned and improved with practice (Dhami et al. 2012).
In this article, I first explain what I mean by "rationality" as a normative concept. I then review the state of our knowledge concerning the causes of human errors in judgment and decision-making (JDM). The largest section of the article summarizes what we currently know about how to improve human rationality through cognitive change (e.g. "rationality training"). I conclude by assessing the prospects for improving human rationality through cognitive change, and by recommending particular avenues for future research.
Those aching for good rationality writing can get their fix from Great rationality posts by LWers not posted to LW, and also from the Overcoming Bias archives. Some highlights are below, up through June 28, 2007.
- Finney, Foxes vs. Hedgehogs: Predictive Success
- Hanson, When Error is High, Simplify
- Shulman, Meme Lineages and Expert Consensus
- Hanson, Resolving Your Hypocrisy
- Hanson, Academic Overconfidence
- Hanson, Conspicuous Consumption of Info
- Sandberg, Supping with the Devil
- Hanson, Conclusion-Blind Review
- Shulman, Should We Defer to Secret Evidence?
- Shulman, Sick of Textbook Errors
- Hanson, Dare to Deprogram Me?
- Armstrong, Biases, By and Large
- Friedman, A Tough Balancing Act
- Hanson, RAND Health Insurance Experiment
- Armstrong, The Case for Dangerous Testing
- Hanson, In Obscurity Errors Remain
- Falkenstein, Hofstadter's Law
- Hanson, Against Free Thinkers
Ever since Eliezer, Yvain, and I stopped posting regularly, LW's front page has mostly been populated by meta posts. (The Discussion section is still abuzz with interesting content, though, including original research.)
Luckily, many LWers are posting potentially front-page-worthy content to their own blogs.
Below are some recent-ish highlights outside Less Wrong, for your reading enjoyment. I've added an * to my personal favorites.
Overcoming Bias (Robin Hanson, Rob Wiblin, Katja Grace, Carl Shulman)
- Hanson, Beware Far Values
- Wiblin, Is US Gun Control an Important Issue?
- Wiblin, Morality As Though It Really Mattered
- Grace, Can a Tiny Bit of Noise Destroy Communication?
- Shulman, Nuclear winter and human extinction: Q&A with Luke Oman
- Wiblin, Does complexity bias biotechnology towards doing damage?
- Kurzweil's Law of Accelerating Returns *
- The Great Stagnation
- Epistemic Learned Helplessness *
- The Biodeterminist's Guide to Parenting
- Spreading happiness to the stars seems little harder than just spreading
- Rawls' original position, potential people, and Pascal's Mugging
- Philosophers vs economists on discounting
- Utilitarianism, contractualism, and self-sacrifice
- Are pain and pleasure equally energy-efficient? *
Mark Linsenmayer, one of the hosts of a top philosophy podcast called The Partially Examined Life, has written a critique of the view that Eliezer and I seem to take of philosophy. Below, I respond to a few of Mark's comments. Naturally, I speak only for myself, not for Eliezer.
I'm generally skeptical when someone proclaims that "rationality" itself should get us to throw out 90%+ of philosophy...
But let me be more precise. I do claim that almost all philosophy is useless for figuring out what is true, for reasons explained in several of my posts:
- Philosophy: A Diseased Discipline
- Concepts Don't Work That Way
- Intuitions Aren't Shared That Way
- Train Philosophers with Pearl and Kahneman, Not Plato and Kant
Mark replies that the kinds of unscientific philosophy I dismiss can be "useful at least in the sense of entertaining," which of course isn't something I'd deny. I'm just trying to say that Heidegger is pretty darn useless for figuring out what's true. There are thousands of readings that will more efficiently make your model of the world more accurate.
If you want to read Heidegger as poetry or entertainment, that's fine. I watch Game of Thrones, but not because it's a useful inquiry into truth.
Also, I'm not sure what it would mean to say we should throw out 90% of philosophy because of rationality, but I probably don't agree with the "because" clause there.