The Best Textbooks on Every Subject
For years, my self-education was stupid and wasteful. I learned by consuming blog posts, Wikipedia articles, classic texts, podcast episodes, popular books, video lectures, peer-reviewed papers, Teaching Company courses, and Cliff's Notes. How inefficient!
I've since discovered that textbooks are usually the quickest and best way to learn new material. That's what they are designed to be, after all. Less Wrong has often recommended the "read textbooks!" method. Make progress by accumulation, not random walks.
But textbooks vary widely in quality. I was forced to read some awful textbooks in college. The ones on American history and sociology were memorably bad, in my case. Other textbooks are exciting, accurate, fair, well-paced, and immediately useful.
What if we could compile a list of the best textbooks on every subject? That would be extremely useful.
Let's do it.
There have been other pages of recommended reading on Less Wrong before (and elsewhere), but this post is unique. Here are the rules:
- Post the title of your favorite textbook on a given subject.
- You must have read at least two other textbooks on that same subject.
- You must briefly name the other books you've read on the subject and explain why you think your chosen textbook is superior to them.
Rules #2 and #3 are to protect against recommending a bad book that only seems impressive because it's the only book you've read on the subject. Once, a popular author on Less Wrong recommended Bertrand Russell's A History of Western Philosophy to me, but when I noted that it was more polemical and inaccurate than the other major histories of philosophy, he admitted he hadn't really done much other reading in the field, and only liked the book because it was exciting.
I'll start the list with three of my own recommendations...
Scientific Self-Help: The State of Our Knowledge
Part of the sequence: The Science of Winning at Life
Some have suggested that the Less Wrong community could improve readers' instrumental rationality more effectively if it first caught up with the scientific literature on productivity and self-help, and then enabled readers to deliberately practice self-help skills and apply what they've learned in real life.
I think that's a good idea. My contribution today is a quick overview of scientific self-help: what professionals call "the psychology of adjustment." First I'll review the state of the industry and the scientific literature, and then I'll briefly summarize the scientific data available on three topics in self-help: study methods, productivity, and happiness.
The industry and the literature
As you probably know, much of the self-help industry is a sham, ripe for parody. Most self-help books are written to sell, not to help. Pop psychology may be more myth than fact. As Christopher Buckley (1999) writes, "The more people read [self-help books], the more they think they need them... [it's] more like an addiction than an alliance."
Where can you turn for reliable, empirically-based self-help advice? A few leading therapeutic psychologists (e.g., Albert Ellis, Arnold Lazarus, Martin Seligman) have written self-help books based on decades of research, but even these works tend to give recommendations that are still debated, because they aren't yet part of settled science.
Lifelong self-help researcher Clayton Tucker-Ladd wrote and updated Psychological Self-Help (pdf) over several decades. It's a summary of what scientists do and don't know about self-help methods (as of about 2003), but it's also more than 2,000 pages long, and much of it surveys scientific opinion rather than experimental results, because on many subjects there aren't any experimental results yet. The book is associated with an internet community of people sharing what does and doesn't work for them.
More immediately useful is Richard Wiseman's 59 Seconds. Wiseman is an experimental psychologist and paranormal investigator who gathered together what little self-help research is part of settled science, and put it into a short, fun, and useful Malcolm Gladwell-ish book. The next best popular-level general self-help book is perhaps Martin Seligman's What You Can Change and What You Can't.
Simpson's Paradox
This is my first attempt at an elementary statistics post, which I hope is suitable for Less Wrong. I am going to present a discussion of a statistical phenomenon known as Simpson's Paradox. This isn't a paradox, and it wasn't actually discovered by Simpson, but that's the name everybody uses for it, so it's the name I'm going to stick with. Along the way, we'll get some very basic practice at calculating conditional probabilities.
A worked example
The example I've chosen is an exercise from a university statistics course that I have taught on for the past few years. It is by far the most interesting exercise in the entire course, and it goes as follows:
You are a doctor in charge of a large hospital, and you have to decide which treatment should be used for a particular disease. You have the following data from last month: there were 390 patients with the disease. Treatment A was given to 160 patients of whom 100 were men and 60 were women; 20 of the men and 40 of the women recovered. Treatment B was given to 230 patients of whom 210 were men and 20 were women; 50 of the men and 15 of the women recovered. Which treatment would you recommend we use for people with the disease in future?
The simplest way to represent this sort of data is to draw a table; we can then pick the relevant numbers out of the table to calculate the required conditional probabilities.
Overall
|       | A   | B   |
|-------|-----|-----|
| lived | 60  | 65  |
| died  | 100 | 165 |
The probability that a randomly chosen person survived if they were given treatment A is 60/160 = 0.375.
The probability that a randomly chosen person survived if they were given treatment B is 65/230 ≈ 0.283.
So a randomly chosen person given treatment A was more likely to survive than a randomly chosen person given treatment B. Looks like we'd better give people treatment A.
However, since we're given a breakdown of the data by gender, let's look and see whether treatment A is better for both genders, or whether it gets all of its advantage from one or the other.
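The per-gender check is just a few more conditional probabilities, computed the same way as the overall ones. Here is a minimal sketch in Python (the counts are taken from the exercise above; the dictionary layout and the `recovery_rate` helper are my own illustrative choices, not part of the original exercise):

```python
# Hospital data from the exercise: (recovered, total) per treatment and gender.
data = {
    "A": {"men": (20, 100), "women": (40, 60)},
    "B": {"men": (50, 210), "women": (15, 20)},
}

def recovery_rate(recovered, total):
    """Conditional probability of recovery, given membership in this group."""
    return recovered / total

for treatment, groups in data.items():
    # Overall rate: pool the gender subgroups for this treatment.
    rec = sum(r for r, _ in groups.values())
    tot = sum(t for _, t in groups.values())
    print(f"{treatment} overall: {rec}/{tot} = {recovery_rate(rec, tot):.3f}")
    for gender, (r, t) in groups.items():
        print(f"  {treatment} {gender}: {r}/{t} = {recovery_rate(r, t):.3f}")
```

Running this reproduces the overall figures (A: 0.375, B: 0.283) but shows the comparison reversing within each subgroup: among men, A gives 20/100 = 0.200 against B's 50/210 ≈ 0.238, and among women, A gives 40/60 ≈ 0.667 against B's 15/20 = 0.750. Treatment B is better for men and better for women, despite A looking better overall; that reversal is Simpson's paradox.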
I
I wrote this story at Michigan State during Clarion 1997, and it was published in the Sept/Oct 1998 issue of Odyssey. It has many faults and anachronisms that still bother me. I'd like to say that this is because my understanding of artificial intelligence and the singularity has progressed so much since then; but it has not. Many anachronisms and implausibilities are compromises between wanting to be accurate, and wanting to communicate.
At least I can claim the distinction of having published the story with the shortest title in the English language - measured horizontally.
I
I was the last person, and this is how he died.
Virtue Ethics for Consequentialists
Meta: Influenced by a cool blog post by Kaj, which was influenced by a cool Michael Vassar idea (like pretty much everything else; the man sure has a lot of ideas). The name of this post is intended to be taken slightly more literally than the similarly titled Deontology for Consequentialists.
There's been a hip new trend going around the Singularity Institute Visiting Fellows house lately, and it's not postmodernism. It's virtue ethics. "What, virtue ethics?! Are you serious?" Yup. I'm so contrarian I think cryonics isn't obvious and that virtue ethics is better than consequentialism. This post will explain why.
When I first heard about virtue ethics I assumed it was a clever way for people to justify things they did when the consequences were bad and the reasons were bad, too. People are very good at spinning tales about how virtuous they are, even more so than at finding good reasons that they could have done things that turned out unpopular, and it's hard to spin the consequences of your actions as good when everyone is keeping score. But it seems that moral theorists were mostly thinking in far mode and didn't have too much incentive to create a moral theory that benefited them the most, so my Hansonian hypothesis falls flat. Why did Plato and Aristotle and everyone up until the Enlightenment find virtue ethics appealing, then? Well...
The Irrationality Game
Please read the post before voting on the comments, as this is a game where voting works differently.
Warning: the comments section of this post will look odd. The most reasonable comments will have lots of negative karma. Do not be alarmed, it's all part of the plan. In order to participate in this game you should disable any viewing threshold for negatively voted comments.
Here's an irrationalist game meant to quickly collect a pool of controversial ideas for people to debate and assess. It kinda relies on people being honest and not being nitpickers, but it might be fun.
Write a comment reply to this post describing a belief you think has a reasonable chance of being true relative to the beliefs of other Less Wrong folk. Jot down a proposition and a rough probability estimate or qualitative description, like 'fairly confident'.
Example (not my true belief): "The U.S. government was directly responsible for financing the September 11th terrorist attacks. Very confident. (~95%)."
If you post a belief, you have to vote on the beliefs of all other comments. Voting works like this: if you basically agree with the comment, vote the comment down. If you basically disagree with the comment, vote the comment up. What 'basically' means here is intuitive; instead of using a precise mathy scoring system, just make a guess. In my view, if their stated probability is 99.9% and your degree of belief is 90%, that merits an upvote: it's a pretty big difference of opinion. If they're at 99.9% and you're at 99.5%, it could go either way. If you're genuinely unsure whether or not you basically agree with them, you can pass on voting (but try not to). Vote up if you think they are either overconfident or underconfident in their belief: any disagreement is valid disagreement.
That's the spirit of the game, but some more qualifications and rules follow.
Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality
Introduction
Less Wrong is explicitly intended to help people become more rational. Eliezer has posted that rationality means epistemic rationality (having & updating a correct model of the world), and instrumental rationality (the art of achieving your goals effectively). Both are fundamentally tied to the real world and our performance in it - they are about ability in practice, not theoretical knowledge (except inasmuch as that knowledge helps ability in practice). Unfortunately, I think Less Wrong is a failure at instilling abilities-in-practice, and designed in a way that detracts from people's real-world performance.
It will take some time, and it may be unpleasant to hear, but I'm going to try to explain what LW is, why that's bad, and sketch what a tool to actually help people become more rational would look like.
(This post was motivated by Anna Salamon's Humans are not automatically strategic and the response; more detailed background in footnote [1].)
Update / Clarification in response to some comments: This post is based on the assumption that a) the creators of Less Wrong wish Less Wrong to result in people becoming better at achieving their goals (instrumental rationality, aka "efficient productivity"), and b) some (perhaps many) readers read it towards that goal. It is this that I think is self-deception. I do not dispute that LW can be used in a positive way (read during fun time instead of the NYT or funny pictures on Digg), or that it has positive effects (exposing people to important ideas they might not see elsewhere). I merely dispute that reading fun things on the internet can help people become more instrumentally rational. Additionally, I think instrumental rationality is really important and could be a huge benefit to people's lives (in fact, is by definition!), and so a community value that "deliberate practice towards self-improvement" is more valuable and more important than "reading entertaining ideas on the internet" would be of immense value to LW as a community - while greatly decreasing the importance of LW as a website.
Why Less Wrong is not an effective route to increasing rationality.
Definition:
Work: time spent acting in an instrumentally rational manner, i.e. forcing your attention towards the tasks you have consciously determined will be the most effective at achieving your consciously chosen goals, rather than allowing your mind to drift to what is shiny and fun.
By definition, Work is what (instrumental) rationalists wish to do more of. A corollary is that Work is also what is required in order to increase one's capacity to Work. This must be true by the definition of instrumental rationality - if it's the most efficient way to achieve one's goals, and if one's goal is to increase one's instrumental rationality, doing so is most efficiently done by being instrumentally rational about it. [2]
That was almost circular, so to add meat, you'll notice in the definition an embedded assumption that the "hard" part of Work is directing attention - forcing yourself to do what you know you ought to instead of what is fun & easy. (And to a lesser degree, determining your goals and the most effective tasks to achieve them). This assumption may not hold true for everyone, but with the amount of discussion of "Akrasia" on LW, the general drift of writing by smart people about productivity (Paul Graham: Addiction, Distraction, Merlin Mann: Time & Attention), and the common themes in the numerous productivity/self-help books I've read, I think it's fair to say that identifying the goals and tasks that matter and getting yourself to do them is what most humans fundamentally struggle with when it comes to instrumental rationality.
Figuring out goals is fairly personal, often subjective, and can be difficult. I definitely think the deep philosophical elements of Less Wrong and its contributions to epistemic rationality [3] are useful to this, but (like psychedelics) the benefit comes from small occasional doses of the good stuff. Goals should be re-examined regularly, but occasionally (roughly yearly, and at major life forks). An annual retreat with a mix of close friends and distant-but-respected acquaintances (Burning Man, perhaps) will do the trick - reading a regularly updated blog is way overkill.
And figuring out tasks, once you turn your attention to it, is pretty easy. Once you have explicit goals, just consciously and continuously examining whether your actions have been effective at achieving those goals will get you way above the average smart human at correctly choosing the most effective tasks. The big deal here for many (most?) of us, is the conscious direction of our attention.
What is the enemy of consciously directed attention? It is shiny distraction. And what is Less Wrong? It is a blog, a succession of short fun posts with comments, most likely read when people wish to distract or entertain themselves, and tuned for producing shiny ideas which successfully distract and entertain people. As Merlin Mann says: "Joining a Facebook group about creative productivity is like buying a chair about jogging". Well, reading a blog to overcome akrasia IS joining a Facebook group about creative productivity. It's the opposite of this classic piece of advice.
The Affect Heuristic, Sentiment, and Art
I was having a discussion with a friend and reading some related blog articles about the question of whether race affects IQ. (N.B. This post is NOT about the content of the arguments surrounding that question.) Now, like your typical LessWrong member, I subscribe to the Litany of Gendlin, I don’t want to hide from any truth, I believe in honest intellectual inquiry on all subjects. Also, like your typical LessWrong member, I don’t want to be a bigot. These two goals ought to be compatible, right?
But when I finished my conversation and went to lunch, something scary happened. Something I hesitate to admit publicly. I found myself having a negative attitude to all the black people in the cafeteria.
Needless to say, this wasn’t what I wanted. It makes no sense, and it isn’t the way I normally think. But human beings have an affect heuristic. We identify categories as broadly “good” or “bad,” and we tend to believe all good things or all bad things about a category, even when it doesn’t make sense. When we discuss the IQ’s of black and white people, we’re primed to think “yay white, boo black.” Even the act of reading perfectly sound research has that priming effect.
And conscious awareness and effort don't seem to do much to fix this. The Implicit Association Test measures how quickly we group black faces with negative-affect words and white faces with positive-affect words, compared to our speed at grouping the black faces with the positive words and the white faces with the negative words. Nearly everyone, of every race, shows some implicit association of black with "bad." And the researchers who created the test found no improvement with practice or effort.
The one thing that did reduce implicit bias scores was if test-takers primed themselves ahead of time by reading about eminent black historical figures. They were less likely to associate black with “bad” if they had just made a mental association between black and “good.” Which, in fact, was exactly how I snapped out of my moment of cafeteria racism: I recalled to my mind's ear a recording I like of Marian Anderson singing Schubert. The music affected me emotionally and allowed me to escape my mindset.
City of Lights
Sequence index: Living Luminously
Previously in sequence: Highlights and Shadows
Next in Sequence: Lampshading
Pretending to be multiple agents is a useful way to represent your psychology and uncover hidden complexities.
You may find your understanding of this post significantly improved if you read the sixth story from Seven Shiny Stories.
When grappling with the complex web of traits and patterns that is you, you are reasonably likely to find yourself less than completely uniform. You might have several competing perspectives, possess the ability to code-switch between different styles of thought, or even believe outright contradictions. It's bound to make it harder to think about yourself when you find this kind of convolution.
Unfortunately, we don't have the vocabulary or even the mental architecture to easily think of or describe ourselves (nor other people) as containing such multitudes. The closest we come in typical conversation more resembles descriptions of superficial, vague ambivalence ("I'm sorta happy about it, but kind of sad at the same time! Weird!") than the sort of deep-level muddle and conflict that can occupy a brain. The models of the human psyche that have come closest to approximating this mess are what I call "multi-agent models". (Note: I have no idea how what I am about to describe interacts with actual psychiatric conditions involving multiple personalities, voices in one's head, or other potentially similar-sounding phenomena. I describe multi-agent models as employed by psychiatrically singular persons.)
Multi-agent models have been around for a long time: in Plato's Republic, he talks about appetite (itself imperfectly self-consistent), spirit, and reason, forming a tripartite soul. He discusses their functions as though each has its own agency and could perceive, desire, plan, and act given the chance (plus the possibility of one forcing down the other two to rule the soul unopposed). Not too far off in structure is the Freudian id/superego/ego model. The notion of the multi-agent self even appears in fiction (warning: TV Tropes). It appears to be a surprisingly prevalent and natural method for conceptualizing the complicated mind of the average human being. Of course, talking about it as something to do rather than as a way to push your psychological theories or your notion of the ideal city structure or a dramatization of a moral conflict makes you sound like an insane person. Bear with me - I have data on the usefulness of the practice from more than one outside source.
Seven Shiny Stories
It has come to my attention that the contents of the luminosity sequence were too abstract, to the point where explicitly fictional stories illustrating the use of the concepts would be helpful. Accordingly, there follow some such stories.
1. Words (an idea from Let There Be Light, in which I advise harvesting priors about yourself from outside feedback)
Maria likes compliments. She loves compliments. And when she doesn't get enough of them to suit her, she starts fishing, asking plaintive questions, making doe eyes to draw them out. It's starting to annoy people. Lately, instead of compliments, she's getting barbs and criticism and snappish remarks. It hurts - and it seems to hurt her more than it hurts others when they hear similar things. Maria wants to know what it is about her that would explain all of this. So she starts taking personality tests and looking for different styles of maintaining and thinking about relationships, looking for something that describes her. Eventually, she runs into a concept called "love languages" and realizes at once that she's a "words" person. Her friends aren't trying to hurt her - they don't realize how much she thrives on compliments, or how deeply insults can cut when they're dealing with someone who transmits affection verbally. Armed with this concept, she has a lens through which to interpret patterns of her own behavior; she also has a way to explain herself to her loved ones and get the wordy boosts she needs.
2. Widgets (an idea from The ABC's of Luminosity, in which I explain the value of correlating affect, behavior, and circumstance)
Tony's performance at work is suffering. Not every day, but most days, he's too drained and distracted to perform the tasks that go into making widgets. He's in serious danger of falling behind his widget quota and needs to figure out why. Having just read a fascinating and brilliantly written post on Less Wrong about luminosity, he decides to keep track of where he is and what he's doing when he does and doesn't feel the drainedness. After a week, he's got a fairly robust correlation: he feels worst on days when he doesn't eat breakfast, which reliably occurs when he's stayed up too late, hit the snooze button four times, and had to dash out the door. Awkwardly enough, having been distracted all day tends to make him work more slowly at making widgets, which makes him less physically exhausted by the time he gets home and enables him to stay up later. To deal with that, he starts going for long runs on days when his work hasn't been very tiring, and pops melatonin; he easily drops off to sleep when his head hits the pillow at a reasonable hour, gets sounder sleep, scarfs down a bowl of Cheerios, and arrives at the widget factory energized and focused.