A specific bias that Lesswrongers may often get from fiction is the idea that power is proportional to difficulty. The more power something gives you, the harder it should be to get, right?
A mediocre student becomes a powerful mage through her terrible self-sacrifice and years of studying obscure scrolls. Even within the spells she can cast, the truly world-altering ones are those that demand the most laborious preparation, the most precise gestures, and the longest and most incomprehensible stream of syllables. A monk makes an arduous journey to ancient temples and learns secret techniques of spiritual oneness and/or martial asskickery, which require great dedication and self-knowledge. Otherwise, it would be cheating. The whole process of leveling up, of adding ever-increasing modifiers to die rolls, is based on the premise that power comes to those who do difficult things. And it's failsafe - no matter what you put your skill points in, you become better at something. It's a training montage, or a Hero's journey. As with other fictional evidence, these are not "just stories" -- they are powerful cultural narratives. This kind of narrative shapes moral choices and identity. So where do we see this reflected in less obviously fictional contexts?
There's the rags-to-riches story -- the immigrant who came with nothing, but by dint of hard work, now owns a business. University engineering programs are notoriously tough, because you are gaining the ability to do a lot of things (and for signalling reasons). A writer got to where she is today because she wrote and revised and submitted and revised draft after draft after draft.
In every case, there is assumed to be a direct causal link between difficulty and power. Here, these are loosely defined. Roughly, "power" means "ability to have your way", and "difficulty" is "amount of work & sacrifice required." These can be translated into units of social influence -- a.k.a. money -- and investment, a.k.a. time or money. In many cases, power is set by supply and demand -- nobody needs a wizard if they can all cast their own spells, and a doctor can command much higher prices if they're the only one in town. The power of royalty or other birthright follows a similar pattern -- it's not "difficult", but it is scarce -- only a very few people have it, and it's close to impossible for others to get it.
Each individual gets to choose what difficult things they will try to do. Some will have longer or shorter payoffs, but each choice will have some return. And since power (partly) depends on everybody else's choices, neoclassical economics says that individuals' choices collectively determine a single market rate for the return on difficulty. So anything you do that's difficult should have the same payoff.
Anything equally difficult should have equal payoff. Apparently. Clearly, this is not the world we live in. Admittedly, there were some pretty questionable assumptions along the way, but it's almost-kind-of-reasonable to conclude this if you just generalize from the fictional evidence. (Consider RPGs: they're designed to be balanced. Leveling up any class will advance you in power at a more-or-less equal rate.)
So how does reality differ from this fictional evidence? One direction is trivial: it's easy to find examples where what's difficult is not particularly powerful.
Writing a book is hard, and has a respectable payoff (depending on the quality of the book, publicity, etc.). Writing a book without using the letter "e", where the main character speaks only in palindromes, while typing in the dark with only your toes on a computer that's rigged to randomly switch letters around is much much more difficult, but other than perhaps gathering a small but freakishly devoted fanbase, it does not bring any more power/influence than writing any other book. It may be a sign that you are capable of more difficult things, and somebody may notice this and give you power, but this is indirect and unreliable. Similarly, writing a game in machine code or as a set of instructions for a Turing machine is certainly difficult, but also pretty dumb, and has no significant payoff beyond writing the game in a higher-level language. [Edit - thanks to TsviBT: This is assuming there already is a compiler and relevant modules. If you are first to create all of these, there might be quite a lot of benefit.]
On the other hand, some things are powerful, but not particularly difficult. On a purely physical level, this includes operating heavy machinery, or piloting drones. (I'm sure it's not easy, but the power output is immense). Conceptually, I think calculus comes in this category. It can provide a lot of insight into a lot of disparate phenomena (producing utility and its bastard cousin, money), but is not too much work to learn.
As instrumental rationalists, this is the territory we want to be in. We want to beat the market rate for turning effort into influence. So how do we do this?
This is a big, difficult question. I think it's a useful way to frame many of the goals of instrumental rationality. What major should I study? Is this relationship worthwhile? (Note: This may, if poorly applied, turn you into a terrible person. Don't apply it poorly.) What should I do in my spare time?
These questions are tough. But the examples of powerful-but-easy stuff suggest a useful principle: make use of what already exists. Calculus is powerful, but was only easy to learn because I'd already been learning math for a decade. Bulldozers are powerful, and the effort to get this power is minimal if all you have to do is climb in and drive. It's not so worthwhile, though, if you have to derive a design from first principles, mine the ore, invent metallurgy, make all the parts, and secure an oil supply first.
Similarly, if you're already a writer, writing a new book may gain you more influence than learning plumbing. And so on. This begins to suggest that we should not be too hasty to judge past investments as sunk costs. Your starting point matters in trying to find the closest available power boost. And as with any messy real-world problem, luck plays a major role, too.
Of course, there will always be some correlation between power and difficulty -- it's not that the classical economic view is wrong, there's just other factors at play. But to gain influence, you should in general be prepared to do difficult things. However, they should not be arbitrary difficult things -- they should be in areas you have specifically identified as having potential.
To make this more concrete, think of Methods!Harry. He strategically invests a lot of effort, usually at pretty good ratios -- the Gringotts money pump scheme, the True Patronus, his mixing of magic and science, and Partial Transfiguration. Now that's some good fictional evidence.
 Any kind of fiction, but particularly fantasy, sci-fi, and neoclassical economics. All works of elegant beauty, with a more-or-less tenuous relationship to real life.
 Dehghani, M., Sachdeva, S., Ekhtiari, H., Gentner, D., Forbus, K. "The Role of Cultural Narratives in Moral Decision Making." Proceedings of the 31st Annual Conference of the Cognitive Science Society. 2009.
Explaining is a difficult art. You can explain something so that your reader understands the words; [I try to] explain something so that the reader feels it in the marrow of his bones.
My private school taught biology from the infamous creationist textbook Biology for Christian Schools, so my early understanding of evolution was a bit... confused. Lacking the curiosity to, say, check AltaVista for a biologist’s explanation (faith is a virtue, don’t ya know), I remained confused about evolution for years.
Eventually I stumbled across an eloquent explanation of the fact that evolution follows necessarily from heritability, variation, and selection.
Click. I got it.
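That "follows necessarily" can be made concrete. The following toy simulation (my own illustration, not from the original explanation; all names are made up) shows that heritability, variation, and selection alone are enough to push a population toward higher fitness:

```python
import random

def evolve(population, fitness, mutate, generations=100):
    """One round per generation:
    - selection: fitter individuals are more likely to become parents;
    - heritability: offspring are copies of their parents;
    - variation: each copy is slightly mutated."""
    for _ in range(generations):
        weights = [fitness(x) for x in population]
        parents = random.choices(population, weights=weights, k=len(population))
        population = [mutate(p) for p in parents]
    return population

# Toy example: individuals are numbers; fitness peaks at 10.
random.seed(0)
pop = evolve(
    [0.0] * 50,
    fitness=lambda x: 1 / (1 + (x - 10) ** 2),
    mutate=lambda x: x + random.gauss(0, 0.5),
)
print(sum(pop) / len(pop))  # the population mean climbs toward the peak near 10
```

No individual "tries" to improve, and no step mentions adaptation; the climb toward the fitness peak falls out of the three ingredients by themselves.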
Explaining is hard. Explainers need to pierce shields of misinformation (creationism), bridge vast inferential distances (probability theory), and cause readers to feel the truth of foreign concepts (quantum entanglement) in their bones. That isn’t easy. Those who do it well are rare and valuable.
Textbook writers are often skilled at explaining complex fields. That’s why I called on my fellow Less Wrongers to name their favorite textbooks (if they had read at least two other textbooks on those subjects). The Best Textbooks on Every Subject now gives 22 textbook recommendations, for fields as diverse as scientific self-help and representation theory.
Now I want to jump down a few levels in granularity. Let’s pool our knowledge to find great explanations for each important idea (in math, science, philosophy, etc.), whether or not there is equal value in the rest of the book or article in which each explanation is found.
Great explanations, in my meaning, have four traits:
A great explanation does more than report facts; it uses analogy and rhetoric and other tools to make readers feel the target idea in their bones.
A great explanation is not a single analogy nor a giant book. It is, roughly, between 2 and 100 pages in length.
A great explanation is ideally comprehensible to a young teenager, and at minimum to a 75th-percentile college graduate. (There may be no way to seriously explain string theory to an average 13-year-old.)
A great explanation is exciting to read.
By sharing great explanations we can more often experience that magical click.
The topics of rationality and existential risk reduction need their own Richard Dawkins. Their own Darwin. Their own Voltaire.
Rhetoric moves minds.
Students and masochists aside, people read only what is exciting. So: Want to make an impact? Be exciting. You must be heard before you can turn heads in the right direction.
Thus, I've decided to try harder and actually put effort into the quality of my writing instead of just cranking stuff out quickly so I can fill in inferential gaps and get to the cutting edge of the research subjects I care about.
That's why I asked LWers for their picks of best nonfiction writing on Less Wrong.
It's also why I've been reading lots of good science writing, focusing on those who manage to be exciting while covering fairly complex subjects: Dawkins, Sagan, Gleick, Zimmer, Shermer, Ramachandran, Roach, Sacks, Hawking, Greene, Hofstadter, Penrose, Wilson, Feynman, Kaku, Gould, Bryson, Pinker, Kurzban, and others.
I've also been re-reading lots of books and articles on how to write well: Keys to Great Writing, Style: Lessons in Clarity and Grace, Elements of Style, On Writing Well, The Classic Guide to Better Writing, The Book on Writing, Telling True Stories, Writing Tools, Ideas into Words, The Chicago Guide to Communicating Science, A Field Guide for Science Writers, Six Rules for Rewriting, Writing, Briefly, and Singularity Writing Advice. (Conversations with Eliezer also helped.)
I don't know if I can become the Voltaire of rationality and existential risk reduction, but it seems worth a shot. Every improvement in writing style is beneficial even if my starry goal is never met. Also, it appears I produce better writing without really trying than most people produce with trying. (If you've ever had to grade essays by honors English seniors, you'll know what I mean.) I expect to gain more by striving where I already excel than by pushing where I have little natural talent.
(I won't try to write everything well. Sometimes I should just crank things out. To be honest, I didn't spend much time optimizing this post.)
My other hope is that a few other writers decide they would like to be the Voltaire of rationality and/or existential risk reduction. May this post be useful to them. It's a list of recommendations on writing style pulled from many sources, in no particular order.
Followup to: Eutopia is Scary
"Two roads diverged in the woods. I took the one less traveled, and had to eat bugs until Park rangers rescued me."
Utopia and Dystopia have something in common: they both confirm the moral sensibilities you started with. Whether the world is a libertarian utopia of the non-initiation of violence and everyone free to start their own business, or a hellish dystopia of government regulation and intrusion—you might like to find yourself in the first, and hate to find yourself in the second; but either way you nod and say, "Guess I was right all along."
So as an exercise in creativity, try writing them down side by side: Utopia, Dystopia, and Weirdtopia -- a world that is genuinely better, but so strange that your first reaction is neither approval nor horror, just confusion. The zig, the zag and the zog.
I'll start off with a worked example for public understanding of science:
- Utopia: Most people have the equivalent of an undergrad degree in something; everyone reads the popular science books (and they're good books); everyone over the age of nine understands evolutionary theory and Newtonian physics; scientists who make major contributions are publicly adulated like rock stars.
- Dystopia: Science is considered boring and possibly treasonous; public discourse elevates religion or crackpot theories; stem cell research is banned.
- Weirdtopia: Science is kept secret to avoid spoiling the surprises; no public discussion but intense private pursuit; cooperative ventures surrounded by fearsome initiation rituals because that's what it takes for people to feel like they've actually learned a Secret of the Universe and be satisfied; someone you meet may only know extremely basic science, but they'll have personally done revolutionary-level work in it, just like you. Too bad you can't compare notes.
Followup to: Why is the Future So Absurd?
"The big thing to remember about far-future cyberpunk is that it will be truly ultra-tech. The mind and body changes available to a 23rd-century Solid Citizen would probably amaze, disgust and frighten that 2050 netrunner!"
Pick up someone from the 18th century—a smart someone. Ben Franklin, say. Drop them into the early 21st century.
We, in our time, think our life has improved in the last two or three hundred years. Ben Franklin is probably smart and forward-looking enough to agree that life has improved. But if you don't think Ben Franklin would be amazed, disgusted, and frightened, then I think you far overestimate the "normality" of your own time. You can think of reasons why Ben should find our world compatible, but Ben himself might not see it that way.
Movies made in, say, the '40s or '50s seem much more alien -- to me -- than modern movies allegedly set hundreds of years in the future, or in different universes. Watch a movie from 1950 and you may see a man slapping a woman. Doesn't happen a lot in Lord of the Rings, does it? Drop back to the 16th century and one popular entertainment was setting a cat on fire. Ever see that in any moving picture, no matter how "lowbrow"?
("But," you say, "that's showing how discomforting the Past's culture was, not how scary the Future is." Of which I wrote, "When we look over history, we see changes away from absurd conditions such as everyone being a peasant farmer and women not having the vote, toward normal conditions like a majority middle class and equal rights...")
Something about the Future will shock us 21st-century folk, if we were dropped in without slow adaptation. This is not because the Future is cold and gloomy -- I am speaking of a positive, successful Future; the negative outcomes are probably just blank. Nor am I speaking of the idea that every Utopia has some dark hidden flaw. I am saying that the Future would discomfort us because it is better.
Every Utopia ever constructed—in philosophy, fiction, or religion—has been, to one degree or another, a place where you wouldn't actually want to live. I am not alone in this important observation: George Orwell said much the same thing in "Why Socialists Don't Believe In Fun", and I expect that many others said it earlier.
If you read books on How To Write—and there are a lot of books out there on How To Write, because amazingly a lot of book-writers think they know something about writing—these books will tell you that stories must contain "conflict".
That is, the more lukewarm sort of instructional book will tell you that stories contain "conflict". But some authors speak more plainly.
"Stories are about people's pain." —Orson Scott Card
"Every scene must end in disaster." —Jack Bickham
In the age of my youthful folly, I took for granted that authors were excused from the search for true Eutopia, because if you constructed a Utopia that wasn't flawed... what stories could you write, set there? "Once upon a time they lived happily ever after." What use would it be for a science-fiction author to try to depict a positive Singularity, when a positive Singularity would be...
...the end of all stories?
It seemed like a reasonable framework with which to examine the literary problem of Utopia, but something about that final conclusion produced a quiet, nagging doubt.
Most witches don't believe in gods. They know that the gods exist, of course. They even deal with them occasionally. But they don't believe in them. They know them too well. It would be like believing in the postman.
—Terry Pratchett, Witches Abroad
Once upon a time, I was pondering the philosophy of fantasy stories—
And before anyone chides me for my "failure to understand what fantasy is about", let me say this: I was raised in an SF&F household. I have been reading fantasy stories since I was five years old. I occasionally try to write fantasy stories. And I am not the sort of person who tries to write for a genre without pondering its philosophy. Where do you think story ideas come from?
I was pondering the philosophy of fantasy stories, and it occurred to me that if there were actually dragons in our world—if you could go down to the zoo, or even to a distant mountain, and meet a fire-breathing dragon—while nobody had ever actually seen a zebra, then our fantasy stories would contain zebras aplenty, while dragons would be unexciting.
Now that's what I call painting yourself into a corner, wot? The grass is always greener on the other side of unreality.