The Importance of Sidekicks
[Reposted from my personal blog.]
Mindspace is wide and deep. “People are different” is a truism, but even knowing this, it’s still easy to underestimate.
I spent much of my initial engagement with the rationality community feeling weird and different. I appreciated the principle and project of rationality as things that were deeply important to me; I was pretty pro-self improvement, and kept tsuyoku naritai as my motto for several years. But the rationality community, the people who shared this interest of mine, often seemed baffled by my values and desires. I wasn’t ambitious, and had a hard time wanting to be. I had a hard time wanting to be anything other than a nurse.
It wasn’t until this August that I convinced myself that this wasn’t a failure in my rationality, but rather a difference in my basic drives. It’s around then, in the aftermath of the 2014 CFAR alumni reunion, that I wrote the following post.
I don’t believe in life-changing insights (that happen to me), but I think I’ve had one–it’s been two weeks and I’m still thinking about it, thus it seems fairly safe to say I did.
At a CFAR Monday test session, Anna was talking about the idea of having an “aura of destiny”–it’s hard to fully convey what she meant and I’m not sure I get it fully, but something like seeing yourself as you’ll be in 25 years once you’ve saved the world and accomplished a ton of awesome things. She added that your aura of destiny had to be in line with your sense of personal aesthetic, to feel “you.”
I mentioned to Kenzi that I felt stuck on this because I was pretty sure that the combination of ambition and being the locus of control that “aura of destiny” conveyed to me was against my sense of personal aesthetic.
Kenzi said, approximately [I don't remember her exact words]: “What if your aura of destiny didn’t have to be those things? What if you could be like…Samwise, from Lord of the Rings? You’re competent, but most importantly, you’re *loyal* to Frodo. You’re the reason that the hero succeeds.”
I guess this isn’t true for most people–Kenzi said she didn’t want to keep thinking of other characters who were like this because she would get so insulted if someone kept comparing her to people’s sidekicks–but it feels like now I know what I am.
So. I’m Samwise. If you earn my loyalty, by convincing me that what you’re working on is valuable and that you’re the person who should be doing it, I’ll stick by you whatever it takes, and I’ll *make sure* you succeed. I don’t have a Frodo right now. But I’m looking for one.
It then turned out that quite a lot of other people recognized this, so I shifted from “this is a weird thing about me” to “this is one basic personality type, out of many.” Notably, Brienne wrote the following comment:
“Sidekick” doesn’t *quite* fit my aesthetic, but it’s extremely close, and I feel it in certain moods. Most of the time, I think of myself more as what TV Tropes would call a “dragon”. Like the Witch-king of Angmar, if we’re sticking with LOTR. Or Bellatrix Black. Or Darth Vader. (It’s not my fault people aren’t willing to give the good guys dragons in literature.)
For me, finding someone who shared my values, who was smart and rational enough for me to trust him, and who was in a much better position to actually accomplish what I most cared about than I imagined myself ever being, was the best thing that could have happened to me.
She also gave me what’s maybe one of the best and most moving compliments I’ve ever received.
In Australia, something about the way you interacted with people suggested to me that you help people in a completely free way, joyfully, because it fulfills you to serve those you care about, and not because you want something from them… I was able to relax around you, and ask for your support when I needed it while I worked on my classes. It was really lovely… The other surprising thing was that you seemed to act that way with everyone. You weren’t “on” all the time, but when you were, everybody around you got the benefit. I’d never recognized in anyone I’d met a more diffuse service impulse, like the whole human race might be your master. So I suddenly felt like I understood nurses and other people in similar service roles for the first time.
Sarah Constantin, who according to a mutual friend is one of the most loyal people who exists, chimed in with some nuance to the Frodo/Samwise dynamic: “Sam isn’t blindly loyal to Frodo. He makes sure the mission succeeds even when Frodo is fucking it up. He stands up to Frodo. And that’s important too.”
Kate Donovan, who also seems to share this basic psychological makeup, added “I have a strong preference for making the lives of the lead heroes better, and very little interest in ever being one.”
Meanwhile, there were doubts from others who didn’t feel this way. The “we need heroes, the world needs heroes” narrative is especially strong in the rationalist community. And typical mind fallacy abounds. It seems easy to assume that if someone wants to be a support character, it’s because they’re insecure–that really, if they believed in themselves, they would aim for protagonist.
I don’t think this is true. As Kenzi pointed out: “The other thing I felt like was important about Samwise is that his self-efficacy around his particular mission wasn’t a detriment to his aura of destiny – he did have insecurities around his ability to do this thing – to stand by Frodo – but even if he’d somehow not had them, he still would have been Samwise – like that kind of self-efficacy would have made his essence *more* distilled, not less.”
Brienne added: “Becoming the hero would be a personal tragedy, even though it would be a triumph for the world if it happened because I surpassed him, or discovered he was fundamentally wrong.”
Why write this post?
Usually, “this is a true and interesting thing about humans” is enough of a reason for me to write something. But I’ve got a lot of other reasons, this time.
I suspect that the rationality community, with its “hero” focus, drives away many people who are like me in this sense. I’ve thought about walking away from it, for basically that reason. I could stay in Ottawa and be a nurse for forty years; it would fulfil all my most basic emotional needs, and no one would try to change me. Because oh boy, have people tried to do that. It’s really hard to be someone who just wants to please others, and to be told, basically, that you’re not good enough–and that you owe it to the world to turn yourself ambitious, strategic, Slytherin.
Firstly, this is mean regardless. Secondly, it’s not true.
Samwise was important. So was Frodo, of course. But Frodo needed Samwise. Heroes need sidekicks. They can function without them, but function a lot better with them. Maybe it’s true that there aren’t enough heroes trying to save the world. But there sure as hell aren’t enough sidekicks trying to help them. And there especially aren’t enough talented, competent, awesome sidekicks.
If you’re reading this post, and it resonates with you… Especially if you’re someone who has felt unappreciated and alienated for being different… I have something to tell you. You count. You. Fucking. Count. You’re needed, even if the heroes don’t realize it yet. (Seriously, heroes, you should be more strategic about looking for awesome sidekicks. AFAIK only Nick Bostrom is doing it.) This community could use more of you. Pretty much every community could use more of you.
I’d like, someday, to live in a culture that doesn’t shame this way of being. As Brienne points out, “Society likes *selfless* people, who help everybody equally, sure. It’s socially acceptable to be a nurse, for example. Complete loyalty and devotion to “the hero”, though, makes people think of brainwashing, and I’m not sure what else exactly but bad things.” (And not all subsets of society even accept nursing as a Valid Life Choice.) I’d like to live in a world where an aspiring Samwise can find role models; where he sees awesome, successful people and can say, “yes, I want to grow up to be that.”
Maybe I can’t have that world right away. But at least I know what I’m reaching for. I have a name for it. And I have a Frodo–Ruby and I are going to be working together from here on out. I have a reason not to walk away.
CFAR in 2014: Continuing to climb out of the startup pit, heading toward a full prototype
Summary: We outline CFAR’s purpose, our history in 2014, and our plans heading into 2015.
- Highlights from 2014.
- Improving operations.
- Attempts to go beyond the current workshop and toward the ‘full prototype’ of CFAR: our experience in 2014 and plans for 2015.
- Nuts, bolts, and financial details.
- The big picture and how you can help.
One of the reasons we’re publishing this review now is that we’ve just launched our annual matching fundraiser, and want to provide the information our prospective donors need for deciding. This is the best time of year to decide to donate to CFAR. Donations up to $120k will be matched until January 31.[1]
To briefly preview: For the first three years of our existence, CFAR mostly focused on getting going. We followed the standard recommendation to build a ‘minimum viable product’, the CFAR workshops, that could test our ideas and generate some revenue. Coming into 2013, we had a workshop that people liked (9.3 average rating on “Are you glad you came?”; a more recent random survey showed 9.6 average rating on the same question 6-24 months later), which helped keep the lights on and gave us articulate, skeptical, serious learners to iterate on. At the same time, the workshops are not everything we would want in a CFAR prototype; it feels like the current core workshop does not stress-test most of our hopes for what CFAR can eventually do. The premise of CFAR is that we should be able to apply the modern understanding of cognition to improve people’s ability to (1) figure out the truth (2) be strategically effective (3) do good in the world. We have dreams of scaling up some particular kinds of sanity. Our next goal is to build the minimum strategic product that more directly justifies CFAR’s claim to be an effective altruist project.[2]
Bayes Academy: Development report 1
Some of you may remember me proposing a game idea that went by the name of The Fundamental Question. Some of you may also remember me talking a lot about developing an educational game about Bayesian Networks for my MSc thesis, but not actually showing you much in the way of results.
Insert the usual excuses here. But thanks to SSRIs and mytomatoes.com and all kinds of other stuff, I'm now finally on track towards actually accomplishing something. Here's a report on a very early prototype.
This game has basically two goals: to teach its players something about Bayesian networks and probabilistic reasoning, and to be fun. (And third, to let me graduate by giving me material for my Master's thesis.)
We start with the main character stating that she is nervous. Hitting any key, the player proceeds through a number of lines of internal monologue:
I am nervous.
I’m standing at the gates of the Academy, the school where my brother Opin was studying when he disappeared. When we asked the school to investigate, they were oddly reluctant, and told us to drop the issue.
The police were more helpful at first, until they got in contact with the school. Then they actually started threatening us, and told us that we would get thrown in prison if we didn’t forget about Opin.
That was three years ago. Ever since it happened, I’ve been studying hard to make sure that I could join the Academy once I was old enough, to find out what exactly happened to Opin. The answer lies somewhere inside the Academy gates, I’m sure of it.
Now I’m finally 16, and facing the Academy entrance exams. I have to do everything I can to pass them, and I have to keep my relation to Opin a secret, too.
???: “Hey there.”
Eep! Someone is talking to me! Is he another applicant, or a staff member? Wait, let me think… I’m guessing that an applicant would look a lot younger than a staff member! So, to find that out… I should look at him!
[You are trying to figure out whether the voice you heard is a staff member or another applicant. While you can't directly observe his staff-nature, you believe that he'll look young if he's an applicant, and like an adult if he's a staff member. You can look at him, and therefore reveal his staff-nature, by right-clicking on the node representing his appearance.]
Here is our very first Bayesian Network! Well, it's not really much of a network: I'm starting with the simplest possible case in order to provide an easy start for the player. We have one node that cannot be observed ("Student", its hidden nature represented by showing it in greyscale), and an observable node ("Young-looking") whose truth value is equal to that of the Student node. All nodes are binary random variables, either true or false.
According to our current model of the world, "Student" has a 50% chance of being true, so it's half-colored in white (representing the probability of it being true) and half-colored in black (representing the probability of it being false). "Young-looking" inherits its probability directly. The player can get a bit of information about the two nodes by left-clicking on them.
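The update in this one-parent network is just Bayes' rule with a deterministic link. Here's a minimal Python sketch (not the game's actual Java code; the function name and parameters are mine for illustration):

```python
def posterior_student(p_student, p_young_given_student, p_young_given_not, observed_young):
    """Bayes' rule: P(Student | observed value of Young-looking)."""
    if observed_young:
        num = p_student * p_young_given_student
        den = num + (1 - p_student) * p_young_given_not
    else:
        num = p_student * (1 - p_young_given_student)
        den = num + (1 - p_student) * (1 - p_young_given_not)
    return num / den

# Deterministic link, as in the game: young-looking iff student.
print(posterior_student(0.5, 1.0, 0.0, observed_young=True))   # 1.0
```

Because the child copies the parent exactly, one observation pins the hidden node down completely; with a noisier link (say, 90%/20%), the same function would give a merely probable answer instead of a certain one.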
The game also offers alternate color schemes for colorblind people who may have difficulties distinguishing red and green.
Now we want to examine the person who spoke to us. Let's look at him, by right-clicking on the "Young-looking" node.
Not too many options here, because we're just getting started. Let's click on "Look at him", and find out that he is indeed young, and thus a student.
This was the simplest type of minigame offered within the game. You are given a set of hidden nodes whose values you're tasked with discovering by choosing which observable nodes to observe. Here the player had no way to fail, but later on, the minigames will involve a time limit and too many observable nodes to inspect within that time limit. It then becomes crucial to understand how probability flows within a Bayesian network, and which nodes will actually let you know the values of the hidden nodes.
The story continues!
Short for an adult, face has boyish look, teenagerish clothes... yeah, he looks young!
He's a student!
...I feel like I’m overthinking things now.
...he’s looking at me.
I’m guessing he’s either waiting for me to respond, or there’s something to see behind me, and he’s actually looking past me. If there isn’t anything behind me, then I know that he must be waiting for me to respond.
Maybe there's a monster behind me, and he's paralyzed with fear! I should check that possibility before it eats me!
[You want to find out whether the boy is waiting for your reply or staring at a monster behind you. You know that he's looking at you, and your model of the world suggests that he will only look in your direction if he's waiting for you to reply, or if there's a monster behind you. So if there's no monster behind you, you know that he's waiting for you to reply!]
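The reasoning in this second network can be sketched by brute-force enumeration over the joint distribution (again Python rather than the game's Java; the variable names are my shorthand for the nodes in the screenshot):

```python
from itertools import product

def p_waiting(p_wait=0.5, p_monster=0.5, monster_observed=None):
    """P(waiting for reply | he's looking at you, plus any monster observation).
    'Looks at you' is a deterministic OR of its two parents."""
    num = den = 0.0
    for wait, monster in product([True, False], repeat=2):
        if not (wait or monster):          # we know he's looking at us
            continue
        if monster_observed is not None and monster != monster_observed:
            continue
        p = (p_wait if wait else 1 - p_wait) * (p_monster if monster else 1 - p_monster)
        num += p if wait else 0.0
        den += p
    return num / den
```

With 50/50 priors and only "he's looking" known, `p_waiting()` gives 2/3; after looking behind you and finding no monster, `p_waiting(monster_observed=False)` gives 1.0. (And once the boy's claim sets the monster prior to zero, `p_waiting(p_monster=0.0)` is 1.0 without ever turning around.)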
Slightly more complicated network, but still, there's only one option here. Oops, apparently the "Looks at you" node says it's an observable variable that you can right-click to observe, despite the fact that it's already been observed. I need to fix that.
Anyway, right-clicking on "Attacking monster" brings up a "Look behind you" option, which we'll choose.
You see nothing there. Besides trees, that is.
Boy: “Um, are you okay?”
“Yeah, sorry. I just… you were looking in my direction, and I wasn’t sure of whether you were expecting me to reply, or whether there was a monster behind me.”
He blinks.
Boy: “You thought that there was a reasonable chance for a monster to be behind you?”
I’m embarrassed to admit it, but I’m not really sure of what the probability of a monster having snuck up behind me really should have been.
My studies have entirely focused on getting into this school, and Monsterology isn’t one of the subjects on the entrance exam!
I just went with a 50-50 chance since I didn’t know any better.
“Okay, look. Monsterology is my favorite subject. Monsters avoid the Academy, since it’s surrounded by a mystical protective field. There’s no chance of them getting even near! 0 percent chance.”
“Oh. Okay.”
[Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 0%.]
Then stuff happens and they go stand in line for the entrance exam or something. I haven't written this part. Anyway, then things get more exciting, for a wild monster appears!
Stuff happens
AAAAAAH! A MONSTER BEHIND ME!
Huh, the monster is carrying a sword.
Well, I may not have studied Monsterology, but I sure did study fencing!
[You draw your sword. Seeing this, the monster rushes at you.]
He looks like he's going to strike. But is it really a strike, or is it a feint?
If it's a strike, I want to block and counter-attack. But if it's a feint, that leaves him vulnerable to my attack.
I have to choose wisely. If I make the wrong choice, I may be dead.
What did my master say? If the opponent has at least two of dancing legs, an accelerating midbody, and ferocious eyes, then it's an attack!
Otherwise it's a feint! Quick, I need to read his body language before it's too late!
Now we get to the second type of minigame! Here, you again need to discover the values of some number of hidden variables within a time limit, but this time in order to find out the consequences of your decision. In this one, the consequence is simple - either you live or you die. I'll let the screenshot and tutorial text speak for themselves:
[Now for some actual decision-making! The node in the middle represents the monster's intention to attack (or to feint, if it's false). Again, you cannot directly observe his intention, but on the top row, there are things about his body language that signal his intention. If at least two of them are true, then he intends to attack.]
[Your possible actions are on the bottom row. If he intends to attack, then you want to block, and if he intends to feint, you want to attack. You need to inspect his body language and then choose an action based on his intentions. But hurry up! Your third decision must be an action, or he'll slice you in two!]
In reality, the top three variables are not really independent of each other. We want to make sure that the player can always win this battle despite only having three actions. That's two actions for inspecting variables, and one action for actually making a decision. So this battle is rigged: either the top three variables are all true, or they're all false.
...actually, now that I think of it, the order of the variables is wrong. Logically, the body language should be caused by the intention to attack, and not vice versa, so the arrows should point from the intention to body language. I'll need to change that. I got these mixed up because the prototypical exemplar of a decision minigame is one where you need to predict someone's reaction from their personality traits, and there the personality traits do cause the reaction. Anyway, I want to get this post written before I go to bed, so I won't change that now.
Right-clicking "Dancing legs", we now see two options besides "Never mind"!
We can find out the dancingness of the enemy's legs by thinking about our own legs - we are well-trained, so our legs are instinctively mirroring our opponent's actions to prevent them from getting an advantage over us - or by just instinctively feeling where they are, without the need to think about them! Feeling them would allow us to observe this node without spending an action.
Unfortunately, feeling them has "Fencing 2" as a prerequisite skill, and we don't have that. Nor could we have it at this point in the game. The option is just there to let the player know that there are skills to be gained in this game, to make them look forward to the moment when they can actually gain that skill, and to give them an idea of how the skill can be used.
Anyway, we take a moment to think of our legs, and even though our opponent gets closer to us in that time, we realize that our legs are dancing! So his legs must be dancing as well!
With our insider knowledge, we now know that he's attacking, and we could pick "Block" right away. But let's play this through. The network has automatically recalculated the probabilities to reflect our increased knowledge, and is now predicting a 75% chance for our enemy to be attacking, and for "Blocking" to thus be the right decision to make.
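That 75% figure can be checked by enumerating the model the player sees (not the rigged version): three independent 50/50 cues, with an attack iff at least two of them are true. A Python sketch, with my own names for the nodes:

```python
from itertools import product

def p_attack(observations):
    """P(attack) given observed cues; unobserved cues stay 50/50.
    observations: dict mapping cue name -> observed bool."""
    cues = ["legs", "midbody", "eyes"]
    num = den = 0.0
    for values in product([True, False], repeat=3):
        state = dict(zip(cues, values))
        if any(state[c] != v for c, v in observations.items()):
            continue
        p = 0.5 ** 3                  # independent uniform priors
        attack = sum(values) >= 2     # at least two of three cues => attack
        num += p if attack else 0.0
        den += p
    return num / den

print(p_attack({}))                # 0.5  (prior)
print(p_attack({"legs": True}))    # 0.75 (what the network displays)
```

Observing a second true cue would push this to certainty, which is why two observations plus one action suffice in the rigged battle.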
Next we decide to find out what his eyes say, by matching our gaze with his. Again, there would be a special option that cost us no time - this time around, one enabled by Empathy 1 - but we again don't have that option.
Except that his gaze is so ferocious that we are forced to look away! While we are momentarily distracted, he closes the distance, ready to make his move. But now we know what to do... block!
Success!
Now the only thing that remains to do is to ask our new-found friend for an explanation.
"You told me there was a 0% chance of a monster near the academy!"
Boy: “Ehh… yeah. I guess I misremembered. I only read like half of our course book anyway, it was really boring.”
“Didn’t you say that Monsterology was your favorite subject?”
Boy: “Hey, that only means that all the other subjects were even more boring!”
“. . .”
I guess I shouldn’t put too much faith in what he says.
[Your model of the world has been updated! The prior of the variable 'Monster Near The Academy' is now 50%.]
[Your model of the world has been updated! You have a new conditional probability variable: 'True Given That The Boy Says It's True', 25%]
And that's all for now. Now that the basic building blocks are in place, future progress ought to be much faster.
Notes:
As you might have noticed, my "graphics" suck. A few of my friends have promised to draw art, but besides that, the whole generic Java look could go. This is where I was originally planning to put in the sentence "and if you're a Java graphics whiz and want to help fix that, the current source code is conveniently available at GitHub", but then getting things to this point took longer than I expected and I didn't have the time to actually figure out how the whole Eclipse-GitHub integration works. I'll get to that soon. Github link here!
I also want to make the nodes more informative - right now they only show their marginal probability. Ideally, clicking on them would expand them into a representation where you could visually see what components their probability is composed of. I've got some scribbled sketches of what this should look like for various node types, but none of that is implemented yet.
I expect some of you to also note that the actual Bayes theorem hasn't shown up yet, at least in no form resembling the classic mammography problem. (It is used implicitly in the network belief updates, though.) That's intentional - there will be a third minigame involving that form of the theorem, but somehow it felt more natural to start this way, to give the player a rough feeling of how probability flows through Bayesian networks. Admittedly I'm not sure of how well that's happening so far, but hopefully more minigames should help the player figure it out better.
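For readers unfamiliar with it, the classic mammography problem works like this, using the commonly cited numbers (a 1% base rate, an 80% true-positive rate, and a 9.6% false-positive rate; a sketch, not anything from the game):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test), by Bayes' theorem."""
    num = prior * sensitivity
    return num / (num + (1 - prior) * false_positive_rate)

p = posterior(0.01, 0.8, 0.096)
print(round(p, 3))   # 0.078: a positive test still means only ~8% chance
```

The counterintuitive smallness of that answer is exactly what the third minigame would be built around.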
What's next? Once the main character (who needs a name) manages to get into the Academy, there will be a lot of social scheming, and many mysteries to solve in order for her to find out just what did happen to her brother... also, I don't mind people suggesting things, such as what could happen next, and what kinds of network configurations the character might face in different minigames.
(Also, everything that you've seen might get thrown out and rewritten if I decide it's no good. Let me know what you think of the stuff so far!)
Power and difficulty
A specific bias that Lesswrongers may often get from fiction[1] is the idea that power is proportional to difficulty. The more power something gives you, the harder it should be to get, right?
A mediocre student becomes a powerful mage through her terrible self-sacrifice and years of studying obscure scrolls. Even within the spells she can cast, the truly world-altering ones are those that demand the most laborious preparation, the most precise gestures, and the longest and most incomprehensible stream of syllables. A monk makes an arduous journey to ancient temples and learns secret techniques of spiritual oneness and/or martial asskickery, which require great dedication and self-knowledge. Otherwise, it would be cheating. The whole process of leveling up, of adding ever-increasing modifiers to die rolls, is based on the premise that power comes to those who do difficult things. And it's failsafe - no matter what you put your skill points in, you become better at something. It's a training montage, or a Hero's journey. As with other fictional evidence, these are not "just stories" -- they are powerful cultural narratives. This kind of narrative shapes moral choices[2] and identity. So where do we see this reflected in less obviously fictional contexts?
There's the rags-to-riches story -- the immigrant who came with nothing, but by dint of hard work, now owns a business. University engineering programs are notoriously tough, because you are gaining the ability to do a lot of things (and for signalling reasons). A writer got to where she is today because she wrote and revised and submitted and revised draft after draft after draft.
In every case, there is assumed to be a direct causal link between difficulty and power. Here, these are loosely defined. Roughly, "power" means "ability to have your way", and "difficulty" is "amount of work & sacrifice required." These can be translated into units of social influence - a.k.a money -- and investment, a.k.a. time, or money. In many cases, power is set by supply and demand -- nobody needs a wizard if they can all cast their own spells, and a doctor can command much higher prices if they're the only one in town. The power of royalty or other birthright follows a similar pattern - it's not "difficult", but it is scarce -- only a very few people have it, and it's close to impossible for others to get it.
Each individual gets to choose what difficult things they will try to do. Some will have longer or shorter payoffs, but each choice will have some return. And since power (partly) depends on everybody else's choices, neoclassical economics says that individuals' choices collectively determine a single market rate for the return on difficulty. So anything you do that's difficult should have the same payoff.
Anything equally difficult should have equal payoff. Apparently. Clearly, this is not the world we live in. Admittedly, there were some pretty questionable assumptions along the way, but it's almost-kind-of-reasonable to conclude that, if you just generalize from the fictional evidence. (Consider RPGs: They're designed to be balanced. Leveling up any class will get you to advance in power at a more-or-less equal rate.)
So how does reality differ from this fictional evidence? One direction is trivial: it's easy to find examples where what's difficult is not particularly powerful.
Writing a book is hard, and has a respectable payoff (depending on the quality of the book, publicity, etc.). Writing a book without using the letter "e", where the main character speaks only in palindromes, while typing in the dark with only your toes on a computer that's rigged to randomly switch letters around is much much more difficult, but other than perhaps gathering a small but freakishly devoted fanbase, it does not bring any more power/influence than writing any other book. It may be a sign that you are capable of more difficult things, and somebody may notice this and give you power, but this is indirect and unreliable. Similarly, writing a game in machine code or as a set of instructions for a Turing machine is certainly difficult, but also pretty dumb, and has no significant payoff beyond writing the game in a higher-level language. [Edit - thanks to TsviBT: This is assuming there already is a compiler and relevant modules. If you are first to create all of these, there might be quite a lot of benefit.]
On the other hand, some things are powerful, but not particularly difficult. On a purely physical level, this includes operating heavy machinery, or piloting drones. (I'm sure it's not easy, but the power output is immense). Conceptually, I think calculus comes in this category. It can provide a lot of insight into a lot of disparate phenomena (producing utility and its bastard cousin, money), but is not too much work to learn.
As instrumental rationalists, this is the territory we want to be in. We want to beat the market rate for turning effort into influence. So how do we do this?
This is a big, difficult question. I think it's a useful way to frame many of the goals of instrumental rationality. What major should I study? Is this relationship worthwhile? (Note: This may, if poorly applied, turn you into a terrible person. Don't apply it poorly.) What should I do in my spare time?
These questions are tough. But the examples of powerful-but-easy stuff suggest a useful principle: make use of what already exists. Calculus is powerful, but was only easy to learn because I'd already been learning math for a decade. Bulldozers are powerful, and the effort to get this power is minimal if all you have to do is climb in and drive. It's not so worthwhile, though, if you have to derive a design from first principles, mine the ore, invent metallurgy, make all the parts, and secure an oil supply first.
Similarly, if you're already a writer, writing a new book may gain you more influence than learning plumbing. And so on. This begins to suggest that we should not be too hasty to judge past investments as sunk costs. Your starting point matters in trying to find the closest available power boost. And as with any messy real-world problem, luck plays a major role, too.
Of course, there will always be some correlation between power and difficulty -- it's not that the classical economic view is wrong, there's just other factors at play. But to gain influence, you should in general be prepared to do difficult things. However, they should not be arbitrary difficult things -- they should be in areas you have specifically identified as having potential.
To make this more concrete, think of Methods!Harry. He strategically invests a lot of effort, usually at pretty good ratios -- the Gringotts money pump scheme, the True Patronus, his mixing of magic and science, and Partial Transfiguration. Now that's some good fictional evidence.
[1] Any kind of fiction, but particularly fantasy, sci-fi, and neoclassical economics. All works of elegant beauty, with a more-or-less tenuous relationship to real life.
[2] Dehghani, M., Sachdeva, S., Ekhtiari, H., Gentner, D., & Forbus, K. "The Role of Cultural Narratives in Moral Decision Making." Proceedings of the 31st Annual Conference of the Cognitive Science Society. 2009.
On Caring
This is an essay describing some of my motivation to be an effective altruist. It is crossposted from my blog. Many of the ideas here are quite similar to others found in the sequences. I have a slightly different take, and after adjusting for the typical mind fallacy I expect that this post may contain insights that are new to many.
1
I'm not very good at feeling the size of large numbers. Once you start tossing around numbers larger than 1000 (or maybe even 100), the numbers just seem "big".
Consider Sirius, the brightest star in the night sky. If you told me that Sirius is as big as a million earths, I would feel like that's a lot of Earths. If, instead, you told me that you could fit a billion Earths inside Sirius… I would still just feel like that's a lot of Earths.
The feelings are almost identical. In context, my brain grudgingly admits that a billion is a lot larger than a million, and puts forth a token effort to feel like a billion-Earth-sized star is bigger than a million-Earth-sized star. But out of context — if I wasn't anchored at "a million" when I heard "a billion" — both these numbers just feel vaguely large.
I feel a little respect for the bigness of numbers, if you pick really really large numbers. If you say "one followed by a hundred zeroes", then this feels a lot bigger than a billion. But it certainly doesn't feel (in my gut) like it's 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 times bigger than a billion. Not in the way that four apples internally feels like twice as many as two apples. My brain can't even begin to wrap itself around this sort of magnitude differential.
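The gut can't feel that ratio, but it's trivial to compute: a googol (one followed by a hundred zeroes) divided by a billion leaves one followed by ninety-one zeroes, which is exactly the string of digits above. A minimal check, using Python's arbitrary-precision integers:

```python
# "One followed by a hundred zeroes" is a googol; compare it to a billion.
googol = 10 ** 100
billion = 10 ** 9

ratio = googol // billion          # exact integer division, no rounding
print(ratio == 10 ** 91)           # -> True: a googol is 10^91 times a billion
print(len(str(ratio)) - 1)         # -> 91: the leading 1 is followed by 91 zeroes
```

The point of the essay survives the arithmetic: the computation is easy, but no internal feeling tracks a 10^91-fold difference the way "four apples versus two apples" is tracked.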
This phenomenon is related to scope insensitivity, and it's important to me because I live in a world where sometimes the things I care about are really really numerous.
For example, billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease. And though most of them are out of my sight, I still care about them.
The loss of a human life with all its joys and all its sorrows is tragic no matter what the cause, and the tragedy is not reduced simply because I was far away, or because I did not know of it, or because I did not know how to help, or because I was not personally responsible.
Knowing this, I care about every single individual on this planet. The problem is, my brain is simply incapable of taking the amount of caring I feel for a single person and scaling it up by a billion times. I lack the internal capacity to feel that much. My care-o-meter simply doesn't go up that far.
And this is a problem.
Privileging the Question
Related to: Privileging the Hypothesis
Remember the exercises in critical reading you did in school, where you had to look at a piece of writing and step back and ask whether the author was telling the whole truth? If you really want to be a critical reader, it turns out you have to step back one step further, and ask not just whether the author is telling the truth, but why he's writing about this subject at all.
-- Paul Graham
There's an old saying in the public opinion business: we can't tell people what to think, but we can tell them what to think about.
-- Doug Henwood
Many philosophers—particularly amateur philosophers, and ancient philosophers—share a dangerous instinct: If you give them a question, they try to answer it.
Here are some political questions that seem to commonly get discussed in US media: should gay marriage be legal? Should Congress pass stricter gun control laws? Should immigration policy be tightened or relaxed?
These are all examples of what I'll call privileged questions (if there's an existing term for this, let me know): questions that someone has unjustifiably brought to your attention in the same way that a privileged hypothesis unjustifiably gets brought to your attention. The questions above are probably not the most important questions we could be answering right now, even in politics (I'd guess that the economy is more important). Outside of politics, many LWers probably think "what can we do about existential risks?" is one of the most important questions to answer, or possibly "how do we optimize charity?"
Why has the media privileged these questions? I'd guess that the media is incentivized to ask whatever questions will get them the most views. That's a very different goal from asking the most important questions, and is one reason to stop paying attention to the media.
The problem with privileged questions is that you only have so much attention to spare. Attention paid to a question that has been privileged funges against attention you could be paying to better questions. Even worse, it may not feel from the inside like anything is wrong: you can apply all of the epistemic rationality in the world to answering a question like "should Congress pass stricter gun control laws?" and never once ask yourself where that question came from and whether there are better questions you could be answering instead.
I suspect this is a problem in academia too. Richard Hamming once gave a talk in which he related the following story:
Over on the other side of the dining hall was a chemistry table. I had worked with one of the fellows, Dave McCall; furthermore he was courting our secretary at the time. I went over and said, "Do you mind if I join you?" They can't say no, so I started eating with them for a while. And I started asking, "What are the important problems of your field?" And after a week or so, "What important problems are you working on?" And after some more time I came in one day and said, "If what you are doing is not important, and if you don't think it is going to lead to something important, why are you at Bell Labs working on it?" I wasn't welcomed after that; I had to find somebody else to eat with!
Academics answer questions that have been privileged in various ways: perhaps the questions their advisor was interested in, or the questions they'll most easily be able to publish papers on. Neither of these are necessarily well-correlated with the most important questions.
So far I've found one tool that helps combat the worst privileged questions, which is to ask the following counter-question:
What do I plan on doing with an answer to this question?
With the worst privileged questions I frequently find that the answer is "nothing," sometimes with the follow-up answer "signaling?" That's a bad sign. (Edit: but "nothing" is different from "I'm just curious," say in the context of an interesting mathematical or scientific question that isn't motivated by a practical concern. Intellectual curiosity can be a useful heuristic.)
(I've also found the above counter-question generally useful for dealing with questions. For example, it's one way to notice when a question should be dissolved, and asked of someone else it's one way to help both of you clarify what they actually want to know.)
Levels of Action
One of the most useful concepts I have learned recently is the distinction between actions which directly improve the world, and actions which indirectly improve the world.
Suppose that you go onto Mechanical Turk, open an account, and spend a hundred hours transcribing audio. At current market rates, you'd get paid around $100 for your labor. By taking this action, you have made yourself $100 wealthier. This is an example of what I'd call a Level 1 or object-level action: something that directly moves the world from a less desirable state into a more desirable state.
On the other hand, suppose you take a typing class, which teaches you to type twice as fast. On the object level, this doesn't move the world into a better state: nothing about the world has changed, other than you. However, the typing class can still be very useful, because every Level 1 project you tackle later which involves typing will go better: you'll be able to do it more efficiently, and you'll get a higher return on your time. This is what I'd call a Level 2 or meta-level action, because it doesn't make the world better directly - it makes the world better indirectly, by improving the effectiveness of Level 1 actions. There are also Level 3 (meta-meta-level) actions, Level 4 (meta-meta-meta-level) actions, and so on.
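The trade-off between the two levels can be made concrete with a toy break-even calculation. The numbers below (course length, typing hours per project, speed-up factor) are hypothetical illustrations, not figures from the text:

```python
import math

def break_even_projects(course_hours, typing_hours_per_project, speedup):
    """Number of Level 1 projects needed before a Level 2 investment
    (e.g. a typing class) pays for itself in saved time."""
    # Doubling your speed means each project's typing takes half as long.
    hours_saved_per_project = typing_hours_per_project * (1 - 1 / speedup)
    # Round up: you only break even once total savings cover the course.
    return math.ceil(course_hours / hours_saved_per_project)

# Hypothetical: a 20-hour class, projects with 10 hours of typing each,
# and a 2x speed-up save 5 hours per project, so 4 projects break even.
print(break_even_projects(20, 10, 2))  # -> 4
```

The sketch also shows why meta-level actions are only worthwhile if enough Level 1 work follows: with few future projects, the Level 2 investment never recoups its cost.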