

Simulate and Defer To More Rational Selves

109 BrienneStrohl 17 September 2014 06:11PM

I sometimes let imaginary versions of myself make decisions for me.

I first started doing this after Anna told me (something along the lines of) this story. When she first became the executive director of CFAR, she suddenly had many more decisions to deal with per day than ever before. "Should we hire this person?" "Should I go buy more coffee for the coffee machine, or wait for someone else to deal with it?" "How many participants should be in our first workshop?" "When can I schedule time to plan the fund drive?"

I'm making up these examples myself, but I'm sure you, too, can imagine how leading a brand new organization might involve a constant assault on the parts of your brain responsible for making decisions. She found it exhausting, and by the time she got home at the end of the day, a question like, "Would you rather we have peas or green beans with dinner?" often felt like the last straw. "I don't care about the stupid vegetables, just give me food and don't make me decide any more things!"

She was rescued by the following technique. When faced with a decision, she'd imagine "the Executive Director of CFAR", and ask herself, "What would 'the Executive Director of CFAR' do?" Instead of making a decision, she'd make a prediction about the actions of that other person. Then, she'd just do whatever they'd do!

(I also sometimes imagine what Anna would do, and then do that. I call it "Annajitsu".)

In Anna's case, she was trying to reduce decision fatigue. When I started trying it out myself, I was after a cure for something slightly different.

Imagine you're about to go bungee jumping off a high cliff. You know it's perfectly safe, and all you have to do is take a step forward, just like you've done every single time you've ever walked. But something is stopping you. The decision to step off the ledge is entirely yours, and you know you want to do it because this is why you're here. Yet here you are, still standing on the ledge. 

You're scared. There's a battle happening in your brain. Part of you is going, "Just jump, it's easy, just do it!", while another part--the part in charge of your legs, apparently--is going, "NOPE. Nope nope nope nope NOPE." And you have this strange thought: "I wish someone would just push me so I don't have to decide."

Maybe you've been bungee jumping, and this is not at all how you responded to it. But I hope (for the sake of communication) that you've experienced this sensation in other contexts. Maybe when you wanted to tell someone that you loved them, but the phrase hovered just behind your lips, and you couldn't get it out. You almost wished it would tumble out of your mouth accidentally. "Just say it," you thought to yourself, and remained silent. For some reason, you were terrified of the decision, and inaction felt less like deciding.

When I heard this story from Anna, I had social anxiety. I didn't have way more decisions than I knew how to handle, but I did find certain decisions terrifying, and was often paralyzed by them. For example, this always happened if someone I liked, respected, and wanted to interact with more asked to meet with me. It was pretty obvious to me that it was a good idea to say yes, but I'd agonize over the email endlessly instead of simply typing "yes" and hitting "send".

So here's what it looked like when I applied the technique. I'd be invited to a party. I'd feel paralyzing fear, and a sense of impending doom as I noticed that I likely believed going to the party was the right decision. Then, as soon as I felt that doom, I'd take a mental step backward and not try to force myself to decide. Instead, I'd imagine a version of myself who wasn't scared, and I'd predict what she'd do. If the party really wasn't a great idea, either because she didn't consider it worth my time or because she didn't actually anticipate me having any fun, she'd decide not to go. Otherwise, she'd decide to go. I would not decide. I'd just run my simulation of her, and see what she had to say. It was easy for her to think clearly about the decision, because she wasn't scared. And then I'd just defer to her.

Recently, I've noticed that there are all sorts of circumstances under which it helps to predict the decisions of a version of myself who doesn't have my current obstacle to rational decision making. Whenever I'm having a hard time thinking clearly about something because I'm angry, or tired, or scared, I can call upon imaginary Rational Brienne to see if she can do any better.

Example: I get depressed when I don't get enough sunlight. I was working inside where it was dark, and Eliezer noticed that I'd seemed depressed lately. So he told me he thought I should work outside instead. I was indeed a bit down and irritable, so my immediate response was to feel angry--that I'd been interrupted, that he was nagging me about getting sunlight again, and that I have this sunlight problem in the first place. 

I started to argue with him, but then I stopped. I stopped because I'd noticed something. In addition to anger, I felt something like confusion. More complicated and specific than confusion, though. It's the feeling I get when I'm playing through familiar motions that have tended to lead to disutility. Like when you're watching a horror movie and the main character says, "Let's split up!" and you feel like, "Ugh, not this again. Listen, you're in a horror movie. If you split up, you will die. It happens every time." A familiar twinge of something being not quite right.

But even though I noticed the feeling, I couldn't get a handle on it. Recognizing that I really should make the decision to go outside instead of arguing--it was just too much for me. I was angry, and that severely impedes my introspective vision. And I knew that. I knew that familiar not-quite-right feeling meant something was preventing me from applying some of my rationality skills. 

So, as I'd previously decided to do in situations like this, I called upon my simulation of non-angry Brienne. 

She immediately got up and went outside.

To her, it was extremely obviously the right thing to do. So I just deferred to her (which I'd also previously decided to do in situations like this, and I knew it would only work in the future if I did it now too, ain't timeless decision theory great). I stopped arguing, got up, and went outside. 

I was still pissed, mind you. I even felt myself rationalizing that I was doing it because going outside despite Eliezer being wrong wrong wrong is easier than arguing with him, and arguing with him isn't worth the effort. And then I told him as much over chat. (But not the "rationalizing" part; I wasn't fully conscious of that yet.)

But I went outside, right away, instead of wasting a bunch of time and effort first. My internal state was still in disarray, but I took the correct external actions. 

This has happened a few times now. I'm still getting the hang of it, but it's working.

Imaginary Rational Brienne isn't magic. Her only available skills are the ones I have in fact picked up, so anything I've not learned, she can't implement. She still makes mistakes. 

Her special strength is constancy.

In real life, all kinds of things limit my access to my own skills. In fact, the times when I most need a skill will very likely be the times when I find it hardest to access. For example, it's more important to consider the opposite when I'm really invested in believing something than when I'm not invested at all, but it's much harder to actually carry out the mental motion of "considering the opposite" when all the cognitive momentum is moving toward arguing single-mindedly for my favored belief.

The advantage of Rational Brienne (or, really, the Rational Briennes, because so far I've always ended up simulating a version of myself that's exactly the same except lacking whatever particular obstacle is relevant at the time) is that her access doesn't vary by situation. She can always use all of my tools all of the time.

I've been trying to figure out this constancy thing for quite a while. What do I do when I call upon my art as a rationalist, and just get a 404 Not Found? Turns out, "trying harder" doesn't do the trick. "No, really, I don't care that I'm scared, I'm going to think clearly about this. Here I go. I mean it this time." It seldom works.

I hope that it will one day. I would rather not have to rely on tricks like this. I hope I'll eventually just be able to go straight from noticing dissonance to re-orienting my whole mind so it's in line with the truth and with whatever I need to reach my goals. Or, you know, not experiencing the dissonance in the first place because I'm already doing everything right.

In the meantime, this trick seems pretty powerful.

2014 Less Wrong Census/Survey

84 Yvain 26 October 2014 06:05PM

It's that time of year again.

If you are reading this post and self-identify as a LWer, then you are the target population for the Less Wrong Census/Survey. Please take it. Doesn't matter if you don't post much. Doesn't matter if you're a lurker. Take the survey.

This year's census contains a "main survey" that should take about ten or fifteen minutes, as well as a bunch of "extra credit questions". You may do the extra credit questions if you want. You may skip all the extra credit questions if you want. They're pretty long and not all of them are very interesting. But it is very important that you not put off doing the survey or not do the survey at all because you're intimidated by the extra credit questions.

It also contains a chance at winning a MONETARY REWARD at the bottom. You do not need to fill in all the extra credit questions to get the MONETARY REWARD, just make an honest stab at as much of the survey as you can.

Please make things easier for my computer and by extension me by reading all the instructions and by answering any text questions in the simplest and most obvious possible way. For example, if it asks you "What language do you speak?" please answer "English" instead of "I speak English" or "It's English" or "English since I live in Canada" or "English (US)" or anything else. This will help me sort responses quickly and easily. Likewise, if a question asks for a number, please answer with a number such as "4", rather than "four".

The planned closing date for the survey is Friday, November 14. Instead of putting the survey off and then forgetting to do it, why not fill it out right now?

Okay! Enough preliminaries! Time to take the...

***


[EDIT: SURVEY CLOSED, DO NOT TAKE!]

***

Thanks to everyone who suggested questions and ideas for the 2014 Less Wrong Census/Survey. I regret I was unable to take all of your suggestions into account, because of some limitations in Google Docs, concern about survey length, and contradictions/duplications among suggestions. The current survey is a mess and requires serious shortening and possibly a hard and fast rule that it will never get longer than it is right now.

By ancient tradition, if you take the survey you may comment saying you have done so here, and people will upvote you and you will get karma.

On Caring

82 So8res 15 October 2014 01:59AM

This is an essay describing some of my motivation to be an effective altruist. It is crossposted from my blog. Many of the ideas here are quite similar to others found in the sequences. I have a slightly different take, and after adjusting for the typical mind fallacy I expect that this post may contain insights that are new to many.

1

I'm not very good at feeling the size of large numbers. Once you start tossing around numbers larger than 1000 (or maybe even 100), the numbers just seem "big".

Consider Sirius, the brightest star in the night sky. If you told me that Sirius is as big as a million Earths, I would feel like that's a lot of Earths. If, instead, you told me that you could fit a billion Earths inside Sirius… I would still just feel like that's a lot of Earths.

The feelings are almost identical. In context, my brain grudgingly admits that a billion is a lot larger than a million, and puts forth a token effort to feel like a billion-Earth-sized star is bigger than a million-Earth-sized star. But out of context — if I wasn't anchored at "a million" when I heard "a billion" — both these numbers just feel vaguely large.

I feel a little respect for the bigness of numbers, if you pick really really large numbers. If you say "one followed by a hundred zeroes", then this feels a lot bigger than a billion. But it certainly doesn't feel (in my gut) like it's 10 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 000 times bigger than a billion. Not in the way that four apples internally feels like twice as many as two apples. My brain can't even begin to wrap itself around this sort of magnitude differential.

This phenomenon is related to scope insensitivity, and it's important to me because I live in a world where sometimes the things I care about are really really numerous.

For example, billions of people live in squalor, with hundreds of millions of them deprived of basic needs and/or dying from disease. And though most of them are out of my sight, I still care about them.

The loss of a human life with all its joys and all its sorrows is tragic no matter what the cause, and the tragedy is not reduced simply because I was far away, or because I did not know of it, or because I did not know how to help, or because I was not personally responsible.

Knowing this, I care about every single individual on this planet. The problem is, my brain is simply incapable of taking the amount of caring I feel for a single person and scaling it up by a billion times. I lack the internal capacity to feel that much. My care-o-meter simply doesn't go up that far.

And this is a problem.

continue reading »

2014 iterated prisoner's dilemma tournament results

59 tetronian2 30 September 2014 09:23PM

Followup to: Announcing the 2014 program equilibrium iterated PD tournament

In August, I announced an iterated prisoner's dilemma tournament in which bots can simulate each other before making a move. Eleven bots were submitted to the tournament. Today, I am pleased to announce the final standings and release the source code and full results.

All of the source code submitted by the competitors and the full results for each match are available here. See here for the full set of rules and tournament code.

Before we get to the final results, here's a quick rundown of the bots that competed:

AnderBot

AnderBot follows a simple tit-for-tat-like algorithm that eschews simulation (a rough code sketch follows the list):

  • On the first turn, Cooperate.
  • For the next 10 turns, play tit-for-tat.
  • For the rest of the game, Defect with 10% probability or Defect if the opposing bot has defected more times than AnderBot.
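
For readers who think better in code, here is a rough reconstruction of that rule in Python. This is an illustrative sketch only, not the submitted tournament code (which is available with the source linked above); the function name and the 'C'/'D' history encoding are my own assumptions.

```python
import random

def anderbot_move(my_history, opp_history):
    """Illustrative reconstruction of AnderBot's stated rule.
    Histories are lists of 'C'/'D' moves, oldest first."""
    turn = len(my_history)
    if turn == 0:
        return 'C'                       # cooperate on the first turn
    if turn <= 10:
        return opp_history[-1]           # next 10 turns: tit-for-tat
    # afterwards: defect if behind on defections, otherwise defect 10% of the time
    if opp_history.count('D') > my_history.count('D'):
        return 'D'
    return 'D' if random.random() < 0.10 else 'C'
```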

continue reading »

The Future of Humanity Institute could make use of your money

51 danieldewey 26 September 2014 10:53PM

Many people have an incorrect view of the Future of Humanity Institute's funding situation, so this is a brief note to correct that; think of it as a spiritual successor to this post. As John Maxwell puts it, FHI is "one of the three organizations co-sponsoring LW [and] a group within the University of Oxford's philosophy department that tackles important, large-scale problems for humanity like how to go about reducing existential risk." (If you're not familiar with our work, this article is a nice, readable introduction, and our director, Nick Bostrom, wrote Superintelligence.) Though we are a research institute in an ancient and venerable institution, this does not guarantee funding or long-term stability.

Academic research is generally funded through grants, but because the FHI is researching important but unusual problems, and because this research is multi-disciplinary, we've found it difficult to attract funding from the usual grant bodies. This has meant that we’ve had to prioritise a certain number of projects that are not perfect for existential risk reduction, but that allow us to attract funding from interested institutions.

With more assets, we could both liberate our long-term researchers to do more "pure Xrisk" research, and hire or commission new experts when needed to look into particular issues (such as synthetic biology, the future of politics, and the likelihood of recovery after a civilization collapse).

We are not in any immediate funding crunch, nor are we arguing that the FHI would be a better donation target than MIRI, CSER, or the FLI. But any donations would be both gratefully received and put to effective use. If you'd like to, you can donate to FHI here. Thank you!

MIRI Research Guide

49 So8res 07 November 2014 07:11PM

We've recently published a guide to MIRI's research on MIRI's website. It overviews some of the major open problems in FAI research, and provides reading lists for those who want to get familiar with MIRI's technical agenda.

This guide updates and replaces the MIRI course list that started me on the path of becoming a MIRI researcher over a year ago. Many thanks to Louie Helm, who wrote the previous version.

This guide is a bit more focused than the old course list, and points you not only towards prerequisite textbooks but also towards a number of relevant papers and technical reports in something approximating the "appropriate order." By following this guide, you can get yourself pretty close to the cutting edge of our technical research (barring some results that we haven't written up yet). If you intend to embark on that quest, you are invited to let me know; I can provide both guidance and encouragement along the way.

I've reproduced the guide below. The canonical version is at intelligence.org/research-guide, and I intend to keep that version up to date. This post will not be kept current.

Finally, a note on content: the guide below discusses a number of FAI research subfields. The goal is to overview, rather than motivate, those subfields. These sketches are not intended to carry any arguments. Rather, they attempt to convey our current conclusions to readers who are already extending us significant charity. We're hard at work producing a number of documents describing why we think these particular subfields are important. (The first was released a few weeks ago, the rest should be published over the next two months.) In the meantime, please understand that the research guide is neither able nor intended to provide strong motivation for these particular problems.


Friendly AI theory currently isn't about implementation, it's about figuring out how to ask the right questions. Even if we had unlimited finite computing resources and a solid understanding of general intelligence, we still wouldn't know how to specify a system that would reliably have a positive impact during and after an intelligence explosion. Such is the state of our ignorance.

For now, MIRI's research program aims to develop solutions that assume access to unbounded finite computing power, not because unbounded solutions are feasible, but in the hope that these solutions will help us understand which questions need to be answered in order to lay the groundwork for the eventual specification of a Friendly AI. Hence, our current research is primarily in mathematics (as opposed to software engineering or machine learning, as many expect).

This guide outlines the topics that one can study to become able to contribute to one or more of MIRI’s active research areas.

continue reading »

The Octopus, the Dolphin and Us: a Great Filter tale

44 Stuart_Armstrong 03 September 2014 09:37PM

Is intelligence hard to evolve? Well, we're intelligent, so it must be easy... except that only an intelligent species would be able to ask that question, so we run straight into the problem of anthropics. Any being that asked that question would have to be intelligent, so this can't tell us anything about its difficulty (a similar mistake would be to ask "is most of the universe hospitable to life?", and then looking around and noting that everything seems pretty hospitable at first glance...).

Instead, one could point at the great apes, note their high intelligence, see that intelligence arises separately, and hence that it can't be too hard to evolve.

One could do that... but one would be wrong. The key test is not whether intelligence can arise separately, but whether it can arise independently. Chimpanzees, bonobos, gorillas, and such are all "on our line": they are close to common ancestors of ours, which we would expect to be intelligent because we are intelligent. Intelligent species tend to have intelligent relatives. So they don't provide any extra information about the ease or difficulty of evolving intelligence.

To get independent intelligence, we need to go far from our line. Enter the smart and cute icon on many student posters: the dolphin.

continue reading »

Bayesianism for humans: "probable enough"

38 BT_Uytya 02 September 2014 09:44PM

There are two insights from Bayesianism which occurred to me and which I hadn't seen anywhere else before.
I like the lists in the two posts linked above, so for the sake of completeness, I'm going to add my two cents to the public domain. The second penny is here.



"Probable enough"

When you have eliminated the impossible, whatever remains is often more improbable than your having made a mistake in one of your impossibility proofs.


The Bayesian way of thinking introduced me to the idea of a "hypothesis which probably isn't true, but is probable enough to rise to the level of conscious attention" — in other words, to the situation where P(H) is notable but less than 50%.

Looking back, I think that the notion of taking seriously something which you don't think is true was alien to me. Hence, everything was either probably true or probably false; I was overconfidently certain of things in the former category, and things in the latter category were barely worth thinking about.

This model was correct, but only in a formal sense.

Suppose you are living in Gotham, the city famous for its crime rate and its masked (and well-funded) vigilante, Batman. Recently you read The Better Angels of Our Nature: Why Violence Has Declined by Steven Pinker, and according to some theories described there, Batman isn't good for Gotham at all.

Now you know, for example, the theory of Donald Black that "crime is, from the point of view of the perpetrator, the pursuit of justice". You know about the idea that in order for the crime rate to drop, people should perceive their legal system as legitimate. You suspect that criminals beaten by Bats don't perceive the act as a fair and regular punishment for something bad, or as an attempt to defend them from injustice; instead the act is perceived as a round of bad luck. So the criminals are busy plotting their revenge, not internalizing civil norms.

You believe that if you send your copy of the book (with key passages highlighted) to the person connected to Batman, Batman will change his ways and Gotham will become much nicer in terms of homicide rate.

So you are trying to find out Batman's secret identity, and there are 17 possible suspects. Derek Powers looks like a good candidate: he is wealthy, and has a long history of secretly delegating tasks involving illegal violence to his henchmen; however, his motivation is far from obvious. You estimate P(Derek Powers employs Batman) as 20%. You have very little information about the other candidates, like Ferris Boyle, Bruce Wayne, Roland Daggett, Lucius Fox or Matches Malone, so you assign an equal 5% to everyone else.

In this case you should pick Derek Powers as your best guess when forced to name only one candidate (for example, if you are forced to send the book to someone today), but you should also be aware that your guess is 80% likely to be wrong. When making expected utility calculations, you should take Derek Powers more seriously than Lucius Fox, but only by 15 percentage points.
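A minimal Python sketch of that reasoning, using the made-up numbers above (the generic suspect names beyond the six mentioned in the post and the unit payoff are purely illustrative):

```python
# The post's made-up priors: 17 suspects, Derek Powers at 20%, the rest at 5% each.
suspects = ["Derek Powers", "Ferris Boyle", "Bruce Wayne", "Roland Daggett",
            "Lucius Fox", "Matches Malone"] + [f"Suspect {i}" for i in range(11)]
priors = {name: (0.20 if name == "Derek Powers" else 0.05) for name in suspects}
assert abs(sum(priors.values()) - 1.0) < 1e-9

# If forced to name one candidate today, pick the single most probable suspect...
best_guess = max(priors, key=priors.get)      # "Derek Powers"
print(best_guess, 1 - priors[best_guess])     # ...while noting it is 80% likely to be wrong.

# In an expected utility calculation, each candidate is simply weighted by their prior.
payoff_if_right = 1.0                         # hypothetical utility of the book reaching Batman
expected_payoff = {name: p * payoff_if_right for name, p in priors.items()}
```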

In other words, you should take the maximum a posteriori (MAP) hypothesis into account while not deluding yourself into thinking that you now understand everything, or nothing at all. The Derek Powers hypothesis probably isn't true; but it is useful.

Sometimes I find it easier to reframe the question from "what hypothesis is true?" to "what hypothesis is probable enough?". Now it's totally okay if your pet theory isn't probable, merely probable enough, so doubt becomes easier. Also, you are aware that your pet theory is likely to be wrong (and this is nothing to be sad about), so the alternatives come to mind more naturally.

These "probable enough" hypothesis can serve as a very concise summaries of state of your knowledge when you simultaneously outline the general sort of evidence you've observed, and stress that you aren't really sure. I like to think about it like a rough, qualitative and more System1-friendly variant of Likelihood ratio sharing.

Planning Fallacy

The original explanation of the planning fallacy (proposed by Kahneman and Tversky) is that people focus on the most optimistic scenario when asked about the typical one (instead of trying to take an Outside View). If you keep the distinction between "probable" and "probable enough" in mind, you can see this claim in a new light.

Because the most optimistic scenario is the most probable and the most typical one, in a certain sense.

The illustration, with numbers pulled out of thin air, goes like this: so, you want to visit a museum.

The first thing you need to do is to get dressed and take your keys and stuff. Usually (with 80% probability) you do this very quickly, but there is a small chance that your museum ticket has been devoured by an entropy monster living on your computer table.

The second thing is to catch the bus. Usually (p = 80%) the bus is on schedule, but sometimes it is too early or too late. After that, the bus might (20%) or might not (80%) get stuck in a traffic jam.

Finally, you need to find the museum building. You've been there once before, so you sorta remember the route, yet you could still get lost with 20% probability.

And there you have it: P(everything is fine) = 40%, and the probability of every other scenario is 10% or even less. "Everything is fine" is probable enough, yet likely to be false. Supposedly, humans pick the MAP hypothesis and then forget about every other scenario in order to save computation.
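The arithmetic behind that 40%, as a quick sketch (the probabilities are the made-up figures from the story above; the step labels are just mine):

```python
# Each step of the museum trip succeeds with the probability given in the story.
steps = {
    "ready on time (no entropy monster)": 0.8,
    "bus on schedule":                    0.8,
    "no traffic jam":                     0.8,
    "find the museum building":           0.8,
}

p_everything_fine = 1.0
for p in steps.values():
    p_everything_fine *= p

print(p_everything_fine)        # 0.8 ** 4 ~= 0.41: the single most likely scenario,
print(1 - p_everything_fine)    # ~0.59: yet less likely than "something goes wrong somewhere"
```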

Also, "everything is fine" is a good description of your plan. If your friend asks you, "so how are you planning to get to the museum?", and you answer "well, I catch the bus, get stuck in a traffic jam for 30 agonizing minutes, and then just walk from here", your friend is going  to get a completely wrong idea about dangers of your journey. So, in a certain sense, "everything is fine" is a typical scenario. 

Maybe it isn't the human inability to pick the most likely scenario that should be blamed. Maybe it is the false assumption that "most likely == likely to be correct" that contributes to this ubiquitous error.

In this case you would be better off picking "something will go wrong, and I will be late" instead of "everything will be fine".

So, sometimes you are interested in the best specimen out of your hypothesis space, sometimes you are interested in the most likely thingy (no matter how vague it is), and sometimes there are no shortcuts, and you have to do an actual expected utility calculation.

Newcomblike problems are the norm

37 So8res 24 September 2014 06:41PM

This is crossposted from my blog. In this post, I discuss how Newcomblike situations are common among humans in the real world. The intended audience of my blog is wider than the readership of LW, so the tone might seem a bit off. Nevertheless, the points made here are likely new to many.

1

Last time we looked at Newcomblike problems, which cause trouble for Causal Decision Theory (CDT), the standard decision theory used in economics, statistics, narrow AI, and many other academic fields.

These Newcomblike problems may seem like strange edge case scenarios. In the Token Trade, a deterministic agent faces a perfect copy of themself, guaranteed to take the same action as they do. In Newcomb's original problem there is a perfect predictor Ω which knows exactly what the agent will do.
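As a concrete sketch of the Newcomb case: the payoffs below are the standard formulation (a transparent box that always holds $1,000, and an opaque box that Ω fills with $1,000,000 only if it predicts one-boxing), not numbers taken from this post, and the function is purely illustrative.

```python
def expected_dollars(one_box: bool, predictor_accuracy: float) -> float:
    """Expected payoff under the usual Newcomb setup (illustrative sketch):
    the opaque box contains $1,000,000 iff the predictor foresaw one-boxing;
    the transparent box always contains $1,000."""
    p = predictor_accuracy
    if one_box:
        return p * 1_000_000                  # paid only when correctly predicted
    return (1 - p) * 1_000_000 + 1_000        # big prize only on a misprediction

# With a perfect predictor, one-boxing expects $1,000,000 and two-boxing $1,000;
# CDT, which treats the already-fixed box contents as causally independent of
# the choice, still recommends two-boxing.
print(expected_dollars(True, 1.0), expected_dollars(False, 1.0))
```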

Both of these examples involve some form of "mind-reading" and assume that the agent can be perfectly copied or perfectly predicted. In a chaotic universe, these scenarios may seem unrealistic and even downright crazy. What does it matter that CDT fails when there are perfect mind-readers? There aren't perfect mind-readers. Why do we care?

The reason that we care is this: Newcomblike problems are the norm. Most problems that humans face in real life are "Newcomblike".

These problems aren't limited to the domain of perfect mind-readers; rather, problems with perfect mind-readers are the domain where these problems are easiest to see. However, they arise naturally whenever an agent is in a situation where others have knowledge about its decision process via some mechanism that is not under its direct control.

continue reading »

A discussion of heroic responsibility

34 Swimmer963 29 October 2014 04:22AM

[Originally posted to my personal blog, reposted here with edits.]

Introduction

“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.” Harry’s face tightened. “That’s why I say you’re not thinking responsibly, Hermione. Thinking that your job is done when you tell Professor McGonagall—that isn’t heroine thinking. Like Hannah being beat up is okay then, because it isn’t your fault anymore. Being a heroine means your job isn’t finished until you’ve done whatever it takes to protect the other girls, permanently.” In Harry’s voice was a touch of the steel he had acquired since the day Fawkes had been on his shoulder. “You can’t think as if just following the rules means you’ve done your duty.” –HPMOR, chapter 75.

I like this concept. It counters a particular, common, harmful failure mode, and I think it's an amazingly useful thing for a lot of people to hear. I even think it was a useful thing for me to hear a year ago.

But... I’m not sure about this yet, and my thoughts about it are probably confused, but I think that there's a version of Heroic Responsibility that you can get from reading this description, that's maybe even the default outcome of reading this description, that's also a harmful failure mode. 
 

Something Impossible

A wrong way to think about heroic responsibility

I dealt with a situation at work a while back–May 2014 according to my journal. I had a patient for five consecutive days, and each day his condition was a little bit worse. Every day, I registered with the staff doctor my feeling that the current treatment was Not Working, and that maybe we ought to try something else. There were lots of complicated medical reasons why the doctor's decisions were constrained, and why 'let's wait and see' was maybe the best decision, statistically speaking–that in a majority of possible worlds, waiting it out would lead to better outcomes than one of the potentially more aggressive treatments, which came with side effects. And he wasn't actually ignoring me; he would listen patiently to all my concerns. Nevertheless, he wasn't the one watching the guy writhe around in bed, uncomfortable and delirious, for twelve hours every day, and I felt ignored, and I was pretty frustrated.

On day three or four, I was listening to Ray’s Solstice album on my break, and the song ‘Something Impossible’ came up. 

Bold attempts aren't enough, roads can't be paved with intentions...
You probably don’t even got what it takes,
But you better try anyway, for everyone's sake
And you won’t find the answer until you escape from the
Labyrinth of your conventions.
It's time to just shut up, and do the impossible.
Can’t walk away...
Gotta break off those shackles, and shake off those chains
Gotta make something impossible happen today... 
 
It hit me like a load of bricks–this whole thing was stupid and rationalists should win. So I spent my entire break talking on Gchat with one of my CFAR friends, trying to see if he could help me come up with a suggestion that the doctor would agree was good. This wasn’t something either of us were trained in, and having something to protect doesn't actually give you superpowers, and the one creative solution I came up with was worse than the status quo for several obvious reasons.

I went home on day four feeling totally drained and having asked to please have a different patient in the morning. I came in to find that the patient had nearly died in the middle of the night. (He was now intubated and sedated, which wasn’t great for him but made my life a hell of a lot easier.) We eventually transferred him to another hospital, and I spent a while feeling like I’d personally failed. 

I’m not sure whether or not this was a no-win scenario even in theory. But I don't think I, personally, could have done anything with greater positive expected value. There's a good reason why a doctor with 10 years of school and 20 years of ICU experience can override a newly graduated nurse's opinion. In most of the possible worlds, the doctor is right and I'm wrong. Pretty much the only thing that I could have done better would have been to care less–and thus be less frustrated and more emotionally available to comfort a guy who was having the worst week of his life. 

In short, I fulfilled my responsibilities to my patient. Nurses have a lot of responsibilities to their patients, well specified in my years of schooling and in various documents published by the College of Nurses of Ontario. But nurses aren’t expected or supposed to take heroic responsibility for these things. 

I think that overall, given a system that runs on humans, that's a good thing.  


The Well-Functioning Gear

I feel like maybe the hospital is an emergent system that has the property of patient-healing, but I’d be surprised if any one part of it does.

Suppose I see an unusual result on my patient. I don’t know what it means, so I mention it to a specialist. The specialist, who doesn’t know anything about the patient beyond what I’ve told him, says to order a technetium scan. He has no idea what a technetium scan is or how it is performed, except that it’s the proper thing to do in this situation. A nurse is called to bring the patient to the scanner, but has no idea why. The scanning technician, who has only a vague idea why the scan is being done, does the scan and spits out a number, which ends up with me. I bring it to the specialist, who gives me a diagnosis and tells me to ask another specialist what the right medicine for that is. I ask the other specialist – who has only the sketchiest idea of the events leading up to the diagnosis – about the correct medicine, and she gives me a name and tells me to ask the pharmacist how to dose it. The pharmacist – who has only the vague outline of an idea who the patient is, what test he got, or what the diagnosis is – doses the medication. Then a nurse, who has no idea about any of this, gives the medication to the patient. Somehow, the system works and the patient improves.

Part of being an intern is adjusting to all of this, losing some of your delusions of heroism, getting used to the fact that you’re not going to be Dr. House, that you are at best going to be a very well-functioning gear in a vast machine that does often tedious but always valuable work. –Scott Alexander

The medical system does a hard thing, and it might not do it well, but it does it. There is too much complexity for any one person to have a grasp on it. There are dozens of mutually incomprehensible specialties. And the fact that [insert generic nurse here] doesn't have the faintest idea how to measure electrolytes in blood, or build an MRI machine, or even what's going on with the patient next door, is a feature, not a bug.

The medical system doesn’t run on exceptional people–it runs on average people, with predictably average levels of skill, slots in working memory, ability to notice things, ability to not be distracted thinking about their kid's problems at school, etc. And it doesn’t run under optimal conditions; it runs under average conditions. Which means working overtime at four am, short staffing, three patients in the ER waiting for ICU beds, etc. 

Sure, there are problems with the machine. The machine is inefficient. The machine doesn’t have all the correct incentives lined up. The machine does need fixing–but I would argue that from within the machine, as one of its parts, taking heroic responsibility for your own sphere of control isn’t the way to go about fixing the system.

As an [insert generic nurse here], my sphere of control is the four walls of my patient's room. Heroic responsibility for my patient would mean...well, optimizing for them. In the most extreme case, it might mean killing the itinerant stranger to obtain a compatible kidney. In the less extreme case, I spend all my time giving my patient great care, instead of helping the nurse in the room over, whose patient is much sicker. And then sometimes my patient will die, and there will be literally nothing I can do about it, their death was causally set in stone twenty-four hours before they came to the hospital. 

I kind of predict that the results of installing heroic responsibility as a virtue, among average humans under average conditions, would be a) everyone stepping on everyone else’s toes, and b) 99% of them quitting a year later.
 

Recursive Heroic Responsibility


If you're a gear in a machine, and you notice that the machine is broken, your options are a) be a really good gear, or b) take heroic responsibility for your sphere of control, and probably break something...but that's a false dichotomy. Humans are very flexible tools, and there are also infinite other options, including "step out of the machine, figure out who's in charge of this shit, and get it fixed." 

You can't take responsibility for the individual case, but you can for the system-level problem, the long view, the one where people eat badly and don't exercise and at age fifty, morbidly obese with a page-long medical history, they end up as a slow-motion train wreck in an ICU somewhere. Like in poker, you play to win money–positive EV–not to win hands. Someone’s going to be the Minister of Health for Canada, and they’re likely to be in a position where taking heroic responsibility for the Canadian health care system makes things better. And probably the current Minister of Health isn’t being strategic, isn’t taking the level of responsibility that they could, and the concept of heroic responsibility would be the best thing for them to encounter.

So as an [insert generic nurse here], working in a small understaffed ICU, watching the endless slow-motion train wreck roll by...maybe the actual meta-level right thing to do is to leave, and become the freaking Minister of Health, or befriend the current one and introduce them to the concept of being strategic. 

But it's fairly obvious that that isn't the right action for all the nurses in that situation. I'm wary of advice that doesn't generalize. What's the difference between the nurse who should leave in order to take meta-level responsibility, and the nurse who should stay because she's needed as a gear?

Heroic responsibility for average humans under average conditions

I can predict at least one thing that people will say in the comments, because I've heard it hundreds of times–that Swimmer963 is a clear example of someone who should leave nursing, take the meta-level responsibility, and do something higher impact, for the usual reasons. Because she's smart. Because she's rational. Whatever. 

Fine. This post isn't about me. Whether I like it or not, the concept of heroic responsibility is now a part of my value system, and I probably am going to leave nursing.

But what about the other nurses on my unit, the ones who are competent and motivated and curious and really care? Would familiarity with the concept of heroic responsibility help or hinder them in their work? Honestly, I predict that they would feel alienated, that they would assume I held a low opinion of them (which I don't, and I really don't want them to think that I do), and that they would flinch away and go back to the things that they were doing anyway, the role where they were comfortable–or that, if they did accept it, it would cause them to burn out. So as a consequentialist, I'm not going to tell them. 

And yeah, that bothers me. Because I'm not a special snowflake. Because I want to live in a world where rationality helps everyone. Because I feel like the reason they would react that way isn't because of anything about them as people, or because heroic responsibility is a bad thing, but because I'm not able to communicate to them what I mean. Maybe stupid reasons. Still bothers me. 
