[Originally posted to my personal blog, reposted here with edits.]

Introduction

“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.” Harry’s face tightened. “That’s why I say you’re not thinking responsibly, Hermione. Thinking that your job is done when you tell Professor McGonagall—that isn’t heroine thinking. Like Hannah being beat up is okay then, because it isn’t your fault anymore. Being a heroine means your job isn’t finished until you’ve done whatever it takes to protect the other girls, permanently.” In Harry’s voice was a touch of the steel he had acquired since the day Fawkes had been on his shoulder. “You can’t think as if just following the rules means you’ve done your duty.” –HPMOR, chapter 75.

I like this concept. It counters a particular, common, harmful failure mode, and it’s an amazingly useful thing for a lot of people to hear. I even think it was a useful thing for me to hear a year ago.

But... I’m not sure about this yet, and my thoughts about it are probably confused, but I think there’s a version of heroic responsibility that you can get from reading this description–maybe even the default version you get from reading it–that’s also a harmful failure mode.

Something Impossible

A wrong way to think about heroic responsibility

I dealt with a situation at work a while back–May 2014 according to my journal. I had a patient for five consecutive days, and each day his condition was a little bit worse. Every day, I registered with the staff doctor my feeling that the current treatment was Not Working, and that maybe we ought to try something else. There were lots of complicated medical reasons why the doctor’s decisions were constrained, and why ‘let’s wait and see’ was maybe the best decision, statistically speaking–that in a majority of possible worlds, waiting it out would lead to better outcomes than one of the potential more aggressive treatments, which came with side effects. And he wasn’t actually ignoring me; he would listen patiently to all my concerns. Nevertheless, he wasn’t the one watching the guy writhe around in bed, uncomfortable and delirious, for twelve hours every day, and I felt ignored, and I was pretty frustrated.

On day three or four, I was listening to Ray’s Solstice album on my break, and the song ‘Something Impossible’ came up. 

Bold attempts aren't enough, roads can't be paved with intentions...
You probably don’t even got what it takes,
But you better try anyway, for everyone's sake
And you won’t find the answer until you escape from the
Labyrinth of your conventions.
It’s time to just shut up, and do the impossible.
Can’t walk away...
Gotta break off those shackles, and shake off those chains
Gotta make something impossible happen today... 
 
It hit me like a load of bricks–this whole thing was stupid and rationalists should win. So I spent my entire break talking on Gchat with one of my CFAR friends, trying to see if he could help me come up with a suggestion that the doctor would agree was good. This wasn’t something either of us was trained in, and having something to protect doesn't actually give you superpowers, and the one creative solution I came up with was worse than the status quo for several obvious reasons.

I went home on day four feeling totally drained, having asked to please be assigned a different patient in the morning. I came in to find that the patient had nearly died in the middle of the night. (He was now intubated and sedated, which wasn’t great for him but made my life a hell of a lot easier.) We eventually transferred him to another hospital, and I spent a while feeling like I’d personally failed.

I’m not sure whether or not this was a no-win scenario even in theory. But I don't think I, personally, could have done anything with greater positive expected value. There's a good reason why a doctor with 10 years of school and 20 years of ICU experience can override a newly graduated nurse's opinion. In most of the possible worlds, the doctor is right and I'm wrong. Pretty much the only thing that I could have done better would have been to care less–and thus be less frustrated and more emotionally available to comfort a guy who was having the worst week of his life. 

In short, I fulfilled my responsibilities to my patient. Nurses have a lot of responsibilities to their patients, well specified in my years of schooling and in various documents published by the College of Nurses of Ontario. But nurses aren’t expected or supposed to take heroic responsibility for these things. 

I think that overall, given a system that runs on humans, that's a good thing.  


The Well-Functioning Gear

I feel like maybe the hospital is an emergent system that has the property of patient-healing, but I’d be surprised if any one part of it does.

Suppose I see an unusual result on my patient. I don’t know what it means, so I mention it to a specialist. The specialist, who doesn’t know anything about the patient beyond what I’ve told him, says to order a technetium scan. He has no idea what a technetium scan is or how it is performed, except that it’s the proper thing to do in this situation. A nurse is called to bring the patient to the scanner, but has no idea why. The scanning technician, who has only a vague idea why the scan is being done, does the scan and spits out a number, which ends up with me. I bring it to the specialist, who gives me a diagnosis and tells me to ask another specialist what the right medicine for that is. I ask the other specialist – who has only the sketchiest idea of the events leading up to the diagnosis – about the correct medicine, and she gives me a name and tells me to ask the pharmacist how to dose it. The pharmacist – who has only the vague outline of an idea who the patient is, what test he got, or what the diagnosis is – doses the medication. Then a nurse, who has no idea about any of this, gives the medication to the patient. Somehow, the system works and the patient improves.

Part of being an intern is adjusting to all of this, losing some of your delusions of heroism, getting used to the fact that you’re not going to be Dr. House, that you are at best going to be a very well-functioning gear in a vast machine that does often tedious but always valuable work. –Scott Alexander

The medical system does a hard thing, and it might not do it well, but it does it. There is too much complexity for any one person to have a grasp on it. There are dozens of mutually incomprehensible specialties. And the fact that [insert generic nurse here] doesn't have the faintest idea how to measure electrolytes in blood, or build an MRI machine, or even what's going on with the patient next door, is a feature, not a bug.

The medical system doesn’t run on exceptional people–it runs on average people, with predictably average levels of skill, slots in working memory, ability to notice things, ability to not be distracted thinking about their kid's problems at school, etc. And it doesn’t run under optimal conditions; it runs under average conditions. Which means working overtime at four am, short staffing, three patients in the ER waiting for ICU beds, etc. 

Sure, there are problems with the machine. The machine is inefficient. The machine doesn’t have all the correct incentives lined up. The machine does need fixing–but I would argue that from within the machine, as one of its parts, taking heroic responsibility for your own sphere of control isn’t the way to go about fixing the system.

As an [insert generic nurse here], my sphere of control is the four walls of my patient's room. Heroic responsibility for my patient would mean...well, optimizing for them. In the most extreme case, it might mean killing the itinerant stranger to obtain a compatible kidney. In the less extreme case, I spend all my time giving my patient great care, instead of helping the nurse in the next room over, whose patient is much sicker. And then sometimes my patient will die, and there will be literally nothing I can do about it; their death was causally set in stone twenty-four hours before they came to the hospital.

I kind of predict that the results of installing heroic responsibility as a virtue, among average humans under average conditions, would be a) everyone stepping on everyone else’s toes, and b) 99% of them quitting a year later.
 

Recursive Heroic Responsibility


If you're a gear in a machine, and you notice that the machine is broken, your options are a) be a really good gear, or b) take heroic responsibility for your sphere of control, and probably break something...but that's a false dichotomy. Humans are very flexible tools, and there are also infinite other options, including "step out of the machine, figure out who's in charge of this shit, and get it fixed." 

You can't take responsibility for the individual case, but you can for the system-level problem, the long view, the one where people eat badly and don't exercise and at age fifty, morbidly obese with a page-long medical history, they end up as a slow-motion train wreck in an ICU somewhere. Like in poker, you play to win money–positive EV–not to win hands. Someone’s going to be the Minister of Health for Canada, and they’re likely to be in a position where taking heroic responsibility for the Canadian health care system makes things better. And probably the current Minister of Health isn’t being strategic, isn’t taking the level of responsibility that they could, and the concept of heroic responsibility would be the best thing for them to encounter.
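To make the poker analogy concrete, here's a minimal sketch in Python with made-up numbers (none of these odds come from real poker): a strategy can win most of its hands and still lose money, while a strategy that loses most of its hands can come out well ahead in expectation.

```python
# Illustrative only: the probabilities and pot sizes below are invented.

def expected_value(p_win: float, win_amount: float, lose_amount: float) -> float:
    """Expected profit per hand: P(win) * gain - P(lose) * loss."""
    return p_win * win_amount - (1 - p_win) * lose_amount

# Strategy A: only play near-certain hands -> wins often, but the pots are tiny.
ev_a = expected_value(p_win=0.9, win_amount=1.0, lose_amount=5.0)   # 0.4

# Strategy B: play marginal hands when the pot odds justify it ->
# loses most hands, but the wins are large.
ev_b = expected_value(p_win=0.3, win_amount=20.0, lose_amount=2.0)  # 4.6

print(f"A wins 90% of hands, EV per hand = {ev_a:+.1f}")
print(f"B wins 30% of hands, EV per hand = {ev_b:+.1f}")  # the better long-run play
```

Playing "to win hands" optimizes the visible score; playing for EV optimizes the thing you actually care about. The same distinction applies to saving the individual patient versus improving system-level outcomes.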

So as an [insert generic nurse here], working in a small understaffed ICU, watching the endless slow-motion train wreck roll by...maybe the actual meta-level right thing to do is to leave, and become the freaking Minister of Health, or befriend the current one and introduce them to the concept of being strategic. 

But it's fairly obvious that that isn't the right action for all the nurses in that situation. I'm wary of advice that doesn't generalize. What's the difference between the nurse who should leave in order to take meta-level responsibility, and the nurse who should stay because she's needed as a gear?

Heroic responsibility for average humans under average conditions

I can predict at least one thing that people will say in the comments, because I've heard it hundreds of times–that Swimmer963 is a clear example of someone who should leave nursing, take the meta-level responsibility, and do something higher impact, for the usual reasons. Because she's smart. Because she's rational. Whatever.

Fine. This post isn't about me. Whether I like it or not, the concept of heroic responsibility is now a part of my value system, and I probably am going to leave nursing.

But what about the other nurses on my unit, the ones who are competent and motivated and curious and really care? Would familiarity with the concept of heroic responsibility help or hinder them in their work? Honestly, I predict that they would feel alienated, that they would assume I held a low opinion of them (which I don't, and I really don't want them to think that I do), and that they would flinch away and go back to the things that they were doing anyway, the role where they were comfortable–or that, if they did accept it, it would cause them to burn out. So as a consequentialist, I'm not going to tell them. 

And yeah, that bothers me. Because I'm not a special snowflake. Because I want to live in a world where rationality helps everyone. Because I feel like the reason they would react that way isn't because of anything about them as people, or because heroic responsibility is a bad thing, but because I'm not able to communicate to them what I mean. Maybe stupid reasons. Still bothers me.

A discussion of heroic responsibility

[anonymous]:

I kind of predict that the results of installing heroic responsibility as a virtue, among average humans under average conditions, would be a) everyone stepping on everyone else’s toes, and b) 99% of them quitting a year later.

There's a reason it's called heroic responsibility: it's for a fictional hero, who can do Fictional Hero Things like upset the world order on a regular basis and get away with it. He has Plot Armor, and an innately limited world. In fact, the story background even guarantees this: there are only a few tens of thousands or hundreds of thousands of wizards in Britain, and thus the Law of Large Numbers does not apply, and thus Harry is a one-of-a-kind individual rather than a one-among-several-hundred-thousand as he would be in real life. Further, he goes on adventures as an individual, and never has to engage in the kinds of large-scale real-life efforts that take the massive cooperation of large numbers of not-so-phoenix-quality individuals.

Which you very much do. You don't need heroic rationality, you need superrationality, which anyone here who's read up on decision-theory should recognize. The super-rational thing to do is systemic effectiveness, at the level of habits and teams, so that patients' health does not ever depend on one person choosing to be heroic. (read more)

There's a reason it's called heroic responsibility: it's for a fictional hero, who can do Fictional Hero Things like upset the world order on a regular basis and get away with it.

NO! This is clearly not why it was called heroic responsibility and it is unlikely that the meaning has degraded so completely over time as to refer to the typical behaviour of fictional heroes. That isn't the message of either the book or the excerpt quoted in the post.

Which you very much do. You don't need heroic rationality, you need superrationality, which anyone here who's read up on decision-theory should recognize. The super-rational thing to do is systemic effectiveness, at the level of habits and teams, so that patients' health does not ever depend on one person choosing to be heroic.

Those who have read up on decision theory will be familiar with the term superrationality and notice that you are misusing the term. Incidentally, those who are familiar with decision theory will also notice that 'heroic responsibility' is already assumed as part of the basic premise (ie. agents actually taking actions that maximise expectation of desired things occurring doesn't warrant any special labels... (read more)

Jiro:
Superrationality involves assuming that other people using the same reasoning as yourself will produce the same result as yourself, and so you need to decide what is best to do assuming everyone like yourself does it too. That does indeed seem to be what eli is talking about: you support the existing system, knowing that if you think it's a good idea to support the system, so will other people who think like you, and the system will work. I don't think he's confused. While Eliezer's fanfic isn't children's literature, the fact that Harry is a hero with plot armor is not something Eliezer invented; rather, it carries over from the source which is children's literature.
Jiro:

There's a reason it's called heroic responsibility: it's for a fictional hero, who can do Fictional Hero Things like upset the world order on a regular basis and get away with it. He has Plot Armor, and an innately limited world.

But it's my understanding that HPMOR was meant to teach about real-world reasoning.

Is this really supposed to be one of the HPMOR passages which is solely about the fictional character and is not meant to have any application to the real world except as an example of something not to do? It certainly doesn't sound like that.

(Saying this with a straight face)

No, it's pretty clear that the author intends this to be a real-world lesson. It's a recurring theme in the Sequences.

I think Eli was disagreeing with the naive application of that lesson to real-world situations, especially ones where established systems are functional.

That said, I don't want to put words in Eli's mouth, so I'll say instead that I was disagreeing in that way when I said something similar above.

V_V:

Keep in mind that the author perceives himself pretty much like a stereotypical fictional hero: he is the One chosen to Save the World from the Robot Apocalypse, and maybe even Defeat Death and bring us Heaven. No wonder he thinks that advice to fictional heroes is applicable to him.

But when you actually try to apply that advice to people with a "real-life" job which involves coordinating with other people in a complex organization that has to ultimately produce measurable results, you run into problems.

A complex organization, for instance a hospital, needs clear rules detailing who is responsible for what. Sometimes this yields suboptimal outcomes: you notice that somebody is making a mistake and they won't listen to you, or you don't tell them because it would be socially unacceptable to do so. But the alternative where any decision can be second-guessed and argued at length until a consensus is reached would paralyse the organization and amplify the negative outcomes of the Dunning-Kruger effect.

Moreover, a culture of heroic responsibility would make accountability essentially impossible:
If everybody is responsible for everything, then nobody is responsible for anything. Yes, Alice made a mistake, but how can we blame her without also blaming Bob for not noticing it and stopping her? Or Carol, or Dan, or Erin, and so on.

You and Swimmer963 are making the mistake of applying heroic responsibility only to optimising some local properties. Of course that will mean damaging the greater environment: applying "heroic responsibility" basically means you do your best AGI impression, so if you only optimise for a certain subset of your morality your results aren't going to be pleasant.

Heroic responsibility only works if you take responsibility for everything. Not just the one patient you're officially being held accountable for, not just the most likely Everett branches, not just the events you see with your own eyes. If your calling a halt to the human machine you are a part of truly has an expected negative effect, then it is your heroic responsibility to shut up and watch others make horrible mistakes.

A culture of heroic responsibility demands appropriate humility; it demands making damn sure what you're doing is correct before defying your assigned duties. And if human psychology is such that punishing specific people for specific events works, then it is everyone's heroic responsibility to make sure that rule exists.

Applying this in practice would, for most people, boil down to effective altruism... (read more)

V_V:
So "heroic responsibility" just means "total utilitarianism"?
Philip_W:
No: the concept that our ethics is utilitarian is independent from the concept that it is the only acceptable way of making decisions (where "acceptable" is an emotional/moral term).
V_V:
What is an acceptable way of making decisions (where "acceptable" is an emotional/moral term) looks like an ethical question; how can it be independent from your ethics?
Philip_W:
In ethics, the question would be answered by "yes, this ethical system is the only acceptable way to make decisions" by definition. In practice, this fact is not sufficient to make more than 0.01% of the world anywhere near heroically responsible (~= considering ethics the only emotionally/morally/role-followingly acceptable way of making decisions), so apparently the question is not decided by ethics. Instead, roles and emotions play a large part in determining what is acceptable. In western society, the role of someone who is responsible for everything and not in the corresponding position of power is "the hero". Yudkowsky (and HPJEV) might have chosen to be heroically responsible because he knows it is the consistent/rational conclusion of human morality and he likes being consistent/rational very much, or because he likes being a hero, or more likely a combination of both. The decision is made due to the role he wants to lead, not due to the ethics itself.
Kenny:
It just means 'consequentialism'.
V_V:
There are various types of consequentialism. The lack of distinction between ethical necessity and supererogation, and the general focus on optimizing the world, are typical of utilitarianism, which is in fact often associated with effective altruism (although it is not strictly necessary for it).
Kenny:
I think it applies to any and all of them just as well, but I (very stupidly) didn't realize until now that utilitarianism is (a type of) consequentialism.
Jiro:
I know. That's why I had to try to keep a straight face when saying that.
[anonymous]:
You and Swimmer963 are making the mistake of applying heroic responsibility only to optimising some local properties. Of course that will mean damaging the greater environment: "heroic responsibility" basically means you do your best AGI impression, so if you only optimise for a certain subset of your morality your results aren't going to be pleasant.

Heroic responsibility only works if you take responsibility for everything. Not just the one patient you're officially being held accountable for, not just the most likely Everett branches, not just the events you see with your own eyes. If your calling a halt to the human machine you are a part of truly has an expected negative effect, then it is your heroic responsibility to shut up and watch others make horrible mistakes.

A culture of heroic responsibility demands appropriate humility; it demands making damn sure what you're doing is correct before defying your assigned duties. And if human psychology is such that a criminal justice system is still appropriate (where specific individuals are punished for specific events), then it is everyone's job to make sure there is a criminal justice system.

Applying this in practice
[anonymous]:
Well, I can't answer for Eliezer's intentions, but I can repeat something he has often said about HPMoR: the only statements in HPMoR he is guaranteed to endorse with a straight face and high probability are those made about science/rationality, preferably in an expo-speak section, or those made by Godric Gryffindor, his author-avatar. Harry, Dumbledore, Hermione, and Quirrell are fictional characters: you are not necessarily meant to emulate them, though of course you can if you independently arrive at the conclusion that doing so is a Good Idea.

I personally think it is one of the passages in which the unavoidable conceits of literature (ie: that the protagonist's actions actually matter on a local-world-historical scale) overcome the standard operation of real life. Eliezer might have a totally different view, but of course, he keeps info about HPMoR close to his chest for maximum Fun.
[anonymous]:
I would like to hear eli_sennesh's response to this...

the Law of Large Numbers does not apply, and thus Harry is a one-of-a-kind individual rather than a one-among-several-hundred-thousand as he would be in real life

I think we need a lot of local heroism. We have a few billion people on this planet, but we also have a few billion problems -- even if we perhaps have only a few thousand repeating patterns of problems.

Maybe it would be good to distinguish between "heroism within a generally functional pattern which happened to have an exception" and a "pattern-changing heroism". Sometimes we need a smart person to invent a solution to the problem. Sometimes we need thousands of people to implement that solution, and also to solve the unexpected problems with the solution, because in real life the solution is never perfect.

Maybe it would be good to distinguish between "heroism within a generally functional pattern which happened to have an exception" and a "pattern-changing heroism".

That's a good distinction and I would also throw in the third kind -- "heroism within a generally dysfunctional pattern which continues to exist because regular heroics keep it afloat". This is related to the well-known management concept of the "firefighting mode".

SilentCal:
Superrationality isn't a substitute for heroic responsibility, it's a complement. Heroic responsibility is the ability to really ask the question, "Should I break the rules in a radical effort to change the world?" Superrationality is the tool that will allow you to usually get the correct, negative answer. ETA: When Harry first articulates the concept of heroic responsibility, it's conspicuously missing superrationality. I think that's an instance of the character not being the author. But I think it's later suggested that McGonagall could also use some heroic responsibility, and this clearly does not mean that she should be trying to take over the world.
[anonymous]:
I agree completely. McGonagall has decision-making authority, so she is exactly the person who should be thinking in terms of absolute responsibility rather than in terms of convention.
morvkala:
This seems to misunderstand the definition of heroic responsibility in the first place. It doesn't require that you're better, smarter, luckier, or anything else than the average person. All that matters is the probability that you can beat the status quo, whether through focused actions to help one person, or systematic changes. If Swimmer had strong enough priors that the doctor was neglecting their duty, Swimmer would be justified in doing the stereotypically heroic thing. She didn't, so she had to follow the doctor's lead.

If everyone else cares deeply about solving a problem and there are a lot of smarter minds than your own focusing on the issue, you're probably right to take the long approach and look for any systematic flaws instead of doing something that'll probably be stupid. However, there are lots of problems where the smartest, wealthiest people don't actually have the motivation to solve the problem, and the majority of people who care are entrenched in the status quo, so a mere prole lacking HPJEVesque abilities benefits strongly from heroic responsibility.

And sometimes you can't fix the system, but you can save one person and that is okay. It doesn't make the system any better, and you'll still need to fix it another day, but ignoring the cases you think you can solve because you lack the tools to tackle the root of the problem is EXACTLY the kind of behaviour heroic responsibility should be warning you about.
Jiro:
This assumes that you're perfect at figuring out the probability that you can beat the status quo. Human beings are pretty bad at this.
Philip_W:
No, it doesn't. If you're uncertain about your own reasoning, discount the weight of your own evidence proportionally, and use the new value. In heuristic terms: err on the side of caution, by a lot if the price of failure is high.
private_messaging:
Well said. The way I put it, the hero jumps into the cockpit and lands the plane in a storm without once asking if there's a certified pilot on board. It is "Heroic Responsibility" because it isn't responsible without qualifiers. Nor is it heroic, it's just a glitch due to the expected amount of getting laid times your primate brain not knowing about birth control times the tiny probability of landing the plane yielding >1 surviving copy of your genes. Or, likely, a much cruder calculation, where the impressiveness appears great while the chance of success seems small, on a background of severe miscalibration due to living in a well tuned society.

Brienne, my consort, is currently in Santiago, Chile because I didn't want to see her go through the wintertime of her Seasonal Affective Disorder. While she's doing that, I'm waiting for the load of 25 cheap 15-watt 4500K LED spotlight bulbs I ordered from China via DHgate, so I can wire them into my 25-string of light sockets, aim them at her ceiling, and try to make her an artificial sky. She's coming back the middle of February, one month before the equinox, so we can give that part a fair test.

I don't think I would have done either of these things if I didn't have that strange concept of responsibility. Empirically, despite there being huge numbers of people with SAD, I don't observe them flying to another continent for the winter, or trying to build their own high-powered lighting systems after they discover that the sad little 60-watt off-the-shelf light-boxes don't work sufficiently for them. I recently confirmed in conversation that a certain very wealthy person (who will not be on the list of the first 10 people you think I might be referring to) with SAD, someone who was creative enough to go to the Southern Hemisphere for a few weeks to try to interrupt the dark mom... (read more)
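For a rough sense of scale of the setup described above, here is a back-of-the-envelope sketch; the LED efficacy, ceiling reflectance, and room size are assumptions for illustration, not measurements from the actual build.

```python
# All figures below are assumptions, not specs from the comment above.
NUM_BULBS = 25
WATTS_PER_BULB = 15
LED_EFFICACY_LM_PER_W = 90   # assumed; common LEDs run roughly 80-100 lm/W
CEILING_REFLECTANCE = 0.7    # assumed matte white ceiling
ROOM_AREA_M2 = 12.0          # assumed bedroom-sized room

total_lumens = NUM_BULBS * WATTS_PER_BULB * LED_EFFICACY_LM_PER_W
# Light bounced off the ceiling, spread over the floor area below it:
approx_lux = total_lumens * CEILING_REFLECTANCE / ROOM_AREA_M2

print(f"Total output: {total_lumens:,} lm")                    # ~33,750 lm
print(f"Rough whole-room illuminance: {approx_lux:,.0f} lux")  # ~2,000 lux
```

Ordinary indoor lighting is on the order of 100-500 lux, and consumer SAD light boxes are typically rated at 10,000 lux only at close range; a whole room sitting near 2,000 lux is a very different light environment from a 60-watt off-the-shelf box.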

Painting my ceiling light blue (a suggestion I got from a sleep researcher at a top university) was a low cost solution that basically "cured" my SAD.

Kawoomba:
This is a tangent, but to light up the whole environment just to get a few more photons to the retina is a strange approach, even if it seems to be the go-to treatment (light boxes etc.). Why not just light up the retina with a portable device, say glasses with some LED lights tacked on? That way you can take your enlightenment with you! Could be polarised to reflect indirectly off of the glasses into your eye, with little stray radiation. Not saying that you should McGyver that yourself, but I was surprised that such a solution did not seem to exist. But, it's hard to have a truly original thought, so when I googled it I found this. Seems like a good idea, no? Same principle as your artificial sky: if one would work, so should the other. Also, as an aside to the tangent, tangent is a strange phrase, since it doesn't actually touch the main point. Should be polar line or somesuch.
SilentCal:
I think the idea of a tangent is that it touches the discussion at one point and then diverges.
Rain:
Skin reacts to light, too.
Lumifer:
In the visible part of the spectrum (that is, not UV)?
Eliezer Yudkowsky:
Considered the light glasses earlier, but Brienne did not expect to like them, we need morning light, and they also looked too weaksauce for serious SAD.
undermind:
"Tangent" is perfectly appropriate -- it touches a point somewhere on the curve of the main argument, and then diverges. There is something that made the association with the tangent. And, to further overextend this metaphor, this implies that if someone's argument is rough enough (i.e. not differentiable), then it's not even possible to go off it on a tangent.
James_Miller:
If this doesn't work you should experiment with other frequencies of light. I have been using a heat lamp to play with near infrared light therapy, and use changing color light strips to expose myself to red light in the morning and night, and blue light in the early afternoon.
CronoDAS:
Indeed - I don't know what kind of spectrum "white" LEDs give off, but I seem to have gotten the impression somewhere that most lightbulbs don't emit the same spectrum as the sun, which contributes to "sunlight deprivation" conditions such as SAD.

Incandescent bulbs have a blackbody spectrum, usually somewhat redder than the sun's (which is also close to blackbody radiation, modulo a few absorption lines). White LEDs have a much spikier spectrum, usually with two to maybe a half-dozen peaks at different wavelengths, which come from the band gaps of their component diodes (a "single" white LED usually includes two to four) or from the fluorescent qualities of phosphor coatings on them. High-quality LED bulbs use a variety of methods to tune the locations of these peaks and their relative intensities such that they're visually close to sun or incandescent light; lower-quality ones tend to have them in weird places dictated by availability or ease of manufacture, which gives their light odd visual qualities and leads to poor color rendering. There are also tradeoffs involving the number of emitting diodes per unit. Information theory considerations mean that colors are never going to have quite the same fidelity under LED lights that they would under incandescent, but some can get damn close.

The same's true in varying degrees for most other non-incandescent lights. The most extreme example in common use is probably low-pressure sodium lamps (those intense yellow-orange streetlights), which emit almost exclusively at two very close wavelengths, 589.0 and 589.6 nm.
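To put numbers on the blackbody point, here is a minimal sketch of Planck's law comparing how much blue light (relative to red) an incandescent filament emits versus the sun; the temperatures are typical textbook values and the two comparison wavelengths are arbitrary picks, so treat the output as illustrative.

```python
import math

H = 6.626e-34   # Planck constant, J*s
C = 2.998e8     # speed of light, m/s
KB = 1.381e-23  # Boltzmann constant, J/K

def planck(wavelength_m: float, temp_k: float) -> float:
    """Blackbody spectral radiance (W * sr^-1 * m^-3) at a given wavelength."""
    a = 2 * H * C**2 / wavelength_m**5
    b = math.exp(H * C / (wavelength_m * KB * temp_k)) - 1
    return a / b

# Relative output at blue (450 nm) vs red (650 nm):
for label, temp in [("incandescent ~2700 K", 2700), ("sunlight ~5800 K", 5800)]:
    ratio = planck(450e-9, temp) / planck(650e-9, temp)
    print(f"{label}: blue/red radiance ratio = {ratio:.2f}")

# Incandescent comes out heavily red-weighted (~0.16); sunlight is roughly
# balanced (~1.1). LED spectra don't follow this smooth curve at all: they are
# a few narrow peaks, so a quoted "color temperature" only approximates the
# visual impression, not the full spectrum.
```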

Lumifer:
Yep -- if you take photographs under these lights (e.g. night street scenes), you essentially get tinted monochrome photographs. Under an almost-single-wavelength source of light there are no colors, only illumination intensities.
buybuydandavis:
And you don't get true full spectrum white out of LEDs either, as they're generally a combination of 3 different narrow band LEDs that look white to the eyes, but give a spiked spectrum instead of a full spectrum. There are phosphor coated LEDs that give broader spectrum, but still nothing like the sun's spectrum.
Vaniver:
I learned from family who live in Alaska about "snowbirds," who live in the North during the summer and the South during the winter. I suspect this is primarily for weather reasons, but no doubt those with SAD are more likely to be snowbirds than those without. Santiago does have 13 hours of sunlight to Austin or Berkeley's 11 or Juneau's 9 (now; the differences will increase as we approach the solstice), so the change is larger, but the other changes are larger as well- having to switch from speaking English outside the house to speaking Spanish outside the house every six months seems costly to me. (New Zealand solves that problem, but adds a time zone problem.) My off-the-shelf light lamp is 100W, and seems pretty dang bright to me- but I don't have SAD and used it as a soft alarm, so I can't speak to how effective or ineffective it is for SAD.
buybuydandavis:
It really grates on me when people with more money than God don't put it to any particularly good use in their lives, especially when it's a health related issue. Maybe this will encourage me to use the not so much I have to more effect. Anyone try that Valkee for SAD? $300 for a couple of LEDs to stick in my ears grates as well. Supposedly having the training to wire up LEDs together, but not the follow through, doesn't help either. And yes, fraud, scam, placebo controlled, blah blah blah. The proposed mechanism of photoreceptors distributed in the brain and elsewhere seemed interesting and worth checking out.
RobinZ:

True story: when I first heard the phrase 'heroic responsibility', it took me about five seconds and the question, "On TV Tropes, what definition fits this title?" to generate every detail of EY's definition save one. That detail was that this was supposed to be a good idea. As you point out - and eli_sennesh points out, and the trope that most closely resembles the concept points out - 'heroic responsibility' assumes that everyone other than the heroes cannot be trusted to do their jobs. And, as you point out, that's a recipe for everyone getting in everyone else's way and burning out within a year. And, as you point out, you don't actually know the doctor's job better than the doctors do.

In my opinion, what we should be advocating is the concept of 'subsidiarity' that Fred Clark blogs about on Slacktivist:

Responsibility — ethical obligation — is boundless and universal. All are responsible for all. No one is exempt.

Now, if that were all we had to say or all that we could know, we would likely be paralyzed, overwhelmed by an amorphous, undifferentiated ocean of need. We would be unable to respond effectively, specifically or appropriately to any particular dilemma. And

... (read more)
Philip_W:
This would only be true if the hero had infinite resources and were actually able to redo everyone's work. In practice, deciding how your resources should be allocated requires a reasonably accurate estimate of how likely everyone is to do their job well. Swimmer963 shouldn't insist on farming her own wheat for her bread (like she would if she didn't trust the supply chain), not because she doesn't have (heroic) responsibility to make sure she stays alive to help patients, but because that very responsibility means she shouldn't waste her time and effort on unfounded paranoia to the detriment of everyone.

The main thing about heroic responsibility is that you don't say "you should have gotten it right". Instead you can only say "I was wrong to trust you this much": it's your failure, and whether it's a failure of the person you trusted really doesn't matter for the ethics of the thing.
RobinZ:
My referent for 'heroic responsibility' was HPMoR, in which Harry doesn't trust anyone to do a competent job - not even someone like McGonagall, whose intelligence, rationality, and good intentions he had firsthand knowledge of on literally their second meeting. I don't know the full context, but unless McGonagall had her brain surgically removed sometime between Chapter 6 and Chapter 75, he could actually tell her everything that he knew that gave him reason to be concerned about the continued good behavior of the bullies in question, and then tell her if those bullies attempted to evade her supervision.

And, in the real world, that would be a perfect example of comparative advantage and opportunity cost in action: Harry is a lot better at high-stakes social and magical shenanigans relative to student discipline than McGonagall is, so for her to expend her resources on the latter while he expends his on the former would produce a better overall outcome by simple economics. (Not to mention that Harry should face far worse consequences if he screws up than McGonagall would - even if he has his status as Savior of the Wizarding World to protect him.) (Also, leaving aside whether his plans would actually work.)

I am advocating for people to take the initiative when they can do good without permission. Others in the thread have given good examples of this. But you can't solve all the problems you touch, and you'll drive yourself crazy if you blame yourself every time you "could have" prevented something that no-one should expect you to have. There are no rational limits to heroic responsibility. It is impossible to fulfill the requirements of heroic responsibility. What you need is the serenity to accept the things you cannot change, the courage to change the things you can, and the wisdom to know the difference.
Kenny:
Did we read the same story? Harry has lots of evidence that McGonagall isn't in fact trustworthy and in large part it's because she doesn't fully accept heroic responsibility and is too willing to uncritically delegate responsibility to others. I also vaguely remember your point being addressed in HPMoR. I certainly wouldn't guess that Harry wouldn't understand that "there are no rational limits to heroic responsibility". It certainly matters for doing the most good as a creature that can't psychologically handle unlimited responsibility.
RobinZ:
Full disclosure: I stopped reading HPMoR in the middle of Chapter 53. When I was researching my comment, I looked at the immediate context of the initial definition of "heroic responsibility" and reviewed Harry's rationality test of McGonagall in Chapter 6.

I would have given Harry a three-step plan: inform McGonagall, monitor situation, escalate if not resolved. Based on McGonagall's characterization in the part of the story I read, barring some drastic idiot-balling since I quit, she's willing to take Harry seriously enough to act based on the information he provides; unless the bullies are somehow so devious as to be capable of evading both Harry's and McGonagall's surveillance - and note that, with McGonagall taking point, they wouldn't know that they need to hide from Harry - this plan would have a reasonable chance of working with much less effort from Harry (and much less probability of misfiring) than any finger-snapping shenanigans. Not to mention that, if Harry read the situation wrong, this would give him a chance to be set straight. Not to mention that, if McGonagall makes a serious effort to crack down on bullying, the effect is likely to persist for far longer than Harry's term.

On the subject of psychology: really, what made me so emphatic in my denouncing "heroic responsibility" was [edit: my awareness of] the large percentage of adults (~10-18%) subject to anxiety disorders of one kind or another - including me. One of the most difficult problems for such people is how to restrain their instinct to blame themselves - how to avoid blaming themselves for events out of their control. When Harry says, "whatever happens, no matter what, it’s always your fault" to such persons, he is saying, "blame yourself for everything" ... and that makes his suggestion completely useless to guide their behavior.
wedrifid:
Your three-step plan seems much more effective than Harry's shenanigans and also serves as an excellent example of heroic responsibility. Normal 'responsibility' in that situation is to do nothing or at most take step one. Heroic responsibility doesn't mean do it yourself through personal power and awesomeness. It means using whatever resources are available to cause the desired thing to occur (unless the cost of doing so is deemed too high relative to the benefit). Institutions, norms and powerful people are valuable resources.
RobinZ:
I'm realizing that my attitude towards heroic responsibility is heavily driven by the anxiety-disorder perspective, but telling me that I am responsible for x doesn't tell me that I am allowed to delegate x to someone else, and - especially in contexts like Harry's decision (and Swimmer's decision in the OP) - doesn't tell me whether "those nominally responsible can't do x" or "those nominally responsible don't know that they should do x". Harry's idea of heroic responsibility led him to conflate these states of affairs re: McGonagall, and the point of advice is to make people do better, not to win philosophy arguments.

When I came up with the three-point plan I gave to you, I did not do so by asking, "what would be the best way to stop this bullying?" I did so by asking myself, "if McGonagall is the person best placed to stop bullying, but official school action might only drive bullying underground without stopping it, what should I do?" I asked myself this because subsidiarity includes something that heroic responsibility does not: the idea that some people are more responsible - better placed, better trained, better equipped, etc. - than others for any given problem, and that, unless the primary responsibility-holder cannot do the job, those farther away should give support instead of acting on their own.

(Actually, thinking about localism suggested a modification to my Step 1: brief the prefects on the situation in addition to briefing McGonagall. That said, I don't know if that would be a good idea in this case - again, I stopped reading twenty chapters before.)
dxu:
I agree with all of this except the part where you say that heroic responsibility does not include this. As wedrifid noted in the grandparent of this comment, heroic responsibility means using the resources available in order to achieve the desired result. In the context of HPMoR, Harry is responding to this remark by Hermione:

Again, as wedrifid noted above, this is step one and only step one. Taking that step alone, however, is not heroic responsibility. I agree that Harry's method of dealing with the situation was far from optimal; however, his general point I agree with completely. Here is his response:

Notice that nowhere in this definition is the notion of running to an authority figure precluded! Harry himself didn't consider it because he's used to occupying the mindset that "adults are useless". But if we ignore what Harry actually did and just look at what he said, I'm not seeing anything here that disagrees with anything you said. Perhaps I'm missing something. If so, could you elaborate?
RobinZ:
Neither Hermione nor Harry dispute that they have a responsibility to protect the victims of bullying. There may be people who would have denied that, but none of them are involved in the conversation. What they are arguing over is what their responsibility requires of them, not the existence of a responsibility. In other words, they are arguing over what to do.

Human beings are not perfect Bayesian calculators. When you present a human being with criteria for success, they do not proceed to optimize perfectly over the universe of all possible strategies. The task "write a poem" is less constrained than the task "write an Elizabethan sonnet", and in all likelihood the best poem is not an Elizabethan sonnet, but that doesn't mean that you will get a better poem out of a sixth-grader by asking for any poem than by giving them something to work with. The passage from Zen and the Art of Motorcycle Maintenance Eliezer Yudkowsky quoted back during the Overcoming Bias days, "Original Seeing", gave an example of this: the student couldn't think of anything to say in a five-hundred word essay about the United States, Bozeman, or the main street of Bozeman, but produced a five-thousand word essay about the front facade of the Opera House.

Therefore, when I evaluate "heroic responsibility", I do not evaluate it as a proposition which is either true or false, but as a meme which either produces superior or inferior results - I judge it by instrumental, not epistemic, standards. Looking at the example in the fanfic and the example in the OP, as a means to inspire superior strategic behavior, it sucks. It tells people to work harder, not smarter. It tells people to fix things, but it doesn't tell them how to fix things - and if you tell a human being (as opposed to a perfect Bayesian calculator) to fix something, it sounds like you're telling them to fix it themselves because that is what it sounds like from a literary perspective. "You've got to get the job done no matter what"
dxu:
That's the part I'm not getting. All Harry is saying is that you should consider yourself responsible for the actions you take, and that delegating that responsibility to someone else isn't a good idea. Delegating responsibility, however, is not the same as delegating tasks. Delegating a particular task to someone else might well be the correct action in some contexts, but you're not supposed to use that as an excuse to say, "Because I delegated the task of handling this situation to someone else, I am no longer responsible for the outcome of this situation."

This advice doesn't tell people how to fix things, true, but that's not the point--it tells people how to get into the right mindset to fix things. In other words, it's not object-level advice; it's meta-level advice, and obviously if you treat it as the former instead of the latter you're going to come to the conclusion that it sucks. Sometimes, to solve a problem, you have to work harder. Other times, you have to work smarter. Sometimes, you have to do both. "Heroic responsibility" isn't saying anything that contradicts that.

In the context of the conversation in HPMoR, I do not agree with either Hermione or Harry; both of them are overlooking a lot of things. But those are object-level considerations. Once you look at the bigger picture--the level on which Harry's advice about heroic responsibility actually applies--I don't think you'll find him saying anything that runs counter to what you're saying. If anything, I'd say he's actually agreeing with you!

Humans are not perfectly rational agents--far from it. System 1 often takes precedence over System 2. Sometimes, to get people going, you need to re-frame the situation in a way that makes both systems "get it". The virtue of "heroic responsibility", i.e. "no matter what happens, you should consider yourself responsible", seems like a good way to get that across.
RobinZ:
s/work harder, not smarter/get more work done, not how to get more work done/

Why do you believe this to be true?
dxu:
That's an interesting question. I'll try to answer it here. This seems to imply that no matter what happens, you should hold yourself responsible in the end. If you take a randomly selected person, which of the following two cases do you think will be more likely to cause that person to think really hard about how to solve a problem?

1. They are told to solve the problem.
2. They are told that they must solve the problem, and if they fail for any reason, it's their fault.

Personally, I would find the second case far more pressing and far more likely to cause me to actually think, rather than just take the minimum number of steps required of me in order to fulfill the "role" of a problem-solver, and I suspect that this would be true of many other people here as well. Certainly I would imagine it's true of many effective altruists, for instance. It's possible I'm committing a typical mind fallacy here, but I don't think so.

On the other hand, you yourself have said that your attitude toward this whole thing is heavily driven by the fact that you have an anxiety disorder, and if that's the case, then I agree that blaming yourself is entirely the wrong way to go about doing things. That being said, the whole point of having something called "heroic responsibility" is to get people to actually put in some effort as opposed to just playing the role of someone who's perceived as putting in effort. If you are able to do that without resorting to holding yourself responsible for the outcomes of situations, then by all means continue to do so. However, I would be hesitant to label advice intended to motivate and galvanize as "useless", especially when using evidence taken from a subset of all people (those with anxiety disorders) to make a general claim (the notion of "heroic responsibility" is useless).
RobinZ:
I think I see what you're getting at. If I understand you rightly, what "heroic responsibility" is intended to affect is the behavior of people such as [trigger warning: child abuse, rape] Mike McQueary during the Penn State child sex abuse scandal, who stumbled upon Sandusky in the act, reported it to his superiors (and, possibly, the police), and failed to take further action when nothing significant came of it. [/trigger warning] McQueary followed the 'proper' procedure, but he should not have relied upon it being sufficient to do the job. He had sufficient firsthand evidence to justify much more dramatic action than what he did. Given that, I can see why you object to my "useless".

But when I consider the case above, I think what McQueary was lacking was the same thing that Hermione was lacking in HPMoR: a sense of when the system might fail. Most of the time, it's better to trust the system than it is to trust your ability to outthink the system. The system usually has access to much, much more information than you do; the system usually has people with much, much better training than you have; the system usually has resources that are much, much more abundant than you can draw on. In the vast majority of situations I would expect McQueary or Hermione to encounter - defective equipment, scheduling conflicts, truancy, etc. - I think they would do far worse by taking matters into their own hands than by calling upon the system to handle it. In all likelihood, prior to the events in question, their experiences all supported the idea that the system is sound.

So what they needed to know was not that they were somehow more responsible to those in the line of fire than they previously realized, but that in these particular cases they should not trust the system. Both of them had access to enough data to draw that conclusion*, but they did not. If they had, you would not need to tell them that they had a responsibility. Any decent human being would feel that immediately...
dxu:
All right, cool. I think that dissolves most of our disagreement.
RobinZ:
Glad to hear it. :)
Kenny:
Again, you're right about the advice being poor – in the way you mention – but I also think it's great advice if you consider its target: the idea that the consequences are irrelevant if you've done the 'right' thing. If you've done the 'right' thing but the consequences are still bad, then you should probably reconsider what you're doing. When aiming at this target, 'heroic responsibility' is just the additional responsibility of considering whether the 'right' thing to do is really right (i.e. will really work).

... And now that I'm thinking about this heroic responsibility idea again, I feel a little more strongly how it's a trap – it is. Nothing can save you from potential devastation at the loss of something or someone important to you. Simply shouldering responsibility for everything you care about won't actually help. It's definitely a practical necessity that groups of people carefully divide and delegate important responsibilities. But even that's not enough! Nothing's enough. So we can't and shouldn't be content with the responsibilities we're expected to meet.

I subscribe to the idea that virtue ethics is how humans should generally implement good (ha) consequentialist ethics. But we can't escape the fact that no amount of Virtue is a complete and perfect means of achieving all our desired ends! We're responsible for which virtues we hold as much as we are of learning and practicing them.
RobinZ:
You are analyzing "heroic responsibility" as a philosophical construct. I am analyzing it as [an ideological mantra]. Considering the story, there's no reason for Harry to have meant it as the former, given that it is entirely redundant with the pre-existing philosophical construct of consequentialism, and every reason for him to have meant it as the latter, given that it explains why he must act differently than Hermione proposes. [Note: the phrase "an ideological mantra" appears here because I'm not sure what phrase should appear here. Let me know if what I mean requires elaboration.]
Kenny:
I think you might be over-analyzing the story; which is fine actually, as I'm enjoying doing the same. I have no evidence that Eliezer considered it so, but I just think Harry was explaining consequentialism to Hermione, without introducing it as a term. I'm unsure if it's connected in any obvious way, but to me the quoted conversation between Harry and Hermione is reminiscent of other conversations between the two characters about heroism generally. In that context, it's obviously a poor 'ideological mantra' as it was targeted towards Hermione. Given what I remember of the story, it worked pretty well for her.
RobinZ:
I confess, it would make sense to me if Harry was unfamiliar with metaethics and his speech about "heroic responsibility" was an example of him reinventing the idea. If that is the case, it would explain why his presentation is as sloppy as it is.
wedrifid:
Surprisingly, so is mine, yet we've arrived at entirely different philosophical conclusions. Perfectionistic, intelligent idealists with visceral aversions to injustice walk a fine line when it comes to managing anxiety and the potential for either burnout or helpless existential despair. To remain sane and effectively harness my passion and energy I had to learn a few critical lessons:

* Over-responsibility is not 'responsible'. It is right there next to 'completely negligent' inside the class 'irresponsible'.
* Trusting that if you do what the proximate social institution suggests you 'should' do then it will take care of problems is absurd. Those cursed with either weaker than normal hypocrisy skills or otherwise lacking the privilege to maintain a sheltered existence will quickly become distressed from constant disappointment.
* For all that the local social institutions fall drastically short of ideals - and even fall short of what we are supposed to pretend to believe of them - they are still what happens to be present in the universe that is, and so are a relevant source of power. Finding ways to get what you want (for yourself or others) by using the system is a highly useful skill.
* You do not (necessarily) need to fix the system in order to fix a problem that is important to you. You also don't (necessarily) need to subvert it.

'Hermione' style 'responsibility' would be a recipe for insanity if I chose to keep it. I had to abandon it at about the same age she is in the story. It is based on premises that just don't hold in this universe. 'Responsibility' of the kind you can tell others they have is almost always fundamentally different in kind to the 'responsibility' word as used in 'heroic responsibility'. It's a difference that results in frequent accidental equivocation and accidental miscommunication across inferential distances. This is one rather large problem with 'heroic responsibility' as a jargon term. Those who have something to learn about...
-1RobinZ
I may have addressed the bulk of what you're getting at in another comment; the short form of my reply is, "In the cases 'heroic responsibility' is supposed to address, inaction rarely comes because an individual does not feel responsible, but because they don't know when the system may fail and don't know what to do when it might."
0wedrifid
Short form reply: That seems false. Perhaps you have a different notion of precisely what heroic responsibility is supposed to address?
-1RobinZ
Is the long form also unclear? If so, could you elaborate on why it doesn't make sense?
3Kenny
Your mention of anxiety (disorders) reminds me of Yvain's general point that lots of advice is really terrible for at least some people. As I read HPMoR (and I've read all of it), a lot of the reason why Harry specifically distrusts the relevant authority figures is that they are routinely surprised by the various horrible events that happen, and seem unwilling to accept responsibility for anything they don't already expect. McGonagall definitely improves on this point in the story, though. In the story, the advice Harry gives Hermione seems appropriate. Your example would be much better for anyone inclined to anxiety about satisfying arbitrary constraints (i.e. being responsible for arbitrary outcomes) – and probably for anyone, period, if for no other reason than that it's easier to edit an existing idea than to generate an entirely new one. @wedrifid's correct that your plan is better than Harry's in the story, but I think Harry's point – and it's one I agree with – is that even having a plan, and following it, doesn't absolve one – to oneself, if no one else – of coming up with a better plan, or improvising, or delegating some or all of the plan, if that's what's needed to stop kids from being bullied or an evil villain from destroying the world (or whatever). Another way to consider the conversation in the story: Hermione initially represents virtue ethics; Harry counters with a rendition of consequentialist ethics.
0RobinZ
If I believed you to be a virtue ethicist, I might say that you must be mindful of your audience when dispensing advice. If I believed you to be a deontologist, I might say that you should tailor your advice to the needs of the listener. Believing you to be a consequentialist, I will say that advice is only good if it produces better outcomes than the alternatives. Of course, you know this. So why do you argue that Harry's speech about heroic responsibility is good advice?
0Kenny
It seems like you've already answered your own question!
0RobinZ
No, I haven't answered my own question. In what way was Harry's monologue about consequentialist ethics superior to telling Hermione why McGonagall couldn't be counted upon?
3Philip_W
HPJEV isn't supposed to be a perfect executor of his own advice and statements. I would say that it's not the concept of heroic responsibility that's at fault, but his own self-serving reasoning, which he applies to justify breaking the rules and doing something cool. In doing so, he fails his heroic responsibility to the over 100 expected people whose lives he might have saved by spending his time more effectively (by doing research which results in an earlier friendly magitech singularity, and buying his warm fuzzies separately by learning the spell for transfiguring a bunch of kittens or something), and HPJEV would feel appropriately bad about his choices if he came to that realisation.

Depending on what you mean by "blame", I would either disagree with this statement, or I would say that heroic responsibility would disapprove of you blaming yourself too. Under heroic responsibility, you don't have time to feel sorry for yourself that you failed to prevent something, regardless of how realistically you could have.

Where do you get the idea of "requirements" from? When a shepherd is considered responsible for his flock, is he not responsible for every sheep? And if we learn that wolves will surely eat a dozen over the coming year, does that make him any less responsible for any one of his sheep? IMO no: he should try just as hard to save the third sheep as the fifth, even if that means leaving the third to die when it's wounded so that sheep 4-10 don't get eaten because the flock would have been traveling more slowly.

It is a basic fact of utilitarianism that you can't score a perfect win. Even discounting the universe which is legitimately out of your control, you will screw up sometimes as a point of statistical fact. But that does not make the utilons you could not harvest any less valuable than the ones you could have. Heroic responsibility is the emotional equivalent of this fact.

That sounds wise, but is it actually true? Do you actually need that serenity/acceptance part? T
5wedrifid
Yes, I do. Most other humans do too, and it's a sufficiently difficult and easy-to-neglect skill that it is well worth preserving as 'wisdom'. Non-human intelligences will likely not have 'serenity' or 'acceptance', but will need some similar form of the generalised trait of not wasting excessive amounts of computational resources exploring parts of solution space that have insufficient probability of significant improvement.
1Philip_W
In that case, I'm confused about what serenity/acceptance entails, why you seem to believe heroic responsibility to be incongruent with it, and why it doesn't just fall under "courage" and "wisdom" (as the emotional fortitude to withstand the inevitable imperfection/partial failure and accurate beliefs respectively). Not wasting (computational) resources on efforts with low expected utility is part of your responsibility to maximise utility, and I don't see a reason to have a difference between things I "can't change" and things I might be able to change but which are simply suboptimal.
2wedrifid
A human psychological experience and tool that can approximately be described by referring to allocating attention and resources efficiently in the face of some adverse and difficult-to-influence circumstance. I don't; I suspect you are confusing me with someone else. Yes. Yet for some reason merely seeing an equation and believing it must be maximised is an insufficient guide to optimally managing the human machinery we inhabit. We have to learn other things - including things which can be derived from the equation - in detail, and practice them repetitively. The Virtue of Narrowness may help you. I have different names for "DDR RAM" and "a replacement battery for my Sony Z2 android" even though I can see how they both relate to computers.
1Philip_W
Right, I thought you were RobinZ. By the context, it sounds like he does consider serenity incongruous with heroic responsibility: With my (rhetorical) question, I expressed doubt towards his interpretation of the phrase, not (necessarily) all reasonable interpretations of it. For me at least, saying something "can't be changed" roughly means modelling something as P(change)=0. This may be fine as a local heuristic when there are significantly larger expected utilities on the line to work with, but without a subject of comparison it seems inappropriate, and I would blame it for certain error modes, like ignoring theories because they have been labeled impossible at some point. To approach it another way, I would be fine with just adding adjectives to "extremely ridiculously [...] absurdly unfathomably unlikely" to satisfy the requirements of narrowness, rather than just saying something can't be done. I would call this "level-headedness". By my intuition, serenity is a specific calm emotional state, which is not required to make good decisions, though it may help. My dataset luckily isn't large, but I have been able to get by on "numb" pretty well in the few relevant cases.
0wedrifid
I agree. I downvoted RobinZ's comment and ignored it because the confusion about what heroic responsibility means was too fundamental, annoyingly difficult to correct, and had incidentally already been argued far more eloquently elsewhere in the thread. In contrast, I fundamentally agree with most of what you have said in this thread, so the disagreement on one conclusion regarding a principle of rationality and psychology is more potentially interesting.

I agree with your rejection of the whole paragraph. My objection seems to be directed at the confusion about heroic (and arguably mundane) responsibility rather than the serenity/wisdom heuristic. I can empathize with being uncomfortable with colloquial expressions which deviate from literal meaning. I can also see some value in making a stand against that kind of misuse, due to the way such framing can influence our thinking. Overconfident or premature ruling-out of possibilities is something humans tend to be biased towards. Whatever you call it, it sounds like you have the necessary heuristics in place to avoid the failure modes the wisdom quote is used to prevent (avoiding over-responsibility and avoiding pointless worry loops).

The phrasing "the X to" intuitively brings to my mind a relative state rather than an absolute one. That is, getting to some Zen endpoint state of inner peace or tranquillity is not needed, but there are often times when moving towards that state to a sufficient degree will allow much more effective action. i.e. it translates to "whatever minimum amount of acceptance of reality and calmness is needed to allow me to correctly account for opportunity costs and decide according to the bigger picture".

That can work. If used too much it sometimes seems to correlate with developing pesky emotional associations (like 'Ugh fields') with related stimuli, but that obviously depends on which emotional cognitive processes result in the 'numbness' and so forth.
-1RobinZ
I would rather you tell me that I am misunderstanding something than downvote silently. My prior probability distribution over reasons for the -1 had "I disagreed with Eliezer Yudkowsky and he has rabid fans" orders of magnitude more likely than "I made a category error reading the fanfic and now we're talking past each other", and a few words from you could have reversed that ratio.
1wedrifid
Thank you for your feedback. I usually ration my explicit disagreement with people on the internet, but your replies prompt me to add "RobinZ" to the list of people worth actively engaging with.
0RobinZ
...huh. I'm glad to have been of service, but that's not really what I was going for. I meant that silent downvoting for the kind of confusion you diagnosed in me is counterproductive generally - "You keep using that word. I do not think it means what you think it means" is not a hypothesis that springs naturally to mind. The same downvote paired with a comment saying: ...would have been more like what I wanted to encourage.
-5wedrifid
3Lumifer
Medical expert systems are getting pretty good, I don't see why you wouldn't just jump straight to an auto-updated list of most likely diagnoses (generated by a narrow AI) given the current list of symptoms and test results.
1hyporational
Most patient cases are so easy and common that filling out forms for an AI would greatly slow the system down. An AI could be useful when the diagnosis isn't clear, however. A sufficiently smart AI could pick up the relevant data from the notes, but usually the picture that the diagnostician has in their mind is much more complete than any notes they make. Note that I'm looking at this from a perspective where implementing theoretically smart systems has usually done nothing but increase my workload.
1Lumifer
I am assuming you're not filling out any forms specially for the AI -- just that the record-keeping system is computerized and the AI has access to it. In trivial cases the AI won't have much data (e.g. no fever, normal blood pressure, complains of a runny nose and cough, that's it) and its diagnoses will be low-credence, but that's fine: you as a doctor won't need its assistance in those cases.
3hyporational
The AI would need to know natural language to be of any use, or else it will miss most of the relevant data. I suppose Watson is pretty close to that, and I have read that it's being tested in some hospitals; I wonder how this is implemented. I suspect doctors carry a lot more data in their heads than is readily apparent, and much of this data will never make it into their notes and thus into the computerized records. Taking a history, evaluating its reliability, and using the senses to observe the patient are things machines won't be able to do for quite some time. On top of this, I now roughly know hundreds of patients that I will see time and again, and this helps immensely when judging their most acute presentations. By this I don't mean I know them as lists of symptoms: I know their personalities too, and how this affects how they tell their stories and how seriously they take their symptoms, from minor complaints to major problems. I could never take the approach of jumping from hospital to hospital now that I've experienced this first hand.
3Vaniver
This is the reason Watson is a game-changer, despite expert prediction systems (using linear regression!) performing at the level of expert humans for ~50 years. Doctors may carry a lot of information in their heads, but I've yet to meet a person that's able to mentally invert matrices of non-trivial size, which helps quite a bit with determining the underlying structure of the data and how best to use it. I think machines have several comparative advantages here. An AI with basic conversational functions can take a history, and is better at evaluating some parts of the reliability and worse at others. It can compare with 'other physicians' more easily, or check public records, but probably can't determine whether or not it's a coherent narrative as easily ("What is Toronto?"). A webcam can measure pulse rate just by looking, and so I suspect it'll be about as good at detecting deflection and lying as the average doctor. (I don't remember seeing doctors as being particularly good at lie-detection, but it's been a while since I've read any of the lie-detection literature.) Note that if the AI is sufficiently broadly used (here I'm imagining, say, the NHS in the UK using just one) then everyone will always have access to a doctor that's known them as long as they've been in the system.
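As a concrete sketch of the kind of linear prediction system mentioned above – every number below is invented for illustration, and "severity" is a hypothetical stand-in target, not any real clinical outcome – ordinary least squares on a few measurements comes down to the normal equations, i.e. exactly the matrix inversion nobody does in their head:

```python
import numpy as np

# Rows are patients; columns are a handful of routine measurements,
# e.g. age, temperature (C), resting pulse, systolic BP.
# All values are made up for illustration.
X = np.array([
    [34, 36.8,  62, 118],
    [61, 38.9,  95, 142],
    [47, 37.2,  71, 130],
    [72, 39.4, 104, 155],
    [25, 36.6,  58, 112],
    [55, 38.1,  88, 138],
], dtype=float)
y = np.array([0.10, 0.80, 0.30, 0.90, 0.05, 0.55])  # hypothetical severity scores

# Add an intercept column, then solve the normal equations:
# beta = (A'A)^-1 A'y. (np.linalg.lstsq would be the numerically
# safer choice; the point is just that a matrix gets inverted.)
A = np.column_stack([np.ones(len(X)), X])
beta = np.linalg.solve(A.T @ A, A.T @ y)

new_patient = np.array([1, 50, 37.9, 85, 135])  # intercept + measurements
print("predicted severity:", new_patient @ beta)
```

The model's entire "judgment" is five fitted coefficients, which is part of why such systems could match expert performance decades before anything like Watson existed.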
5hyporational
Is this because using them is incredibly slow, or something else? Lies make no sense medically, or make too much sense. Once I've spotted a few lies, many of them fit a stereotypical pattern many patients use, even if there aren't any other clues; I don't need to rely on body language much. People also misremember things, or have a helpful relative misremember things for them, or have home care providers feed them their clueless preliminary diagnoses. People who don't remember fill in the gap with something they think is plausible. Some people are also psychotic, or don't even remember what year it is or why they came in the first place. Some people treat every little ache like it's the end of the world, and some don't seem to care if their leg's missing. I think even an independent AI could make up for many of its faults simply by being more accurate at interpreting the records and current test results. I hope that when an AI can do my job I don't need a job anymore :)
1Vaniver
My understanding is that the ~4 measurements the system would use as inputs were typically measured by the doctor, and by the time the doctor had collected the data they had simultaneously come up with their own diagnosis. Typing the observations into the computer to get the same level of accuracy (or a few extra percentage points) rarely seemed worth it, and turning the doctor from a diagnostician to a tech was, to put it lightly, not popular with doctors. :P There are other arguments which would take a long time to go into. One is "but what about X?", where the linear regression wouldn't take into account some other variable that the human could take into account, and so the human would want an override option. But, as one might expect, the only way for the regression to outperform the human is for the regression to be right more often than not when the two of them disagree, and humans are unfortunately not very good at determining whether or not the case in front of them is a special case where an override will increase accuracy or a normal case where an override will decrease accuracy. Here's probably the best place to start if interested in reading more.
1Lumifer
A rather limited subset of natural language – I think it's a surmountable problem. All true, which is why I think a well-designed diagnostic AI will work in partnership with a doctor instead of replacing him.
1hyporational
I agree with you, but I fear that makes for a boring conversation :) The language is already relatively standardized and I suppose you could standardize it more to make it easier for the AI. I suspect any attempt to mold the system for an AI would meet heavy resistance however.
1RobinZ
Largely for the same reasons that weather forecasting still involves human meteorologists and the draft in baseball still includes human scouts: a system that integrates both human and automated reasoning produces better outcomes, because human beings can see patterns a lot better than computers can. Also, we would be well-advised to avoid repeating the mistake made by the commercial-aviation industry, which seems to have fostered such extreme dependence on the automated system that many 'pilots' don't know how to fly a plane. A system which automates almost all diagnoses would do that.
-1Lumifer
I am not saying this narrow AI should be given direct control of IV drips :-/ I am saying that a doctor, when looking at a patient's chart, should be able to see what the expert system considers to be the most likely diagnoses and then the doctor can accept one, or ignore them all, or order more tests, or do whatever she wants. No, I don't think so because even if you rely on an automated diagnosis you still have to treat the patient.
1RobinZ
Even assuming that the machine would not be modified to give treatment recommendations, that wouldn't change the effect I'm concerned about. If the doctor is accustomed to the machine giving the correct diagnosis for every patient, they'll stop remembering how to diagnose disease and instead remember how to use the machine. It's called "transactive memory". I'm not arguing against a machine with a button on it that says, "Search for conditions matching recorded symptoms". I'm not arguing against a machine that has automated alerts about certain low-probability risks - if there was a box that noted the conjunction of "from Liberia" and "temperature spiking to 103 Fahrenheit" in Thomas Eric Duncan during his first hospital visit, there'd probably only be one confirmed case of ebola in the US instead of three, and Duncan might be alive today. But no automated system can be perfectly reliable, and I want doctors who are accustomed to doing the job themselves on the case whenever the system spits out, "No diagnosis found".
3Lumifer
You are using the wrong yardstick. Ain't no thing is perfectly reliable. What matters is whether an automated system will be more reliable than the alternative -- human doctors. Commercial aviation has a pretty good safety record while relying on autopilots. Are you quite sure that without the autopilot the safety record would be better? And why do you think a doctor will do better in this case?
3Swimmer963 (Miranda Dixon-Luinenburg)
I was going to say "doctors don't have the option of not picking the diagnosis", but that's actually not true; they just don't have the option of not picking a treatment. I've had plenty of patients who were "symptom X not yet diagnosed", where the treatment is basically supportive: "don't let them die and try to notice if they get worse, while we figure this out." I suspect that often it never gets figured out; the patient gets better and they go home. (Less so in the ICU, because it's higher stakes and there's more of an attitude of "do ALL the tests!")
0EGI
They do, they call the problem "psychosomatic" and send you to therapy or give you some echinacea "to support your immune system" or prescribe "something homeopathic" or whatever... And in very rare cases especially honest doctors may even admit that they do not have any idea what to do.
-1RobinZ
Because the cases where the doctor is stumped are not uniformly the cases where the computer is stumped. The computer might be stumped because a programmer made a typo three weeks ago entering the list of symptoms for diphtheria, because a nurse recorded the patient's hiccups as coughs, because the patient is a professional athlete whose resting pulse should be three standard deviations slower than the mean ... a doctor won't be perfectly reliable either, but like a professional scout who can say, "His college batting average is .400 because there aren't many good curveball pitchers in the league this year", a doctor can detect low-prior confounding factors a lot faster than a computer can.
3Lumifer
Well, let's imagine a system which actually is -- and that might be a stretch -- intelligently designed. This means it doesn't say "I diagnose this patient with X". It says "Here is a list of conditions along with their probabilities". It also doesn't say "No diagnosis found" -- it says "Here's a list of conditions along with their probabilities, it's just that the top 20 conditions all have probabilities between 2% and 6%". It also says things like "The best way to make the diagnosis more specific would be to run test A, then test B, and if it came back in this particular range, then test C". A doctor might ask it "What about disease Y?" and the expert system will answer "Its probability is such-and-such. It's not zero because of symptoms Q and P, but it's not high because test A came back negative and test B showed results in this range. If you want to get more certain with respect to disease Y, use test C."

And there would probably be a button which says "Explain"; pressing it will show precisely what leads to the probability of disease X being what it is, and the doctor should be able to poke around it and say things like "What happens if we change these coughs to hiccups?"

An intelligently designed expert system often does not replace the specialist -- it supports her, allows her to interact with it, ask questions, refine queries, etc. If you have a patient with multiple nonspecific symptoms who takes a dozen different medications every day, a doctor cannot properly evaluate all the probabilities and interactions in her head. But an expert system can. It works best as a teammate of a human, not as something which just tells her the answer.
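To make that interaction pattern concrete, here is a toy sketch – the diseases, priors, and likelihoods are all invented, and naive Bayes is chosen only because it's the simplest possible model, not because a real system would be built this way:

```python
from math import prod

# Invented numbers throughout -- nothing here is medically meaningful.
PRIORS = {"flu": 0.05, "strep": 0.02, "mono": 0.01}
# P(finding present | disease)
LIKELIHOOD = {
    "flu":   {"fever": 0.9, "cough": 0.8, "sore_throat": 0.4},
    "strep": {"fever": 0.7, "cough": 0.1, "sore_throat": 0.95},
    "mono":  {"fever": 0.8, "cough": 0.2, "sore_throat": 0.8},
}

def differential(findings):
    """Rank diseases by posterior probability given observed findings,
    a dict like {"fever": True, "cough": False}. Normalising over just
    these three diseases assumes exactly one of them is present."""
    scores = {}
    for disease, prior in PRIORS.items():
        likes = [
            LIKELIHOOD[disease][f] if present else 1 - LIKELIHOOD[disease][f]
            for f, present in findings.items()
        ]
        scores[disease] = prior * prod(likes)
    total = sum(scores.values())
    return sorted(((d, p / total) for d, p in scores.items()),
                  key=lambda pair: -pair[1])

obs = {"fever": True, "cough": True, "sore_throat": False}
print(differential(obs))

# "What happens if we change these coughs to hiccups?" is just a
# re-run with the finding edited (hiccups aren't modelled here):
print(differential({**obs, "cough": False}))
```

An "Explain" button then amounts to printing the per-finding likelihood terms that went into each score.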
-1RobinZ
Us? I'm a mechanical engineer. I haven't even read The Checklist Manifesto. I am manifestly unqualified either to design a user interface or to design a system for automated diagnosis of disease - and, as decades of professional failure have shown, neither of these is a task to be lightly ventured upon by dilettantes. The possible errors are simply too numerous and subtle for me to be assured of avoiding them. Case in point: prior to reading that article about Air France Flight 447, it never occurred to me that automation had allowed some pilots to completely forget how to fly a plane. The details of automation are much less important to me than the ability of people like Swimmer963 to be a part of the decision-making process. Their position grants them a much better view of what's going on with one particular patient than a doctor who reads a chart once a day or a computer programmer who writes software intended to read billions of charts over its operational lifespan. The system they are incorporated in should take advantage of that.

As I interpret it, heroic responsibility doesn't mean not accepting roles; it means not accepting roles by default.

2Kenny
i.e. not accepting roles when doing so is worse than the alternative.

Heroic responsibility always struck me as the kind of thing that a lot of people probably have too little of, but also like the kind of thing that will just make you a miserable wreck if you take it too seriously. After all, interpreted literally, it means that every person dying of a terrible disease, every war, every case of domestic violence, etc. etc. happening in the world, now or in the future, is because you didn't stop it.

The concept is useful to have as a way to remind ourselves that often supposed "impossibles" just mean we're unwilling... (read more)

There may be a Dunning-Kruger effect at work, though...

I don't know about the medical context, but in the software context, the "heroically responsible" developer is the new guy who is waxing poetic about switching to another programming language (for no reason, and entirely unaware of all the bindings that would need to be implemented), who wants others to do unit tests in situations where they're inapplicable, or to do some sort of agile development where a more formal process with tests is necessary, and who fails to recognize the unit testing already in place, etc. ... (read more)

6Swimmer963 (Miranda Dixon-Luinenburg)
This may indeed be a failure mode that new people on teams are prone to, and maybe even something that new people on teams are especially prone to if they've read HPMOR, but I don't think it's the same as the thing I'm talking about–and in particular this doesn't sound like me, as a new nurse who's read HPMOR. I think the analog in nursing would be the new grad who's carrying journal articles around everywhere, overconfident in their fresh-out-of-school knowledge, citing the new Best Practice Guidelines and nagging all the experienced nurses about not following them. Whereas I'm pretty much always underconfident: trying to watch how the experienced nurses do things and learn from them, asking for help a lot, and offering my help to everyone all the time. Which is probably annoying sometimes, but not in the same way. I think that there is a spirit of heroic responsibility that makes people genuinely stronger, which Eliezer is doing his best to describe in HPMOR, and what you described is very much not in the spirit of heroic responsibility.
3private_messaging
That's a bit of a self-contradictory statement, isn't it? (People can be unassertive but internally very overconfident, by the way.) So you have that patient, and you have your idea of the procedures that should have been done, and there's the doctor's, and in retrospect you think you were under-confident that your treatment plan was superior? What if, magically, you were in the position where you'd actually have to take charge? Where ordering a wrong procedure hurts the patient? It's my understanding that there's a very strong initial bias to order unnecessary procedures, one that takes years of experience to overcome.

I suspect it's one of those things that look very different from the inside and from the outside... None of those arrogant newbies would have seen themselves in my description (up until they wise up).

Also, your prototype here is the heroic responsibility for saving the human race, taken up by someone who neither completed formal education in the relevant subjects, nor (which would actually be better to see) produced actual working software products of relevance, nor did other things of such a nature that they can be evaluated for correctness in a way that's somewhat immune to rationalization. And the straightforwardly responsible thing to do is to practice on more rationalization-immune things, because the idea is that screwing up here has very bad consequences.

The other issue is that you are essentially thinking meat, and if the activation of the neurons used for responsibility is outside a specific range, things don't work right: performance is impaired, responsibility too is impaired, etc., whether the activation is too low or too high.

edit: to summarize with an analogy: driving a car without having passed a driving test is irresponsible, right? No matter how much you feel that you can drive the bus better than the person who's legally driving it, the responsible thing to do is to pass a driving test first. Now, the heroes, they don't need no stinking tests. They jump into

So you have that patient, and you have your idea of the procedures that should have been done, and there's the doctor's, and in retrospect you think you were under-confident that your treatment plan was superior?

I'm not sure that the doctor and I disagreed on that much. So we had this patient, who weighed 600 pounds and had all the chronic diseases that come with it, and he was having more and more trouble breathing–he was in heart failure, with water backing up into his lungs, basically. Which we were treating with diuretics, but he was already slowly going into kidney failure, and giving someone big doses of diuretics can push them into complete kidney failure, and can also make them deaf–so the doses we were giving him weren't doing anything, and we couldn't give him more. Normally it would have been an easy decision to intubate him and put him on a ventilator around Day 3, but at 600 pounds, with all that medical history, if we did that he'd end up in the hospital for six months, with a tracheotomy, all that. So the doctor had a good reason for wanting to delay the inevitable as long as possible. We were also both expecting that he would need dialysis sooner or later...but we cou... (read more)

3private_messaging
Well, from your description it may be that the doctor has less hyperbolic discounting (due to having worked longer), being more able to weigh the chance of avoiding intrusive procedures and long-term hospitalization, which carry huge risks as well as a huge amount of total pain over time.
0wedrifid
No, that is an entirely coherent claim for a person to make and not even a particularly implausible one.
2private_messaging
To say that you're underconfident is to say that you believe you're correct more often than you believe yourself to be correct. The claim of underconfidence is not a claim underconfident people tend to make. Underconfident people usually don't muster enough confidence about their tendency to be right to conclude that they're underconfident.
5gjm
It's self-contradictory only in the same way as "I believe a lot of false things" is. (Maybe a closer analogy: "I make a lot of mistakes.") In other words, it makes a general claim that conflicts with various (unspecified) particular beliefs one has from time to time.

I am generally underconfident. That is: if I look at how sure I am about things (measured by how I feel, what I say, and in some cases how willing I am to take risks based on those opinions), with hindsight it turns out that my confidence is generally too low. In some sense, recognizing this should automatically increase my confidence levels until they stop being too low -- but in practice my brain doesn't work that way. (I repeat: in some sense it should, and that's the only sense in which saying "I am generally underconfident" is self-contradictory.)

I make a lot of mistakes. That is: if I look at the various things I have from time to time believed to be true, with hindsight it turns out that quite often those beliefs are incorrect. It seems likely that I have a bunch of incorrect current beliefs, but of course I don't know which ones they are.

(Perhaps I've introduced a new inconsistency by saying both "I am generally underconfident" and "I make a lot of mistakes". As it happens, on the whole I think I haven't; in any case that's a red herring.)
-1private_messaging
Yes, that's why I said it was a bit self-contradictory. The point is, you've got to have two confidence levels involved that aren't consistent with each other, one being lower than the other.

I probably am going to leave nursing.

This makes me sad to hear. It sounds like you've been really enjoying it. And I think that those of us here on LW have benefited from your perspective as a nurse in many ways -- you've demonstrated its worth as a career choice, and challenged people's unwarranted assumptions.

This was really, really good for me to hear. I think permission to not be a hero was something I needed. (The following is told vaguely and with HP:MOR metaphors to avoid getting too personal.)

I had a friend who I tried really hard to help, in different ways at different times, but most of it relating to the same issue. I remember once spending several days thinking really hard about an imminently looming crisis, trying to find some creative way out, and eventually I did, but it was almost as bad an idea as using Hufflepuff bones to make weapons, so I di... (read more)

My $0.02: it matters whether I trust the system as a whole (for example, the hospital) to be doing good.

If I do, then if I'm going to be "heroically" responsible I'm obligated to take that into account and make sure my actions promote the functioning of the system as a whole, or at least don't impede it. Of course, that's a lot more difficult than just focusing on a particular bit of the environment that I can improve through my actions. But, well, the whole premise underlying "heroic" responsibility is that difficulty doesn't matter,... (read more)

You might be wrestling with a hard trade-off between wanting to do as much good as possible and wanting to fit in well with a respected peer group. Those are both good things to want, and it's not obvious to me that you can maximize both of them at the same time.

I have some thoughts on your concepts of "special snowflake" and "advice that doesn't generalize." I agree that you are not a special snowflake in the sense of being noticeably smarter, more virtuous, more disciplined, whatever than the other nurses on your shift. I'll concede t... (read more)

50Shmi

But what about the other nurses on my unit, the ones who are competent and motivated and curious and really care? Would familiarity with the concept of heroic responsibility help or hinder them in their work? Honestly, I predict that they would feel alienated, that they would assume I held a low opinion of them (which I don't, and I really don't want them to think that I do), and that they would flinch away and go back to the things that they were doing anyway, the role where they were comfortable–or that, if they did accept it, it would cause them to bur

... (read more)

Leaving aside the scale implied by the word "heroic", another word for "heroic responsibility" is "initiative". A frame of mind in which the thought, "I don't know how to solve this" is immediately followed not by "therefore I can do nothing" but by "therefore I will find a way."

I kind of predict that the results of installing heroic responsibility as a virtue, among average humans under average conditions, would be a) everyone stepping on everyone else’s toes, and b) 99% of them quitting a year later.

You are probably right. That would be a horrific lesson in the valley of bad rationality. I really do not want people to start actually acting on their beliefs and values. That makes things (literally) explode.

Someone’s going to be the Minister of Health for Canada, and they’re likely to be in a position where taking heroic responsibility for the Canadian health care system makes things better.

Maybe they are in such a position - but they are gears too. More powerful gears, but they are part of a machine that selects these gears by certain properties and puts them in places where they bear more load. By this analogy, I wouldn't want such a gear to spring out of place. It could disrupt the whole machine. At best you can hope that the gear spins a bit faster or slower than expected. But maybe the machine analogy is broken.

All of the discussion here has been based on the assumption that heroic responsibility is advocated by HPMOR as a fundamental moral virtue. But it is advocated by Harry Potter. Eliezer wrote somewhere about what in HPMOR can and what cannot be taken as the author's own views. I forget the exact criterion, but I'm sure it did not include "everything said by HP".

Heroic responsibility is a moral tool. That not everyone is able to use the tool, that the tool should not always be employed, that the tool exacts its own costs: these are all true. The to... (read more)

4hargup
This is mentioned at the beginning of the book: "please keep in mind that, beyond the realm of science, the views of the characters may not be those of the author. Not everything the protagonist does is a lesson in wisdom, and advice offered by darker characters may be untrustworthy or dangerously double-edged."
4Lumifer
You roll for it :-P
2Emily
I think the newer buzzword that means roughly the same thing might be "proactivity"?

I'm wary of advice that doesn't generalize.

I'm wary of advice that does claim to generalize. Giving good advice is a hard problem, partly because it's so context-specific. Yes, there are general principles, but there are tons of exceptions, and even quite similar situations can trigger these exceptions.

Kant got into this kind of problem with (the first formulation of) the categorical imperative. There are many things that are desirable if some people, but not everybody, does them -- say, learning any specific skill or filling a particular social functi... (read more)

20Cyan

FWIW, in my estimation your special-snowflake-nature is somewhere between "more than slightly, less than somewhat" and "potential world-beater". Those are wide limits, but they exclude zero.

4XFrequentist
Ooh ooh, do mine!
4Cyan
Same special-snowflake level credible limits, but for different reasons. Swimmer963 has an innate drive to seek out and destroy (whatever she judges to be) her personal inadequacies. She wasn't very strategic about it in teenager-hood, but now she has the tools to wield it like a scalpel in the hands of a skilled surgeon. Since she seems to have decided that a standard NPC job is not for her, I predict she'll become a PC shortly. You're already a PC; your strengths are a refusal to tolerate mediocrity in the long-term (or let us say, in the "indefinite" term, in multiple senses) and your vision for controlling and eradicating disease.

One possible thing you could do while being a nurse is starting a blog about problems nurses face. A blog where other nurses could also post anonymously (but which you would moderate to remove the crazy stuff).

There is a chance that the new Minister of Health would read it. Technically, you could just send them a hyperlink, since the articles will already be there.

3NancyLebovitz
And possibly ending up in Atul Gawande's position, which I hope is doing good in addition to what he could do as an individual doctor. 15 nursing blogs. I recommend skipping the introduction and going straight to the links. I don't know if any of them are from a rationalist angle.

There's an interesting concept Adam Grant introduced to me in Originals: the "risk portfolio". For him, people who are wildly creative and take risks in one domain compensate by being extra cautious in another domain ("drive carefully on your way to the casino"). The same might apply for heroic responsibility: continue working as a cog in the system on Mondays, write well-written thought-provoking posts on LessWrong (where the median person wants to take over the world) on Sundays. 

I think you're wrong about how the other nurses on your unit, and other people generally, would react to the idea of 'heroic responsibility', depending on how you were to bring it up and present it.

The key part of the quote with which I would expect lots of people to agree is:

“You can’t think as if just following the rules means you’ve done your duty."

I'd expect everyone to have encountered an incompetent or ineffective authority figure. I'd also expect nurses to routinely help each other out, and help their patients, by taking actions that aren'... (read more)

I kind of feel that heroic responsibility works better in situations where small individuals have the potential to make a large difference.

For example, in the world of HPMoR, it makes sense for one person to have a sort of heroic responsibility, because a sufficiently powerful wizard can actually make waves, can actually play a keystone role in the shaping of events.

On the other hand, take an imaginary planet where all the inhabitants are of equal size, shape and intelligence and there are well over a zillion inhabitants. On this planet, it is very hard t... (read more)

In short, I don't see any "philosophical" points to discuss here, just practical ones. I apologize if I'm being too literal and missing out on something. Please let me know if I am.

All I got from the idea of heroic responsibility is, "Delegating responsibility to authorities is a heuristic. Heuristics sacrifice accuracy for speed, and will thus sometimes be inaccurate. People tend to apply this heuristic way too much in real life without thinking about whether or not doing so makes sense."


Concrete questions:

  • How should a nurse act i
... (read more)

It seems you have just closed the middle road.

1private_messaging
I don't think it can be closed. I mean, when one derives that level of heroic smugness from something as little as a few lightbulbs... a lot of people add a lot of lights just because they like it brighter. Which is ultimately what it boils down to, if you go with a qualitative "more light is better for mood".

I think the problem is mixing heroic responsibility with the idea that responsibility is something you can consistently fulfil. You can fulfil your responsibility as a nurse. Just do your job. Heroic responsibility isn't like that. You will let someone die about 1.8 times per second. Just save as many as you can. And to do that, start with the ones that are easiest to save. GiveWell has some tips for that.
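For anyone who wants to check that figure, it's just the global death rate divided into seconds – a back-of-envelope sketch, with the rough worldwide total of ~56 million deaths per year being an assumption (approximately right for when this was written):

```python
# Back-of-envelope check of the "1.8 deaths per second" figure.
deaths_per_year = 56e6                     # rough worldwide total (assumed)
seconds_per_year = 365.25 * 24 * 3600      # about 31.6 million
print(deaths_per_year / seconds_per_year)  # roughly 1.8
```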

I think for most things, it's important to have a specific person in charge, and have that person be responsible for the success of the thing as a whole. Having someone in charge makes sure there's a coherent vision in one person, makes a specific person accountable, and helps make sure nothing falls through the cracks because it was "someone else's job". When you're in charge, everything is your job.

If no one else has taken charge, stepping up yourself can be a good idea. In my software job, I often feel this way when no one is really championin... (read more)

I think heroic responsibility is essentially a response to being in a situation where not enough people are both competent at and willing to make changes to improve things. The authority figures are mad or untrustworthy, so a person has to figure out their own way to make the right things happen and put effective methods in place. It is particularly true of HPMOR where Harry plays the role of Only Sane Man. So far as I can tell, we're in a similar situation in real life at the minute: we have insufficient highly sane people taking heroic responsibility. If... (read more)

9Shmi
To me Snowden is one of the best examples of taking heroic responsibility: all the way to potentially breaking the laws and getting into harm's way to make the world a better place.
9Lumifer
I don't know about that. Let me offer you an example of a not-mad person who took heroic responsibility: Lenin. Generally speaking, it's all very tightly tied to values. If you share the values, the person "takes heroic responsibility"; if you don't share the values, the person is just a fanatic.
0Jackercrack
You say he's not-mad, but isn't he the spitting image of the revolutionary that power corrupts? Wasn't Communism the archetype of the affective death spiral? It would appear he was likely suffering from syphilis, a disease that can cause confusion, dementia and memory problems. Anyway, isn't that an ad hominem argument?
5wedrifid
No. It is an argument which happens to use the perceived negative consequences of an individual's actions as a premise. Use of 'ad hominem!' to reject a claim only (legitimately) applies when there is a fallacy of relevance that happens to be a personal attack that doesn't support the conclusion. It does not apply whenever an argument happens to contain content that reflects badly on an individual.
3Lumifer
Lenin in the 1920s is not relevant to this argument; I would say he "took heroic responsibility" around, say, 1915-1918, and it looks to me that it would be hard to make the argument that he was already corrupted by power at that point. But if you don't like this example I'm sure I can find others. The underlying point is rather simple -- imagine "enough sane people taking heroic responsibility" with these people having a value system you find unacceptable...
0Jackercrack
I think we're using different meanings of the word sane. See, I hold sanity to a rather high standard, one which excludes a huge breadth of people, probably myself as well until I've progressed somewhat. When I imagine enough sane people taking heroic responsibility, the world looks rather different than this, and it seems to be better run. We already have people in charge with value systems unacceptable to me; making them at least competent and getting them to use evidence-based strategies seems like a step forwards. People will have a normal range of value systems; if a particularly aberrant person comes along with a particularly strange value system, then they'll still have to outsmart all the other people to actually get their unacceptable value system in place. Honestly, Lumifer, I'm beginning to think you never want to change anything about any power structure in case it goes horribly wrong. How are things to progress if no changes are allowed?
-2Lumifer
Why is it a step forward? If these people have value systems unacceptable to you, presumably you want them stopped or at least slowed. You do NOT want them to become more efficient. That, um, is entirely non-obvious to me. Not to mention that I have no idea what you mean by "normal". Oh, I do, I do. Usually, the first thing I want to do is reduce its power, though :-D But here I'm basically pointing out that both rationality and willingness to do something at any cost (which is what heroic responsibility is) are orthogonal to values. There are two consequences. First, heroic responsibility throws the cost-benefit analysis overboard. That's not really a good thing for people who run the world to do. "At any cost" is rarely justified. Second, I very much do NOT want people with values incompatible with mine to become more efficient, more effective, and more active. Muslim suicide bombers, for example, take heroic responsibility, and I don't want more of them. True-believer cultists often take heroic responsibility, and no, I don't think it's a good thing either. It really does depend on the values involved.
2Jackercrack
See, you're ignoring the qualifier 'sane' again. I do not consider suicide bombers sane. Suicide bombers are extreme outliers, and they kill negligible numbers of people; last time I checked they kill fewer people per year, on average, than diseases I had never heard of. Quite frankly, they are a non-issue when you actually look at the numbers. It is not obvious to me that heroic responsibility implies that a thing should be done without cost/benefit analysis, or at any cost. Of course it depends on the value systems involved; I just happen to be fine with most value systems. I'll rephrase "normal value systems" to be more clear: people will on average end up with an average range of value systems. The majority will probably be somewhat acceptable to me, so in aggregate I'm fine with it. Is there a specific mechanism by which reducing government power would do good? What countries have been improved when that path has been taken? It seems like it would just shift power to even less accountable companies.
0Lumifer
Well, would you like to define it, then? I am not sure I understand your use of this word. In particular, does it involve any specific set of values? Things done on the basis of cost-benefit analysis are just rational things to do. The "heroic" part must stand for something, no? Ahem. Most out of which set? Are there temporal or geographical limits? That's a complicated discussion that should start with what is meant by "good" (we're back to value systems again), maybe we should take it up another time...
2Jackercrack
I'll put this in a separate post because it is not to do with heroic responsibility and it has been bugging me. What evidence do you have that your favoured idea of reducing political power does what you want it to do? Are there states which have switched to this method and benefited? Are there countries that have done this and what happened to them? Why do you believe what you believe?
2Lumifer
Well, before we wade into mindkilling territory, let me set the stage and we'll see if you find the framework reasonable.

Government power is multidimensional. It's very common to wish for more government power in one area but less in another area. Therefore government power in aggregate is a very crude metric. However, if you try to imagine government power as an n-dimensional body in a high-dimensional space, you can think of the volume of that n-dimensional body as total government power, and that gives you a handle on what that means.

Government power, generally speaking, has costs and benefits. Few people prefer either of the two endpoints -- complete totalitarianism or stateless anarchy. Most arguments are about which trade-offs are advantageous and about where the optimal point on the axis is located.

To talk about optimality you need a yardstick. That yardstick is people's value system. Since people have different value systems, different people will prefer different optimal points. If you consider the whole population you can (theoretically) build a preference distribution and interpret one of its centrality measures (e.g. mean, median, or mode) as the "optimal" optimal point, but that needs additional assumptions and gets rather convoluted rather fast.

There are multiple complicating factors in play here. Let me briefly list two. First, the population's preferences do not arise spontaneously in a pure and sincere manner. They are a function of local culture and the current memeplex, for example (see the Overton window), and are rather easily manipulated. Manipulating the political sentiments of the population is a time-honored and commonplace activity; you can assume by default that it is happening. There are multiple forces attempting the manipulation, of course, with different goals, so the balance is fluid and uncertain. Consider the ideas of "manufacturing consent" or the concept of "engines of consent" -- these ideas were put forward by such diverse
0Jackercrack
All of it looks reasonable to me apart from the last paragraph. I can see times when governments do willingly contract. There are often candidates who campaign on a platform of tax cuts, the UK had one in power from 1979-1990 and the US had one in power from 2001-2009. Tax cuts necessarily require eventual reductions in government spending and thus the power of government, agreed?
1Nornagest
If they're sustained long enough, yeah. But a state has more extensive borrowing powers than an individual does, and an administration so inclined can use those powers to spend beyond its means for rather a long time -- certainly longer than the term in office of a politician who came to power on a promise of tax cuts. The US federal budget has been growing for a long time, including over the 2001-2009 period, and the growth under low-tax regimes has been paid for by deficit spending. (Though you'd really want to be looking at federal spending as a percentage of GDP. There seems to be some disagreement over the secular trend there, but the sources I've found agree that the trend 2001-2009 was positive.)
0Jackercrack
Yes, I was going to comment on how a clever politician could spend during their own term to intentionally screw over the next party to take power, but I wanted to avoid the possible political argument that could ensue.
0Lumifer
Yeah, the "starve the beast" strategy looked appealing in theory but rather spectacularly failed in practice...
0V_V
Even if the tax cuts are funded by reductions in government spending, why would that imply a reduction of government power?
0Jackercrack
They don't necessarily have to, but generally do. For instance, during austerity measures spending is generally reduced in most areas. Police forces have less funding and thus lose the ability to have as great an effect on an area; that is, they have less power. Unless you're talking about power as a state of laws instead of a state of what is physically done to people?
0Lumifer
Do you think UK had an austerity period recently?
0Jackercrack
Well, yes, it was all over the news. This feels like a trick question. Are you about to tell me that spending went up during the recession or something?
0Lumifer
You have good instincts :-) Yes, this was a trap: behold.
0Jackercrack
Then what was all that stuff on the news about cutting government jobs, trying desperately to ensure frontline services weren't affected, and so on about? Edit: I knew it! No wonder I felt so confused. It would seem the reduction in spending just took a while to come into effect. Take a look at the years after 2011 that your chart is missing. Unfortunately it's not adjusted for inflation, but you still get the idea. If you change the category to 'protection' and the subcategory to 'police', 'prisons' or 'law courts', you can see the reduction in police funding over the course of the recession.
0Lumifer
So, my trap backfired? Ouch. :-( I guess I should be more careful about where I dig them :-) But I shall persevere anyway! :-D First, let me point out that UK public spending contracted for a single year (2013), and 2014 is already projected to top all previous years. That's not a meaningful contraction. Second, we are talking about the power of the government. Did you feel this power lessened in some way around 2013? Sure, some programs were cut or didn't grow as fast as some people wanted, but is there any discernible way in which the government was weaker in 2013 than it was in 2012?
0EHeller
Fewer police on the street, for one. I've seen declining numbers of officers in my visits to the UK since probably around late 2010.
0Lumifer
That's true, it seems in England and Wales the number of police officers dropped by about 10% since the peak of 2009 (source).
0Jackercrack
Right, it's time we got back on track, now that we're using the same definition of power and have come to the conclusion that a reduction in tax revenues can reduce the physical projection of power but is unlikely to remove the laws that determine what maximum level of power is legally allowed to be projected. I believe you were talking about optimal levels of power when compared to growth?
0Lumifer
Not at all. I was talking about optimal levels of power from the point of view of my system of values.
1Jackercrack
Right, well would you please continue? I believe the question that started all this off was how do you know said theory corresponds to reality.
0Lumifer
Which particular theory? You asked why I want to reduce the power of the government and what that means. I tried to answer to the best of my ability, but there is no falsifiable theory about my values. They are what they are.
3Jackercrack
A theory of government is not a terminal value, it is an instrumental one. You believe that that particular way of government will make people happy/autonomous/free/healthy/whatever your value system is. What is lacking is evidence that this particular government actually achieves those aims. It's a reasonable a priori argument, but so are dozens of other arguments for other governments. We need to distinguish which reality we are actually living in. By what metric can your goals be measured, and where would you expect them to be highest? Are there countries/states trying this, and what is the effect? Are there countries doing the exact opposite, and what would you expect to be the result of that? Your belief must be falsifiable or else it is permeable to flour and meaningless. Stage a full crisis of faith if you have to. No retreating into a separate magisterium: why do you believe what you believe?
0Lumifer
Which "this particular government"? I don't think I'm advocating any specific government. May I point you here? My preferences neither are nor need to be falsifiable. Why do I believe what?
0Jackercrack
That large government is worse than small government.
-2Lumifer
Because a larger government takes more of my money, because it limits me in certain areas where I would prefer not to be limited, and because it has scarier and more probable failure modes.
-2Jackercrack
It finally makes sense: you're looking at it from a personal point of view. Consider it from the view of the average wellbeing of the entire populace. Zoom out to consider the entire country, the full system of which the government is just a small part. A larger government has more probable failure modes, but a small one simply outsources its failure modes to companies and extremely rich individuals. Power abhors a vacuum. You and I are not large enough or typical enough for considerations about our optimality to enter into the running of a country. People are eternal and essentially unchanging; the average level of humanity rises but slowly. The only realistic way to improve their lot is to change the situation in which the decision is made. The structure of the system they flow through is too important to be left to market forces and random chance. I don't care much if it inconveniences me so long as on average the lot of humanity is improved. Edit: I fully expect you to disagree with me, but at least that's one mystery solved.
-1Lumifer
Sure. A larger government takes more of their money, limits them in areas where they would prefer not to be limited, and has scarier and more probable failure modes. No, I don't think so, not the really scary failure modes. Things like Pol Pot's Kampuchea cannot be outsourced. The second half of that sentence contradicts the first half. I don't know of anyone who proposes random chance as a guiding political principle. As to the market forces, well, they provide the best economy human societies have ever seen. A lot of people thought they could do better -- they all turned out to be wrong. You're still missing a minor part -- showing that a large government does indeed do that better compared to a smaller one. By the way, are you saying that the current government size and power (say, typical for EU countries) are optimal? Too small?
2Jackercrack
You misunderstand me. I am not saying that a large government is definitely better. I'm simply playing devil's advocate. I find it worrying that you can't find any examples of good things in larger government, though. Do socialised single-payer healthcare, lower crime rates due to more police, better roads, better infrastructure, environmental protections and higher quality schools not count as benefits? These are all things that require taxes and can be improved with greater spending on them. Edit: In retrospect maybe this is how a changed humanity looks already. That seems to fit the reality better.
1Lumifer
Of course I can. Recall me talking about the multidimensionality of government power and how most people (including me) would prefer more in one dimension but less in another. On the whole I would prefer a weaker government, but not necessarily in every single aspect. However, I would stress once again the cost-benefit balance. More is only better if you're below the optimal point; go above it and more will be worse.
0Jackercrack
And neither of us have the evidence required to find this point (if indeed it is just one point instead of several optimal peaks). I'm tapping out. If you have any closing points I'll try to take them into account in my thinking. Regardless, it seems like we agree on more than we disagree on.
-1Azathoth123
Some of these things are, some aren't. Let's go through the list: In the countries I'm most familiar with, the socialized health care system is something you want to avoid if you have an alternative. Ok, those are examples. Even if the crime rates that make more police necessary are due to other stupid government policies. Well, these days a lot of environmental protection laws are insane, as in we must divert water from the farms because if we don't the delta smelt population might be reduced (this is California's actual water policy). Other times they're just excuses for extreme NIMBYism. Well, in the US the rule of thumb is that the more control government exercises over schools the worse they are.
5TheAncientGeek
Kind of true-ish, but not in a way that supports your point. Public healthcare systems tend to be run on something of a shoestring, so an individual who can easily afford private treatment is often better off with that option. However, that does not translate to the total population or the average person. Analogously, the fact that travelling in a chauffeured limo is more pleasant than travelling on a train, for those who can afford it, is no justification for dismantling public transportation systems. And it's not either/or, anyway. Ok, stupid government bad. But what's the relationship between large government and stupid government? Large government has at least the capacity to hire expert consultants and implement checks and balances. And there's plenty of examples of autocratic rulers who were batshit crazy. In the US? Doesn't generalize. Ditto.
1Azathoth123
Um. Do you mean the money allocated in the budget for the healthcare system or the money that actually trickles down to the actual doctors? Because the former tends to be larger than the latter.
3TheAncientGeek
I believe that private healthcare deliverers have nonzero administrative costs as well. http://epianalysis.wordpress.com/2012/07/18/usversuseurope/
0Azathoth123
Yes, but they actually have incentives to keep those costs down.
1TheAncientGeek
Taxpayers don't like paying tax, which is the incentive to keep costs down in a public healthcare system, and it works: they are all cheaper than the US system.
-1Azathoth123
To the extent this incentive exists, it's fulfilled by degrading quality rather than improving efficiency.
2TheAncientGeek
Taxpayers don't like poor quality healthcare either. And degraded from what? It's not like there was ever a golden age where the average person had top quality and affordable healthcare, and then someone came along and spoiled everything. Public healthcare is like public transport: it's not supposed to be the best in money-is-no-object terms, it is supposed to be better than nothing. And let's remind ourselves that, factually, a number of public healthcare systems deliver equal or better results to the US system for less money.
-1Azathoth123
But they have to solve a rational-ignorance problem and a collective-action problem to do something about it.
2TheAncientGeek
And let's remind ourselves, again, that, factually, a number of public healthcare systems deliver equal or better results to the US system for less money. So it looks like they have.
0A1987dM
Even the former is much smaller than what you guys pay in the US.
0wedrifid
Such things are referred to as 'safety nets' for a reason. Falling from the tightrope still isn't advised.
-4TheAncientGeek
Larger government gives more and invests more...governments don't just burn money. Large government doesn't automatically mean less freedom...the average person in mediaeval Europe was not particularly free. Large government can rescue large corporations when they fail....
-1Lumifer
You seem to be well on the road towards the "if you want a small government why don't you GO AND LIVE IN SOMALIA" argument.... And why in the world would that be a good thing?
-3TheAncientGeek
Why not answer the points I actually made? Because ineffective corporations continuing to exist is less bad in terms of human suffering than major economic collapse.
2MarkusRamikin
Raising the spectre of "major economic collapse" at the notion that big corporations might have to operate under the same market conditions and risks as everyone else seems like an argument straight from a corporate lobbyist. Don't government rescues reward poor management and incentivise excessive risk, thus leading to economic troubles which necessitate them in the first place? It is not at all clear to me that the hypothetical world in which bailouts don't happen and corporations know it and act accordingly contains more suffering. Especially after you consider the costs imposed on the competent to rescue the failures, and the cost to the economy from uneven competition (between those who can afford to take bigger risks, or simply manage themselves sloppier, knowing that they are "too big to fail", and those who cannot).
2TheAncientGeek
Calling it a spectre makes it sound mythical, but it has been known to happen. The fallacy lies in not having sufficient evidence it will happen in any particular case. You can reduce risky behaviour by regulation. Bailouts without regulation are the worst possible world. Bailouts involve disutility. My argument is that by spreading the costs over more people and more time, they entail less suffering.
0Lumifer
Because I didn't see a point, just a bunch of straw. First, I don't think that is true. Second, there was a bit of sleight of hand -- you replaced the failure of large corporations with "major economic collapse". That's, um, not exactly the same thing :-/
0TheAncientGeek
Feel free to specify the non-straw versions. Feel free to support that claim with an argument. There are good reasons for thinking that the collapse of a large financial institution, in particular, can cause a domino effect. It's happened before. And it's hardly debatable that recessions cause suffering... the suicide rate goes up, for one thing. No, and it's not completely disjoint, either.
0Lumifer
So, how much did the government actually contract under Maggie or under Ronnie? :-) Did that contraction stick? Oh, not at all. You just borrow more. Besides, spending is only part of the power of the government. Consider e.g. extending the reach of the laws, which does not necessarily require any budgetary increases.
0Strange7
And/or authorize the police to steal. http://en.wikipedia.org/wiki/Asset_forfeiture
0Lumifer
It works best if you let the cops keep part of their robbery hauls :-/
0Jackercrack
There does come a point when the bill must be paid though, even if it is over a long time. Even if it's over 40 years as you pay back the interest on the debt. Before we go further, I think we need to be sure we're talking about the same thing when we say power. See, when you said a reduction in government power, what I heard was essentially less money, smaller government. I'm getting the feeling that that is not entirely what you meant, could you clarify?
0Lumifer
That too, but not only that. There is nothing tricky here, I'm using the word "power" in its straightforward meaning. Power includes money, but it also includes things like the monopoly on (legal) violence, the ability to create and enforce laws and regulations, give or withhold permission to do something (e.g. occupational licensing), etc. etc.
1gjm
I had always assumed it was intended to stand for doing things that are rational even if they're really hard or scary and unanticipated. If you do a careful cost-benefit calculation and conclude (depending on your values and beliefs) that ...

* ... the biggest risk facing humanity in the nearish future is that of a runaway AI doing things we really don't want but are powerless to stop, and preventing this requires serious hard work in mathematics and philosophy and engineering that no one seems to be doing; or
* ... most of the world's population is going to spend eternity in unimaginable torment because they don't know how to please the gods; or
* ... there are billions of people much, much worse off than you, and giving away almost everything you have and almost everything you earn will make the world a substantially better place than keeping it in order to have a nicer house, better food, more confidence of not starving when you get old, etc.

... and if you are a normal person, then you shrug your shoulders, say "damn, that's too bad", and get on with your life; but if you are infused with a sense of heroic responsibility, then you devote your life to researching AI safety (and propagandizing to get other people thinking about it too), or become a missionary, or live in poverty while doing lucrative but miserable work in order to save lives in Africa.

If it turns out that you picked as good a cause as you think you did, and if you do your heroic job well and get lucky, then you can end up transforming the world for the better. If you picked a bad cause (saving Germany from the Jewish menace, let's say) and do your job well and get lucky, you can (deservedly) go down in history as an evil genocidal tyrant and one of the worst people who ever lived. And if you turn out not to have the skill and luck you need, you can waste your life failing to solve the problem you took aim at, and end up neither accomplishing anything of importance nor having a comfortable life.
0Jiro
If you're a normal person, the fact that you shrug your shoulders when faced with such things is beneficial: shrugging your shoulders instead of being heroic when faced with the destruction of civilization serves as immunity against crazy ideas, and because you're running on corrupted hardware, you probably aren't as good at figuring out how to avoid the destruction of civilization as you think. Just saying "I'm not going to shrug my shoulders; I'm going to be heroic instead" removes checks and balances that are themselves irrational but protect you against other kinds of bad rationality, leaving you worse off overall.
0gjm
I am inclined to agree; I am not a fan of the idea of "heroic responsibility". (Though I think most of us could stand to be a notch or two more heroic than we currently are.)
0Lumifer
Well, here is a counter-example. I can't imagine that was too intimidating :-/
0Jackercrack
Okay, my definition of sane is essentially: rational enough to take actions that generally work towards your goals, and to create goals that are effective ways to satisfy your terminal values. It's a rather high bar. Suicide bombers do not achieve their goals; cultists have had their cognitive machinery hijacked to serve someone else's goals instead of their own. The reason I think this would be okay in aggregate is the psychological unity of mankind: we're mostly pretty similar, and there are remarkably low numbers of evil mutants. Being pretty similar, most people's goals would be acceptable to me. I disagree with some things China does, for example, but I find their overwhelming competence makes up for it in the aggregate wellbeing of their populace. gjm gives some good examples of heroic responsibility, but I understand the term slightly differently. Heroic responsibility is to have found a thing that you have decided is important, generally by reasoned cost/benefit analysis, and then to take responsibility for getting it done regardless of what life throws your way. It may be an easy task or a hard task, but it must be an important task. The basic idea is that you don't stop when you feel like you tried: if your first attempt doesn't work, you do more research and come up with a new strategy. If your second plan doesn't work because of unfair forces, you take those unfair forces into account and come up with another plan. If that still doesn't work you try harder again, and you keep going until you either achieve the goal, it becomes clear that you cannot achieve the goal, or the amount of effort you would have to put into the problem becomes significantly greater than the size of the benefit you expect. For example, the benefit for FAI is humanity's continued existence; there is essentially no amount of effort one person could put in that could be too much. To use the example of Eliezer in this thread, the benefit of a person being happier and more effective for months each year is al
0Azathoth123
Really? Last time I checked there is now a Caliphate in what is still nominally Iraq and Syria.
2Lumifer
Not quite. A collection of semi-local militias who managed to piss off just about everyone does not a caliphate make. P.S. Though as a comment on the grandparent post, some suicide bombers certainly achieve their goals (and that's even ignoring the obvious goal to die a martyr for the cause).
1Azathoth123
But not enough for "everyone" to mount an effective campaign to destroy them.
0Jackercrack
Achieved almost entirely by fighting through normal means, guns and such, so I hardly see the relevance. Suicide bombing kills a vanishingly small number of people; IEDs are an actual threat. Their original goal as rebels was to remove a central government, and now they're fighting a war of genocide against other rebel factions. I wonder how they would have responded if you'd told them at the start that a short while later they'd be slaughtering fellow Muslims in direct opposition to their holy book.
0Lumifer
The definition you give sounds like a pretty low bar to me. The fact that you're calling the bar high means that there are implied but unstated things around this definition -- can you be more explicit? "Generally work towards your goals" looks to me like what 90% of the population is doing... Is it basically persistence/stubbornness/bloody-mindedness, then?
0Jackercrack
Persistence is a good word for it, plus a sense of making it work even if the world is unfair and the odds are stacked against you. No sense of having fought the good fight and lost: if you failed, and there were things you could have done beforehand -- general strategies that would have been effective even if you did not know what was coming -- then that is your own responsibility. It is not, I think, a particularly healthy way of looking at most things. It can only really be useful as a mindset for things that really matter. Ah, sorry, I insufficiently unpacked "effective ways to satisfy terminal values". The hidden complexity was in "effectively". By "effectively" I meant in an efficient and >75% optimal manner. Many people do not know their own terminal values. Most people also don't know what makes a human happy, which is often different from what a human wants. Of those that do know their values, few have effective plans to satisfy them. Looking back on it now, there is quite a large inferential distance behind the innocuous-looking word 'sane'. I shall try to improve on that in the future.
0Lumifer
Is there an implication that someone or something does know? That strikes me as awfully paternalistic.
0Jackercrack
It's a statement of fact, not a political agenda. Neuroscientists know more about people's brains than normal people do, as a result of spending years and decades studying the subject.
0Lumifer
Huh? Neuroscientists know my terminal values better than I do because they studied brains? Sorry, that's nonsense.
0Jackercrack
Not yours specifically, but the general average across humanity. lukeprog wrote up a good summary of the factors correlated with happiness, which you've probably read, as well as an attempt to discern the causes. Not that happiness is the be-all and end-all of terminal values, but it certainly shows how little the average person knows about what they would actually be happy with vs what they think they'd be happy with. I believe that small sub-sequence on the science of winning at life covers far more than the average person knows on the subject, or else people wouldn't give such terrible advice.
0Lumifer
Aren't you making the assumption that the average applies to everyone? It does not. There is a rather wide spread, and pretending that a single average value represents it well enough is unwarranted. There are certainly things biologically hardwired into human brains, but not all of them are terminal values, and for those that are (e.g. survival) you don't need a neurobiologist to point that out. Frankly, I am at a loss to see what neurobiologists can say about terminal values. It's like asking Intel chip engineers what a piece of software really does. I don't know about that. Do you have evidence? If a person's ideas about her happiness diverge from the average ones, I would by default assume that she's different from the average, not that she is wrong.
0[anonymous]

I think heroic responsibility is essentially a response to being in a situation where not enough people are both competent at and willing to make changes to improve things. The authority figures are mad or untrustworthy, so a person has to figure out their own way to make the right things happen and put effective methods in place. It is particularly true of HPMOR, where Harry plays the role of Only Sane Man. So far as I can tell, we're in a similar situation in real life at the minute: we have insufficient highly sane people taking heroic responsibility. If...

[This comment is no longer endorsed by its author]