
Wikipedia articles from the future

12 snarles 29 October 2014 12:49PM

Speculation is important for forecasting; it's also fun. Speculation is usually conveyed in two forms: as an argument, or encapsulated in fiction. Each has its advantages, but both tend to be time-consuming. Presenting speculation in the form of an argument involves researching the relevant background and formulating logical arguments. Presenting speculation in the form of fiction requires world-building and storytelling skills, but it can quickly give the reader an impression of the "big picture" implications of the speculation; this can be more effective at establishing the "emotional plausibility" of the speculation.

I suggest a storytelling medium which can combine attributes of both arguments and fiction, but requires less work than either. That is the "Wikipedia article from the future." Fiction written by inexperienced sci-fi writers tends to degenerate into a speculative encyclopedia anyway--why not just admit that you want to write an encyclopedia in the first place?  Post your "Wikipedia articles from the future" below.

A discussion of heroic responsibility

17 Swimmer963 29 October 2014 04:22AM

[Originally posted to my personal blog, reposted here with edits.]

Introduction

“You could call it heroic responsibility, maybe,” Harry Potter said. “Not like the usual sort. It means that whatever happens, no matter what, it’s always your fault. Even if you tell Professor McGonagall, she’s not responsible for what happens, you are. Following the school rules isn’t an excuse, someone else being in charge isn’t an excuse, even trying your best isn’t an excuse. There just aren’t any excuses, you’ve got to get the job done no matter what.” Harry’s face tightened. “That’s why I say you’re not thinking responsibly, Hermione. Thinking that your job is done when you tell Professor McGonagall—that isn’t heroine thinking. Like Hannah being beat up is okay then, because it isn’t your fault anymore. Being a heroine means your job isn’t finished until you’ve done whatever it takes to protect the other girls, permanently.” In Harry’s voice was a touch of the steel he had acquired since the day Fawkes had been on his shoulder. “You can’t think as if just following the rules means you’ve done your duty.” –HPMOR, chapter 75.

I like this concept. I think it counters a particular, common, harmful failure mode, and that it’s an amazingly useful thing for a lot of people to hear. I even think it was a useful thing for me to hear a year ago.

But... I’m not sure about this yet, and my thoughts about it are probably confused. Still, I think there's a version of Heroic Responsibility that you can get from reading this description–maybe even the default outcome of reading this description–that's also a harmful failure mode.
 

Something Impossible

A wrong way to think about heroic responsibility

I dealt with a situation at work a while back–May 2014 according to my journal. I had a patient for five consecutive days, and each day his condition was a little bit worse. Every day, I registered with the staff doctor my feeling that the current treatment was Not Working, and that maybe we ought to try something else. There were lots of complicated medical reasons why his decisions were constrained, and why ‘let’s wait and see’ was maybe the best decision, statistically speaking–that in a majority of possible worlds, waiting it out would lead to better outcomes than one of the potential more aggressive treatments, which came with side effects. And he wasn’t actually ignoring me; he would listen patiently to all my concerns. Nevertheless, he wasn’t the one watching the guy writhe around in bed, uncomfortable and delirious, for twelve hours every day, and I felt ignored, and I was pretty frustrated.

On day three or four, I was listening to Ray’s Solstice album on my break, and the song ‘Something Impossible’ came up. 

Bold attempts aren't enough, roads can't be paved with intentions...
You probably don’t even got what it takes,
But you better try anyway, for everyone's sake
And you won’t find the answer until you escape from the
Labyrinth of your conventions.
It's time to just shut up, and do the impossible.
Can’t walk away...
Gotta break off those shackles, and shake off those chains
Gotta make something impossible happen today... 
 
It hit me like a load of bricks–this whole thing was stupid and rationalists should win. So I spent my entire break talking on Gchat with one of my CFAR friends, trying to see if he could help me come up with a suggestion that the doctor would agree was good. This wasn’t something either of us was trained in, and having something to protect doesn't actually give you superpowers, and the one creative solution I came up with was worse than the status quo for several obvious reasons.

I went home on day four feeling totally drained and having asked to please have a different patient in the morning. I came in to find that the patient had nearly died in the middle of the night. (He was now intubated and sedated, which wasn’t great for him but made my life a hell of a lot easier.) We eventually transferred him to another hospital, and I spent a while feeling like I’d personally failed. 

I’m not sure whether or not this was a no-win scenario even in theory. But I don't think I, personally, could have done anything with greater positive expected value. There's a good reason why a doctor with 10 years of school and 20 years of ICU experience can override a newly graduated nurse's opinion. In most of the possible worlds, the doctor is right and I'm wrong. Pretty much the only thing that I could have done better would have been to care less–and thus be less frustrated and more emotionally available to comfort a guy who was having the worst week of his life. 

In short, I fulfilled my responsibilities to my patient. Nurses have a lot of responsibilities to their patients, well specified in my years of schooling and in various documents published by the College of Nurses of Ontario. But nurses aren’t expected or supposed to take heroic responsibility for these things. 

I think that overall, given a system that runs on humans, that's a good thing.  


The Well-Functioning Gear

I feel like maybe the hospital is an emergent system that has the property of patient-healing, but I’d be surprised if any one part of it does.

Suppose I see an unusual result on my patient. I don’t know what it means, so I mention it to a specialist. The specialist, who doesn’t know anything about the patient beyond what I’ve told him, says to order a technetium scan. He has no idea what a technetium scan is or how it is performed, except that it’s the proper thing to do in this situation. A nurse is called to bring the patient to the scanner, but has no idea why. The scanning technician, who has only a vague idea why the scan is being done, does the scan and spits out a number, which ends up with me. I bring it to the specialist, who gives me a diagnosis and tells me to ask another specialist what the right medicine for that is. I ask the other specialist – who has only the sketchiest idea of the events leading up to the diagnosis – about the correct medicine, and she gives me a name and tells me to ask the pharmacist how to dose it. The pharmacist – who has only the vague outline of an idea who the patient is, what test he got, or what the diagnosis is – doses the medication. Then a nurse, who has no idea about any of this, gives the medication to the patient. Somehow, the system works and the patient improves.

Part of being an intern is adjusting to all of this, losing some of your delusions of heroism, getting used to the fact that you’re not going to be Dr. House, that you are at best going to be a very well-functioning gear in a vast machine that does often tedious but always valuable work. –Scott Alexander

The medical system does a hard thing, and it might not do it well, but it does it. There is too much complexity for any one person to have a grasp on it. There are dozens of mutually incomprehensible specialties. And the fact that [insert generic nurse here] doesn't have the faintest idea how to measure electrolytes in blood, or build an MRI machine, or even what's going on with the patient next door, is a feature, not a bug.

The medical system doesn’t run on exceptional people–it runs on average people, with predictably average levels of skill, slots in working memory, ability to notice things, ability to not be distracted thinking about their kid's problems at school, etc. And it doesn’t run under optimal conditions; it runs under average conditions. Which means working overtime at four am, short staffing, three patients in the ER waiting for ICU beds, etc. 

Sure, there are problems with the machine. The machine is inefficient. The machine doesn’t have all the correct incentives lined up. The machine does need fixing–but I would argue that from within the machine, as one of its parts, taking heroic responsibility for your own sphere of control isn’t the way to go about fixing the system.

As an [insert generic nurse here], my sphere of control is the four walls of my patient's room. Heroic responsibility for my patient would mean...well, optimizing for them. In the most extreme case, it might mean killing the itinerant stranger to obtain a compatible kidney. In the less extreme case, I spend all my time giving my patient great care, instead of helping the nurse in the room over, whose patient is much sicker. And then sometimes my patient will die, and there will be literally nothing I can do about it, their death was causally set in stone twenty-four hours before they came to the hospital. 

I kind of predict that the results of installing heroic responsibility as a virtue, among average humans under average conditions, would be a) everyone stepping on everyone else’s toes, and b) 99% of them quitting a year later.
 

Recursive Heroic Responsibility


If you're a gear in a machine, and you notice that the machine is broken, your options are a) be a really good gear, or b) take heroic responsibility for your sphere of control, and probably break something...but that's a false dichotomy. Humans are very flexible tools, and there are also infinite other options, including "step out of the machine, figure out who's in charge of this shit, and get it fixed." 

You can't take responsibility for the individual case, but you can for the system-level problem, the long view, the one where people eat badly and don't exercise and at age fifty, morbidly obese with a page-long medical history, they end up as a slow-motion train wreck in an ICU somewhere. Like in poker, you play to win money–positive EV–not to win hands. Someone’s going to be the Minister of Health for Canada, and they’re likely to be in a position where taking heroic responsibility for the Canadian health care system makes things better. And probably the current Minister of Health isn’t being strategic, isn’t taking the level of responsibility that they could, and the concept of heroic responsibility would be the best thing for them to encounter.
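To make the poker analogy concrete, here's a toy calculation with made-up numbers: suppose calling a $10 bet wins a $50 pot 30% of the time. Then

\mathrm{EV}(\text{call}) = 0.30 \times 50 - 0.70 \times 10 = 15 - 7 = +8 \text{ dollars},

so the call is profitable on average even though you lose the hand 70% of the time; playing "to win hands" would pass up exactly this kind of value.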

So as an [insert generic nurse here], working in a small understaffed ICU, watching the endless slow-motion train wreck roll by...maybe the actual meta-level right thing to do is to leave, and become the freaking Minister of Health, or befriend the current one and introduce them to the concept of being strategic. 

But it's fairly obvious that that isn't the right action for all the nurses in that situation. I'm wary of advice that doesn't generalize. What's the difference between the nurse who should leave in order to take meta-level responsibility, and the nurse who should stay because she's needed as a gear?

Heroic responsibility for average humans under average conditions

I can predict at least one thing that people will say in the comments, because I've heard it hundreds of times–that Swimmer963 is a clear example of someone who should leave nursing, take the meta-level responsibility, and do something higher-impact, for the usual reasons. Because she's smart. Because she's rational. Whatever. 

Fine. This post isn't about me. Whether I like it or not, the concept of heroic responsibility is now a part of my value system, and I probably am going to leave nursing.

But what about the other nurses on my unit, the ones who are competent and motivated and curious and really care? Would familiarity with the concept of heroic responsibility help or hinder them in their work? Honestly, I predict that they would feel alienated, that they would assume I held a low opinion of them (which I don't, and I really don't want them to think that I do), and that they would flinch away and go back to the things that they were doing anyway, the role where they were comfortable–or that, if they did accept it, it would cause them to burn out. So as a consequentialist, I'm not going to tell them. 

And yeah, that bothers me. Because I'm not a special snowflake. Because I want to live in a world where rationality helps everyone. Because I feel like the reason they would react that way isn't because of anything about them as people, or because heroic responsibility is a bad thing, but because I'm not able to communicate to them what I mean. Maybe stupid reasons. Still bothers me. 

Link: Open-source programmable, 3D printable robot for anyone to experiment with

1 polymathwannabe 29 October 2014 02:21PM

Its name is Poppy.

"Both hardware and software are open source. There is not one single Poppy humanoid robot but as many as there are users. This makes it very attractive as it has grown from a purely technological tool to a real social platform."

LW Supplement use survey

8 FiftyTwo 28 October 2014 09:28PM

I've put together a very basic survey using Google Forms, inspired by NancyLebovitz's recent discussion post on supplement use.

The survey includes options for "other" and "do not use supplements." Results are anonymous, and you can view all the results once you have filled it in, or use this link.

 

Link to the Survey

Things to consider when optimizing: Sleep

8 mushroom 28 October 2014 05:26PM

I'd like to have a series of discussion posts, where each post is of the form "Let's brainstorm things you might consider when optimizing X", where X is something like sleep, exercise, commuting, studying, etc. Think of it like a specialized repository.

In the spirit of try more things, the direct benefit is to provide insights like "Oh, I never realized that BLAH is a knob I can fiddle. This gives me an idea of how I might change BLAH given my particular circumstances. I will try this and see what happens!"

The indirect benefit is to practice instrumental rationality using the "toy problem" provided by a general prompt.

Accordingly, participation could be in many forms:

* Pointers to scientific research
* General directions to consider
* Personal experience
* Boring advice
* Intersections with other community ideas, biases
* Cost-benefit, value-of-information analysis
* Related questions
* Other musings, thoughts, speculation, links, theories, etc.

This post is on sleep and circadian rhythms.

Meetup : Urbana-Champaign: Fun and Games

1 Manfred 28 October 2014 08:00PM

Discussion article for the meetup : Urbana-Champaign: Fun and Games

WHEN: 02 November 2014 03:00:00PM (-0500)

WHERE: 206 S. Cedar St, Urbana IL

Come for the fun and games, stay for practicing meditation. Also: Halloween-candy-based elocution exercises.


Cross-temporal dependency, value bounds and superintelligence

2 joaolkf 28 October 2014 03:26PM

In this short post I will attempt to put forth some potential concerns that should be relevant when developing superintelligences, if certain meta-ethical effects exist. I do not claim they exist, only that it might be worth looking for them since their existence would mean some currently irrelevant concerns are in fact relevant. 

 

These meta-ethical effects would be a certain kind of cross-temporal dependency of moral value. First, let me explain what I mean by cross-temporal dependency. If value is cross-temporally dependent, value at t2 could be affected by t1, independently of any causal role t1 has on t2. The same event X at t2 could have more or less moral value depending on whether Z or Y happened at t1. For instance, this could be the case in matters of survival. If we kill someone and replace her with a slightly more valuable person, some would argue there was a loss rather than a gain of moral value; whereas if a new person, with moral value equal to the difference between the previous two, is created where there was none, most would consider it an absolute gain. Furthermore, some might hold that small, gradual, continual improvements are better than abrupt, big ones. For example, a person who forms an intention and a careful, detailed plan to become better, and forcefully works herself into being better, could acquire more value than a person who simply happens to take a pill and instantly becomes a better person–even if they become that exact same person. This is not because effort is intrinsically valuable, but because of personal continuity: there are more intentions, deliberations and desires connecting the two time-slices of the person who changed through effort than there are connecting the two time-slices of the person who changed by taking a pill. Even though both persons become equally morally valuable in isolated terms, they do so by different paths that differently affect their final value.

More examples: you live now at t1. If suddenly at t2 you were replaced by an alien individual with the same amount of value as you would otherwise have at t2, then t2 may not have the exact same amount of value as it would otherwise have, simply in virtue of the fact that at t1 you were alive and the alien's previous time-slice was not. 365 individuals each living for one day do not amount to the same value as a single individual living through 365 days. Slice history into one-day periods: each day the universe contains one unique advanced civilization with the same overall total moral value, each civilization completely alien and ineffable to the others, and each civilization lives for only one day and then is gone forever. This universe does not seem to hold the same moral value as one where a single such civilization flourishes for eternity. In all these examples, the value of a period of time seems to be affected by the existence or not of certain events at other periods. They indicate that there is, at least, some cross-temporal dependency.
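One minimal way to formalize this (the notation here is mine and purely illustrative): let V(X \mid h) be the moral value of an event X given the history h of events preceding it. Cross-temporal dependency is then the claim that

V(X \mid h_{<t_2}) \neq V(X \mid h'_{<t_2}) \quad \text{for some histories } h_{<t_2} \neq h'_{<t_2},

even when the event X at t2 is held exactly fixed. Value at a time is a function of the path, not just of the instantaneous state.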

 

Now consider another type of effect: bounds on value. There could be a physical bound–transfinite or not–on the total amount of moral value that can be present per instant. For instance, if moral value rests mainly on sentient well-being, which can be characterized as a particular kind of computation, and there is a bound on the total amount of such computation that can be performed per instant, then there is a bound on the amount of value per instant. If, arguably, we are currently extremely far from such a bound, and the bound will eventually be reached by a superintelligence (or any other structure), then the total moral value of the universe would be dominated by the value at this physical bound, given that regions where the physical bound wasn't reached would make negligible contributions. The faster the bound can be reached, the more negligible pre-bound values become.
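As a toy sketch of this domination claim (again illustrative, assuming a constant bound B reached at time T, with a long horizon H): total value is

V_{\text{total}} = \int_0^T v(t)\,dt + B\,(H - T) \approx B H \quad \text{when } H \gg T,

and since v(t) \leq B before the bound is reached, the pre-bound term is at most B\,T; the smaller T is relative to H, the more negligible that term becomes.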

 

Finally, if there is a form of cross-temporal value dependency where the events preceding a superintelligence could alter the value of this physical bound, then we not only ought to make sure we construct a superintelligence safely, but also that we do so following the path that maximizes the bound. It might be the case that an overly abrupt superintelligence would decrease the bound, so that all future moral value would be diminished by the fact that there was a huge discontinuity in the events leading to this future. Even small decreases in the bound would have dramatic effects. Although I do not know of any plausible cross-temporal effect of this kind, the question seems to deserve at least a minimal amount of thought. Both cross-temporal dependency and bounds on value seem plausible (in fact I believe some form of each is true), so it is not at all prima facie inconceivable that we could have cross-temporal effects changing the bound up or down.
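In the same toy model, if the path p taken to superintelligence determines the bound B(p) and the time T(p) at which it is reached, then

V_{\text{total}}(p) \approx B(p)\,\bigl(H - T(p)\bigr),

so over a long horizon H, even a small decrease \Delta B in the bound costs on the order of \Delta B \cdot H, which eventually swamps any finite difference in pre-bound value between paths.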

Link: Elon Musk wants gov't oversight for AI

7 polymathwannabe 28 October 2014 02:15AM

"I'm increasingly inclined to think there should be some regulatory oversight, maybe at the national and international level just to make sure that we don't do something very foolish."

http://www.cnet.com/news/elon-musk-we-are-summoning-the-demon-with-artificial-intelligence/#ftag=CAD590a51e

Stupid Questions (10/27/2014)

11 drethelin 27 October 2014 09:27PM

I think it's past time for another Stupid Questions thread, so here we go. 

 

This thread is for asking any questions that might seem obvious, tangential, silly or what-have-you. Please respect people trying to fix any ignorance they might have, rather than mocking that ignorance. 

 

Donation Discussion - alternatives to the Against Malaria Foundation

3 ancientcampus 28 October 2014 03:00AM

About a year and a half ago, I made a donation to the Against Malaria Foundation. This was during jkaufman's generous matching offer.

That was 20 months ago, and my money is still in the "underwriting" phase–funding projects that are still, as of yet, just plans and no nets.

Now, the AMF has given a reasonable explanation for why it's taking longer than expected:

"A provisional, large distribution in a province of the [Democratic Republic of the Congo] will not proceed as the distribution agent was unable to agree to the process requested by AMF during the timeframe needed by our co-funding partner."

So they've hit a snag, the earlier project fell through, and they are only now allocating my money to a new project. Don't get me wrong, I am very glad they are telling me where my money is going, and especially glad it didn't just end up in someone's pocket instead. With that said, though, I still must come to this conclusion:

The AMF seems to have more money than they can use, right now.

So, LW, I have the following questions:

  1. Is this a problem? Should one give their funds to another charity for the time being?
  2. Regardless of your answer to the above, are there any recommendations for other transparent, efficient charities? [other than MIRI]
