My Kind of Moral Responsibility

3 Gram_Stone 02 May 2016 05:54AM

The following is an excerpt of an exchange between Julia Galef and Massimo Pigliucci, from the transcript for Rationally Speaking Podcast episode 132:

Massimo: [cultivating virtue and 'doing good' locally 'does more good' than directly eradicating malaria]

Julia: [T]here's lower hanging fruit [in the developed world than there is in the developing world]. By many orders of magnitude, there's lower hanging fruit in terms of being able to reduce poverty or disease or suffering in some parts of the world than other parts of the world. In the West, we've picked a lot of the low hanging fruit, and by any sort of reasonable calculation, it takes much more money to reduce poverty in the West -- because we're sort of out in the tail end of having reduced poverty -- than it does to bring someone out of poverty in the developing world.

Massimo: That kind of reasoning brings you quickly to the idea that everybody here is being a really really bad person because they spent money for coming here to NECSS listening to us instead of saving children on the other side of the world. I resist that kind of logic.

Massimo (to the audience): I don't think you guys are that bad! You see what I mean?

I see a lot of people, including bullet-biters, who feel a lot of internal tension, and even guilt, because of this apparent paradox.

Utilitarians usually stop at the question, "Are the outcomes different?"

Clearly, they aren't. But people still feel tension, so it must not be enough to believe that a world where some people are alive is better than a world where those very people are dead. The confusion has not evaporated in a puff of smoke, as we would expect it to if that belief were the whole story.

After all, imagine a different gedanken where a virtue ethicist and a utilitarian each stand in front of a user interface, with each interface bearing only one shiny red button. Omega tells each, "If you press this button, then you will prevent one death. If you do not press this button, then you will not prevent one death."

There would be no disagreement. Both of them would press their buttons without a moment of hesitation.

So, in a certain sense, it's not only a question of which outcome is better. The repugnant part of the conclusion is the implication for our intuitions about moral responsibility. It's intuitive that you should save ten lives instead of one, but it's counterintuitive that the one who permits death is just as culpable as the one who causes death. You look at ten people who are alive when they could be dead, and it feels right to say that it is better that they are alive than that they are dead, but you juxtapose a murderer and your best friend who is not an ascetic, and it feels wrong to say that the one is just as awful as the other.

The virtue-ethical response is to say that the best friend has lived a good life and the murderer has not. Of course, I don't think that anyone who says this has done any real work.

So, if you passively don't donate every cent of discretionary income to the most effective charities, then are you morally culpable in the way that you would be if you had actively murdered everyone that you chose not to save who is now dead?

Well, what is moral responsibility? Hopefully we all know that there is not one culpable atom in the universe.

Perhaps the most concrete version of this question is: what happens, cognitively, when we evaluate whether or not someone is responsible for something? What's the difference between situations where we consider someone responsible and situations where we don't? What happens in the brain when we do these things? How do different attributions of responsibility change our judgments and decisions?

Most research on feelings has focused only on valence: how positivity and negativity affect judgment. But there's clearly a lot more to this: sadness, anger, and guilt are all negative feelings, but they're not all the same, so there must be something going on beyond valence.

One hypothesis is that the differences between sadness, anger, and guilt reflect different appraisals of agency. When we are sad, we haven't attributed the cause of the inciting event to an agent; the cause is situational, beyond human control. When we are angry, we've attributed the cause of the event to the actions of another agent. When we are guilty, we've attributed the cause of the event to our own actions.

(It's worth noting that there are many more types of appraisal than this, many more emotions, and many more feelings beyond emotions, but I'm going to focus on negative emotions and appraisals of agency for the sake of brevity. For a review of proposed appraisal types, see Demir, Desmet, & Hekkert (2009). For a review of emotions in general, check out Ortony, Clore, & Collins' The Cognitive Structure of Emotions.)

So, what does it look like when we narrow our attention to specific feelings on the same side of the valence spectrum? How are judgments affected when we only look at, say, sadness and anger? Might experiments based on these questions provide support for an account of our dilemma in terms of situational appraisals?

In one experiment, Keltner, Ellsworth, & Edwards (1993) found that sad subjects consider events with situational causes more likely than events with agentic causes, and that angry subjects consider events with agentic causes more likely than events with situational causes. In a second experiment in the same study, they found that sad subjects are more likely to consider situational factors as the primary cause of an ambiguous event than agentic factors, and that angry subjects are more likely to consider agentic factors as the primary cause of an ambiguous event than situational factors.

Perhaps unsurprisingly, watching someone commit murder, and merely knowing that someone could have prevented a death on the other side of the world through an unusual effort, make very different things happen in our brains. I expect that even the utilitarians are biting a fat bullet; that even the utilitarians feel the tension, the counterintuitiveness, when utilitarianism leads them to conclude that indifferent bystanders are just as bad as murderers. Intuitions are strong, and I hope that a few more utilitarians can understand why utilitarianism is just as repugnant to a virtue ethicist as virtue ethics is to a utilitarian.

My main thrust here is that "Is a bystander as morally responsible as a murderer?" is a wrong question. You're always secretly asking another question when you ask that question, and the answer often doesn't have the word 'responsibility' anywhere in it.

Utilitarians replace the question with, "Do indifference and evil result in the same consequences?" They answer, "Yes."

Virtue ethicists replace the question with, "Does it feel like indifference is as 'bad' as 'evil'?" They answer, "No."

And the one thinks, in too little detail, "They don't think that bystanders are just as bad as murderers!", and likewise, the other thinks, "They do think that bystanders are just as bad as murderers!".

And then the one and the other proceed to talk past one another for a period of time during which millions more die.

As you might expect, I must confess to a belief that the utilitarian is often the one less confused, so I will speak to that one henceforth.

As a special kind of utilitarian, the kind that frequents this community, you should know that, if you take the universe, and grind it down to the finest powder, and sieve it through the finest sieve, then you will not find one agentic atom. If you only ask the question, "Has the virtue ethicist done the moral thing?", and you silently reply to yourself, "No.", and your response is to become outraged at this, then you have failed your Art on two levels.

On the first level, you have lost sight of your goal. As if your goal were to find out whether or not someone has done the moral thing! Your goal is to cause them to commit the moral action. By your own lights, if you fail to be as creative as you can possibly be in your attempts at persuasion, then you're just as culpable as someone who purposefully turned someone away from utilitarianism as a normative-ethical position. And if all you do is scorn the virtue ethicists, instead of engaging with them, then you're definitely not being very creative.

On the second level, you have failed to apply your moral principles to yourself. You have not considered that the utility-maximizing action might be something besides getting righteously angry, even if that's the easiest thing to do. And believe me, I get it. I really do understand that impulse.

And if you are that sort of utilitarian who has come to such a repugnant conclusion epistemically, but who has failed to meet your own expectations instrumentally, then be easy now. For there is no longer a question of 'whether or not you should be guilty'. There are only questions of what guilt is used for, and whether or not that guilt ends more lives than it saves.

All of this is not to say that 'moral outrage' is never the utility-maximizing action. I'm at least a little outraged right now. But in the beginning, all you really wanted was to get rid of naive notions of moral responsibility. The action to take in this situation is not to keep them in some places and toss them in others.

Throw out the bath water, and the baby, too. The virtue ethicists are expecting it anyway.

 


Demir, E., Desmet, P. M. A., & Hekkert, P. (2009). Appraisal patterns of emotions in human-product interaction. International Journal of Design, 3(2), 41-51.

Keltner, D., Ellsworth, P., & Edwards, K. (1993). Beyond simple pessimism: Effects of sadness and anger on social perception. Journal of Personality and Social Psychology, 64, 740-752.

Ortony, A., Clore, G. L., & Collins, A. (1990). The Cognitive Structure of Emotions. Cambridge University Press.

The 'why does it even tell me this' moment

5 Romashka 01 May 2016 08:15AM

Edited based on the outline kindly provided by Gram_Stone, whom I thank.

There is a skill of reading and thinking which I haven't learned so far: of looking for implications as one goes through the book, simply putting it back on the shelf until one's mind has run out of inferences, perhaps writing them down. I think it would be easier to do with books that [have pictures]

- invite an attitude (like cooking shows or Darwin's travel accounts or Feynman's biography: it doesn't have to be "personal"),

- are/have been regularly needed (ideally belong to you so you can make notes on the margins),

- are either outdated (so you "take it with a grain of salt" and have the option of looking for a current opinion) or very new,

- are not highly specialized,

- are well-structured, preferably into one- to a-few-pages-long chapters,

- allow reading those chapters out of order*,

- (make you) recognize that you do not need this knowledge for its own sake,

- can be shared, or at least shown to other people, and talked about, etc. (Although I keep imagining picture albums when I read the list, so maybe I missed something.)

These features are what attracts me to an amateur-level Russian plant identification text from 1948.** It was clearly written, and it omitted many species of plants that the author considered easily grouped with others for practical purposes. It annoyed me when I expected the book to hold certain information that it didn't (a starting point - I have to notice something to want to think). This is merely speculation, but I suspect that the author omitted many of those species because the book was intended to convey agricultural knowledge of great economic importance to the Soviet population of the time. (Some included details were clearly of less import; botanists know that random bits of trivia might help one recognize a plant in the field. This established a feeling of kinship - the realisation that the author's goal was to teach how to use the book, and how to get by without it on hand.) I found the book far more entertaining to read once I realized that I would have to evaluate it in this context, even though one might think that this would actually make it more difficult to read. I was surprised that something as simple as glancing at a note on beetroot production rates could make me do more cognitive work than any cheap trick that I'd ever seen a pedagogical author try to perform purposefully.

There may be other ways that books could be written to spontaneously cause independent thought in their audiences. Perhaps we can do this on purpose. Or perhaps the practice of making inferences beyond what is obviously stated in books can be trained.

* which might be less useful for people learning about math.

** Ф. Нейштадт. Определитель растений. - Учпедгиз, 1948. - 476 с. An identification key gives you an algorithm, a branching path which must end with a Latin name, which makes using it leisurely a kind of game. If you cannot find what you see, then either you've made a mistake or it isn't there.
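The branching structure of an identification key can be sketched as a tiny decision procedure. This is only an illustration of the idea; the couplets and species names below are invented, not taken from the 1948 text:

```python
# A dichotomous key as a table of couplets: each entry asks one question
# and either forwards you to another couplet or ends in a Latin name.
# All couplets and species here are hypothetical, for illustration only.
KEY = {
    "start": ("Leaves needle-like?", "conifers", "broadleaf"),
    "conifers": ("Needles in bundles of two?", "Pinus sylvestris", "Picea abies"),
    "broadleaf": ("Leaf margin toothed?", "Betula pendula", "Tilia cordata"),
}

def identify(answers):
    """Walk the key. `answers` maps a couplet's question to True/False.
    The walk must terminate in a Latin name; if the name doesn't match
    what you see, either you've made a mistake or it isn't in the key."""
    node = "start"
    while node in KEY:
        question, if_yes, if_no = KEY[node]
        node = if_yes if answers[question] else if_no
    return node

print(identify({"Leaves needle-like?": False, "Leaf margin toothed?": True}))
```

The game-like quality comes from the guarantee of termination: every path through the table bottoms out in a name, so a wrong answer anywhere shows up as a name that doesn't fit the plant in front of you.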

Turning the Technical Crank

43 Error 05 April 2016 05:36AM

A few months ago, Vaniver wrote a really long post speculating about potential futures for Less Wrong, with a focus on the idea that the spread of the Less Wrong diaspora has left the site weak and fragmented. I wasn't here for our high water mark, so I don't really have an informed opinion on what has socially changed since then. But a number of complaints are technical, and as an IT person, I thought I had some useful things to say.

I argued at the time that many of the technical challenges of the diaspora were solved problems, and that the solution was NNTP -- an ancient, yet still extant, discussion protocol. I am something of a crank on the subject and didn't expect much of a reception. I was pleasantly surprised by the 18 karma it generated, and tried to write up a full post arguing the point.

I failed. I was trying to write a manifesto, didn't really know how to do it right, and kept running into a vast inferential distance I couldn't seem to cross. I'm a product of a prior age of the Internet, from before the http prefix assumed its imperial crown; I kept wanting to say things that I knew would make no sense to anyone who came of age this millennium. I got bogged down in irrelevant technical minutiae about how to implement features X, Y, and Z. Eventually I decided I was attacking the wrong problem; I was thinking about 'how do I promote NNTP', when really I should have been going after 'what would an ideal discussion platform look like and how does NNTP get us there, if it does?'

So I'm going to go after that first, and work on the inferential distance problem, and then I'm going to talk about NNTP, and see where that goes and what could be done better. I still believe it's the closest thing to a good, available technological Schelling point, but it's going to take a lot of words to get there from here, and I might change my mind under persuasive argument. We'll see.

Fortunately, this is Less Wrong, and sequences are a thing here. This is the first post in an intended sequence on mechanisms of discussion. I know it's a bit off the beaten track of Less Wrong subject matter. I posit that it's both relevant to our difficulties and probably more useful and/or interesting than most of what comes through these days. I just took the 2016 survey and it has a couple of sections on the effects of the diaspora, so I'm guessing it's on topic for meta purposes if not for site-subject purposes.

Less Than Ideal Discussion

To solve a problem you must first define it. Looking at the LessWrong 2.0 post, I see the following technical problems, at a minimum; I'll edit this with suggestions from comments.

  1. Aggregation of posts. Our best authors have formed their own fiefdoms and their work is not terribly visible here. We currently have limited support for this via the sidebar, but that's it.
  2. Aggregation of comments. You can see diaspora authors in the sidebar, but you can't comment from here.
  3. Aggregation of community. This sounds like a social problem but it isn't. You can start a new blog, but unless you plan on also going out of your way to market it then your chances of starting a discussion boil down to "hope it catches the attention of Yvain or someone else similarly prominent in the community." Non-prominent individuals can theoretically post here; yet this is the place we are decrying as moribund.
  4. Incomplete and poor curation. We currently do this via Promoted, badly, and via the diaspora sidebar, also badly.
  5. Pitiful interface feature set. This is not so much a Less Wrong-specific problem as a 2010s-internet problem; people who inhabit SSC have probably seen me respond to feature complaints with "they had something that did that in the 90s, but nobody uses it." (my own bugbear is searching for comments by author-plus-content).
  6. Changes are hamstrung by the existing architecture, which gets you volunteer reactions like this one.

I see these meta-technical problems:

  1. Expertise is scarce. Few people are in a position to technically improve the site, and those that are, have other demands on their time.
  2. The Trivial Inconvenience Problem limits the scope of proposed changes to those that are not inconvenient to commenters or authors.
  3. Getting cooperation from diaspora authors is a coordination problem. Are we better than average at handling those? I don't know.

Slightly Less Horrible Discussion

"Solving" community maintenance is a hard problem, but to the extent that pieces of it can be solved technologically, the solution might include these ultra-high-level elements:

  1. Centralized from the user perspective. A reader should be able to interact with the entire community in one place, and it should be recognizable as a community.
  2. Decentralized from the author perspective. Diaspora authors seem to like having their own fiefdoms, and the social problem of "all the best posters went elsewhere" can't be solved without their cooperation. Therefore any technical solution must allow for it.
  3. Proper division of labor. Scott Alexander probably should not have to concern himself with user feature requests; that's not his comparative advantage and I'd rather he spend his time inventing moral cosmologies. I suspect he would prefer the same. The same goes for Eliezer Yudkowsky or any of our still-writing-elsewhere folks.
  4. Really good moderation tools.
  5. Easy entrance. New users should be able to join the discussion without a lot of hassle. Old authors that want to return should be able to do so and, preferably, bring their existing content with them.
  6. Easy exit. Authors who don't like the way the community is heading should be able to jump ship -- and, crucially, bring their content with them to their new ship. Conveniently. This is essentially what has happened, except old content is hostage here.
  7. Separate policy and mechanism within the site architecture. Let this one pass for now if you don't know what it means; it's the first big inferential hurdle I need to cross and I'll be starting soon enough.

As with the previous, I'll update this from the comments if necessary.

Getting There From Here

As I said at the start, I feel on firmer ground talking about technical issues than social ones. But I have to acknowledge one strong social opinion: I believe the greatest factor in Less Wrong's decline is the departure of our best authors for personal blogs. Any plan for revitalization has to provide an improved substitute for a personal blog, because that's where everyone seems to end up going. You need something that looks and behaves like a blog to the author or casual readers, but integrates seamlessly into a community discussion gateway.

I argue that this can be achieved. I argue that the technical challenges are solvable and the inherent coordination problem is also solvable, provided the people involved still have an interest in solving it.

And I argue that it can be done -- and done better than what we have now -- using technology that has existed since the '90s.

I don't argue that this actually will be achieved in anything like the way I think it ought to be. As mentioned up top, I am a crank, and I have no access whatsoever to anybody with any community pull. My odds of pushing through this agenda are basically nil. But we're all about crazy thought experiments, right?

This topic is something I've wanted to write about for a long time. Since it's not typical Less Wrong fare, I'll take the karma on this post as a referendum on whether the community would like to see it here.

Assuming there's interest, the sequence will look something like this (subject to reorganization as I go along, since I'm pulling this from some lengthy but horribly disorganized notes; in particular I might swap subsequences 2 and 3):

  1. Technical Architecture
    1. Your Web Browser Is Not Your Client
    2. Specialized Protocols: or, NNTP and its Bastard Children
    3. Moderation, Personal Gardens, and Public Parks
    4. Content, Presentation, and the Division of Labor
    5. The Proper Placement of User Features
    6. Hard Things that are Suddenly Easy: or, what does client control gain us?
    7. Your Web Browser Is Still Not Your Client (but you don't need to know that)
  2. Meta-Technical Conflicts (or, obstacles to adoption)
    1. Never Bet Against Convenience
    2. Conflicting Commenter, Author, and Admin Preferences
    3. Lipstick on the Configuration Pig
    4. Incremental Implementation and the Coordination Problem.
    5. Lowering Barriers to Entry and Exit
  3. Technical and Social Interoperability
    1. Benefits and Drawbacks of Standards
    2. Input Formats and Quoting Conventions
    3. Faking Functionality
    4. Why Reddit Makes Me Cry
    5. What NNTP Can't Do
  4. Implementation of Nonstandard Features
    1. Some desirable feature #1
    2. Some desirable feature #2
    3. ...etc. This subsequence is only necessary if someone actually wants to try and do what I'm arguing for, which I think unlikely.

(Meta-meta: This post was written in Markdown, converted to HTML for posting using Pandoc, and took around four hours to write. I can often be found lurking on #lesswrong or #slatestarcodex on workday afternoons if anyone wants to discuss it, but I don't promise to answer quickly because, well, workday)

[Edited to add: At +10/92% karma I figure continuing is probably worth it. After reading comments I'm going to try to slim it down a lot from the outline above, though. I still want to hit all those points but they probably don't all need a full post's space. Note that I'm not Scott or Eliezer, I write like I bleed, so what I do post will likely be spaced out]

Lesswrong 2016 Survey

29 Elo 30 March 2016 06:17PM

It’s time for a new survey!

Take the survey now


The details of the last survey can be found here.  And the results can be found here.

 

I posted a few weeks back asking for suggestions for questions to include on the survey.  As much as we’d like to include more of them, we all know what happens when we have too many questions. The following graph is from the last survey.


http://i.imgur.com/KFTn2Bt.png

(Source: JD’s analysis of 2014 survey data)


Two factors seem to predict if a question will get an answer:

  1. The position

  2. Whether people want to answer it. (Obviously)


People answer fewer questions as we approach the end. They also skip tricky questions. The least answered question on the last survey was “what is your favourite lw post, provide a link”, which I assume was mostly skipped because of the effort required either in generating a favourite or in finding a link to it.  The second most skipped questions were the digit-ratio questions, which require more work (get out a ruler and measure) compared to the others. This is unsurprising.


This year’s survey is almost the same size as the last one (though just a wee bit smaller).  Preliminary estimates suggest you should put aside 25 minutes to take the survey, however you can pause at any time and come back to the survey when you have more time.  If you’re interested in helping process the survey data please speak up either in a comment or a PM.


We’re focusing this year particularly on getting a glimpse of the size and shape of the LessWrong diaspora.  With that in mind, if possible, please make sure that your friends (who might be less connected but still hang around in associated circles) get a chance to see that the survey exists, and if you’re up to it, encourage them to fill out a copy of the survey.


The survey is hosted and managed by the team at FortForecast; you’ll be hearing more from them soon. The survey can be accessed through http://lesswrong.com/2016survey.


Survey responses are anonymous in that you’re not asked for your name. At the end we plan to do an opt-in public dump of the data. Before publication the row order will be scrambled, datestamps, IP addresses and any other non-survey question information will be stripped, and certain questions which are marked private such as the (optional) sign up for our mailing list will not be included. It helps the most if you say yes but we can understand if you don’t.  
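The scrubbing steps described above (scramble the row order, strip datestamps, IP addresses and other non-survey data, drop private columns) could be sketched roughly as follows. This is a hypothetical illustration, not the actual release pipeline, and the column names are invented:

```python
import random

# Hypothetical column names for the non-survey and private fields
# that would be stripped before a public data dump.
PRIVATE = {"timestamp", "ip_address", "mailing_list_signup"}

def scrub(rows, private=PRIVATE):
    """rows: list of dicts, one per survey response.
    Returns a copy safe for publication: shuffled, private fields removed."""
    rows = [dict(r) for r in rows]   # work on a copy of the raw data
    random.shuffle(rows)             # scramble row order in place
    return [{k: v for k, v in r.items() if k not in private} for r in rows]

raw = [
    {"timestamp": "2016-04-01", "ip_address": "10.0.0.1", "age": "29"},
    {"timestamp": "2016-04-02", "ip_address": "10.0.0.2", "age": "34"},
]
public = scrub(raw)
# Only survey answers survive in the published rows.
assert all(set(r) == {"age"} for r in public)
```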


Thanks to Namespace (JD) and the FortForecast team, the Slack, the #lesswrong IRC on freenode, and everyone else who offered help in putting the survey together, special thanks to Scott Alexander whose 2014 survey was the foundation for this one.


When answering the survey, I ask that you be helpful with the format of your answers if you want them to be useful. For example, if a question asks for a number, please reply with “4”, not “four”.  Going by the last survey we may very well get thousands of responses, and cleaning them all by hand will cost a fortune on Mechanical Turk. (And that’s for the ones we can put on Mechanical Turk!) Thanks for your consideration.
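To see why format matters, a cleaning pass of the kind implied above (accept “4” directly, recover “four” from a lookup table, flag everything else for a human) might look like this sketch; the function and table are hypothetical:

```python
# Hypothetical cleaning pass for free-text numeric survey answers:
# digits parse directly; a few number words are recoverable by table;
# anything else needs a human (or Mechanical Turk) to look at it.
WORDS = {"zero": 0, "one": 1, "two": 2, "three": 3, "four": 4,
         "five": 5, "six": 6, "seven": 7, "eight": 8, "nine": 9, "ten": 10}

def parse_count(answer):
    """Return an int if the answer is machine-recoverable, else None."""
    text = answer.strip().lower()
    if text.isdigit():
        return int(text)
    return WORDS.get(text)

assert parse_count("4") == 4
assert parse_count("Four") == 4
assert parse_count("a few") is None   # off to the cleaning queue
```

Every answer that falls through to `None` is one more row of hand cleaning, which is exactly the cost the paragraph above is asking respondents to spare the analysts.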

 

The survey will be open until the 1st of May 2016

 


Addendum from JD at FortForecast: During user testing we’ve encountered reports of an error some users get when they try to take the survey which erroneously reports that our database is down. We think we’ve finally stamped it out but this particular bug has proven resilient. If you get this error and still want to take the survey here are the steps to mitigate it:

 

  1. Refresh the survey, it will still be broken. You should see a screen with question titles but no questions.

  2. Press the “Exit and clear survey” button, this will reset your survey responses and allow you to try again fresh.

  3. Rinse and repeat until you manage to successfully answer the first two questions and move on. It usually doesn’t take more than one or two tries. We haven’t received reports of the bug occurring past this stage.


If you encounter this please mail jd@fortforecast.com with details. Screenshots would be appreciated but if you don’t have the time just copy and paste the error message you get into the email.

 

Take the survey now


Meta - this took 2 hours to write and was reviewed by the slack.


My Table of contents can be found here.

The Brain Preservation Foundation's Small Mammalian Brain Prize won

43 gwern 09 February 2016 09:02PM

The Brain Preservation Foundation’s Small Mammalian Brain Prize has been won with fantastic preservation of a whole rabbit brain using a new fixative+slow-vitrification process.

  • BPF announcement (21CM’s announcement)
  • evaluation
  • The process was published as “Aldehyde-stabilized cryopreservation”, McIntyre & Fahy 2015 (mirror)

    We describe here a new cryobiological and neurobiological technique, aldehyde-stabilized cryopreservation (ASC), which demonstrates the relevance and utility of advanced cryopreservation science for the neurobiological research community. ASC is a new brain-banking technique designed to facilitate neuroanatomic research such as connectomics research, and has the unique ability to combine stable long term ice-free sample storage with excellent anatomical resolution. To demonstrate the feasibility of ASC, we perfuse-fixed rabbit and pig brains with a glutaraldehyde-based fixative, then slowly perfused increasing concentrations of ethylene glycol over several hours in a manner similar to techniques used for whole organ cryopreservation. Once 65% w/v ethylene glycol was reached, we vitrified brains at −135 °C for indefinite long-term storage. Vitrified brains were rewarmed and the cryoprotectant removed either by perfusion or gradual diffusion from brain slices. We evaluated ASC-processed brains by electron microscopy of multiple regions across the whole brain and by Focused Ion Beam Milling and Scanning Electron Microscopy (FIB-SEM) imaging of selected brain volumes. Preservation was uniformly excellent: processes were easily traceable and synapses were crisp in both species. Aldehyde-stabilized cryopreservation has many advantages over other brain-banking techniques: chemicals are delivered via perfusion, which enables easy scaling to brains of any size; vitrification ensures that the ultrastructure of the brain will not degrade even over very long storage times; and the cryoprotectant can be removed, yielding a perfusable aldehyde-preserved brain which is suitable for a wide variety of brain assays…We have shown that both rabbit brains (10 g) and pig brains (80 g) can be preserved equally well. We do not anticipate that there will be significant barriers to preserving even larger brains such as bovine, canine, or primate brains using ASC.

    (They had problems with 2 pigs and got 1 pig brain successfully cryopreserved but it wasn’t part of the entry. I’m not sure why: is that because the Large Mammalian Brain Prize is not yet set up?)
  • previous discussion: Mikula’s plastination came close but ultimately didn’t seem to preserve the whole brain when applied.
  • commentary: Alcor, Robin Hanson, John Smart, Evidence-Based Cryonics, Vice, Pop Sci
  • donation link

To summarize it, you might say that this is a hybrid of current plastination and vitrification methods, where instead of allowing slow plastination (with unknown decay & loss) or forcing fast cooling (with unknown damage and loss), a staged approach is taken: a fixative is injected into the brain first to immediately lock down all proteins and stop all decay/change, and then it is leisurely cooled down to be vitrified.

This is exciting progress because the new method may wind up preserving better than either of the parent methods, but also because it gives much greater visibility into the end-results: the aldehyde-vitrified brains can be easily scanned with electron microscopes and the results seen in high detail, showing fantastic preservation of structure, unlike regular vitrification where the scans leave opaque how good the preservation was. This opacity is one reason that, as Mike Darwin has pointed out at length on his blog and jkaufman has also noted, we cannot be confident in how well ALCOR or CI’s vitrification works - because if it didn’t, we have little way of knowing.

EDIT: BPF’s founder Ken Hayworth (Reddit account) has posted a piece, arguing that ALCOR & CI cannot be trusted to do procedures well and that future work should be done via rigorous clinical trials and only then rolled out. “Opinion: The prize win is a vindication of the idea of cryonics, not of unaccountable cryonics service organizations”

…“Should cryonics service organizations immediately start offering this new ASC procedure to their ‘patients’?” My personal answer (speaking for myself, not on behalf of the BPF) has been a steadfast NO. It should be remembered that these same cryonics service organizations have been offering a different procedure for years. A procedure that was not able to demonstrate, to even my minimal expectations, preservation of the brain’s neural circuitry. This result, I must say, surprised and disappointed me personally, leading me to give up my membership in one such organization and to become extremely skeptical of all since. Again, I stress, current cryonics procedures were NOT able to meet our challenge EVEN UNDER IDEAL LABORATORY CONDITIONS despite being offered to paying customers for years[1]. Should we really expect that these same organizations can now be trusted to further develop and properly implement such a new, independently-invented technique for use under non-ideal conditions?

Let’s step back for a moment. A single, independently-researched, scientific publication has come out that demonstrates a method of structural brain preservation (ASC) compatible with long-term cryogenic storage in animal models (rabbit and pig) under ideal laboratory conditions (i.e. a healthy living animal immediately being perfused with fixative). Should this one paper instantly open the floodgates to human application? Under untested real-world conditions where the ‘patient’ is either terminally ill or already declared legally dead? Should it be performed by unlicensed persons, in unaccountable organizations, operating outside of the traditional medical establishment with its checks and balances designed to ensure high standards of quality and ethics? To me, the clear answer is NO. If this was a new drug for cancer therapy, or a new type of heart surgery, many additional steps would be expected before even clinical trials could start. Why should our expectations be any lower for this?

The fact that the ASC procedure has won the brain preservation prize should rightly be seen as a vindication of the central idea of cryonics –the brain’s delicate circuitry underlying memory and personality CAN in fact be preserved indefinitely, potentially serving as a lifesaving bridge to future revival technologies. But, this milestone should certainly not be interpreted as a vindication of the very different cryonics procedures that are practiced on human patients today. And it should not be seen as a mandate for more of the same but with an aldehyde stabilization step casually tacked on. …

Anxiety and Rationality

32 helldalgo 19 January 2016 06:30PM

Recently, someone on the Facebook page asked if anyone had used rationality to target anxieties.  I have, so I thought I’d share my LessWrong-inspired strategies.  This is my first post, so feedback and formatting help are welcome.  

First things first: the techniques developed by this community are not a panacea for mental illness.  They are way more effective than chance and other tactics at reducing normal bias, and I think many mental illnesses are simply cognitive biases that are extreme enough to get noticed.  In other words, getting a probability question about cancer systematically wrong does not disrupt my life enough to make the error obvious.  When I believe (irrationally) that I will get fired because I asked for help at work, my life is disrupted.  I become non-functional, and the error is clear.

Second: the best way to attack anxiety is to do the things that make your anxieties go away.  That might seem too obvious to state, but I’ve definitely been caught in an “analysis loop,” where I stay up all night reading self-help guides only to find myself non-functional in the morning because I didn’t sleep.  If you find that attacking an anxiety with Bayesian updating is like chopping down the Washington monument with a spoon, but getting a full night’s sleep makes the monument disappear completely, consider the sleep.  Likewise for techniques that have little to no scientific evidence, but are a good placebo.  A placebo effect is still an effect.

Finally, like all advice, this comes with Implicit Step Zero:  “Have enough executive function to give this a try.”  If you find yourself in an analysis loop, you may not yet have enough executive function to try any of the advice you read.  The advice for functioning better is not always identical to the advice for functioning at all.  If there’s interest in an “improving your executive function” post, I’ll write one eventually.  It will be late, because my executive function is not impeccable.

Simple updating is my personal favorite for attacking specific anxieties.  A general sense of impending doom is a very tricky target and does not respond well to reality.  If you can narrow it down to a particular belief, however, you can amass evidence against it. 

Returning to my example about work: I alieved that I would get fired if I asked for help or missed a day due to illness.  The distinction between believe and alieve is an incredibly useful tool that I immediately integrated when I heard of it.  Learning to make beliefs pay rent is much easier than making harmful aliefs go away.  The tactics are similar: do experiments, make predictions, throw evidence at the situation until you get closer to reality.  Update accordingly.  

The first thing I do is identify the situation and why it’s dysfunctional.  The alief that I’ll get fired for asking for help is not actually articulated when it manifests as an anxiety.  Ask me in the middle of a panic attack, and I still won’t articulate that I am afraid of getting fired.  So I take the anxiety all the way through to its implication.  The algorithm is something like this:

  1. Notice sense of doom
  2. Notice my avoidance behaviors (not opening my email, walking away from my desk)
  3. Ask "What am I afraid of?"
  4. Answer (it's probably silly)
  5. Ask "What do I think will happen?"
  6. Make a prediction about what will happen (usually the prediction is implausible, which is why we want it to go away in the first place)

In the “asking for help” scenario, the answer to “what do I think will happen” is implausible.  It’s extremely unlikely that I’ll get fired for it!  This helps take the gravitas out of the anxiety, but it does not make it go away.*  After (6), it’s usually easy to do an experiment.  If I ask my coworkers for help, will I get fired?  The only way to know is to try. 

…That’s actually not true, of course.  A sense of my environment, my coworkers, and my general competence at work should be enough.  But if it was, we wouldn’t be here, would we?

So I perform the experiment.  And I wait.  When I receive a reply of any sort, even if it’s negative, I make a tick mark on a sheet of paper.  I label it “didn’t get fired.”  Because again, even if it’s negative, I didn’t get fired. 

This takes a lot of tick marks.  Cutting down the Washington monument with a spoon, remember?
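The tick-mark routine is informal Bayesian updating, and it helps to see why it needs so many marks. Here is a minimal sketch of the idea; the prior counts are made up for illustration and are not from the post:

```python
# Hedged sketch: treat "I'll get fired if I ask for help" as a probability
# and update it with each "asked for help, didn't get fired" tick mark.
# Beta-Binomial model with illustrative (not real) prior counts.

def posterior_p_fired(prior_fired, prior_safe, tick_marks):
    """Posterior probability of 'fired' after observing tick_marks safe outcomes."""
    return prior_fired / (prior_fired + prior_safe + tick_marks)

# An anxious prior: the alief says firing feels 50/50.
p_start = posterior_p_fired(1, 1, 0)     # 0.5
p_after_20 = posterior_p_fired(1, 1, 20)  # 1/22, roughly 0.045
print(p_start, p_after_20)
```

Under this toy model the feared probability falls only like 1/(n+2) as the tick marks accumulate, which is one way of seeing why a strong alief takes a spoon, not a chainsaw.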

The tick marks don't have to be physical.  I prefer physical ones, because they make the "updating" process visual.  I've tried making a mental note and it's not nearly as effective.  Play around with it, though.  If you're anything like me, you have a lot of anxieties to experiment with. 

Usually, the anxiety starts to dissipate after obtaining several tick marks.  Ideally, one iteration of experiments should solve the problem.  But we aren’t ideal; we’re mentally ill.  Depending on the severity of the anxiety, you may need someone to remind you that doom will not occur.  I occasionally panic when I have to return to work after taking a sick day.  I ask my husband to remind me that I won’t get fired.  I ask him to remind me that he’ll still love me if I do get fired.  If this sounds childish, it’s because it is.  Again: we’re mentally ill.  Even if you aren’t, however, assigning value judgements to essentially harmless coping mechanisms does not make sense.  Childish-but-helpful is much better than mature-and-harmful, if you have to choose.

I still have tiny ugh fields around my anxiety triggers.  They don’t really go away.  It’s more like learning not to hit someone you’re angry at.  You notice the impulse, accept it, and move on.  Hopefully, your harmful alief starves to death.

If you perform your experiment and doom does occur, it might not be you.  If you can’t ask your boss for help, it might be your boss.  If you disagree with your spouse and they scream at you for an hour, it might be your spouse.  This isn’t an excuse to blame your problems on the world, but abusive situations can be sneaky.  Ask some trusted friends for a sanity check, if you’re performing experiments and getting doom as a result.  This is designed for situations where your alief is obviously silly.  Where you know it’s silly, and need to throw evidence at your brain to internalize it.  It’s fine to be afraid of genuinely scary things; if you really are in an abusive work environment, maybe you shouldn’t ask for help (and start looking for another job instead). 

 

 

*After using this technique for several months, the anxiety occasionally stops immediately after step 6.  

[moderator action] The_Lion and The_Lion2 are banned

51 Viliam_Bur 30 January 2016 02:09AM

Accounts "The_Lion" and "The_Lion2" are banned now. Here is some background, mostly for the users who weren't here two years ago:

 

User "Eugine_Nier" was banned for retributive downvoting in July 2014. He keeps returning to the website using new accounts, such as "Azathoth123", "Voiceofra", "The_Lion", and he keeps repeating the behavior that got him banned originally.

The original ban was permanent. It will be enforced on all future known accounts of Eugine. (At random moments, because moderators sometimes feel too tired to play whack-a-mole.) This decision is not open to discussion.

 

Please note that the moderators of LW are the opposite of trigger-happy. Not counting spam, on average fewer than one account per year is banned. I am writing this explicitly to avoid possible misunderstanding among new users. Just because you have read about someone being banned doesn't mean that you are now at risk.

Most of the time, LW discourse is regulated by the community voting on articles and comments. Stupid or offensive comments get downvoted; you lose some karma, then everyone moves on. In rare cases, moderators may remove specific content that goes against the rules. The account ban is only used in the extreme cases (plus for obvious spam accounts). Specifically, on LW people don't get banned for merely not understanding something or disagreeing with someone.

 

What does "retributive downvoting" mean? Imagine that in a discussion you write a comment that someone disagrees with. Then in a few hours you will find that your karma has dropped by hundreds of points, because someone went through your entire comment history and downvoted all comments you ever wrote on LW; most of them completely unrelated to the debate that "triggered" the downvoter.

Such behavior is damaging to the debate and the community. Unlike downvoting a specific comment, this kind of mass downvoting isn't used to correct a faux pas, but to drive a person away from the website. It has an especially strong impact on new users, who don't know what is going on, so they may mistake it for a reaction from the whole community. But even for experienced users it creates an "ugh field" around certain topics known to invoke the reaction. Thus a single user achieves disproportionate control over the content and the user base of the website. This is not desired, and will be punished by the site owners and the moderators.

To avoid rules lawyering, there is no exact definition of how much downvoting breaks the rules. The rule of thumb is that you should upvote or downvote each comment based on the value of that specific comment. You shouldn't vote on the comments regardless of their content merely because they were written by a specific user.

Omega's Idiot Brother, Epsilon

3 OrphanWilde 25 November 2015 07:57PM

Epsilon walks up to you with two boxes, A and B, labeled in rather childish-looking crayon handwriting.

"In box A," he intones, sounding like he's trying to be foreboding, which might work better when he hits puberty, "I may or may not have placed a million of your human dollars."  He pauses for a moment, then nods.  "Yes.  I may or may not have placed a million dollars in this box.  If I expect you to open Box B, the million dollars won't be there.  Box B will contain, regardless of what you do, one thousand dollars.  You may choose to take one box, or both; I will leave with any boxes you do not take."

You've been anticipating this.  He's appeared to around twelve thousand people so far.  Out of the eight thousand people who accepted both boxes, eighty found the million dollars missing and walked away with $1,000; the other seven thousand nine hundred and twenty walked away with $1,001,000.  Out of the four thousand people who opened only box A, only four found it empty.

The agreement is unanimous: Epsilon is really quite bad at this.  So, do you one-box, or two-box?


There are some important differences here from the original problem.  First, Epsilon won't let you open either box until you've decided whether to open one or both, and will leave with the other box.  Second, while Epsilon's false positive rate in identifying two-boxers is quite impressive - he wrongly penalizes one-boxers only .1% of the time - his false negative rate is quite unimpressive: he catches only 1% of two-boxers.  Whatever heuristic he's using, he clearly prefers letting two-boxers slide to accidentally punishing one-boxers.
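Taking the track record above at face value, a quick expected-value check (a sketch using only the frequencies given in the scenario) shows that even this error-prone predictor leaves one-boxing ahead:

```python
# Empirical expected values from Epsilon's track record:
# 8000 two-boxers: 7920 got $1,001,000, 80 got only $1,000.
# 4000 one-boxers: 3996 got $1,000,000, 4 got nothing.
two_box_ev = (7920 * 1_001_000 + 80 * 1_000) / 8000
one_box_ev = (3996 * 1_000_000 + 4 * 0) / 4000
print(two_box_ev)  # 991000.0
print(one_box_ev)  # 999000.0
```

So by the observed frequencies, one-boxing is worth about $8,000 more in expectation, despite Epsilon's sloppiness.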

I'm curious to know whether anybody would two-box in this scenario and why, and particularly curious in the reasoning of anybody whose answer is different between the original Newcomb problem and this one.

We’ll write you a will for free if you leave a gift to GiveWell’s top charities

4 peter_hurford 16 October 2015 03:33AM

Would you like to leave money in your will to GiveWell's top-rated charities at the time of your passing? If so, Charity Science will help you write it for free.

To make it as easy as possible for you, we at Charity Science have made a simple form that takes as little as 5 minutes to complete. After that you come out with a ready-made will. And don't worry if you're not sure what to put in it; it's easy to change, and you can always come back to it later. So give it a shot here. The default option should be to set one up just in case something terrible does happen; that way you always have something ready.

A few more reasons to take the time to write a will include:

  • Reducing the inheritance tax incurred; leaving money to charity is an excellent way to do so.

  • Making provisions for your children if you have any, for example by choosing who will take care of them and setting aside funds for this.

  • Making any other necessary provisions, such as for your pets, or your business, or other responsibilities that you have.

  • Specifying what sort of funeral you would like, which will spare your family from having to make the decision.

  • Naming your executors for your will (family members are a standard choice).


But most of all it’s because you have the incredible opportunity to do an epic amount of good.


You can set it up here. After that consider talking to your friends, parents and grandparents to see if they would be interested in doing the same. It’s really important you mention it because the average amount left to charities in a will is in the thousands of dollars so a few words may go a very long way.


If this doesn’t appeal to you then there are other things that you could do. You can always run a fundraiser for Christmas, your Birthday or any event you like.

Less Wrong EBook Creator

45 ScottL 13 August 2015 09:17PM

I read a lot on my kindle and I noticed that some of the sequences aren't available in book form. Also, the ones that are available mostly include only the posts. I personally want them to also include some of the high-ranking comments and summaries. So I wrote this tool to automatically create books from a set of posts. It creates the book based on the information you give it in an excel file. The excel file contains:

Post information

  • Book name
  • Sequence name
  • Title
  • Link
  • Summary description

Sequence information

  • Name
  • Summary

Book information

  • Name
  • Summary

The only compulsory component is the link to the post.

I have used the tool to create books for Living Luminously, No-Nonsense Metaethics, Rationality: From AI to Zombies, Benito's Guide, and more. You can see them in the examples folder in this github link. The tool just creates epub books; you can use calibre or a similar tool to convert them to another format.  
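The spreadsheet-driven data model described above can be sketched roughly as follows. This is an illustration, not the tool's actual code: it uses a CSV stand-in for the Excel file, the column names are assumed, and only the post link is treated as compulsory, as in the tool:

```python
# Hedged sketch of the book/sequence/post structure driven by a spreadsheet.
# CSV stands in for Excel; column names are illustrative assumptions.
import csv
import io

rows = io.StringIO(
    "book,sequence,title,link,summary\n"
    "Example Book,Example Sequence,Post One,http://lesswrong.com/lw/1/,A summary\n"
    ",,,http://lesswrong.com/lw/2/,\n"
)

books = {}
for row in csv.DictReader(rows):
    if not row["link"]:  # the link is the only compulsory field
        continue
    book = books.setdefault(row["book"] or "Untitled", {})
    sequence = book.setdefault(row["sequence"] or "Unsorted", [])
    sequence.append({"title": row["title"], "link": row["link"],
                     "summary": row["summary"]})

print(len(books))  # 2: "Example Book" plus an "Untitled" fallback
```

A generator would then walk this nested structure (book, then sequence, then post), fetch each link, and emit one epub per book.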
