

[moderator action] The_Lion and The_Lion2 are banned

51 Viliam_Bur 30 January 2016 02:09AM

Accounts "The_Lion" and "The_Lion2" are banned now. Here is some background, mostly for the users who weren't here two years ago:

 

User "Eugine_Nier" was banned for retributive downvoting in July 2014. He keeps returning to the website using new accounts, such as "Azathoth123", "Voiceofra", "The_Lion", and he keeps repeating the behavior that got him banned originally.

The original ban was permanent. It will be enforced on all future known accounts of Eugine. (At random moments, because moderators sometimes feel too tired to play whack-a-mole.) This decision is not open to discussion.

 

Please note that the moderators of LW are the opposite of trigger-happy. Not counting spam, on average fewer than one account per year is banned. I am writing this explicitly to avoid possible misunderstanding among new users: just because you have read about someone being banned doesn't mean that you are now at risk.

Most of the time, LW discourse is regulated by the community voting on articles and comments. Stupid or offensive comments get downvoted; you lose some karma, then everyone moves on. In rare cases, moderators may remove specific content that goes against the rules. An account ban is used only in extreme cases (plus for obvious spam accounts). Specifically, on LW people don't get banned for merely not understanding something or disagreeing with someone.

 

What does "retributive downvoting" mean? Imagine that in a discussion you write a comment that someone disagrees with. Then in a few hours you will find that your karma has dropped by hundreds of points, because someone went through your entire comment history and downvoted all comments you ever wrote on LW; most of them completely unrelated to the debate that "triggered" the downvoter.

Such behavior is damaging to the debate and the community. Unlike downvoting a specific comment, this kind of mass downvoting isn't used to correct a faux pas, but to drive a person away from the website. It has an especially strong impact on new users, who don't know what is going on and may mistake it for a reaction of the whole community. But even for experienced users it creates an "ugh field" around certain topics known to invoke the reaction. Thus a single user achieves disproportionate control over the content and the user base of the website. This is not desired, and will be punished by the site owners and the moderators.

To avoid rules lawyering, there is no exact definition of how much downvoting breaks the rules. The rule of thumb is that you should upvote or downvote each comment based on the value of that specific comment. You shouldn't vote on comments, regardless of their content, merely because they were written by a specific user.

Upcoming LW Changes

42 Vaniver 03 February 2016 05:34AM

Thanks to the reaction to this article and some conversations, I'm convinced that it's worth trying to renovate and restore LW. Eliezer, Nate, and Matt Fallshaw are all on board and have empowered me as an editor to see what we can do about reshaping LW to meet what the community currently needs. This involves a combination of technical changes and social changes, which we'll try to make transparently and non-intrusively.

continue reading »

Require contributions in advance

38 Viliam 08 February 2016 12:55PM

If you are a person who finds it difficult to say "no" to their friends, this one weird trick may save you a lot of time!

 

Scenario 1

Alice: "Hi Bob! You are a programmer, right?"

Bob: "Hi Alice! Yes, I am."

Alice: "I have this cool idea, but I need someone to help me. I am not good with computers, and I need someone smart whom I could trust, so they wouldn't steal my idea. Would you have a moment to listen to me?"

Alice explains to Bob her idea that would completely change the world. Well, at least the world of bicycle shopping.

Instead of having many shops for bicycles, there could be one huge e-shop that would collect all the information about bicycles from all the existing shops. The customers would specify what kind of bike they want (and where they live), and the system would find all bikes that fit the specification and display them ordered by lowest price, including the price of delivery; then it would redirect them to the specific page of the specific vendor. Customers would love to use this one website instead of having to visit multiple shops and compare. And the vendors would have to use this shop, because that's where the customers would be. Taking a fraction of a percent from the sales could make Alice (and also Bob, if he helps her) incredibly rich.

Bob is skeptical about it. The project suffers from the obvious chicken-and-egg problem: without vendors already there, the customers will not come (and if they come by accident, they will quickly leave, never to return); and without customers already there, there is no reason for the vendors to cooperate. There are a few ways to approach this problem, but the fact that Alice didn't even think about it is a red flag. She also has no idea who the big players in the world of bicycle selling are; generally, she didn't do her homework. But even after hearing all these objections, Alice remains super enthusiastic about the project. She promises she will take care of everything -- she just cannot write code, and she needs Bob's help for this part.

Bob believes strongly in the division of labor, and that friends should help each other. He considers Alice his friend, and he will likely need some help from her in the future. The fact is, with a perfect specification, he could make the webpage in a week or two. But he considers bicycles an extremely boring topic, so he wants to spend as little time as possible on this project. Finally, he has an idea:

"Okay, Alice, I will make the website for you. But first I need to know exactly how the page will look like, so that I don't have to keep changing it over and over again. So here is the homework for you -- take a pen and paper, and make a sketch of how exactly the web will look like. All the dialogs, all the buttons. Don't forget logging in and logging out, editing the customer profile, and everything else that is necessary for the website to work as intended. Just look at the papers and imagine that you are the customer: where exactly would you click to register, and to find the bicycle you want? Same for the vendor. And possibly a site administrator. Also give me the list of criteria people will use to find the bike they want. Size, weight, color, radius of wheels, what else? And when you have it all ready, I will make the first version of the website. But until then, I am not writing any code."

Alice leaves, satisfied with the outcome.

 

This happened a year ago.

No, Alice doesn't have the design ready, yet. Once in a while, when she meets Bob, she smiles at him and apologizes that she didn't have the time to start working on the design. Bob smiles back and says it's okay, he'll wait. Then they change the topic.

 

Scenario 2

Cyril: "Hi Diana! You speak Spanish, right?"

Diana: "Hi Cyril! Yes, I do."

Cyril: "You know, I think Spanish is the most cool language ever, and I would really love to learn it! Could you please give me some Spanish lessons, once in a while? I totally want to become fluent in Spanish, so I could travel to Spanish-speaking countries and experience their culture and food. Would you please help me?"

Diana is happy that someone takes interest in her favorite hobby. It would be nice to have someone around she could practice Spanish conversation with. The first instinct is to say yes.

But then she remembers (she has known Cyril for some time; they have a lot of friends in common, so they meet quite regularly) that Cyril is always super enthusiastic about something he is totally going to do... but when she meets him next time, he is super enthusiastic about something completely different; and she has never heard of him doing anything serious about his previous dreams.

Also, Cyril seems to seriously underestimate how much time it takes to learn a foreign language fluently. Some lessons once in a while will not do it. He also needs to study on his own -- preferably every day, but twice a week is probably the minimum if he hopes to speak the language fluently within a year. Diana would be happy to teach someone Spanish, but not if her effort will most likely be wasted.

Diana: "Cyril, there is this great website called Duolingo, where you can learn Spanish online completely free. If you give it about ten minutes every day, maybe after a few months you will be able to speak fluently. And anytime we meet, we can practice the vocabulary you have already learned."

This would be the best option for Diana. No work, and another opportunity to practice. But Cyril insists:

"It's not the same without the live teacher. When I read something from the textbook, I cannot ask additional questions. The words that are taught are often unrelated to the topics I am interested in. I am afraid I will just get stuck with the... whatever was the website that you mentioned."

For Diana this feels like a red flag. Sure, textbooks are not optimal. They contain many words that the student will not use frequently and will soon forget. On the other hand, the grammar is always useful; and Diana doesn't want to waste her time explaining the basic grammar that any textbook could explain instead. If Cyril learns the grammar and some basic vocabulary, then she can teach him all the specialized vocabulary he is interested in. But now it feels like Cyril wants to avoid all work. She has to draw a line:

"Cyril, this is the address of the website." She takes his notebook and writes 'www.duolingo.com'. "You register there, choose Spanish, and click on the first lesson. It is interactive, and it will not take you more than ten minutes. If you get stuck there, write here what exactly it was that you didn't understand; I will explain it when we meet. If there is no problem, continue with the second lesson, and so on. When we meet next time, tell me which lessons you have completed, and we will talk about them. Okay?"

Cyril nods reluctantly.

 

This happened a year ago.

Cyril and Diana have met repeatedly during the year, but Cyril never brought up the topic of Spanish language again.

 

Scenario 3

Erika: "Filip, would you give me a massage?"

Filip: "Yeah, sure. The lotion is in the next room; bring it to me!"

Erika brings the massage lotion and lies on the bed. Filip massages her back. Then they make out and have sex.

 

This happened a year ago. Erika and Filip are still a happy couple.

Filip's previous relationships didn't work well in the long term. In retrospect, they all followed a similar scenario. At the beginning, everything seemed great. Then at some moment the girl started acting... unreasonably?... asking Filip to do various things for her, and then acting annoyed when Filip did exactly what he was asked to do. This happened more and more frequently, and at some moment she broke up with him. Sometimes she provided an explanation for breaking up that Filip was unable to decipher.

Filip has a friend who is a successful salesman. Successful both professionally and with women. When Filip admitted to himself that he was unable to solve the problem on his own, he asked his friend for advice.

"It's because you're a f***ing doormat," said the friend. "The moment a woman asks you to do anything, you immediately jump and do it, like a well-trained puppy. Puppies are cute, but not attractive. Have you ready any of those books I sent you, like, ten years ago? I bet you didn't. Well, it's all there."

Filip sighed: "Look, I'm not trying to become a pick-up artist. Or a salesman. Or anything. No offense, but I'm not like you, personality-wise, I never have been, and I don't want to become your - or anyone else's - copy. Even if it meant greater success in anything. I prefer to treat other people just like I would want them to treat me. Most people reciprocate nice behavior; and those who don't, well, I avoid them as much as possible. This works well with my friends. It also works with the girls... at the beginning... but then somehow... uhm... Anyway, all your books are about manipulating people, which is ethically unacceptable for me. Isn't there some other way?"

"All human interaction is manipulation; the choice is between doing it right or wrong, acting consciously or driven by your old habits..." started the friend, but then he gave up. "Okay, I see you're not interested. Just let me show you the most obvious mistake you make. You believe that when you are nice to people, they will perceive you as nice, and most of them will reciprocate. And when you act like an asshole, it's the other way round. That's correct, on some level; and in a perfect world this would be the whole truth. But on a different level, people also perceive nice behavior as weakness; especially if you do it habitually, as if you don't have any other option. And being an asshole obviously signals strength: you are not afraid to make other people angry. Also, in long term, people become used to your behavior, good or bad. The nice people don't seem so nice anymore, but they still seem weak. Then, ironicaly, if the person well-known to be nice refuses to do something once, people become really angry, because their expectations were violated. And if the asshole decides to do something nice once, they will praise him, because he surprised them pleasantly. You should be an asshole once in a while, to make people see that you have a choice, so they won't take your niceness for granted. Or if your girlfriend wants something from you, sometimes just say no, even if you could have done it. She will respect you more, and then she will enjoy more the things you do for her."

Filip: "Well, I... probably couldn't do that. I mean, what you say seems to make sense, however much I hate to admit it. But I can't imagine doing it myself, especially to a person I love. It's just... uhm... wrong."

"Then, I guess, the very least you could do is to ask her to do something for you first. Even if it's symbolic, that doesn't matter; human relationships are mostly about role-playing anyway. Don't jump immediately when you are told to; always make her jump first, if only a little. That will demonstrate strength without hurting anyone. Could you do that?"

Filip wasn't sure, but at the next opportunity he tried it, and it worked. And it kept working. Maybe it was all just a coincidence, maybe it was a placebo effect, but Filip doesn't mind. At first it felt kinda artificial, but then it became natural. And later, to his surprise, Filip realized that practicing these symbolic demands actually made it easier to ask when he really needed something. (In which case he was sometimes asked to do something first, because his girlfriend -- knowingly or not? he never had the courage to ask -- copied the pattern; or maybe she had known it all along. But he didn't mind that either.)

 

The lesson is: If you find yourself repeatedly in situations where people ask you to do something for them, but in the end they don't seem to appreciate what you did for them, or don't even care about the thing they asked you to do... and yet you find it difficult to say "no"... ask them to contribute to the project first.

This will help you get rid of the projects they don't care about (including the ones they think they care about in far mode, but do not care about enough to actually work on them in near mode) without being the one who refuses cooperation. Also, the act of asking the other person to contribute, after being asked to do something for them, mitigates the status loss inherent in working for them.

Anxiety and Rationality

31 helldalgo 19 January 2016 06:30PM

Recently, someone on the Facebook page asked if anyone had used rationality to target anxieties.  I have, so I thought I’d share my LessWrong-inspired strategies.  This is my first post, so feedback and formatting help are welcome.  

First things first: the techniques developed by this community are not a panacea for mental illness.  They are way more effective than chance and other tactics at reducing normal bias, and I think many mental illnesses are simply cognitive biases that are extreme enough to get noticed.  In other words, getting a probability question about cancer systematically wrong does not disrupt my life enough to make the error obvious.  When I believe (irrationally) that I will get fired because I asked for help at work, my life is disrupted.  I become non-functional, and the error is clear.

Second: the best way to attack anxiety is to do the things that make your anxieties go away.  That might seem too obvious to state, but I’ve definitely been caught in an “analysis loop,” where I stay up all night reading self-help guides only to find myself non-functional in the morning because I didn’t sleep.  If you find that attacking an anxiety with Bayesian updating is like chopping down the Washington monument with a spoon, but getting a full night’s sleep makes the monument disappear completely, consider the sleep.  Likewise for techniques that have little to no scientific evidence, but are a good placebo.  A placebo effect is still an effect.

Finally, like all advice, this comes with Implicit Step Zero:  “Have enough executive function to give this a try.”  If you find yourself in an analysis loop, you may not yet have enough executive function to try any of the advice you read.  The advice for functioning better is not always identical to the advice for functioning at all.  If there’s interest in an “improving your executive function” post, I’ll write one eventually.  It will be late, because my executive function is not impeccable.

Simple updating is my personal favorite for attacking specific anxieties.  A general sense of impending doom is a very tricky target and does not respond well to reality.  If you can narrow it down to a particular belief, however, you can amass evidence against it. 

Returning to my example about work: I alieved that I would get fired if I asked for help or missed a day due to illness.  The distinction between believe and alieve is an incredibly useful tool that I immediately integrated when I heard of it.  Learning to make beliefs pay rent is much easier than making harmful aliefs go away.  The tactics are similar: do experiments, make predictions, throw evidence at the situation until you get closer to reality.  Update accordingly.  

The first thing I do is identify the situation and why it’s dysfunctional.  The alief that I’ll get fired for asking for help is not actually articulated when it manifests as an anxiety.  Ask me in the middle of a panic attack, and I still won’t articulate that I am afraid of getting fired.  So I take the anxiety all the way through to its implication.  The algorithm is something like this:

  1. Notice sense of doom
  2. Notice my avoidance behaviors (not opening my email, walking away from my desk)
  3. Ask “What am I afraid of?”
  4. Answer (it's probably silly)
  5. Ask “What do I think will happen?”
  6. Make a prediction about what will happen (usually the prediction is implausible, which is why we want it to go away in the first place)

In the “asking for help” scenario, the answer to “what do I think will happen” is implausible.  It’s extremely unlikely that I’ll get fired for it!  This helps take the gravitas out of the anxiety, but it does not make it go away.*  After (6), it’s usually easy to do an experiment.  If I ask my coworkers for help, will I get fired?  The only way to know is to try. 

…That’s actually not true, of course.  A sense of my environment, my coworkers, and my general competence at work should be enough.  But if it was, we wouldn’t be here, would we?

So I perform the experiment.  And I wait.  When I receive a reply of any sort, even if it’s negative, I make a tick mark on a sheet of paper.  I label it “didn’t get fired.”  Because again, even if it’s negative, I didn’t get fired. 

This takes a lot of tick marks.  Cutting down the Washington monument with a spoon, remember?

The tick marks don’t have to be physical.  I prefer it, because it makes the “updating” process visual.  I’ve tried making a mental note and it’s not nearly as effective.  Play around with it, though.  If you’re anything like me, you have a lot of anxieties to experiment with. 

Usually, the anxiety starts to dissipate after obtaining several tick marks.  Ideally, one iteration of experiments should solve the problem.  But we aren’t ideal; we’re mentally ill.  Depending on the severity of the anxiety, you may need someone to remind you that doom will not occur.  I occasionally panic when I have to return to work after taking a sick day.  I ask my husband to remind me that I won’t get fired.  I ask him to remind me that he’ll still love me if I do get fired.  If this sounds childish, it’s because it is.  Again: we’re mentally ill.  Even if you aren’t, however, assigning value judgements to essentially harmless coping mechanisms does not make sense.  Childish-but-helpful is much better than mature-and-harmful, if you have to choose.

I still have tiny ugh fields around my anxiety triggers.  They don’t really go away.  It’s more like learning not to hit someone you’re angry at.  You notice the impulse, accept it, and move on.  Hopefully, your harmful alief starves to death.

If you perform your experiment and doom does occur, it might not be you.  If you can’t ask your boss for help, it might be your boss.  If you disagree with your spouse and they scream at you for an hour, it might be your spouse.  This isn’t an excuse to blame your problems on the world, but abusive situations can be sneaky.  Ask some trusted friends for a sanity check, if you’re performing experiments and getting doom as a result.  This is designed for situations where your alief is obviously silly.  Where you know it’s silly, and need to throw evidence at your brain to internalize it.  It’s fine to be afraid of genuinely scary things; if you really are in an abusive work environment, maybe you shouldn’t ask for help (and start looking for another job instead). 

 

 

*Using this technique for several months occasionally stops the anxiety immediately after step 6.

Marketing Rationality

28 Viliam 18 November 2015 01:43PM

What is your opinion on rationality-promoting articles by Gleb Tsipursky / Intentional Insights? Here is what I think:

continue reading »

The Brain Preservation Foundation's Small Mammalian Brain Prize won

26 gwern 09 February 2016 09:02PM

The Brain Preservation Foundation’s Small Mammalian Brain Prize has been won with fantastic preservation of a whole rabbit brain using a new fixative+slow-vitrification process.

  • BPF announcement (21CM’s announcement)
  • evaluation
  • The process was published as “Aldehyde-stabilized cryopreservation”, McIntyre & Fahy 2015 (mirror)

    We describe here a new cryobiological and neurobiological technique, aldehyde-stabilized cryopreservation (ASC), which demonstrates the relevance and utility of advanced cryopreservation science for the neurobiological research community. ASC is a new brain-banking technique designed to facilitate neuroanatomic research such as connectomics research, and has the unique ability to combine stable long term ice-free sample storage with excellent anatomical resolution. To demonstrate the feasibility of ASC, we perfuse-fixed rabbit and pig brains with a glutaraldehyde-based fixative, then slowly perfused increasing concentrations of ethylene glycol over several hours in a manner similar to techniques used for whole organ cryopreservation. Once 65% w/v ethylene glycol was reached, we vitrified brains at −135 °C for indefinite long-term storage. Vitrified brains were rewarmed and the cryoprotectant removed either by perfusion or gradual diffusion from brain slices. We evaluated ASC-processed brains by electron microscopy of multiple regions across the whole brain and by Focused Ion Beam Milling and Scanning Electron Microscopy (FIB-SEM) imaging of selected brain volumes. Preservation was uniformly excellent: processes were easily traceable and synapses were crisp in both species. Aldehyde-stabilized cryopreservation has many advantages over other brain-banking techniques: chemicals are delivered via perfusion, which enables easy scaling to brains of any size; vitrification ensures that the ultrastructure of the brain will not degrade even over very long storage times; and the cryoprotectant can be removed, yielding a perfusable aldehyde-preserved brain which is suitable for a wide variety of brain assays…We have shown that both rabbit brains (10 g) and pig brains (80 g) can be preserved equally well. We do not anticipate that there will be significant barriers to preserving even larger brains such as bovine, canine, or primate brains using ASC.

    (They had problems with 2 pigs and got 1 pig brain successfully cryopreserved but it wasn’t part of the entry. I’m not sure why: is that because the Large Mammalian Brain Prize is not yet set up?)
  • previous discussion: Mikula’s plastination came close but ultimately didn’t seem to preserve the whole brain when applied.
  • commentary: Robin Hanson, John Smart, Vice, Pop Sci
  • donation link

To summarize, you might say that this is a hybrid of current plastination and vitrification methods: instead of allowing slow plastination (with unknown decay & loss) or forcing fast cooling (with unknown damage and loss), a staged approach is taken. A fixative is injected into the brain first to immediately lock down all proteins and stop all decay/change, and then the brain is leisurely cooled down and vitrified.

This is exciting progress not only because the new method may wind up preserving better than either of the parent methods, but also because it gives much greater visibility into the end results: the aldehyde-vitrified brains can be easily scanned with electron microscopes and the results seen in high detail, showing fantastic preservation of structure, unlike regular vitrification, where the scans leave it opaque how good the preservation was. This opacity is one reason that, as Mike Darwin has pointed out at length on his blog and jkaufman has also noted, we cannot be confident in how well ALCOR's or CI's vitrification works - because if it didn't work, we would have little way of knowing.

To contribute to AI safety, consider doing AI research

25 Vika 16 January 2016 08:42PM

Among those concerned about risks from advanced AI, I've encountered people who would be interested in a career in AI research, but are worried that doing so would speed up AI capability relative to safety. I think it is a mistake for AI safety proponents to avoid going into the field for this reason (better reasons include being well-positioned to do AI safety work, e.g. at MIRI or FHI). This mistake contributed to me choosing statistics rather than computer science for my PhD, which I have some regrets about, though luckily there is enough overlap between the two fields that I can work on machine learning anyway. I think the value of having more AI experts who are worried about AI safety is far higher than the downside of adding a few drops to the ocean of people trying to advance AI. Here are several reasons for this:

  1. Concerned researchers can inform and influence their colleagues, especially if they are outspoken about their views.
  2. Studying and working on AI brings understanding of the current challenges and breakthroughs in the field, which can usefully inform AI safety work (e.g. wireheading in reinforcement learning agents).
  3. Opportunities to work on AI safety are beginning to spring up within academia and industry, e.g. through FLI grants. In the next few years, it will be possible to do an AI-safety-focused PhD or postdoc in computer science, which would kill two birds with one stone.

To elaborate on #1, one of the prevailing arguments against taking long-term AI safety seriously is that not enough experts in the AI field are worried. Several prominent researchers have commented on the potential risks (Stuart Russell, Bart Selman, Murray Shanahan, Shane Legg, and others), and more are concerned but keep quiet for reputational reasons. An accomplished, strategically outspoken and/or well-connected expert can make a big difference in the attitude distribution in the AI field and the level of familiarity with the actual concerns (which are not about malevolence, sentience, or marching robot armies). Having more informed skeptics who have maybe even read Superintelligence, and fewer uninformed skeptics who think AI safety proponents are afraid of Terminators, would produce much needed direct and productive discussion on these issues. As the proportion of informed and concerned researchers in the field approaches critical mass, the reputational consequences for speaking up will decrease.

A year after FLI's Puerto Rico conference, the subject of long-term AI safety is no longer taboo among AI researchers, but remains rather controversial. Addressing AI risk in the long term will require safety work to be a significant part of the field, and close collaboration between those working on safety and capability of advanced AI. Stuart Russell makes the apt analogy that "just as nuclear fusion researchers consider the problem of containment of fusion reactions as one of the primary problems of their field, issues of control and safety will become central to AI as the field matures". If more people who are already concerned about AI safety join the field, we can make this happen faster, and help wisdom win the race with capability.

(Cross-posted from my blog. Thanks to Janos Kramar for his help with editing this post.)

A Rationalist Guide to OkCupid

23 Jacobian 03 February 2016 08:50PM

There's a lot of data and research on what makes people successful at online dating, but I don't know anyone who actually tried to wholeheartedly apply this to themselves. I decided to be that person: I implemented lessons from data, economics, game theory and of course rationality in my profile and strategy on OkCupid. Shockingly, it worked! I got a lot of great dates, learned a ton and found the love of my life. I didn't expect dating to be my "rationalist win", but it happened.

Here's the first part of the story, I hope you'll find some useful tips and maybe a dollop of inspiration among all the silly jokes.

P.S.

Does anyone know who curates the "Latest on rationality blogs" toolbar? What are the requirements to be included?

 

[Link] Introducing OpenAI

23 Baughn 11 December 2015 09:54PM

From their site:

OpenAI is a non-profit artificial intelligence research company. Our goal is to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.

The money quote is at the end, literally—$1B in committed funding from some of the usual suspects.

Voiceofra is banned

22 NancyLebovitz 23 December 2015 06:29PM

I've gotten sufficient evidence from support that voiceofra has been doing retributive downvoting. I've banned them without prior notice because I'm not giving them more chances to downvote.

I'm thinking of something like not letting anyone give more than 5 downvotes/week for content which is more than a month old. The numbers and the time period are tentative-- this isn't my ideal rule. This is probably technically possible. However, my impression is that highly specific rules like that are an invitation to gaming the rules.
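
(To make the shape of such a rule concrete, here is a rough sketch of how it might be checked at vote time. The function, the data model and the constants are all hypothetical, and as noted above, a rule this specific would still be gameable.)

    from datetime import datetime, timedelta

    # Hypothetical sketch of the proposed rule: at most 5 downvotes per week
    # on content more than a month old. All names and the data model are invented.
    MAX_WEEKLY_OLD_DOWNVOTES = 5
    OLD_CONTENT_AGE = timedelta(days=30)

    def may_downvote(past_downvotes, content_posted_at, now=None):
        """past_downvotes: list of (voted_at, content_posted_at) tuples for this voter."""
        now = now or datetime.utcnow()
        if now - content_posted_at <= OLD_CONTENT_AGE:
            return True  # fresh content: the rule doesn't restrict it
        week_ago = now - timedelta(days=7)
        recent_old = sum(
            1 for voted_at, posted_at in past_downvotes
            if voted_at >= week_ago and voted_at - posted_at > OLD_CONTENT_AGE
        )
        return recent_old < MAX_WEEKLY_OLD_DOWNVOTES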

I would rather just make spiteful down-voting impossible (or maybe make it expensive) than try to find out who's doing it. Admittedly, putting up barriers to downvoting past comments doesn't solve the problem of people who down-vote everything, but at least people who downvote current material are easier to notice.

Any thoughts about technical solutions to excessive down-voting of past material?

How did my baby die and what is the probability that my next one will?

21 deprimita_patro 19 January 2016 06:24AM

Summary: My son was stillborn and I don't know why. My wife and I would like to have another child, but would very much not like to try if the probability of this occurring again is above a certain threshold (which we have already settled on). All 3 doctors I have consulted were unable to give a definitive cause of death, nor were any willing to give a numerical estimate of the probability that our next baby will be stillborn (whether for reasons of legal risk, or something else). I am likely too mind-killed to properly evaluate my situation and would very much appreciate an independent (from mine) probability estimate of what caused my son to die, and, given that cause, of the recurrence risk.

Background: V (L's and my only biologically related living son) had no complications during birth, nor has he shown any signs of poor health whatsoever. L has a cousin who has had two miscarriages, and I have an aunt who had several stillbirths followed by 3 live births of healthy children. We know of no other family members who have had similar misfortunes.

J (my deceased son) was the product of a 31 week gestation. L (my wife and J's mother) is 28 years old, gravida 2, para 1. L presented to the physicians office for routine prenatal care and noted that she had not felt any fetal movement for the last five to six days. No fetal heart tones were identified. It was determined that there was an intrauterine fetal demise. L was admitted on 11/05/2015 for induction and was delivered of a nonviable, normal appearing, male fetus at approximately 1:30 on 11/06/2015.

Pro-Con Reasoning: According to a leading obstetrics textbook [1], causes of stillbirth are commonly classified into 8 categories: obstetrical complications, placental abnormalities, fetal malformations, infection, umbilical cord abnormalities, hypertensive disorders, medical complications, and undetermined. Below, I'll list the percentage of stillbirths in each category (which may be used as prior probabilities) along with some reasons for or against.

Obstetrical complications (29%)

  • Against: No abruption detected. No multifetal gestation. No ruptured preterm membranes at 20-24 weeks.

Placental abnormalities (24%)

  • For: Excessive fibrin deposition (as concluded in the surgical pathology report). Early acute chorioamnionitis (as concluded in the surgical pathology report, but Dr. M claimed this was caused by the baby's death, not conversely). L has gene variants associated with deep vein thrombosis (AG on rs2227589 per 23andme raw data).
  • Against: No factor V Leiden mutation (GG on rs6025 per 23andme raw data and confirmed via independent lab test). No prothrombin gene mutation (GG on l3002432 per 23andme raw data and confirmed via independent lab test). L was negative for prothrombin G20210A mutation (as determined by lab test). Anti-thrombin III activity results were within normal reference ranges (as determined by lab test). Protein C activity results were within normal reference ranges (as determined by lab test). Protein S activity results were within normal reference ranges (as determined by lab test). Protein S antigen (free and total) results were within normal reference ranges (as determined by lab test).

Infection (13%)

  • For: During the last week of August, L visited the home of a nurse who, we now know, works in a hospital with frequent cases of CMV infection. CMV antibody IgH, CMV IgG, and Parvovirus B-19 Antibody IgG values were outside of normal reference ranges.
  • Against: Dr. M discounted the viral test results as the cause of death, since the levels suggested the infection had occurred years ago, and therefore could not have caused J's death. Dr. F confirmed Dr. M's assessment.

Fetal malformations (14%)

  • Against: No major structural abnormalities. No genetic abnormalities detected (CombiSNP Array for Pregnancy Loss results showed a normal male micro array profile).

Umbilical cord abnormalities (10%)

  • Against: No prolapse. No stricture. No thrombosis.

Hypertensive disorder (9%)

  • Against: No preeclampsia. No chronic hypertension.

Medical complications (8%)

  • For: L experienced 2 nights of very painful abdominal pains that could have been contractions on 10/28 and 10/29. L remembers waking up on her back a few nights between 10/20 and 11/05 (it is unclear if this belongs in this category or somewhere else).
  • Against: No antiphospholipid antibody syndrome detected (determined via Beta-2 Glycoprotein I Antibodies [IgG, IgA, IgM] test). No maternal diabetes detected (determined via glucose test on 10/20).

Undetermined (24%)
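
(A note on how such an estimate could be structured, since that is what this post asks for: treat the textbook percentages above as priors over the categories, renormalize since the categories overlap, and multiply each by a likelihood ratio summarizing the for/against evidence. The likelihood ratios below are placeholders, not medical judgments; the sketch only shows the mechanics.)

    # Sketch of the Bayesian structure of the question (illustrative only).
    # Priors are the textbook figures above; they overlap, so we renormalize.
    priors = {
        "obstetrical": 0.29, "placental": 0.24, "malformation": 0.14,
        "infection": 0.13, "cord": 0.10, "hypertensive": 0.09,
        "medical": 0.08, "undetermined": 0.24,
    }
    # PLACEHOLDER likelihood ratios for the for/against evidence listed above,
    # e.g. the negative thrombophilia panel should push "placental" down while
    # the fibrin finding pushes it up. These numbers are invented.
    lr = {
        "obstetrical": 0.2, "placental": 1.5, "malformation": 0.1,
        "infection": 0.3, "cord": 0.2, "hypertensive": 0.1,
        "medical": 0.5, "undetermined": 1.0,
    }
    unnormalized = {c: priors[c] * lr[c] for c in priors}
    total = sum(unnormalized.values())
    posterior = {c: round(v / total, 3) for c, v in unnormalized.items()}
    print(posterior)  # under these placeholders, "placental" and "undetermined" dominate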

What is the most likely cause of death? How likely is that cause? Given that cause, if we choose to have another child, then how likely is it to survive its birth? Are there any other ways I could reduce uncertainty (additional tests, etc.) that I haven't listed here? Are there any other forums where these questions are more likely to get good answers? Why won't doctors give probabilities? Help with any of these questions would be greatly appreciated. Thank you.

If your advice to me is to consult another expert (in addition to the 2 obstetricians and 1 high-risk obstetrician I already have consulted), please also provide concrete tactics as to how to find such an expert and validate their expertise.

Contact Information: If you would like to contact me, but don't want to create an account here, you can do so at deprimita.patro@gmail.com.

[1] Cunningham, F. (2014). Williams obstetrics. New York: McGraw-Hill Medical.

EDIT 1: Updated to make clear that both V and J are mine and L's biological sons.

EDIT 2: Updated to add information on family history.

EDIT 3: On PipFoweraker's advice, I added contact info.

EDIT 4: I've cross-posted this on Health Stack Exchange.

EDIT 5: I've emailed the list of authors of the most recent meta-analysis concerning causes of stillbirth. Don't expect much.

Announcing the Signal Data Science Intensive Training Program

20 JonahSinick 19 December 2015 12:30AM

(This post is coauthored with Robert Cordwell.)

We’re writing to announce the inaugural run of Signal Data Science’s intensive training program.

The program will train students in the core skills needed to work as a professional data scientist:

  • Scraping and cleaning data
  • Exploring and analyzing data using statistics
  • Presenting findings
  • Interviewing

By the end of the course, you'll be able to start with raw data and produce analyses like the one in Bayesian Adjustment of Yelp Ratings. More to the point, you'll understand why Jonah structured the analysis the way he did and be able to do the same yourself.

You’ll also be able to produce cool visualizations like this automatic grouping of Slate Star Codex posts by topic, as shown below.

Why data science?

Making inferences from data is fundamental to understanding the world, and there's a growing unmet need in industry for people with the relevant skills. With good instruction and a good peer group, smart, motivated people can quickly develop enough proficiency to get jobs in the tech sector (starting compensation ~$115k in the San Francisco Bay Area).

Why us?

The Program

We offer inquiry-based learning (no boring lecturers or unmotivating problem sets!) and an unusually intellectually curious peer group. Far from what’s typical of college classes, our model has more in common with the Math Olympiad Summer Program, where daily lectures are interspersed with on-the-spot problems and followed by long-form problems designed to build on the lesson.

Robert Cordwell is an IMO gold medalist and educational startup veteran who's working a Facebook data science job despite his limited, self-taught experience. He's going to be teaching math problem solving, overall presentation skills, and how to ace interviews.

Jonah Sinick is a data scientist with 13 years of experience making advanced math accessible to beginners, a PhD in math from University of Illinois, and an extensive body of published work. He’ll be teaching a comprehensive technical curriculum.

Who is this for?

If you:

  • Are interested in data science
  • Are passionate about learning new things
  • Would benefit from a social environment with others working toward the same goal
  • Have the programming skills to solve simple algorithms problems
  • Plan on applying for data science jobs after the program

our program will be a good fit for you.

Where / When

The first cohort will run in Berkeley for 6 weeks, from February 1st – March 18th. This will be a compressed version of the standard course that we'll be offering in the future, and is targeted at students who have a high degree of comfort with math.

In the future we’ll be offering longer courses that cover the mathematical / statistical material at a gentler pace.

Cost

For students in our first 6 week cohort, we offer two options:

  • Payment of $8,000 at the start of the program.
  • A “pay later” model where students pay 8% of their first year’s salary (pretax, spaced over 6 months), contingent on getting a data science job.

This is roughly 50% of the standard price for coding / data science bootcamps.
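
(As a rough worked comparison, using the ~$115k starting figure quoted above; actual offers will vary, so this is only illustrative.)

    # Comparing the two payment options, assuming the ~$115k Bay Area
    # starting salary mentioned earlier. Purely illustrative arithmetic.
    upfront = 8_000
    assumed_salary = 115_000
    pay_later_total = 0.08 * assumed_salary   # 8% of first-year salary
    print(pay_later_total)                    # 9200.0, paid only if you land a job
    print(pay_later_total / 6)                # ~1533.33 per month over 6 months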

Next steps

If you’re interested in exploring participating in our first cohort, or keeping posted, please be in touch with us at signaldatascience@gmail.com.

[Link] 10 Tips from CFAR: My Business Insider article

19 James_Miller 10 December 2015 02:09AM

My research priorities for AI control

17 paulfchristiano 06 December 2015 01:57AM

I've been thinking about what research projects I should work on, and I've posted my current view. Naturally, I think these are also good projects for other people to work on as well.

Brief summaries of the projects I find most promising:

The post briefly discusses where I am coming from, and links to a good deal more clarification. I'm always interested in additional thoughts and criticisms, since changing my views on these questions would directly influence what I spend my time on.

 

New Leverhulme Centre on the Future of AI (developed at CSER with spokes led by Bostrom, Russell, Shanahan)

17 Sean_o_h 03 December 2015 10:07AM

[Cross-posted at EA forum]

Hot on the heels of 80K's excellent AI risk research career profile (https://80000hours.org/career-guide/top-careers/profiles/artificial-intelligence-risk-research/), we're delighted to announce the funding of a new international Leverhulme Centre for the Future of Intelligence, to be led by Cambridge, with spokes at Oxford (Nick Bostrom), Imperial (Murray Shanahan), and Berkeley (Stuart Russell). The Centre proposal was developed by us at CSER, but it will be a stand-alone centre, albeit one collaborating extensively with CSER.

Building on the by-now-familiar "Puerto Rico Agenda", it will have the long-term safe and beneficial development of AI at its core, but with a slightly broader remit than CSER's focus on catastrophic AI risk and superintelligence. For example, it will consider some near-term challenges such as lethal autonomous weapons, as well as some of the longer-term philosophical and practical issues surrounding the opportunities and challenges we expect to face, should greater-than-human-level intelligence be developed later this century.

It builds on the pioneering work of FHI, FLI and others, and the generous support of Elon Musk in massively boosting this field with his (separate) $10M grants programme in January of this year. One of the most important things this Centre will achieve is in taking a big step towards making this global area of research a long-term one in which the best talents can be expected to have lasting careers - the Centre is funded for a full 10 years, and we will aim to build longer-lasting funding on top of this.

In practical terms, it means that ~10 new postdoc positions at a minimum will be opening up in this space (we're currently pursuing matched funding opportunities) across academic disciplines and locations (Cambridge, Oxford, Berkeley, Imperial and elsewhere). Our first priority will be to identify and hire a world-class Executive Director, who would start in October. This will be a very influential position over the coming years. Research positions will most likely begin in April 2017.

In between now and then, FHI is hiring for AI safety researchers, and CSER will be hiring for an AI policy postdoc in the spring. I'll have limited time to post in between now and the Christmas break (I'll be away at NIPS and then occupied with funder deadlines and CSER recruitment), but will be happy to post more over the Christmas break if desired.

Thank you so much as always to the Lesswrong and Effective Altruism community for their support of existential risk/far future work, both financially and intellectually - it has made a huge difference over the last couple of years. Thanks in particular to MIRI and FHI's researchers, who I received a lot of guidance from in my part of co-developing this proposal.

Seán (Executive Director, CSER)

http://www.eurekalert.org/pub_releases/2015-12/uoc-cul120215.php

Human-level intelligence is familiar in biological 'hardware' -- it happens inside our skulls. Technology and science are now converging on a possible future where similar intelligence can be created in computers.

While it is hard to predict when this will happen, some researchers suggest that human-level AI will be created within this century. Freed of biological constraints, such machines might become much more intelligent than humans. What would this mean for us? Stuart Russell, a world-leading AI researcher at the University of California, Berkeley, and collaborator on the project, suggests that this would be "the biggest event in human history". Professor Stephen Hawking agrees, saying that "when it eventually does occur, it's likely to be either the best or worst thing ever to happen to humanity, so there's huge value in getting it right."

Now, thanks to an unprecedented £10 million grant from the Leverhulme Trust, the University of Cambridge is to establish a new interdisciplinary research centre, the Leverhulme Centre for the Future of Intelligence, to explore the opportunities and challenges of this potentially epoch-making technological development, both short and long term.

The Centre brings together computer scientists, philosophers, social scientists and others to examine the technical, practical and philosophical questions artificial intelligence raises for humanity in the coming century.

Huw Price, the Bertrand Russell Professor of Philosophy at Cambridge and Director of the Centre, said: "Machine intelligence will be one of the defining themes of our century, and the challenges of ensuring that we make good use of its opportunities are ones we all face together. At present, however, we have barely begun to consider its ramifications, good or bad".

The Centre is a response to the Leverhulme Trust's call for "bold, disruptive thinking, capable of creating a step-change in our understanding". The Trust awarded the grant to Cambridge for a proposal developed with the Executive Director of the University's Centre for the Study of Existential Risk (CSER), Dr Seán Ó hÉigeartaigh. CSER investigates emerging risks to humanity's future including climate change, disease, warfare and technological revolutions.

Dr Ó hÉigeartaigh said: "The Centre is intended to build on CSER's pioneering work on the risks posed by high-level AI and place those concerns in a broader context, looking at themes such as different kinds of intelligence, responsible development of technology and issues surrounding autonomous weapons and drones."

The Leverhulme Centre for the Future of Intelligence spans institutions, as well as disciplines. It is a collaboration led by the University of Cambridge with links to the Oxford Martin School at the University of Oxford, Imperial College London, and the University of California, Berkeley. It is supported by Cambridge's Centre for Research in the Arts, Social Sciences and Humanities (CRASSH). As Professor Price put it, "a proposal this ambitious, combining some of the best minds across four universities and many disciplines, could not have been achieved without CRASSH's vision and expertise."

Zoubin Ghahramani, Deputy Director, Professor of Information Engineering and a Fellow of St John's College, Cambridge, said: "The field of machine learning continues to advance at a tremendous pace, and machines can now achieve near-human abilities at many cognitive tasks -- from recognising images to translating between languages and driving cars. We need to understand where this is all leading, and ensure that research in machine intelligence continues to benefit humanity. The Leverhulme Centre for the Future of Intelligence will bring together researchers from a number of disciplines, from philosophers to social scientists, cognitive scientists and computer scientists, to help guide the future of this technology and study its implications."

The Centre aims to lead the global conversation about the opportunities and challenges to humanity that lie ahead in the future of AI. Professor Price said: "With far-sighted alumni such as Charles Babbage, Alan Turing, and Margaret Boden, Cambridge has an enviable record of leadership in this field, and I am delighted that it will be home to the new Leverhulme Centre."


A Medical Mystery: Thyroid Hormones, Chronic Fatigue and Fibromyalgia

15 johnlawrenceaspden 31 January 2016 01:27PM

  Summary:

  • Chronic Fatigue and Fibromyalgia look very like Hypothyroidism
  • Thyroid patients aren't happy with the diagnosis and treatment of Hypothyroidism
  • It's possible that it's not too difficult to fix CFS/FMS with thyroid hormones
  • I believe that there's been a stupendous cock-up that's hurt millions.
  • Less Wrong should be interested, because it could be a real example of how bad inference can cause the cargo cult sciences to come to false conclusions.

 

I believe that I've come across a genuine puzzle, and I wonder if you can help me solve it. This problem is complicated, and subtle, and has confounded and defeated good people for forty years. And yet there are huge and obvious clues. No-one seems to have conducted the simple experiments which the clues suggest, even though many clever people have thought hard about it, and the answer to the problem would be very valuable. And so I wonder what it is that I am missing.

 

I am going to tell a story which rather extravagantly privileges a hypothesis that I have concocted from many different sources, but a large part of it is from the work of the late Doctor John C Lowe, an American chiropractor who claimed that he could cure Fibromyalgia.

 

I myself am drowning in confirmation bias to the point where I doubt my own sanity. Every time I look for evidence to disconfirm my hypothesis, I find only new reasons to believe. But I am utterly unqualified to judge. Three months ago I didn't know what an amino acid was. And so I appeal to wiser heads for help.

 

Crocker's Rules on this. I suspect that I am being the most spectacular fool, but I can't see why, and I'd like to know.

 

Setting the Scene

 

Chronic Fatigue Syndrome, Myalgic Encephalitis, and Fibromyalgia are 'new diseases'. There is considerable dispute as to whether they even exist, and if so how to diagnose them. They all seem to have a large number of possible symptoms, and in any given case, these symptoms may or may not occur with varying severity.

 

As far as I can tell, if someone claims that they're 'Tired All The Time', then a competent doctor will first of all check that they're getting enough sleep and are not unduly stressed, then rule out all of the known diseases that cause fatigue (there are a lot!), and finally diagnose one of the three 'by exclusion', which means that there doesn't appear to be anything wrong, except that you're ill.

 

If widespread pain is one of the symptoms, it's Fibromyalgia Syndrome (FMS). If there's no pain, then it's CFS or ME. These may or may not be the same thing, but Myalgic Encephalitis is preferred by patients because it's Greek and so sounds like a disease. Unfortunately Myalgic Encephalitis means 'hurty muscles brain inflammation', and if one had hurty muscles, it would be Fibromyalgia, and if one had brain inflammation, it would be something else entirely.

 

Despite the widespread belief that these are 'somatoform' diseases (all in the mind), the severity of them ranges from relatively mild (tired all the time, can't think straight), to devastating (wheelchair bound, can't leave the house, can't open one eye because the pain is too great).

 

All three seem to have come spontaneously into existence in the 1970s, and yet searches for the responsible infective agent have proved fruitless. Neither have palliative measures been discovered, apart from the tried and true method of telling the sufferers that it's all in their heads.

 

The only treatments that have proved effective are Cognitive Behavioural Therapy / Graded Exercise. A Cochrane Review reckoned that they do around 15% over placebo in producing a measurable alleviation of symptoms. I'm not very impressed. CBT/GE sound a lot like 'sports coaching', and I'm pretty sure that if we thought of 'Not Being Very Good at Rowing' as a somatoform disorder, then I could produce an improvement over placebo in a measurable outcome in ten percent of my victims without too much trouble.

 

But any book on CFS will tell you that the disease was well known to the Victorians, under the name of neurasthenia. The hypothesis that God lifted the curse of neurasthenia from the people of the Earth as a reward for their courage during the wars of the early twentieth century, while well supported by the clinical evidence, has a low prior probability.

 

We face therefore something of a mystery, and in the traditional manner of my people, a mystery requires a Just-So Story:

 

How It Was In The Beginning

 

In the dark days of Victoria, the brilliant physician William Miller Ord noticed large numbers of mainly female patients suffering from late-onset cretinism.

 

These patients, exhausted, tired, stupid, sad, cold, fat and emotional, declined steeply, and invariably died.

 

As any man of decent curiosity would, Dr Ord cut their corpses apart, and in the midst of the carnage noticed that the thyroid, a small butterfly-shaped gland in the throat, was wasted and shrunken.

 

One imagines that he may have thought to himself: "What has killed them may cure them."

 

After a few false starts and a brilliant shot in the dark by the brave George Redmayne Murray, Dr Ord secured a supply of animal thyroid glands (cheaply available at any butcher, sautée with nutmeg and basil) and fed them to his remaining patients, who were presumably by this time too weak to resist.

 

They recovered miraculously, and completely.

 

I'm not sure why Dr Ord isn't better known, since this appears to have been the first time in recorded history that something a doctor did had a positive effect.

 

Dr Ord's syndrome was named Ord's Thyroiditis, and it is now known to be an autoimmune disease where the patient's own antibodies attack and destroy the thyroid gland. In Ord's thyroiditis, there is no goiter.

 

A similar disease, where the thyroid swells to form a disfiguring deformity of the neck (goiter), was described by Hakaru Hashimoto in 1912 (who rather charmingly published in German), and as part of the war reparations of 1946 it was decided to confuse the two diseases under the single name of Hashimoto's Thyroiditis. Apart from the goiter, both conditions share a characteristic set of symptoms, and were easily treated with animal thyroid gland, with no complications.

 

Many years before, in 1835, a fourth physician, Robert James Graves, had described a different syndrome, now known as Graves' Disease, which has as its characteristic symptoms irritability, muscle weakness, sleeping problems, a fast heartbeat, poor tolerance of heat, diarrhoea, and weight loss. Unfortunately Dr Graves could not think how to cure his eponymous horror, and so the disease is still named after him.

 

The Horror Spreads

 

Victorian medicine being what it was, we can assume that animal glands were sprayed over and into any wealthy person unwise enough to be remotely ill in the vicinity of a doctor. I seem to remember a number of jokes about "monkey glands" in PG Wodehouse, and indeed a man might be tempted to assume that chimpanzee parts would be a good substitute for humans. Supply issues seem to have limited monkey glands to a few millionaires worried about impotence, and it may be that the corresponding procedure inflicted on their wives has come down to us as Hormone Replacement Therapy.

 

Certainly anyone looking a bit cold, tired, fat, stupid, sad or emotional is going to have been eating thyroids. We can assume that in a certain number of cases, this was just the thing, and I think it may also be safe to assume that a fair number of people who had nothing wrong with them at all died as a result of treatment, although the fact that animal thyroid is still part of the human food chain suggests it can't be that dangerous.

 

I mean seriously, these people use high pressure hoses to recover the last scraps of meat from the floors of slaughterhouses, they're not going to carefully remove all the nasty gristly throat-bits before they make ready meals, are they?

 

The Armour Sausage company, owner of extensive meat-packing facilities in Chicago, Illinois, and thus in possession of a large number of pig thyroids which, if not quite surplus to requirements, at the very least faced a market sluggish to non-existent as foodstuffs, brilliantly decided to sell them in freeze-dried form as a cure for whatever ails you.

 

 

Some Sort of Sanity Emerges, in a Decade not Noted for its Sanity

 

Around the time of the second world war, doctors became interested in whether their treatments actually helped, and an effort was made to determine what was going on with thyroids and the constellation of sadness that I will henceforth call 'hypometabolism', which is the set of symptoms associated with Ord's thyroiditis. Jumping the gun a little, I shall also define 'hypermetabolism' as the set of symptoms associated with Graves' disease.

 

The thyroid gland appeared to be some sort of metabolic regulator, in some ways analogous to a thermostat. In hypometabolism, every system of the body is running slow, and so it produces a vast range of bad effects, affecting almost every organ. Different sufferers can have very different symptoms, and so diagnosis is very difficult.

 

Dr Broda Barnes decided that the key symptom of hypometabolism was a low core body temperature. By careful experiment he established that in patients with no symptoms of hypometabolism the average temperature of the armpit on waking was 98 degrees Fahrenheit (or 36.6 Celsius). He believed that temperature variation of +/- 0.2 degrees Fahrenheit was unusual enough to merit diagnosis. He also seems to have believed, in the manner of the proverbial man with a hammer, that all human ailments without exception were caused by hypometabolism, and to have given freeze-dried thyroid to almost everyone he came into contact with, to see if it helped. A true scientist. Doctor Barnes became convinced that fully 40% of the population of America suffered from hypometabolism, and recommended Armour's Freeze Dried Pig Thyroid to cure America's ills.

 

In a brilliant stroke, Freeze Dried Pig's Thyroid was renamed 'Natural Desiccated Thyroid', which almost sounds like the sort of thing you might take in sound mind. I love marketing. It's so clever.

 

America being infested with religious lunatics, and Chicago being infested with nasty useless gristly bits of cow's throat, led almost inevitably to a second form of 'Natural Desiccated Thyroid' on the market.

 

Dr Barnes' hypometabolism test never seems to have caught on. There are several ways your temperature can go outside his 'normal' range, including fever (too hot), starvation (too cold), alcohol (too hot), sleeping under too many duvets (too hot), sleeping under too few duvets (too cold). Also mercury thermometers are a complete pain in the neck, and take ten minutes to get a sensible reading, which is a long time to lie around in bed carefully doing nothing so that you don't inadvertently raise your body temperature. To make the situation even worse, while men's temperature is reasonably constant, the body temperature of healthy young women goes up and down like the Assyrian Empire.

 

Several other tests were proposed. One of the most interesting is the speed of the Achilles Tendon Reflex, which is apparently super-fast in hypermetabolism, and either weirdly slow or has a freaky pause in it if you're running a bit cold. Drawbacks of this test include 'It's completely subjective, give me something with numbers in it', and 'I don't seem to have one, where am I supposed to tap the hammer-thing again?'.

 

By this time, neurasthenia was no longer a thing. In the same way that spiritualism was no longer a thing, and the British Empire was no longer a thing.

 

As far as we know, Chronic Fatigue Syndrome was not a thing either, and neither was Fibromyalgia (which is just Chronic Fatigue Syndrome but it hurts), nor Myalgic Encephalitis. There was something called 'Myalgic Neurasthenia' in 1934, but it seems to have been a painful infectious disease and they thought it was polio.

 

 

Finally, Science

 

It turned out that the purpose of the thyroid gland is to make hormones which control the metabolism. It takes in the amino acid tyrosine, and it takes in iodine. It releases Thyroglobulin, mono-iodo-tyrosine (MIT), di-iodo-tyrosine (DIT), thyroxine (T4) and triiodothyronine (T3) into the blood. The chemistry is interesting but too complicated to explain in a just-so story.

 

I believe that we currently think that thyroglobulin, MIT and DIT are simply by-products of the process that makes T3 and T4.

 

T3 is the hormone. It seems to control the rate of metabolism in all cells. T4 has something of the same effect, but is much less active, and is called a 'prohormone'. Its main purpose seems to be to serve as raw material: it is deiodinated to make more T3. This happens outside the thyroid gland, in the other parts of the body ('peripheral conversion'). I believe mainly in the liver, but to some extent in all cells.

 

Our forefathers knew about thyroxine (T4, or thyronine-with-four-iodines-attached), and triiodothyronine (T3, or thyronine-with-three-iodines-attached).

 

It seems to me that just from the names, thyroxine was the first one to be discovered. But I'm not sure about that. You try finding a history-of-endocrinology website. At any rate they seem to have known about T4 and T3 fairly early on.

 

The mystery of Graves', Ord's and Hashimoto's thyroid diseases was explained.

 

Ord's and Hashimoto's are diseases where the thyroid gland under-produces (hypothyroidism). The metabolism of all cells slows down. As might be expected, this causes a huge number of effects, which seem to manifest differently in different sufferers.

 

Graves' disease is caused by the thyroid gland over-producing (hyperthyroidism). The metabolism of all cells speeds up. Again, there are a lot of possible symptoms.

 

All three are thought to be autoimmune diseases. Some people think that they may be different manifestations of the same disease. They are all fairly common.

 

Desiccated thyroid cures hypothyroidism because the ground-up thyroids contain T4 and T3, as well as lots of thyroglobulin, MIT and DIT, and they are absorbed by the stomach. They get into the blood and speed up the metabolism of all cells. By titrating the dose carefully you can restore roughly the correct levels of the thyroid hormones in all tissues, and the patient gets better. (Titration is where you change something carefully until you get it right.)
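
To make 'titration' concrete, here is a toy sketch in Python. Everything in it is invented for illustration (the doses, the measurement function, the target range); it only shows the shape of the procedure: measure, nudge, and shrink your corrections when you overshoot.

```python
# A cartoon of dose titration. All numbers are invented; the 'patient'
# is a lambda, not medical advice.

def titrate(measure, dose=50.0, step=25.0, lo=0.9, hi=1.1):
    """Nudge the dose until the measured level sits in [lo, hi],
    halving the step whenever we overshoot and reverse direction."""
    direction = 0
    for _ in range(40):
        level = measure(dose)
        if lo <= level <= hi:
            break  # level in range: stop adjusting
        new_direction = 1 if level < lo else -1
        if direction and new_direction != direction:
            step /= 2  # overshot: make smaller corrections
        direction = new_direction
        dose += direction * step
    return dose

# Toy 'patient' whose level is proportional to dose; the right dose is ~100.
print(titrate(lambda d: d / 100))  # -> 100.0
```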

 

The theory has considerable explanatory power. It explains cretinism, which is caused either by a genetic disease, or by iodine deficiency in childhood. If you grow up in an iodine deficient area, then your growth is stunted, your brain doesn't develop properly, and your thyroid gland may become hugely enlarged, presumably because the brain is desperately trying to get it to produce more thyroid hormones, and it responds by swelling.

 

Once upon a time, this swelling (goitre) was called 'Derbyshire Neck'. I grew up near Derbyshire, and I remember an old rhyme: "Derbyshire born, Derbyshire bred, strong in the arm, and weak in the head". I always thought it was just an insult. Maybe not. Cretinism was also popular in the Alps, and there is a story of an English traveller in Switzerland of whom it was remarked that he would have been quite handsome if only he had had a goitre. So it must have been very common there.

 

But at this point I am *extremely suspicious*. The thyroid/metabolic regulation system is ancient (universal in vertebrates, I believe), crucial to life, and it really shouldn't just go wrong. We should suspect an infectious cause; a recent environmental influence which we haven't had time to adjust to; an evolved defence against an infectious disease; or, just possibly, a recently evolved but as yet imperfect defence against a less recent environmental change.

 

(Cretinism in particular is very strange. Presumably animals in iodine-deficient areas aren't cretinous, and yet they should be. Perhaps a change to a farming from a hunter-gatherer lifestyle has increased our dependency on iodine from crops, which crops have sucked what little iodine occurs naturally out of the soil?)

 

It's also not entirely clear to me what the thyroid system is *for*. If there's just a particular rate that cells are supposed to run at, then why do they need a control signal to tell them that? I could believe that it was a literal thermostat, designed to keep the body temperature constant at the best speed for the various biological reactions, but it's universal in *vertebrates*. There are plenty of vertebrates which don't keep a constant temperature.

 

 

The Fall of Desiccated Thyroid

 

There turned out to be some problems with Natural Desiccated Thyroid (NDT).

 

Firstly, there were many competing brands and types, and even if you stuck to one brand the quality control wasn't great, so the dose you'd be taking would have been a bit variable.

 

Secondly, it's fucking pig's thyroid from an abattoir. It could have all sorts of nasty things in it. Also, ick.

 

Thirdly, it turned out that pigs made quite a lot more T3 in their thyroids than humans do. It also seems that T3 is better absorbed by the gut than T4 is, so someone taking NDT to compensate for their own underproduction will have too much of the active hormone compared to the prohormone. That may not be good news.

 

With the discovery of 'peripheral conversion', and the possibility of cheap clean synthesis, it was decided that modern scientific thyroid treatment would henceforth be by synthetic T4 (thyroxine) alone. The body would make its own T3 from the T4 supply.

 

Alarm bells should be ringing at this point. Apart from the above points, I'm not aware of any great reason for the switch from NDT to thyroxine in the treatment of hypothyroidism, but it seems to have been pretty much universal, and it seems to have worked.

 

Aware of the lack of T3, doctors compensated by giving people more T4 than was in their pig-thyroid doses. And there don't seem to have been any complaints.

 

Over the years, NDT seems to have become a crazy fringe treatment despite there not being any evidence against it. It's still a legal prescription drug, but in America it's only prescribed by eccentrics. In England a doctor prescribing it would be, at the very least, summoned to explain himself before the GMC.

 

However, since it was (a) sold over the counter for so many years, and (b) part of the food chain, it is still perfectly legal to sell as a food supplement in both countries, as long as you don't make any medical claims for it. And the internet being what it is, the prescription-only synthetic hormones T3 and T4 are easily obtained without a prescription. These are extremely powerful hormones which have an effect on metabolism. If 'body-builders' and sports cheats aren't consuming all three in vast quantities, I am a Dutchman.

 

The Clinical Diagnosis of Hypothyroidism

 

We pass now to the beginning of the 1970s.

 

Hypothyroidism is ferociously difficult to diagnose. People complain of being 'Tired All The Time', well, all the time, and it has literally hundreds of causes.

 

And it must be diagnosed correctly! If you miss a case of hypothyroidism, your patient is likely to collapse and possibly die at some point in the medium-term future. If you diagnose hypothyroidism where it isn't, you'll start giving the poor bugger powerful hormones which he doesn't need and *cause* hypermetabolism.

 

The last word in 'diagnosis by symptoms' was the absolutely excellent paper:

 

Statistical Methods Applied To The Diagnosis Of Hypothyroidism by W. Z. Billewicz et al.

 

Connoisseurs will note the clever and careful application of 'machine learning' techniques, before there were machines to learn!

 

One important thing to note is that this is a way of separating hypothyroid cases from other cases of tiredness at the point where people have been referred by their GP to a specialist at a hospital on suspicion of hypothyroidism. That changes the statistics remarkably. This is *not* a way of diagnosing hypothyroidism in the general population. But if someone's been to their GP (general practitioner, the doctor that a British person likely makes first contact with) and their GP has suspected that their thyroid function might be inadequate, this test should probably still work.

 

For instance, they consider Physical Tiredness, Mental Lethargy, Slow Cerebration, Dry Hair, and Muscle Pain, the classic symptoms of hypothyroidism, present in most cases, to be indications *against* the disease.

 

That's because if you didn't have these things, you likely wouldn't have got that far. So in the population they're seeing (of people whose doctor suspects they might be hypothyroid), they're not of great value either way, but their presence is likely the reason why the person's GP has referred them even though they've really got iron-deficiency anaemia or one of the other causes of fatigue.

 

In their population, the strongest indicators are 'Ankle Jerk' and 'Slow Movements', subtle hypothyroid symptoms which aren't likely to be present in people who are fatigued for other reasons.

 

But this absolutely isn't a test you should use for population screening! In the general population, the classic symptoms are strong indicators of hypothyroidism.

 

Probability Theory is weird, huh?
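
As a toy illustration of that weirdness, here is a hedged sketch with numbers I have invented (they are not Billewicz's): the same classic symptom can update you towards the disease in the general population and against it in a referred population.

```python
# Toy Bayes calculation, invented numbers: the same symptom points in
# opposite directions depending on the population you are drawing from.

def posterior(prior, p_symptom_if_hypo, p_symptom_if_not):
    """P(hypothyroid | symptom), by Bayes' rule."""
    hypo = prior * p_symptom_if_hypo
    other = (1 - prior) * p_symptom_if_not
    return hypo / (hypo + other)

# General population: the disease is rare, the symptom is rare in the well.
print(posterior(0.01, 0.8, 0.1))  # ~0.075: a big update upwards from 1%

# Referred population: everyone was sent up *because* they look tired and
# hypothyroid-ish, so the symptom is nearly as common among the non-cases.
print(posterior(0.30, 0.8, 0.9))  # ~0.28: a small update *downwards* from 30%
```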

 

Luckily, there were lab tests for hypothyroidism too, but they were expensive, complicated, annoying and difficult to interpret. Billewicz et al. used them to calibrate their test, and recommend them for the difficult cases where their test doesn't give a clear answer.

 

And of course, the final test is to give them thyroid treatment and see whether they get better. If you're not sure, go slow, watch very carefully and look for hyper symptoms.

 

Overdiagnosis is definitely the way to go. If you don't diagnose it and it is, that's catastrophe. If it isn't, but you diagnose it anyway, then as long as you're paying attention the hyper symptoms are easy enough to spot, and you can pull back with little harm done.

 

A Better Way

 

It should be obvious from the above that the diagnosis of hypothyroidism by symptoms is absolutely fraught with complexity, and very easy to get wrong, and if you get it wrong the bad way, it's a disaster. Doctors were absolutely screaming for a decisive way to test for hypothyroidism.

 

Unfortunately, testing directly for the levels of thyroid hormones is very difficult, and the tests of the 1960s weren't accurate enough to be used for diagnosis.

 

The answer came from an understanding of how the thyroid regulatory system works, and the development of an accurate blood test for a crucial signalling hormone.

 

Three structures control the level of thyroid hormones in the blood.

 

The thyroid gland produces the hormones and secretes them into the blood.

 

Its activity is controlled by the hormone thyrotropin, or Thyroid Stimulating Hormone (TSH). Lots of TSH works the thyroid hard. In the absence of TSH the thyroid relaxes but doesn't switch off entirely. However the basal level of thyroid activity in the absence of TSH is far too low.

 

TSH is controlled by the pituitary gland, a tiny structure attached to the brain.

 

The pituitary itself is controlled, via Thyrotropin Releasing Hormone (TRH), by the hypothalamus, which is part of the brain.

 

This was thought to be a classic example of a feedback control system.

 

hypothalamus->pituitary->thyroid

 

It turns out that the level of thyrotropin (TSH) in the blood is exquisitely sensitive to the levels of thyroid hormones in the blood.

 

Administer thyroid hormone to a patient and their TSH level will rapidly adjust downwards by an easily detectable amount.

 

So:

 

In hypothyroidism, where the thyroid has failed, the body will be desperately trying to produce more thyroid hormones, and the TSH level will be extremely high.

 

In Graves' Disease, this theory says, where the thyroid has grown too large, and the metabolism is running damagingly fast, the body will be, like a central bank trying to stimulate growth in a deflationary economy by reducing interest rates, 'pushing on a piece of string'. TSH will be undetectable.
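
To make the feedback-control picture concrete, here is a toy simulation. It is not physiology: the constants, the linear responses and the units are all made up; it only reproduces the qualitative behaviour the theory predicts, including the TSH suppression described above.

```python
# A toy negative-feedback loop, not physiology. The 'pituitary' raises TSH
# when hormone is below a setpoint and lowers it when above; the 'thyroid'
# produces hormone in proportion to TSH. All constants invented.

def step(t4, tsh, dose=0.0, setpoint=1.0, gain=0.5):
    tsh = max(0.0, tsh + gain * (setpoint - t4))  # pituitary adjusts TSH
    t4 = 0.8 * t4 + 0.2 * tsh + dose              # decay + thyroid output + pills
    return t4, tsh

t4, tsh = 1.0, 1.0
for _ in range(50):
    t4, tsh = step(t4, tsh)            # untreated: settles at t4=1.0, tsh=1.0
print(round(t4, 2), round(tsh, 2))

t4, tsh = 1.0, 1.0
for _ in range(50):
    t4, tsh = step(t4, tsh, dose=0.2)  # external hormone every step...
print(round(t4, 2), round(tsh, 2))     # ...and TSH is squashed towards zero
```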

 

The original TSH test was developed in 1965, by the startlingly clever method of radio-immuno-assay.

 

[For reasons that aren't clear to me, rather than being expressed in grams/litre or moles/litre, the TSH test is expressed in 'international units/litre'. But I don't think that that's important.]

 

A small number of people in whom there was no suspicion of thyroid disease were assessed, and the 'normal range' of TSH was calculated.

 

Again, 'endocrinology history' resources are not easy to find, but the first test was not terribly sensitive, and I think originally hyperthyroidism was thought to result in a complete absence of TSH, and that the highest value considered normal was about 4 (milli-international-units/litre).

 

This apparently pretty much solved the problem of diagnosing thyroid disorders.

 

Forgetfulness

 

It's no longer necessary to diagnose hypo- and hyper-thyroidism by symptoms. It was error prone anyway, and the question is easily decided by a cheap and simple test.

 

Natural Desiccated Thyroid is one with Nineveh and Tyre.

 

No doctor trained since the 1980s knows much about hypothyroid symptoms.

 

Medical textbooks mention them only in passing, as an unweighted list of classic symptoms. You couldn't use that for diagnosis of this famously difficult disease.

 

If you suspect hypothyroidism, you order a TSH test. If the value of TSH is very low, that's hyperthyroidism. If the value is very high then that's hypothyroidism. Otherwise you're 'euthyroid' (Greek again: 'good thyroid'), and your symptoms are caused by some other problem.

 

The treatment for hyperthyroidism is to damage the thyroid gland. There are various ways. This often results in hypothyroidism. *For reasons that are not terribly well understood*.

 

The treatment for hypothyroidism is to give the patient sufficient thyroxine (T4) to cause TSH levels to come back into their normal range.

 

The conditions hyperthyroidism and hypothyroidism are now *defined* by TSH levels.

 

Hypothyroidism, in particular, a fairly common disease, is considered to be such a solved problem that it's usually treated by the GP, without involving any kind of specialist.

 

 

Present Day

 

It was found that the traditional amount of thyroxine (T4) administered to cure hypothyroid patients, the amount that had always been used to replace the hormones once produced by a thyroid gland now dead, destroyed, or surgically removed, was in fact too high. That amount causes suppression of TSH to below its normal range. The brain, theory says, is asking for the level to be reduced.

 

The amount of T4 administered in such cases (there are many) has been reduced by a factor of around two, to the level where it produces 'normal' TSH levels in the blood. Treatment is now titrated to produce the normal levels of TSH.

 

TSH tests have improved enormously since their introduction, and are on their third or fourth generation. The accuracy of measurement is very good indeed.

 

It's now possible to detect the tiny remaining levels of TSH in overtly hyperthyroid patients, so hyperthyroidism is also now defined by the TSH test.

 

In England, the normal range is 0.35 to 5.5 mIU/litre. This is considered to be the definition of 'euthyroidism'. If your levels are normal, you're fine.

 

If you have hypothyroid symptoms but a normal TSH level, then your symptoms are caused by something else. Look for Anaemia, look for Lyme Disease. There are hundreds of other possible causes. Once you rule out all the other causes, then it's the mysterious CFS/FMS/ME, for which there is no cause and no treatment.

 

If your doctor is very good, very careful and very paranoid, he might order tests of the levels of T4 and T3 directly. But actually the direct T4 and T3 tests, although much more accurate than they were in the 1960s, are quite badly standardised, and there's considerable controversy about what they actually measure. Different assay techniques can produce quite different readings. They're expensive. It's fairly common, and on the face of it perfectly reasonable, for a lab to refuse to conduct the T3 and T4 tests if the TSH level is normal.

 

It's been discovered that quite small increases in TSH actually predict hypothyroidism. Minute changes in thyroid hormone levels, which don't produce symptoms, cause detectable changes in the TSH levels. Normal, but slightly high values of TSH, especially in combination with the presence of thyroid related antibodies (there are several types), indicate a slight risk of one day developing hypothyroidism.

 

There's quite a lot of controversy about what the normal range for TSH actually is. Many doctors consider that the optimal range is 1-2, and target that range when administering thyroxine. Many think that just getting the value in the normal range is good enough. None of this is properly understood, to understate the case rather dramatically.

 

There are new categories, 'sub-clinical hypothyroidism' and 'sub-clinical hyperthyroidism', which are defined by abnormal TSH tests in the absence of symptoms. There is considerable controversy over whether it is a good idea to treat these, in order to prevent subtle hormonal imbalances which may cause difficult-to-detect long term problems.

 

Everyone is a little concerned about accidentally over-treating people (remember that hyperthyroidism is now defined by TSH < 0.35).

 

Hyperthyroidism has long been associated with Atrial Fibrillation (a heart problem), and Osteoporosis, both very nasty things. A large population study in Denmark recently revealed that there is a greater incidence of Atrial Fibrillation in sub-clinical hyperthyroidism, and that hypothyroidism actually has a 'protective effect' against Atrial Fibrillation.

 

It's known that TSH has a circadian rhythm, higher in the early morning, lower at night. This makes the test rather noisy, as your TSH level can be doubled or halved depending on what time of day you have the blood drawn.

 

But the big problems of the 1960s and 1970s are completely solved. We are just tidying up the details.

 

Doubt

 

Many hypothyroid patients complain that they suffer from 'Tired All The Time', and have some of the classic hypothyroid symptoms, even though their TSH levels have been carefully adjusted to be in the normal range.

 

I've no idea how many, but opinions range from 'the great majority of patients are perfectly happy' to 'around half of hypothyroid sufferers have hypothyroid symptoms even though they're being treated'.

 

The internet is black with people complaining about it, and there are many books and alternative medicine practitioners trying to cure them, or possibly trying to extract as much money as possible from people in desperate need of relief from an unpleasant, debilitating and inexplicable malaise.

 

THE PLURAL OF ANECDOTE IS DATA.

 

Not good data, to be sure. But if ten people mention to you in passing that the sun is shining, you are a damned fool if you think you know nothing about the weather.

 

It's known that TSH ranges aren't 'normally distributed' (in the sense of Gauss/the bell curve distribution) in the healthy population.

 

If you log-transform them, they do look a bit more normal.

 

The American Academy of Clinical Biochemists, in 2003, decided to settle the question once and for all. They carefully screened out anyone with even the slightest sign that there might be anything wrong with their thyroid, and measured the TSH of those who remained very accurately.

 

In their report, they said (this is a direct quote):

 

In the future, it is likely that the upper limit of the serum TSH euthyroid reference range will be reduced to 2.5 mIU/L because >95% of rigorously screened normal euthyroid volunteers have serum TSH values between 0.4 and 2.5 mIU/L.
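
For a sense of where numbers like '0.4 and 2.5' come from: a reference range is typically just the central 95% of values in the screened-normal sample. A sketch with made-up lognormal data (the parameters are chosen only to land in a plausible ballpark):

```python
# How a reference range arises: quote the central 95% of TSH values from
# rigorously screened normals. Lognormal parameters invented for illustration.
import math
import random

random.seed(0)
samples = sorted(math.exp(random.gauss(0.0, 0.45)) for _ in range(10000))
lo = samples[int(0.025 * len(samples))]
hi = samples[int(0.975 * len(samples))]
print(f"reference range: {lo:.2f} - {hi:.2f} mIU/L")  # roughly 0.41 - 2.42
```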

 

Many other studies disagree, and propose wider ranges for normal TSH.

 

But if the AACB report were taken seriously, it would lead to diagnosis of hypothyroidism in vast numbers of people who are perfectly healthy! In fact the levels of noise in the test would put people whose thyroid systems are perfectly normal in danger of being diagnosed and inappropriately treated.

 

For fairly obvious reasons, biochemists have been extremely, and quite properly, reluctant to take the report of their own professional body seriously. And yet it is hard to see where the AACB have gone wrong in their report.

 

Neurasthenia is back.

 

A little after the time of the introduction of the TSH test, new forms of 'Tired All The Time' were discovered.

 

As I said, CFS and ME are just two names for the same thing. Fibromyalgia Syndrome (FMS) is much worse, since it is CFS with constant pain, for which there is no known cause and from which there is no relief. Most drugs make it worse.

 

But if you combine the three things (CFS/ME/FMS), then you get a single disease, which has a large number of very non-specific symptoms.

 

These symptoms are the classic symptoms of 'hypometabolism'. Any doctor who has a patient who has CFS/ME/FMS and hasn't tested their thyroid function is *de facto* incompetent. I think the vast majority of medical people would agree with this statement.

 

And yet, when you test the TSH levels in CFS/ME/FMS sufferers, they are perfectly normal.

 

All three/two/one are appalling, crippling, terrible syndromes which ruin people's lives. They are fairly common. You almost certainly know one or two sufferers. The suffering is made worse by the fact that most people believe that they're psychosomatic, which is a polite word for 'imaginary'.

 

And the people suffering are mainly middle-aged women. Middle-aged women are easy to ignore. Especially stupid middle-aged women who are worried about being overweight and obviously faking their symptoms in order to get drugs which are popularly believed to induce weight loss. It's clearly their hormones. Or they're trying to scrounge up welfare benefits. Or they're trying to claim insurance. Even though there's nothing wrong with them and you've checked so carefully for everything that it could possibly be.

 

But it's not all middle aged women. These diseases affect men, and the young. Sometimes they affect little children. Exhaustion, stupidity, constant pain. Endless other problems as your body rots away. Lifelong. No remission and no cure.

 

And I have Doubts of my Own

 

And I can't believe that careful, numerate Billewicz and his co-authors would have made this mistake, but I can't find where the doctors of the 1970s checked for the sensitivity of the TSH test.

 

Specificity, yes. They tested the TSH levels of a lot of people who hadn't got any sign of hypothyroidism. If you're well, then your TSH level will be in a narrow range, which may be 0-6, or it may be 1-2. Opinions are weirdly divided on this point, in a hard-to-explain way.

 

But Sensitivity? Where's the bit where they checked for the other arm of the conditional?

 

The bit where they show that no-one who's suffering from hypometabolism, and who gets well when you give them Desiccated Thyroid, had, on first contact, TSH levels outside the normal range.

 

If you're trying to prove A <=> B, you can't just prove A => B and call it a day. You couldn't get that past an A-level maths student. And certainly anyone with a science degree wouldn't make that error. Surely? I mean you shouldn't be able to get that past anyone who can reason their way out of a paper bag.

 

I'm going to say this a third time, because I think it's important and maybe it's not obvious to everyone.

 

If you're trying to prove that two things are the same thing, then proving that the first one is always the second one is not good enough.

 

IF YOU KNOW THAT THE KING OF FRANCE IS ALWAYS FRENCH, YOU DO *NOT* KNOW THAT ANYONE WHO IS FRENCH IS KING OF FRANCE.
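
In test-evaluation language, what the caps are shouting about is the difference between specificity and sensitivity. A minimal sketch, with numbers I have invented purely to show that the two are independent quantities:

```python
# Invented numbers, purely to show that high specificity is compatible
# with low sensitivity. 'Sick' = hypometabolic and recovers on desiccated
# thyroid; 'positive' = TSH outside the normal range at first contact.

sick_positive, sick_negative = 400, 600  # the arm nobody seems to have checked
well_positive, well_negative = 50, 950   # the arm the 1970s did check

specificity = well_negative / (well_negative + well_positive)
sensitivity = sick_positive / (sick_positive + sick_negative)

print(f"specificity = {specificity:.2f}")  # 0.95: the well mostly test normal
print(f"sensitivity = {sensitivity:.2f}")  # 0.40: yet most of the sick test normal too
```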

 

It's possible, of course, that I've missed this bit. As I say, 'History of Endocrinology' is not one of those popular, fashionable subjects that you can easily find out about.

 

I wonder if they just assumed that the thyroid system was a thermostat. The analogy is still common today.

 

But it doesn't look like a thermostat to me. The thyroid system with its vast numbers of hormones and transforming enzymes is insanely, incomprehensibly complicated. And very poorly understood. And evolutionarily ancient. It looks as though originally it was the system that coordinated metamorphosis. Or maybe it signalled when resources were high enough to undergo metamorphosis. But whatever it did originally in our most ancient ancestors, it looks as though the blind watchmaker has layered hack after hack after hack on top of it on the way to us.

 

Only the thyroid originally, controlling major changes in body plan in tiny creatures that metamorphose.

 

Of course, humans metamorphose too, but it's all in the womb, and who measures thyroid levels in the unborn when they still look like tiny fish?

 

And of course, humans undergo very rapid growth and change after we are born. Especially in the brain. Baby horses can walk seconds after they're born. Baby humans take months to learn to crawl. I wonder if that's got anything to do with cretinism.

 

And I'm told that baby humans have very high hormone levels. I wonder why they need to be so hot? If it's a thermostat, I mean.

 

But then on top of the thyroid, the pituitary. I wonder what that adds to the system? If the thyroid's just a thermostat, or just a device for keeping T4 levels constant, why can't it just do the sensing itself?

 

What evolutionary process created the pituitary control over the thyroid? Is that the thermostat bit?

 

And then the hypothalamus, controlling the pituitary. Why? Why would the brain need to set the temperature when the ideal temperature of metabolic reactions is always 37C in every animal? That's the temperature everything's designed for. Why would you dial it up or down, to a place where the chemical reactions that you are don't work properly?

 

I can think of reasons why. Perhaps you're hibernating. Many of our ancestors must have hibernated. Maybe it's a good idea to slow the metabolism sometimes. Perhaps to conserve your fat supplies. Your stored food.

 

Perhaps it's a good idea to slow the metabolism in times of famine?

 

Perhaps the whole calories in/calories out thing is wrong, and people whose energy expenditure goes over their calorie intake have slow metabolisms, slowly sacrificing every bodily function including immune defence in order to avoid starvation.

 

I wonder at the willpower that could keep an animal sane in that state. While its body does everything it can to keep its precious fat reserves high so that it can get through the famine.

 

And then I remember about Anorexia Nervosa, where young women who want to lose weight starve themselves to the point where they no longer feel hungry at all. Another mysterious psychological disease that's just put down to crazy females. We really need some female doctors.

 

And I remember about Seth Roberts' Shangri-La Diet, that I tried, to see if it worked, some years ago, just because it was so weird, where by eating strange things, like tasteless oil and raw sugar, you can make your appetite disappear, and lose weight. It seemed to work pretty well, to my surprise. Seth came up with it while thinking about rats. And apparently it works on rats too. I wonder why it hasn't caught on.

 

It seems, my female friends tell me, that a lot of diets work well for a bit, but then after a few weeks the effect just stops. If we think of a particular diet as a meme, this would seem to be its infectious period, where the host enthusiastically spreads the idea.

 

And I wonder about the role of the thyronine de-iodinating enzymes, and the whole fantastically complicated process of stripping the iodines and the amino acid bits from thyroxine in various patterns that no-one understands, and what could be going on there if the thyroid system were just a simple thermostat.

 

And I wonder about reports I am reading where elite athletes are finding themselves suffering from hypothyroidism in numbers far too large to be credible, unless it were, say, a physical response to calorie intake less than calorie output.

 

I've been looking ever so hard to find out why the TSH test, or any of the various available thyroid blood tests are a good way to assess the function of this fantastically complicated and very poorly understood system.

 

But every time I look, I just come up with more reasons to believe that they don't tell you very much at all.

 

 

The Mystery

 

Can anyone convince me that the converse arm has been carefully checked?

 

That everyone who's suffering from hypometabolism, and who gets well when you give them Desiccated Thyroid, has, before you fix them, TSH levels outside the normal range.

 

In other words, that we haven't, through carelessness, just thrown away a long-standing, perfectly safe, well-tested treatment for a horrible disabling disease that often causes excruciating pain; a disease that the Victorians knew how to cure, and that the people of the 1950s and 60s routinely cured.

Spreading rationality through engagement with secular groups

15 Gleb_Tsipursky 19 January 2016 11:19PM

The Less Wrong meetup in Columbus, OH is very oriented toward popularizing rationality for a broad audience (in fact, Intentional Insights sprang from this LW meetup). We've found that doing in-person presentations for secular groups is an excellent way of attracting new people to rationality, and have been doing that for a couple of years now, through a group called "Columbus Rationality" as part of the local branch of the American Humanist Association. Here's a blog post I just published about this topic.

 

Most importantly, for anyone who is curious about experimenting with something like this, we at Intentional Insights have put together a “Rationality” group starter package, which includes two blog posts describing “Rationality” events, three videos, a facilitator’s guide, an introduction guide, and a feedback sheet. We've been working on this starter package for about 9 months, and finally it's in a shape where we think it's ready for use. Hope this is helpful for any LWs who want to do something similar with a secular group where you live. You can also get in touch with us at info@intentionalinsights.org to get connected to current participants in “Columbus Rationality” who can give you tips on setting up such a group in your own locale.

The Growth of My Pessimism: Transhumanism, Immortalism, Effective Altruism.

15 diegocaleiro 28 November 2015 11:07AM
This text has many, many hyperlinks; it is useful to at least glance at the front page of the linked material to get it. It is an expression of me thinking, so it has many community jargon terms. Thanks to Oliver Habryka, Daniel Kokotajlo and James Norris for comments. No, really, check the front page of the hyperlinks.
  • Why I Grew Skeptical of Transhumanism
  • Why I Grew Skeptical of Immortalism
  • Why I Grew Skeptical of Effective Altruism
  • Only Game in Town

 

Wonderland’s rabbit said it best: The hurrier I go, the behinder I get.

 

We approach 2016, and the more I see light, the more I see brilliance popping everywhere, the Effective Altruism movement growing, TEDs and Elons spreading the word, the more we switch our heroes in the right direction, the behinder I get. But why? - you say.

Clarity, precision, I am tempted to reply. I have left the intellectual suburbs of Brazil, straight into the strongest hub of production of things that matter, The Bay Area, via Oxford’s FHI office; I now split my time between UC Berkeley and the CFAR/MIRI office. In the process, I have navigated an ocean of information, read hundreds of books and papers, watched thousands of classes, and become proficient in a handful of languages and a handful of intellectual disciplines. I’ve visited Olympus and met our living demigods in person as well.

Against the overwhelming forces of an extremely upbeat personality surfing a hyper base-level happiness, these three forces (approaching the center, learning voraciously, and meeting the so-called heroes) have brought me to my current state of pessimism.

I was a transhumanist, an immortalist, and an effective altruist.

 

Why I Grew Skeptical of Transhumanism

The transhumanist in me is skeptical that technological development will be fast enough for improving the human condition to be worth it now; he sees most technologies as fancy toys that don’t get us there. Our technologies can’t, and won’t for a while, lead our minds to peaks anywhere near the peaks we found by simply introducing weirdly shaped molecules into our brains. The strangeness of Salvia, the beauty of LSD, the love of MDMA are orders and orders of magnitude beyond what we know how to change from an engineering perspective. We can induce a rainbow, but we don’t even have the concept of force yet. Our knowledge about the brain, given our goals about the brain, is at the level of knowledge of physics of someone who found out that spraying water on a sunny day causes the rainbow. It’s not even physics yet.

Believe me, I have read thousands of pages of papers in the most advanced topics in cognitive neuroscience; my advisor spent his entire career, from Harvard to tenure, doing neuroscience, and was the first person to heal a brain to the point of recovering functionality by implanting non-human neurons. As Marvin Minsky, who invented the multi-agent computational theory of mind, told me: I don’t recommend entering a field where every four years all knowledge is obsolete; they just don’t know it yet.

 

Why I Grew Skeptical of Immortalism

The immortalist in me is skeptical because he understands the complexity of biology from conversations with the centimillionaires and with the chief scientists of anti-ageing research facilities worldwide. He has met the bio-startup founders, and gets that the structure of incentives does not look good for bio-startups anyway. So although he was once very excited about the prospect of defeating the mechanisms of ageing, back when less than 300 thousand dollars were directly invested in it, he is now, with billions pledged against ageing, confident that the problem is substantially harder to surmount than the man-hours left to be invested in it allow, at least during my lifetime, or before the Intelligence Explosion.

Believe me, I was the first cryonicist among the 200 million people of my country, won a prize for anti-ageing research at the bright young age of 17, and hang out on a regular basis with all the people in this world who want to beat death and still share in our privilege of living, just in case some new insight comes that changes the tides, but none has come in the last ten years, as our friend Aubrey will be keen to tell you in detail.

 

Why I Grew Skeptical of Effective Altruism

The Effective Altruist in me is skeptical too, although less so. I'm still founding an EA research institute, keeping a loving eye on the one I left behind, living with EAs, working at EA offices, and mostly broadcasting ideas and researching with EAs. Here are some problems with EA which make me skeptical after being shaken around by the three forces:

  1. The Status Games: Signalling, countersignalling, going one more meta-level up, outsmarting your opponent, seeing others as opponents, my cause is the only true cause, zero-sum mating scarcity, pretending that poly eliminates mating scarcity, founders vs joiners, researchers vs executives, us institutions versus them institutions, cheap individuals versus expensive institutional salaries; it's gore all the way up and down.

  2. Reasoning by Analogy: Few EAs are able to do their intellectual due diligence, and fewer are doing it. I don’t blame them: the space of Crucial Considerations is not only very large, but extremely uncomfortable to look at. Who wants to know our species has not even found the stepping stones to make sure that what matters is preserved and guaranteed at the end of the day? It is a hefty ordeal. Nevertheless, it is problematic that fewer than 20 EAs (one in 300?) are actually reasoning from first principles, thinking all things through from the very beginning. Most of us are looking away from at least some philosophical assumption or technological prediction. Most of us are cooks and not yet chefs. Some of us have not even woken up yet.

  3. Babies with a Detonator: Most EAs still carry their transitional objects around, clinging desperately to an idea or a person they think more guaranteed to be true, be it hardcore patternism about philosophy of mind, global aggregative utilitarianism, veganism, or the expectation of immortality.

  4. The Size of the Problem: No matter if you are fighting suffering, Nature, Chronos (death), Azathoth (evolutionary forces) or Moloch (deranged emergent structures of incentives), the size of the problem is just tremendous. One completely ordinary reason to not want to face the problem, or to be in denial, is the problem’s enormity.

  5. The Complexity of The Solution: Let me spell this out: the nature of the solution is not simple in the least. It’s possible that we luck out, and it turns out that the Orthogonality Thesis and the Doomsday Argument and Mind Crime are just philosophical curiosities with no practical bearing on our earthly engineering efforts, that the AGI or Emulation will by default fall into an attractor basin which implements some form of MaxiPok with details that it only grasps after CEV or the Crypto, and we will be OK. It is possible, and it is more likely than that our efforts will end up being the decisive factor. We need to focus our actions on the branches where they matter, though.

  6. The Nature of the Solution: So let’s sit down side by side and stare at the void together for a bit. The nature of the solution is getting a group of apes who just invented the internet, from everywhere around the world, to coordinate an effort that fills in the entire box of Crucial Considerations yet unknown - this is the goal of Convergence Analysis, by the way - find every single last one of them to the point where the box is filled, then, once we have all the Crucial Considerations available, develop, faster than anyone else trying, a translation scheme that translates our values to a machine or emulation, in a physically sound and technically robust way (that’s if we don’t find a Crucial Consideration otherwise which, say, steers our course towards Mars). Then we need to develop the engineering prerequisites to implement a thinking being smarter than all our scientists together, which can reflect philosophically better than the last two thousand years of effort while becoming the most powerful entity in the universe’s history, and which will fall into the right attractor basin within mindspace. That’s if Superintelligences are even technically possible. Add to that that we, or it, have to guess correctly all the philosophical problems that are A) relevant and B) unsolvable within physics (if any) or by computers. All of this has to happen while the most powerful corporations, States, armies and individuals attempt to seize control of the smart systems themselves, without being curtailed by the hindering counter-incentive of not destroying the world, either because they don’t realize it, or because the first mover advantage seems worth the risk, or because they are about to die anyway so there’s not much to lose.

  7. How Large an Uncertainty: Our uncertainties loom large. We have some technical but not much philosophical understanding of suffering, and our technical understanding is insufficient to confidently assign moral status to other entities, especially if they diverge in more dimensions than brain size and architecture. We’ve barely scratched the surface of technical understanding on happiness increase, and philosophical understanding is also in its first steps.

  8. Macrostrategy is Hard: A Chess Grandmaster usually takes many years to acquire sufficient strategic skill to command the title. It takes a deep and profound understanding of unfolding structures to grasp how to beam a message or a change into the future. We are attempting to beam a complete value lock-in in the right basin, which is proportionally harder.

  9. Probabilistic Reasoning = Reasoning by Analogy: We need a community that at once understands probability theory, doesn’t play reference class tennis, and doesn’t lose motivation by considering the base rates of other people trying to do something, because the other people were cooks, not chefs, and also because sometimes you actually need to try a one in ten thousand chance. But people are too proud of their command of Bayes to let go of the easy chance of showing off their ability to find mathematically sound reasons not to try.

  10. Excessive Trust in Institutions: Very often people go through a simplifying set of assumptions that collapses a brilliant idea into an awful donation, when they reason:
    I have concluded that cause X is the most relevant
    Institution A is an EA organization fighting for cause X
    Therefore I donate to institution A to fight for cause X.
    To begin with, this is very expensive compared to donating to any of the three P’s: projects, people or prizes. Furthermore, the crucial points to fund institutions are when they are about to die, just starting, or building a type of momentum that has a narrow window of opportunity where the derivative gains are particularly large or you have private information about their current value. To agree with you about a cause being important is far from sufficient to assess the expected value of your donation.

  11. Delusional Optimism: Everyone who, like past-me, moves in with delusional optimism will always have a blind spot in the feature of reality about which they are in denial. It is not a problem to have some individuals with a blind spot, as long as the rate doesn’t surpass some group sanity threshold; yet, on an individual level, it is often the case that those who can gaze into the void a little longer than the rest end up being the ones who accomplish things. Staring into the void makes people show up.

  12. Convergence of opinions may strengthen separation within EA: Thus far, the longer someone has been an EA, the more likely they are to transition to an opinion in the subsequent boxes in this flowchart, from whichever box they are in at the time. There are still people in all the opinion boxes, but the trend has been to move in that flow. Institutions, however, have a harder time escaping being locked into a specific opinion. As FHI moves deeper into AI, and GWWC into poverty, 80k into career selection etc., they become more congealed. People’s opinions are still changing, and some of the money follows, but institutions are crystallizing into some opinions, and in the future they might prevent transition between opinion clusters and free mobility of individuals, like national frontiers already do. Once institutions, which in theory are commanded by people who agree with institutional values, notice that their rate of loss towards the EA movement is higher than their rate of gain, they will have incentives to prevent the flow of talent, ideas and resources that has so far been a hallmark of Effective Altruism, and why many of us find it impressive: its being an intensional movement. Any part that congeals or becomes extensional will drift off behind, and this may create insurmountable separation between groups that want to claim ‘EA’ for themselves.

 

Only Game in Town

 

The reasons above have transformed a pathological optimist into a wary skeptic about our future, and about the value of our plans to get there. And yet, I don’t see any option other than to continue the battle. I wake up in the morning and consider my alternatives: Hedonism, well, that is fun for a while, and I could try a quantitative approach to guarantee maximal happiness over the course of the 300 000 hours I have left. But all things considered, anyone reading this is already too close to the epicenter of something that can become extremely important and change the world to have the affordance to wander off indeterminately. I look at my high base-happiness and don’t feel justified in maximizing it up to the point of no marginal return; there clearly is value elsewhere than here (points inwards); clearly the self of which I am made has strong altruistic urges anyway, so, at least above a threshold of happiness, it has reason to purchase the extremely good deals in expected-value happiness of others that seem to be on the market. Other alternatives? Existentialism? Well, yes, we always have a fundamental choice and I feel the thrownness into this world as much as any Kierkegaard does. Power? When we read Nietzsche it gives that fantasy impression that power is really interesting and worth fighting for, but at the end of the day we still live in a universe where the wealthy are often reduced to having to spend their power in pathetic signalling games and zero-sum disputes, or coercing minds to act against their will. Nihilism and Moral Fictionalism, like Existentialism, all collapse into having a choice, and if I have a choice my choice is always going to be the choice to, most of the time, care, try and do.

Ideally, I am still a transhumanist and an immortalist. But in practice, I have abandoned those noble ideals, and pragmatically only continue to be an EA.

It is the only game in town.

What's wrong with this picture?

14 CronoDAS 28 January 2016 01:30PM

Alice: "I just flipped a coin [large number] times. Here's the sequence I got:

 

(Alice presents her sequence.)

 

Bob: No, you didn't. The probability of having gotten that particular sequence is 1/2^[large number]. Which is basically impossible. I don't believe you.

 

Alice: But I had to get some sequence or other. You'd make the same claim regardless of what sequence I showed you.

 

Bob: True. But am I really supposed to believe you that a 1/2^[large number] event happened, just because you tell me it did, or because you showed me a video of it happening, or even if I watched it happen with my own eyes? My observations are always fallible, and if you make an event improbable enough, why shouldn't I be skeptical even if I think I observed it?

 

Alice: Someone usually wins the lottery. Should the person who finds out that their ticket had the winning numbers believe the opposite, because winning is so improbable?

 

Bob: What's the difference between finding out you've won the lottery and finding out that your neighbor is a 500 year old vampire, or that your house is haunted by real ghosts? All of these events are extremely improbable given what we know of the world.

 

Alice: There's improbable, and then there's impossible. 500 year old vampires and ghosts don't exist.

 

Bob: As far as you know. And I bet more people claim to have seen ghosts than have won more than 100 million dollars in the lottery.

 

Alice: I still think there's something wrong with your reasoning here.
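
One hedged way to cash out what's wrong with Bob's reasoning: compare hypotheses about how the data was produced, rather than staring at the raw improbability of the data. The numbers below are invented, and the assumption that a lying Alice would pick her fake sequence uniformly at random is doing real work:

```python
# Compare hypotheses, not raw improbabilities. Invented numbers; assumes a
# fabricating Alice picks her fake sequence uniformly at random, so the
# 2^-n likelihood appears on both sides and cancels.

n = 100                # length of the reported sequence
p_seq = 2.0 ** -n      # P(this exact sequence | fair coin, honest report)
p_honest = 0.99        # prior that Alice reports honestly
p_lie = 1 - p_honest   # prior that she fabricates

posterior_honest = (p_honest * p_seq) / (p_honest * p_seq + p_lie * p_seq)
print(posterior_honest)  # 0.99: the tiny 2^-n cancels out of the posterior
```

On this sketch, the lottery and the vampire differ not in the improbability of the observation, but in the prior probability of the hypotheses that could have generated it.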

Perhaps a better form factor for Meetups vs Main board posts?

14 lionhearted 28 January 2016 11:50AM

I like to read posts on "Main" from time to time, including ones that haven't been promoted. However, lately, these posts get drowned out by all the meetup announcements.

It seems like this could lead to a cycle where people comment less on recent non-promoted posts (because they fall off the Main non-promoted area quickly), which leads to less engagement, fewer posts, and so on.

Meetups are also very important, but here's the rub: I don't think a text-based announcement in the Main area is the best possible way to showcase meetups.

So here's an idea: how about creating either a calendar of upcoming meetups, or map with pins on it of all places having a meetup in the next three months?

This could be embedded on the front page of lesswrong.com -- that'd let people find meetups more easily (they could look by timeframe, or see if their region is represented), and would give more space to new non-promoted posts, which would hopefully promote more discussion, engagement, and new posts.

Thoughts?

[Link] AlphaGo: Mastering the ancient game of Go with Machine Learning

14 ESRogs 27 January 2016 09:04PM

DeepMind's Go AI, called AlphaGo, has beaten the European champion with a score of 5-0. A match against the top-ranked human, Lee Se-dol, is scheduled for March.

 

Games are a great testing ground for developing smarter, more flexible algorithms that have the ability to tackle problems in ways similar to humans. Creating programs that are able to play games better than the best humans has a long history

[...]

But one game has thwarted A.I. research thus far: the ancient game of Go.


Beware surprising and suspicious convergence

14 Thrasymachus 24 January 2016 07:13PM

[Cross]

Imagine this:

Oliver: … Thus we see that donating to the opera is the best way of promoting the arts.

Eleanor: Okay, but I’m principally interested in improving human welfare.

Oliver: Oh! Well I think it is also the case that donating to the opera is best for improving human welfare too.

Generally, what is best for one thing is usually not the best for something else, and thus Oliver’s claim that donations to opera are best for both the arts and human welfare is surprising. We may suspect bias: that Oliver’s claim that the opera is best for human welfare is primarily motivated by his enthusiasm for opera and his desire to find reasons in its favour, rather than by a cooler, more objective search for what is really best for human welfare.

The rest of this essay tries to better establish what is going on (and going wrong) in cases like this. It is in three parts: the first looks at the ‘statistics’ of convergence - in what circumstances is it surprising to find one object judged best by the lights of two different considerations? The second looks more carefully at the claim of bias: how it might be substantiated, and how it should be taken into consideration. The third returns to the example given above, and discusses the prevalence of this sort of error ‘within’ EA, and what can be done to avoid it.

Varieties of convergence

Imagine two considerations, X and Y, and a field of objects to be considered. For each object, we can score it by how well it performs by the lights of the considerations of X and Y. We can then plot each object on a scatterplot, with each axis assigned to a particular consideration. How could this look?

[Figure: scatterplots illustrating no, weak, and strong convergence]

At one extreme, the two considerations are unrelated, and thus the scatterplot shows no association. Knowing how well an object fares by the lights of one consideration tells you nothing about how it fares by the lights of another, and the chance that the object that scores highest on consideration X also scores highest on consideration Y is very low. Call this no convergence.

At the other extreme, considerations are perfectly correlated, and the ‘scatter’ plot has no scatter, but rather a straight line. Knowing how well an object fares by consideration X tells you exactly how well it fares by consideration Y, and the object that scores highest on consideration X is certain to be scored highest on consideration Y. Call this strong convergence.

In most cases, the relationship between two considerations will lie between these extremes: call this weak convergence. One example is there being a general sense of physical fitness, thus how fast one can run and how far one can throw are somewhat correlated. Another would be intelligence: different mental abilities (pitch discrimination, working memory, vocabulary, etc. etc.) all correlate somewhat with one another.

More relevant to effective altruism, there also appears to be weak convergence between different moral theories and different cause areas. What is judged highly by (say) Kantianism tends to be judged highly by Utilitarianism: although there are well-discussed exceptions to this rule, both generally agree that (among many examples) assault, stealing, and lying are bad, whilst kindness, charity, and integrity are good.(1) In similarly broad strokes what is good for (say) global poverty is generally good for the far future, and the same applies for between any two ‘EA’ cause areas.(2)

In cases of weak convergence, points will form some sort of elliptical scatter, and knowing how an object scores on X does tell you something about how well it scores on Y. If you know that something scores highest for X, your expectation of how it scores for Y should go up, and the chance that it also scores highest for Y should increase. However, the absolute likelihood of it being best for both X and Y remains low, for two main reasons:

[Figure: divergence at the tails]

Trade-offs: Although considerations X and Y are generally positively correlated, there might be a negative correlation at the far tail, due to attempts to optimize for X or Y at disproportionate expense of the other. Although in the general population running and throwing will be positively correlated with one another, elite athletes may optimize their training for one or the other, and thus those who specialize in throwing and those who specialize in running diverge. In a similar way, we may believe there is scope for similar optimization when it comes to charities or cause selection.


Chance: (cf.) Even in cases where there are no trade-offs, as long as the two considerations are somewhat independent, random fluctuations will usually ensure the best by consideration X will not be best by consideration Y. That X and Y only weakly converge implies other factors matter for Y besides X. For the single object that is best for X, there will be many more not best for X (but still very good), and out of this large number of objects it is likely one will do very well on these other factors and end up the best for Y overall. Inspection of most pairs of correlated variables confirms this: those with higher IQ scores tend to be wealthier, but the very smartest aren’t the very wealthiest (and vice versa); serving fast is good in tennis, but the very fastest servers are not the best players (and vice versa); and so on. Graphically speaking, most scatter plots bulge in an ellipse rather than sharpen to a point.

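This "bulge rather than sharpen" claim is easy to check numerically. Below is a minimal sketch (mine, not from the original essay; the population size and correlation are illustrative assumptions) that draws two weakly correlated scores for each object and counts how often the top scorer on X is also the top scorer on Y:

```python
import random

random.seed(0)

N = 10_000   # population of objects (illustrative assumption)
RHO = 0.5    # assumed weak convergence: corr(X, Y) = 0.5
TRIALS = 200

hits = 0
for _ in range(TRIALS):
    # X and Y share a common factor, so corr(X, Y) is approximately RHO.
    a, b = RHO ** 0.5, (1 - RHO) ** 0.5
    objects = [
        (a * c + b * random.gauss(0, 1), a * c + b * random.gauss(0, 1))
        for c in (random.gauss(0, 1) for _ in range(N))
    ]
    best_x = max(range(N), key=lambda i: objects[i][0])
    best_y = max(range(N), key=lambda i: objects[i][1])
    hits += best_x == best_y

print(f"best on X was also best on Y in {hits}/{TRIALS} trials")
# Rarely true, even though X and Y are clearly positively correlated
# across the whole population.
```
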
The following features make a single object scoring highest on two considerations more likely:

  1. The smaller the population of objects. Were the only two options available to Oliver and Eleanor “Give to the Opera” and “Punch people in the face”, it is unsurprising that the former comes top for many considerations.
  2. The strength of their convergence. The closer the correlation moves to collinearity, the less surprising finding out something is best for both. It is less surprising the best at running 100m is best at running 200m, but much more surprising if it transpired they threw discus best too.
  3. The ‘wideness’ of the distribution. The heavier the tails, the more likely a distribution is to be stretched out and ‘sharpen’ to a point, and the less likely bulges either side of the regression line are to be populated. (I owe this to Owen Cotton-Barratt)

In the majority of cases (including those relevant to EA), there is a large population of objects and only weak convergence, so (pace the often heavy-tailed distributions implicated) it is uncommon for one thing to be best by the lights of two weakly converging considerations.

Proxy measures and prediction

In the case that we have nothing to go on to judge what is good for Y save knowing what is good for X, our best guess for what is best for Y is what is best for X. Thus the opera is the best estimate for what is good for human welfare, given only the information that it is best for the arts. In this case, we should expect our best guess to be very likely wrong. Although it is more likely than any similarly narrow alternative (“donations to the opera, or donations to X-factor?”), its absolute likelihood relative to the rest of the hypothesis space is very low (“donations to the opera, or something else?”).

Of course, we usually have more information available. Why not search directly for what is good for human welfare, instead of looking at what is good for the arts? Often searching for Y directly rather than a weakly converging proxy indicator will do better: if one wants to select a relay team, selecting based on running speed rather than throwing distance looks a better strategy. Thus finding out a particular intervention (say the Against Malaria Foundation) comes top when looking for what is good for human welfare provides much stronger evidence it is best for human welfare than finding out the opera comes top when looking for what is good for a weakly converging consideration.(3)

Pragmatic defeat and Poor Propagation

Eleanor may suspect bias is driving Oliver’s claim on behalf of the opera. The likelihood of the opera being best for both the arts and human welfare is low, even taking their weak convergence into account. The likelihood of bias and motivated cognition colouring Oliver’s judgement is higher, especially if Oliver has antecedent commitments to the opera. Three questions: 1) Does this affect how she should regard Oliver’s arguments? 2) Should she keep talking to Oliver, and, if she does, should she suggest to him he is biased? 3) Is there anything she can do to help ensure she doesn’t make a similar mistake?

Grant Eleanor is right that Oliver is biased. So what? It entails neither he is wrong nor the arguments he offers in support are unsound: he could be biased and right. It would be a case of the genetic fallacy (or perhaps ad hominem) to argue otherwise. Yet this isn’t the whole story: informal ‘fallacies’ are commonly valuable epistemic tools; we should not only attend to the content of arguments offered, but argumentative ‘meta-data’ such as qualities of the arguer as well.(4)

Consider this example. Suppose you are uncertain whether God exists. A friendly local Christian apologist offers the reasons why (in her view) the balance of reason clearly favours Theism over Atheism. You would be unwise to judge the arguments purely ‘on the merits’: for a variety of reasons, the Christian apologist is likely to have slanted the evidence she presents to favour Theism; the impression she will give of where the balance of reason lies will poorly track where the balance of reason actually lies. Even if you find her arguments persuasive, you should at least partly discount this by what you know of the speaker.

In some cases it may be reasonable to dismiss sources ‘out of hand’ due to their bias without engaging on the merits: we may expect the probative value of the reasons they offer, when greatly attenuated by the anticipated bias, to not be worth the risks of systematic error if we mistake the degree of bias (which is, of course, very hard to calculate); alternatively, it might just be a better triage of our limited epistemic resources to ignore partisans and try and find impartial sources to provide us a better view of the balance of reason.

So: should Eleanor stop talking to Oliver about this topic? Often, no. First (or maybe zeroth), there is the chance she is mistaken about Oliver being biased, and further discussion would allow her to find this out. Second, there may be tactical reasons: she may want to persuade third parties to their conversation. Third, she may guess further discussion is the best chance of persuading Oliver, despite the bias he labours under. Fourth, it may still benefit Eleanor: although bias may undermine the strength of reasons Oliver offers, they may still provide her with valuable information. Being too eager to wholly discount what people say based on assessments of bias (which are usually partly informed by object level determinations of various issues) risks entrenching one’s own beliefs.

Another related question is whether it is wise for Eleanor to accuse Oliver of bias. There are some difficulties. Things that may bias are plentiful, thus counter-accusations are easy to make: (“I think you’re biased in favour of the opera due to your prior involvement”/”Well, I think you’re biased against the opera due to your reductionistic and insufficiently holistic conception of the good.”) They are apt to devolve into the personally unpleasant (“You only care about climate change because you are sleeping with an ecologist”) or the passive-aggressive (“I’m getting really concerned that people who disagree with me are offering really bad arguments as a smokescreen for their obvious prejudices”). They can also prove difficult to make headway on. Oliver may assert his commitment was after his good-faith determination that opera really was best for human welfare and the arts. Many, perhaps most, claims like these are mistaken, but it can be hard to tell (or prove) which.(5)

Eleanor may want to keep an ‘internal look out’ to prevent her making a similar mistake to Oliver. One clue is a surprising lack of belief propagation: we change our mind about certain matters, and yet our beliefs about closely related matters remain surprisingly unaltered. In most cases where someone becomes newly convinced of (for example) effective altruism, we predict this should propagate forward and effect profound changes to their judgements on where to best give money or what is the best career for them to pursue. If Eleanor finds in her case that this does not happen, that in her case her becoming newly persuaded by the importance of the far future does not propagate forward to change her career or giving, manifesting instead in a proliferation of ancillary reasons that support her prior behaviour, she should be suspicious of this surprising convergence between what she thought was best then, and what is best now under considerably different lights.

EA examples

Few effective altruists seriously defend the opera as a leading EA cause. Yet the general problem of endorsing surprising and suspicious convergence remains prevalent. Here are some provocative examples:

  1. The lack of path changes. Pace personal fit, friction, sunk capital, etc. it seems people who select careers on ‘non EA grounds’ often retain them after ‘becoming’ EA, and then provide reasons why (at least for them) persisting in their career is the best option.
  2. The claim that, even granting the overwhelming importance of the far future, it turns out that animal welfare charities are still the best to give to, given their robust benefits, positive flow through effects, and the speculativeness of far future causes.
  3. The claim that, even granting the overwhelming importance of the far future, it turns out that global poverty charities are still the best to give to, given their robust benefits, positive flow through effects, and the speculativeness of far future causes.
  4. Claims from enthusiasts of Cryonics or anti-aging research that this, additional to being good for their desires for an increased lifespan, is also a leading ‘EA’ buy.
  5. A claim on behalf of veganism that it is the best diet for animal welfare and for the environment and for individual health and for taste.

All share similar features: one has prior commitments to a particular cause area or action. One becomes aware of a new consideration which has considerable bearing on these priors. Yet these priors don’t change, and instead ancillary arguments emerge to fight a rearguard action on behalf of these prior commitments - that instead of adjusting these commitments in light of the new consideration, one aims to co-opt the consideration to the service of these prior commitments.

Naturally, that some rationalize doesn’t preclude others being reasonable, and the presence of suspicious patterns of belief doesn’t make them unwarranted. One may (for example) work in global poverty due to denying the case for the far future (via a person affecting view, among many other possibilities) or aver there are even stronger considerations in favour (perhaps an emphasis on moral uncertainty and peer disagreement and therefore counting the much stronger moral consensus around stopping tropical disease over (e.g.) doing research into AI risk as the decisive consideration).

Also, for weaker claims, convergence is much less surprising. Were one to say on behalf of veganism: “It is best for animal welfare, but also generally better for the environment and personal health than carnivorous diets. Granted, it does worse on taste, but it is clearly superior all things considered”, this seems much less suspect (and also much more true) than the claim it is best by all of these metrics. It would be surprising if the optimal diet for personal health did not include at least some animal products.

Caveats aside, though, these lines of argument are suspect, and further inspection deepens these suspicions. In sketch, one first points to some benefits the prior commitment has by the lights of the new consideration (e.g. promoting animal welfare promotes antispeciesism, which is likely to make the far future trajectory go better), and second remarks about how speculative searching directly on the new consideration is (e.g. it is very hard to work out what we can do now which will benefit the far future).(6)

That the argument tends to end here is suggestive of motivated stopping. For although the object level benefits of (say) global poverty are not speculative, their putative flow-through benefits on the far future are speculative. Yet work to show that this is nonetheless less speculative than efforts to ‘directly’ work on the far future is left undone.(7) Similarly, even if it is the case the best way to make the far future go better is to push on a proxy indicator, which one? Work on why (e.g.) animal welfare is the strongest proxy out of competitors also tends to be left undone.(8) As a further black mark, it is suspect that those maintaining global poverty is the best proxy almost exclusively have prior commitments to global poverty causes, mutatis mutandis animal welfare, and so on.

We at least have some grasp of what features of (e.g.) animal welfare interventions make them good for the far future. If this (putatively) was the main value of animal welfare interventions due to the overwhelming importance of the far future, it would seem wise to try and pick interventions which maximize these features. So we come to a recursion: within animal welfare interventions, ‘object level’ and ‘far future’ benefits would be expected to only weakly converge. Yet (surprisingly and suspiciously) the animal welfare interventions recommended by the lights of the far future are usually the same as those recommended on ‘object level’ grounds.

Conclusion

If Oliver were biased, he would be far from alone. Most of us are (like it or not) at least somewhat partisan, and our convictions are in part motivated by extra-epistemic reasons: be it vested interests, maintaining certain relationships, group affiliations, etc. In pursuit of these ends we defend our beliefs against all considerations brought to bear against them. Few beliefs are indefatigable by the lights of any reasonable opinion, and few policy prescriptions are panaceas. Yet all of ours are.

It is unsurprising that the same problems emerge within effective altruism: a particular case of ‘pretending to actually try’ is ‘pretending to actually take arguments seriously’.(9) These problems seem prevalent across the entirety of EA: that I couldn’t come up with good examples for meta or far future cause areas is probably explained by either bias on my part or a selection effect: were these things less esoteric, they would err more often.(10)

There’s no easy ‘in house’ solution, but I repeat my recommendations to Eleanor: as a rule, maintaining dialogue, presuming good faith, engaging on the merits, and listening to others seems a better strategy, even if we think bias is endemic. It is also worth emphasizing the broad (albeit weak) convergence between cause areas is fertile common ground, and a promising area for moral trade. Although it is unlikely that the best thing by the lights of one cause area is the best thing by the lights of another, it is pretty likely it will be pretty good. Thus most activities by EAs in a particular field should carry broad approbation and support from those working in others.

I come before you a sinner too. I made exactly the same sorts of suspicious arguments myself on behalf of global poverty. I’m also fairly confident my decision to stay in medicine doesn’t really track the merits either – but I may well end up a beneficiary of moral luck. I’m loath to accuse particular individuals of making the mistakes I identify here. But, insofar as readers think this may apply to them, I urge them to think again.(11)

Notes

  1. We may wonder why this is the case: the contents of the different moral theories are pretty alien to one another (compare universalizable imperatives, proper functioning, and pleasurable experiences). I suggest the mechanism is implicit selection by folk or ‘commonsense’ morality. Normative theories are evaluated at least in part by how well they accord with our common moral intuitions, and they lose plausibility commensurate to how much violence they do to them. Although cases where a particular normative theory apparently diverges from common sense morality are well discussed (consider Kantianism and the inquiring murderer, or Utilitarianism and the backpacker), moral theories that routinely contravene our moral intuitions are non-starters, and thus those that survive to be seriously considered somewhat converge with common moral intuitions, and therefore one another.
  2. There may be some asymmetry: on the object level we may anticipate the ‘flow forward’ effects of global health on x-risk to be greater than the ‘flow back’ benefits of x-risk work on global poverty. However (I owe this to Carl Shulman) the object level benefits are probably much smaller than more symmetrical ‘second order’ benefits, like shared infrastructure, communication and cross-pollination, shared expertise on common issues (e.g. tax and giving, career advice).
  3. But not always. Some things are so hard to estimate directly, and using proxy measures can do better. The key question is whether the correlation between our outcome estimates and the true values is greater than that between outcome and (estimates of) proxy measure outcome. If so, one should use direct estimation; if not, then the proxy measure. There may also be opportunities to use both sources of information in a combined model.
  4. One example I owe to Stefan Schubert: we generally take the fact someone says something as evidence it is true. Pointing out relevant ‘ad hominem’ facts (like bias) may defeat this presumption.
  5. Population data – epistemic epidemiology, if you will – may help. If we find that people who were previously committed to the opera much more commonly end up claiming the opera is best for human welfare than other groups do, this is suggestive of bias.

    A subsequent problem is how to disentangle bias from expertise or privileged access. Oliver could suggest that those involved in the opera gain ‘insider knowledge’, and their epistemically superior position explains why they disproportionately claim the opera is best for human welfare.

    Some features can help distinguish between bias and privileged access, between insider knowledge and insider beliefs. We might be able to look at related areas, and see if ‘insiders’ have superior performance which an insider knowledge account may predict (if insiders correctly anticipate movements in consensus, this is suggestive they have an edge). Another possibility is to look at migration of beliefs. If there is ‘cognitive tropism’, where better cognizers tend to move from the opera to AMF, this is evidence against donating to the opera in general and the claim of privileged access among opera-supporters in particular. Another is to look at ordering: if the population of those ‘exposed’ to the opera first and then considerations around human welfare are more likely to make Oliver’s claims than those exposed in reverse order, this is suggestive of bias on one side or the other.

  6. Although I restrict myself to ‘meta’-level concerns, I can’t help but suggest the ‘object level’ case for these things looks about as shaky as Oliver’s object level claims on behalf of the opera. In the same way we could question: “I grant that the arts are an important aspect of human welfare, but are they the most important (compared to, say, avoiding preventable death and disability)?” or “What makes you so confident donations to the opera are the best for the arts - why not literature? or perhaps some less esoteric music?” We can pose similarly tricky questions to proponents of 2-4: “I grant that (e.g.) antispeciesism is an important aspect of making the far future go well, but is it the most important aspect (compared to, say, extinction risks)?” or “What makes you so confident (e.g.) cryonics is the best way of ensuring greater care for the future - what about militating for that directly? Or maybe philosophical research into whether this is the correct view in the first place?”

    It may well be that there are convincing answers to the object level questions, but I have struggled to find them. And, in honesty, I find the lack of public facing arguments in itself cause for suspicion.

  7. At least, undone insofar as I have seen. I welcome correction in the comments.
  8. The only work I could find taking this sort of approach is this.
  9. There is a tension between ‘taking arguments seriously’ and ‘deferring to common sense’. Effective altruism only weakly converges with common sense morality, and thus we should expect their recommendations to diverge. On the other hand, that something lies far from common sense morality is a pro tanto reason to reject it. This is better acknowledged openly: “I think the best action by the lights of EA is to research wild animal suffering, but all things considered I will do something else, as how outlandish this is by common sense morality is a strong reason against it”. (There are, of course, also tactical reasons that may speak against saying or doing very strange things.)
  10. This ‘esoteric selection effect’ may also undermine social epistemological arguments between cause areas:

    It seems to me that more people move from global poverty to far future causes than people move in the opposite direction (I suspect, but am less sure, the same applies between animal welfare and the far future). It also seems to me that (with many exceptions) far future EAs are generally better informed and cleverer than global poverty EAs.

    I don’t have great confidence in this assessment, but suppose I am right. This could be adduced as evidence in favour of far future causes: if the balance of reason favoured the far future over global poverty, this would explain the unbalanced migration and ‘cognitive tropism’ between the cause areas.

    But another plausible account explains this by selection. Global poverty causes are much more widely known than far future causes. Thus people who are ‘susceptible’ to being persuaded by far future causes were often previously persuaded by global poverty causes, whilst the reverse is not true - those susceptible to global poverty causes are unlikely to encounter far future causes first. Further, as far future causes are more esoteric, they will be disproportionately available to better-informed people. Thus, even if the balance of reason was against the far future, we would still see these trends and patterns of believers.

    I am generally a fan of equal-weight views, and of being deferential to group or expert opinion. However, selection effects like these make deriving the balance of reason from the pattern of belief deeply perplexing.

  11. Thanks to Stefan Schubert, Carl Shulman, Amanda MacAskill, Owen Cotton-Barratt and Pablo Stafforini for extensive feedback and advice. Their kind assistance should not be construed as either endorsement of the content or responsibility for any errors.

Making My Peace with Belief

14 OrphanWilde 03 December 2015 08:36PM

I grew up in an atheistic household.

Almost needless to say, I was relatively hostile towards religion for most of my early life.  A few things changed that.

First, the apology of a pastor.  A friend of mine was proselytizing at me, and apparently discussed it with his pastor; the pastor apologized to my parents, and explained to my friend he shouldn't be trying to convert people.  My friend apologized to me after considering the matter.  We stayed friends for a little while afterwards, although I left that school, and we lost contact.

I think that was around the time that I realized that religion is, in addition to being a belief system, a way of life, and not necessarily a bad one.

The next was actually South Park's Mormonism episode, which pointed out that a belief system could be desirable on the merits of the way of life it represented, even if the beliefs themselves are stupid.  This tied into Douglas Adams's comment on Feng Shui, that "...if you disregard for a moment the explanation that's actually offered for it, it may be there is something interesting going on" - which is to say, the explanation for the belief is not necessarily the -reason- for the belief, and that stupid beliefs may actually have something useful to offer - which then requires us to ask whether the beliefs are, in fact, stupid.

Which is to say, beliefs may be epistemically irrational while being instrumentally rational.

The next peace I made with belief actually came from quantum physics, and reading about how there were several disparate and apparently contradictory mathematical systems, which all predicted the same thing.  It later transpired that they could all be generalized into the same mathematical system, but I hadn't read that far before the isomorphic nature of truth occurred to me; you can have multiple contradictory interpretations of the same evidence that all predict the same thing.

Up to this point, however, I still regarded beliefs as irrational, at least on an epistemological basis.

The next peace came from experiences living in a house that would have convinced most people that ghosts are real, which I have previously written about here.  I think there are probably good explanations for every individual experience even if I don't know them, but am still somewhat flummoxed by the fact that almost all the bizarre experiences of my life all revolve around the same physical location.  I don't know if I would accept money to live in that house again, which I guess means that I wouldn't put money on the bet that there wasn't something fundamentally odd about the house itself - a quality of the house which I think the term "haunted" accurately conveys, even if its implications are incorrect.

If an AI in a first person shooter dies every time it walks into a green room, and experiences great disutility for death, how many times must it walk into a green room before it decides not to do that anymore?  I'm reasonably confident on a rational level that there was nothing inherently unnatural about that house, nothing beyond explanation, but I still won't "walk into the green room."

That was the point at which I concluded that beliefs can be -rational-.  Disregard for a moment the explanation that's actually offered for them, and just accept the notion that there may be something interesting going on underneath the surface.

If we were to hold scientific beliefs to the same standard we hold religious beliefs - holding the explanation responsible rather than the predictions - scientific beliefs really don't come off looking that good.  The sun isn't the center of the universe; some have called this theory "less wrong" than an earth-centric model of the universe, but that's because the -predictions- are better; the explanation itself is still completely, 100% wrong.

Likewise, if we hold religious beliefs to the same standard we hold scientific beliefs - holding the predictions responsible rather than the explanations - religious beliefs might just come off better than we'd expect.

[link] "The Happiness Code" - New York Times on CFAR

13 Kaj_Sotala 15 January 2016 06:34AM

http://www.nytimes.com/2016/01/17/magazine/the-happiness-code.html

Long. Mostly quite positive, though does spend a little while rolling its eyes at the Eliezer/MIRI connection and the craziness of taking things like cryonics and polyamory seriously.

PSA: even if you don't usually read Main, there have been several worthwhile posts there recently

13 Kaj_Sotala 19 December 2015 12:34PM

A lot of people have said that they never look at Main, only Discussion. And indeed, LW's Google Analytics stats say that Main only gets one-third of the views that Discussion does.

Because of this, I thought that I'd point out that December has been an unusually lively month for Main, with several high-quality posts that you may be interested in reading if you haven't already:

Your transhuman copy is of questionable value to your meat self.

12 Usul 06 January 2016 09:03AM

I feel safe saying that nearly everyone reading this will agree that, given sufficient technology, a perfect replica or simulation could be made of the structure and function of a human brain, producing an exact copy of an individual mind including a consciousness.  Upon coming into existence, this consciousness will have a separate but baseline-identical subjective experience to the consciousness from which it was copied, as it was at the moment of copying.  The original consciousness will continue its own existence / subjective experience.  If the brain containing the original consciousness is destroyed, the consciousness within ceases to be.  The existence or non-existence of a copy is irrelevant to this fact.

With this in mind, I fail to see the attraction of the many transhuman options for extra-meat existence, and I see no meaningful immortality therein, if that's what you came for.

Consciousness is notoriously difficult to define and analyze and I am far from an expert in its study.  I define it as an awareness: the sense organ which perceives the activity of the mind.  It is not thought.  It is not memory or emotion.  It is the thing that experiences or senses these things.  Memories will be gained and lost, thoughts and emotions come and go, the sense of self remains even as the self changes.  There exists a system of anatomical structures in your brain which, by means of electrochemical activity, produces the experience of consciousness.  If a brain injury wiped out major cognitive functions but left those structures involved in the sense of consciousness unharmed, you would, I believe, have the same central awareness of Self as Self, despite perhaps lacking all language or even the ability to form thoughts or understand the world around you.  Consciousness, this awareness, is, I believe, the most accurate definition of Self, Me, You.  I realize this sort of terminology has the potential to sound like mystical woo.  I believe this is due to the twin effects of the inherent difficulty in defining and discussing consciousness, and of our socialization wherein these sorts of discussions are more often than not heard from Buddhists or Sufis, whose philosophical traditions have looked into the matter with greater rigor for a longer time than Western philosophy, and Hippies and Druggies who introduced these traditions to our popular culture.  I am not speaking of a magical soul.  I am speaking of a central feature of the human experience which is a product of the anatomy and physiology of the brain.

Consider the cryonic head-freeze. Ideally, the scanned dead brain, cloned, remade and restarted (or whatever) will be capable of generating a perfectly functional consciousness, and it will feel as if it is the same consciousness which observes the mind which is, for instance, reading these words; but it will not be. The consciousness which is experiencing awareness of the mind which is reading these words will no longer exist. To disagree with this statement is to say that a scanned living brain, cloned, remade and started will contain the exact same consciousness, not similar, the exact same thing itself, that simultaneously exists in the still-living original.  If consciousness has an anatomical location, and therefore is tied to matter, then it would follow that this matter here is the exact same matter as that separate matter there. This is an absurd proposition.  If consciousness does not have an anatomical / physical location then it is the stuff of magic and woo.

*Aside: I believe that consciousness, mind, thought, and memory are products not only of anatomy but of physiology, that is to say the ongoing electrochemical state of the brain, the constant flux of charge in and across neurons.  In perfect cryonic storage, the anatomy (hardware) might be maintained, but I doubt the physiology (software), in the form of exact moment-in-time membrane electrical potentials and intra-and extra-cellular ion concentrations for every neuron, will be.  Therefore I hold no faith in its utility, in addition to my indifference to the existence of a me-like being in the future.

Consider the Back-Up. Before lava rafting on your orbital, you have your brain scanned by your local AI so that a copy of your mind at that moment is saved.  In your fiery death in an unforeseen accident, will the mind observed by the consciousness on the raft experience anything differently than if it were not backed up? I doubt I would feel much consolation, other than knowing my loved ones were being cared for.  Not unlike a life insurance policy: not for one's own benefit.  I imagine the experience would be one of coming to the conclusion of a cruel joke at one's own expense.  Death in the shadow of a promise of immortality.  In any event, the consciousness that left the brain scanner and got on the raft is destroyed when the brain is destroyed; it benefits not at all from the reboot.

Consider the Upload.  You plug in for a brain scan, a digital-world copy of your consciousness is made, and then you are still just you.  You know there is a digital copy of you, that feels as if it is you, feels exactly as you would feel were it you who had travelled to the digital-world, and it is having a wonderful time, but there you still are. You are still just you in your meat brain.  The alternative, of course, is that your brain is destroyed in the scan in which case you are dead and something that feels as if it is you is having a wonderful time.  It would be a mercy killing.

If the consciousness that is me is perfectly analyzed and a copy created, in any medium, that process is external to the consciousness that is me.  The consciousness that is me, that is you reading this, will have no experience of being that copy, although that copy will have a perfect memory of having been the consciousness that is you reading this.  Personally, I don't know that I care about that copy.  I suppose he could be my ally in life.  He could work to achieve any altruistic goals I think I have, perhaps better than I could.  He might want to fuck my wife, though.  And might be jealous of the time she spends with me rather than him, and he'd probably feel entitled to all my stuff, as would I be vice versa. The Doppelganger and the Changeling have never been considered friendly beasts.

I have no firm idea where lines can be drawn on this.  Certainly consciousness can be said to be an intermittent phenomenon which the mind pieces together into the illusion of continuity.  I do not fear going to sleep at night, despite the "loss of consciousness" associated. If I were to wake up tomorrow and Omega assures me that I am a freshly made copy of the original, it wouldn't trouble me as to my sense of self, only to the set of problems associated with living in a world with a copy of myself.  I wouldn't mourn a dead original me any more than I'd care about a copy of me living on after I'm dead, I don't imagine.  

What about a slow, cell-by-cell (or thought-by-thought / byte-by-byte) transfer of my mind to another medium, in which, one at a time, every new neural action potential is received by a parallel processing medium which gradually takes over?  I want to say the resulting transfer would be the same consciousness as is typing this, but then what if the same slow process were done to make a copy and not a transfer?  Once a consciousness is virtual, is every transfer from one medium or location to another not essentially a copy, and therefore representing a death of the originating version?

It almost makes a materialist argument (self is tied to matter) seem like a spiritualist one (meat consciousness is soul is tied to human body at birth) which, of course, is weird place to be intellectually.

I am not addressing the utility or ethics or inevitability of the projection of the self-like-copy into some transhuman state of being, but I don't see any way around the conclusion that the consciousness that is so immortalized will not be the consciousness that is writing these words, although it would feel exactly as if it were.  I don't think I care about that guy.  And I see no reason for him to be created. And if he were created, I, in my meat brain's death bed, would gain no solace from knowing he, a being which started out its existence exactly like me, will live on.

EDIT: Lots of great responses, thank you all and keep them coming.  I want to bring up some of my responses so far to better define what I am talking about when I talk about consciousness.

I define consciousness as a passively aware thing, totally independent of memory, thoughts, feelings, and unconscious hardwired or conditioned responses. It is the hard-to-get-at thing inside the mind which is aware of the activity of the mind without itself thinking, feeling, remembering, or responding. The demented, the delirious, the brain damaged all have (unless those brain structures performing the function of consciousness are damaged, which is not a given) the same consciousness, the same Self, the same I and You, as I define it, as they did when their brains were intact. Dream Self is the same Self as Waking Self to my thinking. I assume consciousness arises at some point in infancy. From that moment on it is Self, to my thinking.

If I lose every memory slowly and my personality changes because of this and I die senile in a hospital bed, I believe that it will be the same consciousness experiencing those events as is experiencing me writing these words. That is why many people choose suicide at some point on the path to dementia.

I recognize that not everyone reading this will agree that such a thing exists or has the primacy of existential value that I ascribe to it.

And an addendum:
Sophie Pascal's Choice (hoping it hasn't already been coined): Would any reward given to the surviving copy induce you to step onto David Bowie Tesla's Prestige Duplication Machine, knowing that your meat body and brain will be the one which falls into the drowning pool while an identical copy of you materializes 100m away, believing itself to be the same meat that walked into the machine and ready to accept the reward?

A note about calibration of confidence

12 jbay 04 January 2016 06:57AM

Background

In a recent Slate Star Codex post (http://slatestarcodex.com/2016/01/02/2015-predictions-calibration-results/), Scott Alexander made a number of predictions with associated confidence levels, and then at the end of the year scored his predictions in order to determine how well-calibrated he is. In the comments, however, there arose a controversy over how to deal with 50% confidence predictions. As an example, Scott made these predictions, among others (the first three at 50% confidence):

     Proposition                                                                          Scott's Prior   Result
A    Jeb Bush will be the top-polling Republican candidate                                P(A) = 50%      A is False
B    Oil will end the year greater than $60 a barrel                                      P(B) = 50%      B is False
C    Scott will not get any new girlfriends                                               P(C) = 50%      C is False
D    At least one SSC post in the second half of 2015 will get > 100,000 hits             P(D) = 70%      D is False
E    Ebola will kill fewer people in the second half of 2015 than in the first half      P(E) = 95%      E is True

 

Scott goes on to score himself as having made 0/3 correct predictions at the 50% confidence level, which looks like significant overconfidence. He addresses this by noting that with only 3 data points it’s not much data to go by, and could easily have been correct if any of those results had turned out differently. His resulting calibration curve is this:

[Figure: Scott Alexander's 2015 calibration curve]

 

However, the commenters had other objections about the anomaly at 50%. After all, P(A) = 50% implies P(~A) = 50%, so the choice of “I will not get any new girlfriends: 50% confidence” is logically equivalent to “I will get at least 1 new girlfriend: 50% confidence”, except that one resolves as true and the other as false. Therefore, whether the prediction counts as a hit or a miss seems sensitive only to the particular phrasing chosen, independent of the outcome.

One commenter suggests that close to perfect calibration at 50% confidence can be achieved by choosing whether to represent propositions as positive or negative statements by flipping a fair coin. Another suggests replacing 50% confidence with 50.1% or some other number arbitrarily close to 50%, but not equal to it. Others suggest getting rid of the 50% confidence bin altogether.

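The coin-flip suggestion is easy to illustrate with a short simulation (a sketch with made-up numbers, not from the post): even when the underlying events have a lopsided base rate, randomizing the phrasing makes about half of the 50% statements come out true.

```python
import random

random.seed(1)

# 1,000 binary events with an arbitrary, lopsided 80% base rate.
outcomes = [random.random() < 0.8 for _ in range(1000)]

came_true = 0
for happened in outcomes:
    # Flip a fair coin: phrase the 50% prediction positively or negatively.
    positive_phrasing = random.random() < 0.5
    came_true += happened if positive_phrasing else not happened

print(f"{came_true / len(outcomes):.1%} of the 50% statements came true")
# Roughly 50%, regardless of the base rate: the phrasing coin alone
# makes the 50% bin look perfectly calibrated.
```
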
Scott recognizes that predicting A and predicting ~A are logically equivalent, and choosing to use one or the other is arbitrary. But by choosing to only include A in his data set rather than ~A, he creates a problem that occurs when P(A) = 50%, where the arbitrary choice of making a prediction phrased as ~A would have changed the calibration results despite being the same prediction.

Symmetry

This conundrum illustrates an important point about these calibration exercises. Scott chooses all of his propositions to be in the form of statements to which he assigns greater or equal to 50% probability, by convention, recognizing that he doesn’t need to also do a calibration of probabilities less than 50%, as the upper-half of the calibration curve captures all the relevant information about his calibration.

This is because the calibration curve has a property of symmetry about the 50% mark, as implied by the mathematical relation P(X) = 1 - P(~X) and, of course, P(~X) = 1 - P(X).

We can enforce that symmetry by recognizing that when we make the claim that proposition X has probability P(X), we are also simultaneously making the claim that proposition ~X has probability 1-P(X). So we add those to the list of predictions and do the bookkeeping on them too. Since we are making both claims, why not be clear about it in our bookkeeping?

When we do this, we get the full calibration curve, and the confusion about what to do about 50% probability disappears. Scott’s list of predictions looks like this:

      Proposition                                                                          Scott's Prior   Result
A     Jeb Bush will be the top-polling Republican candidate                                P(A) = 50%      A is False
~A    Jeb Bush will not be the top-polling Republican candidate                            P(~A) = 50%     ~A is True
B     Oil will end the year greater than $60 a barrel                                      P(B) = 50%      B is False
~B    Oil will not end the year greater than $60 a barrel                                  P(~B) = 50%     ~B is True
C     Scott will not get any new girlfriends                                               P(C) = 50%      C is False
~C    Scott will get new girlfriend(s)                                                     P(~C) = 50%     ~C is True
D     At least one SSC post in the second half of 2015 will get > 100,000 hits             P(D) = 70%      D is False
~D    No SSC post in the second half of 2015 will get > 100,000 hits                       P(~D) = 30%     ~D is True
E     Ebola will kill fewer people in the second half of 2015 than in the first half      P(E) = 95%      E is True
~E    Ebola will kill as many or more people in the second half of 2015 as in the first    P(~E) = 5%      ~E is False

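Here is a minimal sketch of this bookkeeping in code (the record format and binning are my own, not from the post): each prediction (statement, p, outcome) also contributes its negation with probability 1 - p, and the 50% bin then comes out exactly half true by construction.

```python
from collections import defaultdict

predictions = [  # (statement, stated probability, did it happen?)
    ("Jeb Bush will be the top-polling Republican candidate", 0.50, False),
    ("Oil will end the year greater than $60 a barrel", 0.50, False),
    ("Scott will not get any new girlfriends", 0.50, False),
    ("At least one SSC post in H2 2015 will get > 100,000 hits", 0.70, False),
    ("Ebola will kill fewer people in H2 2015 than in H1", 0.95, True),
]

# Every claim P(X) = p is also the claim P(~X) = 1 - p, so book both.
expanded = []
for statement, p, outcome in predictions:
    expanded.append((statement, p, outcome))
    expanded.append(("NOT: " + statement, 1.0 - p, not outcome))

bins = defaultdict(list)
for _, p, outcome in expanded:
    bins[round(p, 2)].append(outcome)

for p in sorted(bins):
    results = bins[p]
    print(f"claimed {p:.0%}: {sum(results)}/{len(results)} true")
# claimed 5%: 0/1 true
# claimed 30%: 1/1 true
# claimed 50%: 3/6 true   <- exactly half, by construction
# claimed 70%: 0/1 true
# claimed 95%: 1/1 true
```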

 

You will by now have noticed that there will always be an even number of predictions, and that half of the predictions always are true and half are always false. In most cases, like with E and ~E, that means you get a 95% likely prediction that is true and a 5%-likely prediction that is false, which is what you would expect. However, with 50%-likely predictions, they are always accompanied by another 50% prediction, one of which is true and one of which is false. As a result, it is actually not possible to make a binary prediction at 50% confidence that is out of calibration.

The resulting calibration curve, applied to Scott’s predictions, looks like this:

[Figure: the full calibration curve, without error bars]


Sensitivity

By the way, this graph doesn’t tell the whole calibration story; as Scott noted it’s still sensitive to how many predictions were made in each bucket. We can add “error bars” that show what would have resulted if Scott had made one more prediction in each bucket, and whether the result of that prediction had been true or false. The result is the following graph:

[Figure: the full calibration curve, with error bars]

Note that the error bars are zero at the 0.5 point. That’s because even if one additional prediction had been added to that bucket, it would have had no effect. That point is fixed by the inherent symmetry.

I believe that this kind of graph does a better job of showing someone’s true calibration. But it's not the whole story.

Ramifications for scoring calibration (updated)

Clearly, it is not possible to make a binary prediction with 50% confidence that is poorly calibrated. This shouldn’t come as a surprise; a prediction at 50% between two choices represents the correct prior for the case where you have no information that discriminates between X and ~X. However, that doesn’t mean that you can improve your ability to make correct predictions just by giving them all 50% confidence and claiming impeccable calibration! An easy way to "cheat" your way into apparently good calibration is to take a large number of predictions that you are highly (>99%) confident about, negate a fraction of them, and falsely record a lower confidence for those. If we're going to measure calibration, we need a scoring method that will encourage people to write down the true probabilities they believe, rather than faking low confidence and ignoring their data. We want people to only claim 50% confidence when they genuinely have 50% confidence, and we need to make sure our scoring method encourages that.

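One way to make "encourages the true probabilities" precise (a standard calculation, not from the original post): suppose outcome X is 1 with true probability p, and the forecaster reports q. Under a squared-error penalty (q - X)^2, the expected penalty is

\mathbb{E}\left[(q-X)^2\right] = p(1-q)^2 + (1-p)q^2, \qquad \frac{d}{dq}\,\mathbb{E}\left[(q-X)^2\right] = 2(q-p)

which is zero exactly when q = p: reporting your true probability minimizes the expected penalty. Scoring rules with this property are called proper, and both rules considered below have it.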
 

A first guess would be to look at that graph and do the classic assessment of fit: sum of squared errors. We can sum the squared error of our predictions against the ideal linear calibration curve. If we did this, we would want to make sure we summed all the individual predictions, rather than the averages of the bins, so that the binning process itself doesn’t bias our score.

If we do this, then our overall prediction score can be summarized by one number:

S = \frac{1}{N}\left(\sum_{i=1}^{N}(P(X_i)-X_i)^2 \right )

Here P(X_i) is the assigned confidence of the truth of X_i, and X_i is the i-th proposition and has a value of 1 if it is True and 0 if it is False. S is the prediction score, and lower is better. Note that because these are binary predictions, the sum of squared errors gives an optimal score if you assign the probabilities you actually believe (i.e., there is no way to "cheat" your way to a better score by giving false confidence).

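A sketch of this rule in code (my own transcription; the five-row table above is only a sample of Scott's full year, so the number printed below differs from the S = 0.139 quoted next, which is computed over his complete list):

```python
def squared_error_score(predictions):
    """S = (1/N) * sum of (P(X_i) - X_i)^2. Lower is better; 0 is perfect."""
    return sum((p - x) ** 2 for p, x in predictions) / len(predictions)

# (stated probability, outcome as 1/0) for the five example predictions.
sample = [(0.50, 0), (0.50, 0), (0.50, 0), (0.70, 0), (0.95, 1)]
print(squared_error_score(sample))           # 0.2485 on this small sample
print(squared_error_score([(0.5, 1)] * 10))  # all-50% strategy: 0.25
```
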
In this case, Scott's score is S=0.139; much of this comes from the 0.4/0.6 bracket. The worst score possible would be S=1, and the best score possible is S=0. Attempting to fake perfect calibration by claiming 50% confidence for every prediction, regardless of the information you actually have available, yields S=0.25 and therefore isn't a particularly good strategy (at least, it won't make you look better-calibrated than Scott).

Several of the commenters pointed out that log scoring is another scoring rule that works better in the general case. Before posting this I ran the calculus to confirm that the least-squares error did encourage an optimal strategy of honest reporting of confidence, but I did have a feeling that it was an ad-hoc scoring rule and that there must be better ones out there.

The logarithmic scoring rule looks like this:

S = \frac{1}{N}\sum_{i=1}^{N}\left[X_i\ln(P(X_i)) + (1-X_i)\ln(1-P(X_i))\right]

Here again X_i is the i-th proposition and has a value of 1 if it is True and 0 if it is False, so each prediction contributes the logarithm of the probability it assigned to the outcome that actually occurred. The base of the logarithm is arbitrary, so I've chosen base "e" as it makes it easier to take derivatives. This scoring method gives a negative number, and the closer to zero the better. The log scoring rule has the same honesty-encouraging properties as the sum of squared errors, plus the additional nice property that it penalizes wrong predictions of 100% or 0% confidence with an appropriate score of minus infinity. When you claim 100% confidence and are wrong, you are infinitely wrong. Don't claim 100% confidence!

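The same sketch for the log rule (again my own transcription, written so that a false proposition is charged ln(1 - P), matching the formula above):

```python
from math import log

def log_score(predictions):
    """Average ln(probability assigned to what actually happened).
    Closer to zero is better; a wrong 100% claim scores minus infinity."""
    return sum(log(p) if x else log(1 - p)
               for p, x in predictions) / len(predictions)

sample = [(0.50, 0), (0.50, 0), (0.50, 0), (0.70, 0), (0.95, 1)]
print(log_score(sample))           # about -0.67 on this five-prediction sample
print(log_score([(0.5, 0)] * 10))  # all-50% strategy: ln(0.5) = -0.69
```
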
In this case, Scott's score is calculated to be S=-0.42. For reference, the worst possible score would be minus-infinity, and claiming nothing but 50% confidence for every prediction results in a score of S=-0.69. This just goes to show that you can't win by cheating.

Example: Pretend underconfidence to fake good calibration

In an attempt to appear like I have better calibration than Scott Alexander, I am going to make the following predictions. For clarity I have included the inverse propositions in the list (as those are also predictions that I am making), but at the end of the list so you can see the point I am getting at a bit better.

      Proposition                                Quoted Prior    Result
A     I will not win the lottery on Monday       P(A) = 50%      A is True
B     I will not win the lottery on Tuesday      P(B) = 66%      B is True
C     I will not win the lottery on Wednesday    P(C) = 66%      C is True
D     I will win the lottery on Thursday         P(D) = 66%      D is False
E     I will not win the lottery on Friday       P(E) = 75%      E is True
F     I will not win the lottery on Saturday     P(F) = 75%      F is True
G     I will not win the lottery on Sunday       P(G) = 75%      G is True
H     I will win the lottery next Monday         P(H) = 75%      H is False

~A    I will win the lottery on Monday           P(~A) = 50%     ~A is False
~B    I will win the lottery on Tuesday          P(~B) = 34%     ~B is False
~C    I will win the lottery on Wednesday        P(~C) = 34%     ~C is False
...   (the remaining inverses, ~D through ~H, follow the same pattern)

Look carefully at this table. I've thrown in a particular mix of predictions that I will or will not win the lottery on certain days, in order to use my extreme certainty about the result to generate a particular mix of correct and incorrect predictions.

To make things even easier for me, I’m not even planning to buy any lottery tickets. Knowing this information, an honest estimate of the odds of me winning the lottery is astronomically small. The odds of winning the lottery are about 1 in 14 million (for the Canadian 6/49 lottery). I’d have to win by accident (one of my relatives buying me a lottery ticket?). Not only that, but since the lottery is only held on Wednesday and Saturday, most of these scenarios are even more implausible, since the lottery corporation would have to hold the draw by mistake.

I am confident I could make at least 1 billion similar statements of this exact nature and get them all right, so my true confidence must be upwards of (100% - 0.0000001%).

If I assemble 50 of these types of strategically-underconfident predictions (and their 50 opposites) and plot them on a graph, here’s what I get:

[Figure: calibration curve for the lottery predictions. Looks like good calibration...? Not so fast.]

You can see that the problem with cheating doesn’t occur only at 50%. It can occur anywhere!

But here’s the trick: The log scoring algorithm rates me -0.37. If I had made the same 100 predictions all at my true confidence (99.9999999%), then my score would have been -0.000000001. A much better score! My attempt to cheat in order to make a pretty graph has only sabotaged my score.

By the way, what if I had gotten one of those wrong, and actually won the lottery one of those times without even buying a ticket? In that case my score is -0.41 (the wrong prediction had a probability of 1 in 10^9 which is about 1 in e^21, so it’s worth -21 points, but then that averages down to -0.41 due to the 49 correct predictions that are collectively worth a negligible fraction of a point).* Not terrible! The log scoring rule is pretty gentle about being very badly wrong sometimes, just as long as you aren’t infinitely wrong. However, if I had been a little less confident and said the chance of winning each time was only 1 in a million, rather than 1 in a billion, my score would have improved to -0.28, and if I had expressed only 98% confidence I would have scored -0.098, the best possible score for someone who is wrong one in every fifty times.

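Since the final note of this post asks readers to double-check this arithmetic, here is a quick sketch of the computation (it reproduces the numbers above):

```python
from math import log

# 49 predictions right and 1 wrong, all stated at confidence 1 - 1e-9.
print((49 * log(1 - 1e-9) + log(1e-9)) / 50)   # -0.414..., the -0.41 above
# Softer stated confidences score better when you're wrong 1 time in 50:
print((49 * log(1 - 1e-6) + log(1e-6)) / 50)   # about -0.28
print((49 * log(0.98) + log(0.02)) / 50)       # about -0.098, optimal here
```
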
This has another important ramification: If you're going to honestly test your calibration, you shouldn't pick the predictions you'll make. It is easy to improve your score by throwing in a couple predictions that you are very certain about, like that you won't win the lottery, and by making few predictions that you are genuinely uncertain about. It is fairer to use a list of propositions that is generated by somebody else, and then pick your probabilities. Scott demonstrates his honesty by making public predictions about a mix of things he was genuinely uncertain about, but if he wanted to cook his way to a better score in the future, he would avoid making any predictions at the 50% category that he wasn't forced to.

 

Input and comments are welcome! Let me know what you think!

* This result surprises me enough that I would appreciate if someone in the comments can double-check it on their own. What is the proper score for being right 49 times with 1-1 in a billion certainty, but wrong once?

Sports

12 adamzerner 26 December 2015 07:54PM

This is intended to be a pretty broad discussion of sports. I have some thoughts, but feel free to start your own threads.


tl;dr - My impression is that people here aren't very interested in sports. My impression1 is that most people have something to gain by both competitive and recreational sports. With competitive sports you have to be careful not to overdo it. With recreational sports, the circumstances have to be right for it to be enjoyable. I also think that sports get a bad rep for being simple and dull. In actuality, there's a lot of complexity. 

1 - Why does this have to sound bad?! I have two statements I want to make. And for each of them, I want to qualify it by saying that it is an impression that I have. What is a better way to say this? 

Me

I love sports. Particularly basketball. I was extremely, extremely dedicated to it back in middle/high school. Actually, it was pretty much all I cared about (not an exaggeration). This may or may not be crazy... but I wanted to be the best player who's ever lived. That was what I genuinely aspired to and was working towards (~7th-11th grade).

My thinking: the pros practice, what, 5-6 hours a day? I don't care about anything other than basketball. I'm willing to practice 14 hours a day! I just need time to eat and sleep, but other than that, I value basketball above all else (friends, school...). Plus, I will work so much smarter than they do! The norm is to mindlessly do push ups and eat McDonalds. I will read the scientific literature and figure out what the most effective ways to improve are. I'm short and not too athletic, so I knew I was starting at a disadvantage, but I saw a mismatch between what the norm is and what my rate of improvement could be. I thought I could do it.

In some ways I succeeded, but ultimately I didn't come close to my goal of greatness. In short, I spent too much time on high-level actions such as researching training methods and not enough time on object-level work; and with school and homework, I simply didn't have enough time to put in the 14-hour days I envisioned. I was a solid high school player, but was nowhere near good enough to play college ball.

Take Aways

Intense work. I've gone through some pretty intense physical exercise. Ex. running suicides until you collapse. And then getting up to do more until you collapse again. It takes a lot of willpower to do that. I think willpower is like a muscle, and you have to train yourself to be able to work at such intensities. I haven't experienced anything intellectual that has required such intensity. Knowing that I am capable of working at high intensities has given me confidence that "I could do anything".

Ambition. The culture in athletic circles is often one where, "I'm not content being where I am". There's someone above you, and you want to beat them out. I guess that sort of exists in academic and career circles as well, but I don't think it's the same (in the average case; there's certainly exceptions). What explains this? Maybe there's something very visceral about lining up across from someone, getting physically and unambiguously beaten, and letting your teammates and yourself down.

Confidence. Oftentimes, confidence is something you learn because you have to: if you're not confident, you won't perform, so you need to learn to be confident. But it's not just that; there's something else about the culture that promotes confidence (perhaps cockiness). Think: "I don't care who the opponent is, no one can stop me!".

Group Bonds. When you spend so much time with a group of people, go through exhausting practices together, and work as a team to experience wins and losses, you develop a certain bond that is enjoyable. It reminds me a bit of putting in long hours on a project and eventually meeting the deadline, but it isn't the same.

Other: There are certainly other things I'm forgetting.

All of that said, there are downsides that correspond with all of these benefits. My overarching opinion is "all things in moderation". Ambition can be poison. So can the habitual productivity that often comes with ambition. Sometimes the atmosphere can backfire and make you less confident. And sometimes teammates can bully and be cruel. I've experienced the good and bad extremes along all of these axes.

Honestly, I'm not quite sure when it's worth it and when it isn't. I think it often depends on the person and the situation, but I think that in moderation, most people have a decent amount to gain (in aggregate) by experiencing these things.

Recreational

So far I've really only talked about competitive sports. Now I want to talk about recreational sports. With competitive sports, as I mention above, I think there's a somewhat fine line between underdoing it and overdoing it. But I think that line is a lot wider for recreational sports. I think it's wide enough such that recreational sports are very often a good choice.

One huge benefit of recreational sports is that it's a fun way to get exercise. You do/should exercise anyway; why not make a game out of it?

Part of me feels like sports are just inherently fun! I know that calling them inherently fun is too strong a statement, but I think that under the right circumstances, they often are fun (I think the same point can be applied to most other things as well).

In practice, what goes wrong?

  • You aren't in shape. You're playing a pick up basketball game where everyone else is running up and down the court and you're too winded to breathe. That's no fun.
  • Physical bumps and bruises. You're playing football and get knocked around, or perhaps injured.
  • Lack of involvement.
    • You're playing baseball. You only get to hit 1/18th of the time. And you are playing right field and no one ever hits it to you (for these reasons, I don't like baseball).
    • You're playing soccer with people who don't know how to space the field and move the ball, and you happen to get excluded.
    • You're playing basketball where each team has a ball hog who brings up the ball and shoots it every possession.
  • Difficulty-skill mismatch. You're playing with people who are way too good for you, so it isn't fun. Alternatively, maybe you're way better than the people you're playing with and aren't being challenged.
  • Other. Again, I'm sure there are things I'm not thinking of.
For the most part, I feel like the things that go wrong are correctable, and once corrected, I predict that the sport will become enjoyable (some things are inherent, like the bumps and bruises in tackle football; but there's always two-hand touch!).

I even see a business opportunity here! Currently, these are all legitimate problems. I think that if they were corrected, a lot of utility/value would be generated. What if you could sign up and be provided with recreational games, with enough time for you to rest so you're not exhausted, where your teammates and opponents are respectful and considerate, where you're involved in the game, and where your teammates and opponents are roughly at your skill level?

Complexity

I sense that sports get a bit of an unfair rap for being simple and dull games. Maybe some are, but I think that most aren't.

Perhaps it's because of the way most people experience the game. Take basketball as an example. A lot of people just like to watch to see whether the ball goes in the hoop or not and cheer. I.e., they experience the game in a very binary way. Observing this, it may be tempting to think, "Ugh, what a stupid game." But what happens when you steelman?

I happen to know a lot about basketball, so I experience the game very differently. Here's an example:

Iguodala has the ball and is being guarded by LeBron. LeBron is playing close and is in a staggered stance. He's vulnerable and Iguodala should attack his lead foot. People (even NBA players) don't look at this enough! Actually no, he shouldn't attack: the weak side help defense looks like it's in position, and LeBron is great at recovery. Plus, you have to think about the opportunity cost. Curry has Dellavedova and could definitely take him. Meaning, if Delly plays off, Curry can take a shot, but if Delly plays him more tightly, Curry could penetrate and either score or set someone else up, depending on how the help defense reacts. That approach has a pretty high expected value. But actually, Draymond Green looks like he has JR Smith on him (who is much smaller), which probably has an even higher expected value than Curry taking Delly. But to get Green the ball they'd have to reverse it to the weak side, and they'd have to keep the court spaced such that the Cavs won't have an opportunity to switch a bigger defender on to Green. All of this is in contrast with running a motion offense or some set plays. And you also have to take into account the stamina of the other team. Maybe you want to attack LeBron on defense to make him work, get him tired, and make him less effective on offense (I think this is a great approach to take against Curry and the Warriors, because Curry isn't a good defender and is lethal on offense).

Hopefully you can see that the amount of information there is to process in any given second is extremely high, if you know what to look for! Personally, I've never played organized football. But after playing the video game Madden (and doing some further research), I've learned a good amount about how the game works. Now when I watch football, I know the intricacies of the game and am watching for them. The density of information + the excitement, skill, and physicality makes these sports extremely enjoyable for me to watch. Alternatively, I don't know too much about golf and don't enjoy watching it. All I see when I watch golf is, "The ball was hit closer to the hole... the ball was hit closer to the hole... the ball was hit into the hole. This was a par 3, so that must have been an average performance."

 

Starting University Advice Repository

12 Bryan-san 03 December 2015 11:51PM
I know quite a few (12+) rationalists and CFAR graduates who are entering University soon or have just recently started University.

There was a lot of advice I wish I had been given or heard before I entered University and I think having a good repository of rationalist-contributed knowledge/advice/suggestions/information/links/DireWarnings could be very helpful to people in that situation.
 

1. What advice do students starting University need to hear?

2. What advice did your past self need to hear or what advice would have benefited you at that point in time?

3. Many people fail to ask the right questions. What questions do students need to ask themselves and other people?



Any links or guides on any related topics would be helpful. I will be posting some of my own ideas and links in the comments.
 
Possibly Relevant Repositories

Post-doctoral Fellowships at METRICS

12 Anders_H 12 November 2015 07:13PM
The Meta-Research Innovation Center at Stanford (METRICS) is hiring post-docs for 2016/2017. The full announcement is available at http://metrics.stanford.edu/education/postdoctoral-fellowships. Feel free to contact me with any questions; I am currently a post-doc in this position.

METRICS is a research center within Stanford Medical School. It was set up to study the conditions under which the scientific process can be expected to generate accurate beliefs, for instance about the validity of evidence for the effect of interventions.

METRICS was founded by Stanford Professors Steve Goodman and John Ioannidis in 2014, after Givewell connected them with the Laura and John Arnold Foundation, who provided the initial funding. See http://blog.givewell.org/2014/04/23/meta-research-innovation-centre-at-stanford-metrics/ for more details.

Intentional Insights and the Effective Altruism Movement – Q & A

11 Gleb_Tsipursky 02 January 2016 07:43PM

This post is cross-posted on the EA forum and is mainly of interest to EAs. It focuses on the engagement of Intentional Insights with the EA movement, and does not address the engagement of InIn with promoting rationality-informed strategies, which is a hotly-debated issue.

 

 

Introduction

I wanted to share InIn’s background and goals and where we see ourselves as fitting within the EA movement. I also wanted to allow all of you a chance to share your opinions about the benefits and drawbacks of what InIn is doing, put forth any reservations, concerns, and risks, and provide suggestions for optimization.

 

Background

InIn began in January 2014, when my wife and I decided to create an organization dedicated to marketing rational, evidence-based thinking in all areas of our lives, especially charitable giving, to a broad audience. We decided to do so because we looked around for organizations that would provide marketing resources for our own local activism in Columbus, OH, trying to convey these ideas to a broad public, and found no such organizations. So we decided – if not us, then who? If not now, then when? My wife would use her experience in nonprofits to run the organization, while I would use my experience as a professor to work on content and research.

 

We gathered together a group of local aspiring rationalists and Effective Altruists interested in the project, and launched the organization publicly in 9/2014. We got our 501(c)(3) nonprofit status, began running various content marketing experiments, and established the internal infrastructure. We also built up a solid audience in the secular and skeptical market, which we saw as the easiest audience to reach with the promotion of effective giving and rational thinking. By the early fall of 2015, we had established some connections and reputation, a solid social media following, and our articles began to be accepted in prominent venues that reach a broad audience, such as The Huffington Post and Lifehack. At that point, we felt comfortable enough to begin our active engagement with the EA movement, as we felt we could provide added value.

 

Fit in EA Movement

As an Effective Altruist, I have long seen opportunities for optimization in the marketing of EA ideas using research-based, modern content marketing strategies. I did not feel comfortable speaking out about that until the early fall of 2015, when I had built up InIn enough to be able to speak from a position of some expertise, and to demonstrate right away the benefit we could bring through publishing widely-shared articles that promoted EA messages.

 

Looking back, I wish I had started engaging with the EA Forum sooner. It was a big mistake on my part that caused some EAs to treat InIn as a sudden outsider that burst on the scene. Also, our early posts were perceived as too self-promotional. I guess this is not surprising, looking back – although the goal was simply to demonstrate our value, the content marketing nature of our work does show through. Ah well, lessons learned and something to update on for the future.

 

As InIn has become more engaged in various projects within the EA movement, we have begun to settle on how to add value to the EA community and have formulated our plans for future work.

 

1) We are promoting EA-themed effective giving ideas to a broad audience through publishing shareable articles in prominent venues.

 

1A) Note: we focus on spreading ideas like effective giving without associating them overtly with the movement of Effective Altruism, though leaving buried hooks to EA in the articles. This approach has the benefit of minimizing the risk of diluting the movement with less value-aligned members, while leaving opportunities for those who are more value-aligned to find the EA movement. Likewise, we don't emphasize EA because we believe that overt use of labels can lead some people to perceive our messages as ideological, which would undermine our ability to build rapport with them.

 

2) We are specifically promoting effective giving to the secular and skeptic community, as we see this audience as more likely to be value-aligned, and we also have strong existing connections with it.

 

3) We are providing content and social media marketing consulting to the EA movement, both EA meta-charities and prominent direct-action charities.

 

4) We are collaborating with EA meta-charities in boosting the marketing capacities of the EA movement as a whole.

 

5) We are helping build EA capacity around effective decision-making and goal achievement through providing foundational rationality knowledge.

 

6) By using content marketing to promote rationality to a broad audience, we are aiming to help people be more clear-thinking, long-term oriented, empathetic, and utilitarian. This not only increases their own flourishing, but also expands their circles of caring beyond biases based on geographical location (drowning child problem), species (non-human animals), and temporal distance (existential risk).

 

Conclusion

InIn is engaged in both EA capacity-building and movement-building, but movement-building of a new type: not oriented toward directing people into the EA movement, but toward getting EA habits of thinking into the broader world. I specifically chose not to include our achievements in this post, as I had previously fallen into the trap of including too much and being perceived as self-promotional as a result. However, if you wish, you can learn more about the organization and its activities at this link.


What are your impressions of the value of this fit of InIn within the EA movement and of our plans, including advantages and disadvantages? Do you have suggestions for improvement? We are always eager to learn and improve based on feedback from the community.

 

 

 

Why You Should Be Public About Your Good Deeds

11 Gleb_Tsipursky 30 December 2015 04:06AM

(This will be mainly of interest to Effective Altruists, and is cross-posted on the Giving What We Can blog, the Intentional Insights blog, and the EA Forum)

 

When I first started donating, I did so anonymously. My default is to be humble and avoid showing off. I didn't want others around me to think that I have a swelled head and hold too high an opinion of myself. I also didn't want them to judge my giving decisions, as some might have judged them negatively. And I had cached patterns associating public sharing of my good deeds with the feelings I get from commercials: self-promotion and sleaziness.

I wish I had known back then that I could have done much more good by publicizing my donations and other good deeds, such as signing the Giving What We Can Pledge to donate 10% of my income to effective charities, or being public about my donations to CFAR in this LW forum post.

Why did I change my mind about being public? Let me share a bit of my background to give you the appropriate context.

For as long as I can remember, I have been interested in analyzing how and why individuals and groups evaluate their environment and make decisions to reach their goals – rational thinking. This topic became the focus of my research as a professor of the history of science at Ohio State, studying the intersection of psychology, cognitive neuroscience, behavioral economics, and other fields.

While most of my colleagues focused on research, I grew more passionate about sharing my knowledge with others, focusing my efforts on high-quality, innovative teaching. I perceived my work as cognitive altruism, sharing my knowledge about rational thinking, and students expressed much appreciation for my focus on helping them make better decisions in their lives. Separately, I engaged in anonymous donations to causes such as poverty alleviation.

Yet over time, I realized that by teaching only in the classroom, I would have a very limited impact, since my students were only a small minority of the population I could potentially reach. I began to consult academic literature on how to spread my knowledge broadly. Through reading classics in the field of social influence such as Influence: The Psychology of Persuasion and Made To Stick, I learned a great many strategies to multiply the impact of my cognitive altruism work, as well as my charitable giving.

One of the most important lessons was the value of being public about my activities. Both Influence: The Psychology of Persuasion and subsequent research showed that our peers deeply impact our thoughts, feelings, and behaviors. We tend to evaluate ourselves based on what our peers think of us, and try to model behaviors that will cause others to have positive opinions about us. This applies not only to in-person meetings, but also to online communities.

A related phenomenon, social proof, illustrates how we evaluate appropriate behavior based on how we see others behaving. However, research also shows that people who exhibit more beneficial behaviors tend to avoid expressing themselves to those with less beneficial behaviors, resulting in overall social harm.

Learning about the importance of being public, especially for people engaged in socially beneficial habits, and including in online communities that reach far more people than in-person ones, led to a deep transformation in my civic engagement. While it was not easy to overcome my shyness, I realized I had to do it if I wanted to optimize my positive impact on the world – both in cognitive altruism and in effective giving.

I shared this journey of learning and transformation with my wife, Agnes Vishnevkin, an MBA and non-profit professional. Together, we decided to co-found Intentional Insights, a nonprofit dedicated to spreading rational thinking and effective giving to a broad audience using research-based strategies for maximizing social impact. Uniting with others committed to this mission, we write articles and blog posts, make videos, author books, program apps, and collaborate with other organizations to share these ideas widely.

I also rely on research to make other decisions, such as my decision to take the Giving What We Can pledge. The strategy of precommitment is key here – we make a decision in a state where we have the time to consider its consequences in the long term, and specifically wish to constrain the options of our future selves. That way, we can plan within a narrowed range of options and make the best possible use of the resources available to us.

Thus, I can plan to live on 90% of my income over my lifetime, and plan to decrease some of my spending in the long term so that I can give to charities that I believe are most effective for making the kind of impact I want to see in the world.

Knowing about the importance of publicizing my good deeds and commitments, I recognize that I can do much more good by sharing my decision to take the pledge with others. All of us have friends, the large majority of us have social media channels, and we all have the power to be public about our good deeds. You can also consider fundraising for effective charities, and being an advocate for effective altruism in your community.

According to the scholarly literature, by being public about our good deeds we can bring about much good in the world. Even though it may not feel as tangible as direct donations, sharing with others about our good deeds and supporting others doing so may in the end allow us to do even more good.

Deadly sins of software estimation

11 NancyLebovitz 22 December 2015 01:38PM

This is so remarkably sensible I think it deserves its own article.

It's a pdf of the slides from a lecture, and should help with the planning fallacy.

A few highlights:

  • Distinguish between targets and estimates.
  • Don't make estimates before you know very much about the project.
  • Estimates are probability statements.
  • Best assumption is that a new tool or method will lead to productivity loss.
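One way to make "estimates are probability statements" concrete is to report a range from a distribution rather than a single number. A minimal illustrative sketch; the lognormal shape and all the parameters here are my own assumptions, not anything from the slides:

```python
import math
import random

random.seed(0)

# Treat a task estimate as a distribution, not a point value.
# Lognormal is a plausible shape for durations: nonnegative, with a
# long right tail (tasks overrun more often than they underrun).
median_days = 10.0
sigma = 0.6  # spread of the underlying normal; larger = more uncertain

samples = sorted(random.lognormvariate(math.log(median_days), sigma)
                 for _ in range(100_000))

p10 = samples[int(0.10 * len(samples))]
p90 = samples[int(0.90 * len(samples))]
print(f"estimate: {p10:.1f} to {p90:.1f} days (80% interval)")
```

An 80% interval like "5 to 22 days" communicates uncertainty that a bare "10 days" hides.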

Promoting rationality to a broad audience - feedback on methods

11 Gleb_Tsipursky 30 November 2015 04:52AM

We at Intentional Insights, the nonprofit devoted to promoting rationality and effective altruism to a broad audience, are finalizing our Theory of Change (a ToC is meant to convey our goals, assumptions, methods, and metrics). Since there's recently been extensive discussion on LessWrong of our approaches to promoting rationality and effective altruism to a broad audience, discussion which was quite helpful for updating our approach, I'd like to share our Theory of Change with you and ask for your feedback.

 

Here's the Executive Summary:

  • The goal of Intentional Insights is to create a world where all rely on research-based strategies to make wise decisions that lead to mutual flourishing.
  • To achieve this goal, we believe that people need to be motivated to learn and have broadly accessible information about such research-based strategies, and also integrate these strategies into their daily lives through regular practice.
  • We assume that:
    • some natural and intuitive human thinking, feeling, and behavior patterns are flawed in ways that undermine wise decisions.
    • problematic decision making undermines mutual flourishing in a number of life areas.
    • these flawed thinking, feeling, and behavior patterns can be improved through effective interventions.
    • we can motivate and teach people to improve their thinking, feeling, and behavior patterns by presenting our content in ways that combine education and entertainment.
  • Our intervention is helping people improve their patterns of thinking, feeling, and behavior to enable them to make wise decisions and bring about mutual flourishing.
  • Our outputs, what we do, come in the form of online content such as blog entries, videos, etc., on our channels and in external publications, as well as collaborations with other organizations.
  • Our metrics of impact are in the form of anecdotal evidence, feedback forms from workshops, and studies we run on our content.

Here is the full version.

 

I'd appreciate any feedback on the full version from fellow Less Wrongers, on things like content, concepts, structure, style, grammar, etc. I look forward to updating the organization's goals, assumptions, methods, and metrics based on your thoughts. Thanks!

[LINK] The Top A.I. Breakthroughs of 2015

10 Vika 30 December 2015 10:04PM

A great overview article on AI breakthroughs by Richard Mallah from FLI, linking to many excellent recent papers worth reading. 

Progress in artificial intelligence and machine learning has been impressive this year. Those in the field acknowledge progress is accelerating year by year, though it is still a manageable pace for us. The vast majority of work in the field these days actually builds on previous work done by other teams earlier the same year, in contrast to most other fields where references span decades.

Creating a summary of a wide range of developments in this field will almost invariably lead to descriptions that sound heavily anthropomorphic, and this summary does indeed. Such metaphors, however, are only convenient shorthands for talking about these functionalities. It's important to remember that even though many of these capabilities sound very thought-like, they're usually not very similar to how human cognition works. The systems are all of course functional and mechanistic, and, though increasingly less so, each are still quite narrow in what they do. Be warned though: in reading this article, these functionalities may seem to go from fanciful to prosaic.

The biggest developments of 2015 fall into five categories of intelligence: abstracting across environments, intuitive concept understanding, creative abstract thought, dreaming up visions, and dexterous fine motor skills. I'll highlight a small number of important threads within each that have brought the field forward this year.

Open Thread, Dec. 28 - Jan. 3, 2016

10 Clarity 27 December 2015 02:21PM

If it's worth saying, but not worth its own post (even in Discussion), then it goes here.


Notes for future OT posters:

1. Please add the 'open_thread' tag.

2. Check if there is an active Open Thread before posting a new one. (Immediately before; refresh the list-of-threads page before posting.)

3. Open Threads should be posted in Discussion, and not Main.

4. Open Threads should start on Monday, and end on Sunday.

Conveying rational thinking about long-term goals to youth and young adults

9 Gleb_Tsipursky 07 February 2016 01:54AM
More than a year ago, I discussed here how we at Intentional Insights intended to convey rationality to young adults through our collaboration with the Secular Student Alliance. This international organization unites over 270 clubs at colleges and high schools in English-speaking countries, mainly the US, with its clubs spanning from a few students to a few hundred students. The SSA's Executive Director is an aspiring rationalist and CFAR alum who is on our Advisory Board.

Well, we've been working on a project with the SSA for the last 8 months to create and evaluate an event aimed at helping its student members figure out and orient toward the long term, thus both fighting Moloch on a societal level and helping them become more individually rational (the long-term perspective is couched in the language of finding purpose using science). It's finally done, and here is the link to the event packet. The SSA will be distributing this packet broadly, but in the meantime, if you have any connections to secular student groups, consider encouraging them to hold this event. The event would also fit well for adult secular groups with minor editing, in case any of you are involved with them. It's also easy to strip the secular language from the packet and just have it as an event for a philosophy/science club of any sort, at any level from youth to adult. Although I would prefer that you cite Intentional Insights when you do it, I'm comfortable with you not doing so if circumstances don't permit it for some reason.

We're also working on similar projects with the SSA, focusing on being rational in the area of giving, thus promoting Effective Altruism. I'll post them here when they're ready.

Clearing An Overgrown Garden

9 Anders_H 29 January 2016 10:16PM

(tl;dr: In this post, I make some concrete suggestions for LessWrong 2.0.)

Less Wrong 2.0

A few months ago, Vaniver posted some ideas about how to reinvigorate Less Wrong. Based on comments in that thread and based on personal discussions I have had with other members of the community, I believe there are several different views on why Less Wrong is dying. The following are among the most popular hypotheses:

(1) Pacifism has caused our previously well-kept garden to become overgrown.

(2) The aversion to politics has caused a lot of interesting political discussions to move away from the website.

(3) People prefer posting to their personal blogs.

With this background, I suggest the following policies for Less Wrong 2.0.  This should be seen only as a starting point for discussion about the ideal way to implement a rationality forum. Most likely, some of my ideas are counterproductive. If anyone has better suggestions, please post them to the comments.

Moderation Policy:

There are four levels of users:  

  1. Users
  2. Trusted Users 
  3. Moderators
  4. Administrator
Users may post comments and top level posts, but their contributions must be approved by a moderator.

Trusted users may post comments and top-level posts, which appear immediately. Trusted user status is awarded by a 2/3 vote among the moderators.

Moderators may approve comments made by non-trusted users. There should be at least 10 moderators to ensure that comments are approved within an hour of being posted, preferably quicker. If there is disagreement between moderators, the matter can be discussed on a private forum. Decisions may be altered by a simple majority vote.

The administrator (preferably Eliezer or Nate) chooses the moderators.
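To make the proposed flow concrete, here is a minimal sketch of the approval and promotion logic as I read it (the names and structure are my own illustration, not part of the proposal):

```python
from enum import IntEnum

class Role(IntEnum):
    USER = 1
    TRUSTED = 2
    MODERATOR = 3
    ADMIN = 4

class Post:
    def __init__(self, author_role: Role):
        # Trusted users and above publish immediately; ordinary
        # users' contributions wait in a moderation queue.
        self.approved = author_role >= Role.TRUSTED

def approve(post: Post, approver: Role) -> None:
    # Only moderators (or the administrator) clear the queue.
    if approver >= Role.MODERATOR:
        post.approved = True

def promote_to_trusted(votes_for: int, moderator_count: int) -> bool:
    # Trusted status requires a 2/3 vote among the moderators.
    return 3 * votes_for >= 2 * moderator_count
```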

Personal Blogs:


All users are assigned a personal subdomain, such as Anders_H.lesswrong.com. When publishing a top-level post, users may click a checkbox to indicate whether the post should appear only on their personal subdomain, or also in the Less Wrong discussion feed. The commenting system is shared between the two access pathways. Users may choose a design template for their subdomain. However, when the post is accessed from the discussion feed, the default template overrides the user-specific template. The personal subdomain may include a blogroll, an about page, and other information. Users may purchase a top-level domain as an alias for their subdomain.

Standards of Discourse and Policy on Mindkillers:

All discussion in Less Wrong 2.0 is seen explicitly as an attempt to exchange information for the purpose of reaching Aumann agreement. In order to facilitate this goal, communication must be precise. Therefore, all users agree to abide by Crocker's Rules for all communication that takes place on the website.  

However, this is not a license for arbitrary rudeness.  Offensive language is permitted only if it is necessary in order to point to a real disagreement about the territory. Moreover, users may not repeatedly bring up the same controversial discussion outside of their original context.

Discussion of politics is explicitly permitted as long as it adheres to the rules outlined above. All political opinions are permitted (including opinions which are seen as taboo by society as large), as long as the discussion is conducted with civility and in a manner that is suited for dispassionate exchange of information, and suited for accurate reasoning about the consequences of policy choice. By taking part in any given discussion, all users are expected to pre-commit to updating in response to new information.

Upvotes:

Only trusted users may vote. There are two separate voting systems.  Users may vote on whether the post raises a relevant point that will result in interesting discussion (quality of contribution) and also on whether they agree with the comment (correctness of comment). The first is a property both of the comment and of the user, and is shown in their user profile.  The second scale is a property only of the comment. 

All votes are shown publicly (for an example of a website where this is implemented, see for instance dailykos.com).  Abuse of the voting system will result in loss of Trusted User Status. 
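As a rough sketch of how the two-axis, publicly visible voting data might be represented (illustrative only; the names are my own assumptions):

```python
from dataclasses import dataclass, field

@dataclass
class Comment:
    author: str
    # Two independent tallies: quality of contribution vs. agreement.
    quality_votes: dict = field(default_factory=dict)    # voter -> +1 or -1
    agreement_votes: dict = field(default_factory=dict)  # voter -> +1 or -1

def cast_vote(comment, voter, axis, value, trusted_users):
    # Only trusted users may vote, and votes are stored under the
    # voter's name so they can be shown publicly.
    if voter not in trusted_users:
        raise PermissionError("only trusted users may vote")
    tally = comment.quality_votes if axis == "quality" else comment.agreement_votes
    tally[voter] = value

def user_quality_score(comments):
    # The quality axis is also a property of the user: sum the quality
    # votes across all of that user's comments for their profile.
    return sum(sum(c.quality_votes.values()) for c in comments)
```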

How to Implement This

After the community comes to a consensus on the basic ideas behind LessWrong 2.0, my preference is for MIRI to implement it as a replacement for Less Wrong. However, if for some reason MIRI is unwilling to do this, and if there is sufficient interest in going in this direction, I offer to pay server costs. If necessary, I also offer to pay some limited amount for someone to develop the codebase (based on Open Source solutions). 

Other Ideas:


MIRI should start a professionally edited rationality journal (called, for instance, "Rationality") published bi-monthly. Users may submit articles for publication in the journal. Each week, one article is chosen for publication and posted to a special area of Less Wrong; this replaces "main". Every two months, these articles are published in print in the journal.

The idea behind this is as follows:
(1) It will incentivize users to compete for the status of being published in the journal.
(2) It will allow contributors to put the article on their CV.
(3) It may bring in high-quality readers who are unlikely to read blogs.  
(4) Every week, the published article may be a natural choice of discussion topic at Less Wrong meetups.

[Link] Lifehack article promoting rationality-themed ideas, namely long-term orientation, mere-exposure effect, consider-the-alternative, and agency

9 Gleb_Tsipursky 11 January 2016 08:14PM

Here's my article in Lifehack, one of the most prominent self-improvement websites, bringing rationality-style ideas to a broad audience, specifically long-term orientation, mere-exposure effect, consider-the-alternative, and agency :-)

 

P.S. Based on feedback from the LessWrong community, I made sure to avoid mentioning LessWrong or rationality in the article.

[Link] Huffington Post article about dual process theory

9 Gleb_Tsipursky 06 January 2016 01:44AM

Published a piece in The Huffington Post popularizing dual-process theory in layman's language.

 

P.S. I know some don't like using terms like Autopilot and Intentional to describe System 1 and System 2, but I find from long experience that these terms resonate well with a broad audience. Also, I know dual process theory is criticized by some, but we have to start somewhere, and just explaining dual process theory is a way to start bridging the inference gap to higher meta-cognition.
