
Anxiety and Rationality

31 helldalgo 19 January 2016 06:30PM

Recently, someone on the Facebook page asked if anyone had used rationality to target anxieties.  I have, so I thought I’d share my LessWrong-inspired strategies.  This is my first post, so feedback and formatting help are welcome.  

First things first: the techniques developed by this community are not a panacea for mental illness.  They are considerably better than chance (and than most other tactics) at reducing ordinary bias, and I think many mental illnesses are simply cognitive biases extreme enough to get noticed.  In other words, getting a probability question about cancer systematically wrong does not disrupt my life enough to make the error obvious.  When I believe (irrationally) that I will get fired because I asked for help at work, my life is disrupted.  I become non-functional, and the error is clear.

Second: the best way to attack anxiety is to do the things that make your anxieties go away.  That might seem too obvious to state, but I’ve definitely been caught in an “analysis loop,” where I stay up all night reading self-help guides only to find myself non-functional in the morning because I didn’t sleep.  If you find that attacking an anxiety with Bayesian updating is like chopping down the Washington monument with a spoon, but getting a full night’s sleep makes the monument disappear completely, consider the sleep.  Likewise for techniques that have little to no scientific evidence, but are a good placebo.  A placebo effect is still an effect.

Finally, like all advice, this comes with Implicit Step Zero:  “Have enough executive function to give this a try.”  If you find yourself in an analysis loop, you may not yet have enough executive function to try any of the advice you read.  The advice for functioning better is not always identical to the advice for functioning at all.  If there’s interest in an “improving your executive function” post, I’ll write one eventually.  It will be late, because my executive function is not impeccable.

Simple updating is my personal favorite for attacking specific anxieties.  A general sense of impending doom is a very tricky target and does not respond well to reality.  If you can narrow it down to a particular belief, however, you can amass evidence against it. 

Returning to my example about work: I alieved that I would get fired if I asked for help or missed a day due to illness.  The distinction between belief and alief is an incredibly useful tool that I integrated as soon as I heard of it.  Learning to make beliefs pay rent is much easier than making harmful aliefs go away.  The tactics are similar: do experiments, make predictions, throw evidence at the situation until you get closer to reality.  Update accordingly.  

The first thing I do is identify the situation and why it’s dysfunctional.  The alief that I’ll get fired for asking for help is not actually articulated when it manifests as an anxiety.  Ask me in the middle of a panic attack, and I still won’t articulate that I am afraid of getting fired.  So I take the anxiety all the way through to its implication.  The algorithm is something like this:

  1. Notice sense of doom.
  2. Notice my avoidance behaviors (not opening my email, walking away from my desk).
  3. Ask “What am I afraid of?”
  4. Answer (it's probably silly).
  5. Ask “What do I think will happen?”
  6. Make a prediction about what will happen (usually the prediction is implausible, which is why we want it to go away in the first place).

In the “asking for help” scenario, the answer to “what do I think will happen” is implausible.  It’s extremely unlikely that I’ll get fired for it!  This helps take the gravitas out of the anxiety, but it does not make it go away.*  After (6), it’s usually easy to do an experiment.  If I ask my coworkers for help, will I get fired?  The only way to know is to try. 

…That’s actually not true, of course.  A sense of my environment, my coworkers, and my general competence at work should be enough.  But if it was, we wouldn’t be here, would we?

So I perform the experiment.  And I wait.  When I receive a reply of any sort, even if it’s negative, I make a tick mark on a sheet of paper.  I label it “didn’t get fired.”  Because again, even if it’s negative, I didn’t get fired. 

This takes a lot of tick marks.  Cutting down the Washington monument with a spoon, remember?

The tick marks don’t have to be physical.  I prefer physical ones, because they make the “updating” process visual.  I’ve tried making a mental note and it’s not nearly as effective.  Play around with it, though.  If you’re anything like me, you have a lot of anxieties to experiment with. 
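To make the "simple updating" concrete, here is a minimal sketch of what each tick mark is doing in Bayesian terms. The likelihood numbers below are invented for illustration (they are not from the post); the point is only that each tick mark is a single, fairly weak update, which is why you need so many of them.

```python
# Odds-form Bayes: each "asked for help, didn't get fired" tick mark
# nudges the alief "I'll get fired for asking" toward reality.
# The likelihoods below are illustrative assumptions, not data.

def update_odds(prior_odds, likelihood_ratio):
    """Posterior odds = prior odds * P(evidence | fear true) / P(evidence | fear false)."""
    return prior_odds * likelihood_ratio

odds = 1.0  # anxious starting point: 1:1 odds on getting fired

# Not getting fired is only mildly surprising if the fear were true (say 60%),
# and near-certain if the fear is false (99%), so each tick mark is a weak update.
likelihood_ratio = 0.60 / 0.99

for tick_mark in range(1, 11):
    odds = update_odds(odds, likelihood_ratio)
    p_fired = odds / (1 + odds)
    print(f"tick mark {tick_mark:2d}: P(fired for asking) = {p_fired:.2f}")
```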

Usually, the anxiety starts to dissipate after obtaining several tick marks.  Ideally, one iteration of experiments should solve the problem.  But we aren’t ideal; we’re mentally ill.  Depending on the severity of the anxiety, you may need someone to remind you that doom will not occur.  I occasionally panic when I have to return to work after taking a sick day.  I ask my husband to remind me that I won’t get fired.  I ask him to remind me that he’ll still love me if I do get fired.  If this sounds childish, it’s because it is.  Again: we’re mentally ill.  Even if you aren’t, however, assigning value judgements to essentially harmless coping mechanisms does not make sense.  Childish-but-helpful is much better than mature-and-harmful, if you have to choose.

I still have tiny ugh fields around my anxiety triggers.  They don’t really go away.  It’s more like learning not to hit someone you’re angry at.  You notice the impulse, accept it, and move on.  Hopefully, your harmful alief starves to death.

If you perform your experiment and doom does occur, it might not be you.  If you can’t ask your boss for help, it might be your boss.  If you disagree with your spouse and they scream at you for an hour, it might be your spouse.  This isn’t an excuse to blame your problems on the world, but abusive situations can be sneaky.  Ask some trusted friends for a sanity check, if you’re performing experiments and getting doom as a result.  This is designed for situations where your alief is obviously silly.  Where you know it’s silly, and need to throw evidence at your brain to internalize it.  It’s fine to be afraid of genuinely scary things; if you really are in an abusive work environment, maybe you shouldn’t ask for help (and start looking for another job instead). 

 

 

*After using this technique for several months, the anxiety occasionally stops immediately after step 6.  

A note about calibration of confidence

12 jbay 04 January 2016 06:57AM

Background

In a recent Slate Star Codex Post (http://slatestarcodex.com/2016/01/02/2015-predictions-calibration-results/), Scott Alexander made a number of predictions and presented associated confidence levels, and then at the end of the year, scored his predictions in order to determine how well-calibrated he is. In the comments, however, there arose a controversy over how to deal with 50% confidence predictions. As an example, Scott has these predictions at 50% confidence, among his others:

|   | Proposition | Scott's Prior | Result |
|---|-------------|---------------|--------|
| A | Jeb Bush will be the top-polling Republican candidate | P(A) = 50% | A is False |
| B | Oil will end the year greater than $60 a barrel | P(B) = 50% | B is False |
| C | Scott will not get any new girlfriends | P(C) = 50% | C is False |
| D | At least one SSC post in the second half of 2015 will get > 100,000 hits | P(D) = 70% | D is False |
| E | Ebola will kill fewer people in the second half of 2015 than in the first half | P(E) = 95% | E is True |

Scott goes on to score himself as having made 0/3 correct predictions at the 50% confidence level, which looks like significant miscalibration. He addresses this by noting that 3 data points are not much to go by, and that the result could easily have looked fine if any of those outcomes had turned out differently. His resulting calibration curve is this:

[Figure: Scott Alexander's 2015 calibration curve]

 

However, the commenters had other objections about the anomaly at 50%. After all, P(A) = 50% implies P(~A) = 50%, so the prediction “I will not get any new girlfriends: 50% confidence” is logically equivalent to “I will get at least 1 new girlfriend: 50% confidence”, except that one scores as true and the other as false. Whether such a prediction counts as correct therefore depends only on the particular phrasing chosen, not on the outcome itself.

One commenter suggests that close to perfect calibration at 50% confidence can be achieved by choosing whether to represent propositions as positive or negative statements by flipping a fair coin. Another suggests replacing 50% confidence with 50.1% or some other number arbitrarily close to 50%, but not equal to it. Others suggest getting rid of the 50% confidence bin altogether.

Scott recognizes that predicting A and predicting ~A are logically equivalent, and choosing to use one or the other is arbitrary. But by choosing to only include A in his data set rather than ~A, he creates a problem that occurs when P(A) = 50%, where the arbitrary choice of making a prediction phrased as ~A would have changed the calibration results despite being the same prediction.

Symmetry

This conundrum illustrates an important point about these calibration exercises. Scott chooses, by convention, to phrase all of his propositions as statements to which he assigns a probability greater than or equal to 50%, recognizing that he doesn’t also need to calibrate probabilities below 50%: the upper half of the calibration curve captures all the relevant information about his calibration.

This is because the calibration curve is symmetric about the 50% mark, as implied by the relations P(X) = 1 - P(~X) and, of course, P(~X) = 1 - P(X).

We can enforce that symmetry by recognizing that when we make the claim that proposition X has probability P(X), we are also simultaneously making the claim that proposition ~X has probability 1-P(X). So we add those to the list of predictions and do the bookkeeping on them too. Since we are making both claims, why not be clear about it in our bookkeeping?

When we do this, we get the full calibration curve, and the confusion about what to do about 50% probability disappears. Scott’s list of predictions looks like this:

|    | Proposition | Scott's Prior | Result |
|----|-------------|---------------|--------|
| A  | Jeb Bush will be the top-polling Republican candidate | P(A) = 50% | A is False |
| ~A | Jeb Bush will not be the top-polling Republican candidate | P(~A) = 50% | ~A is True |
| B  | Oil will end the year greater than $60 a barrel | P(B) = 50% | B is False |
| ~B | Oil will not end the year greater than $60 a barrel | P(~B) = 50% | ~B is True |
| C  | Scott will not get any new girlfriends | P(C) = 50% | C is False |
| ~C | Scott will get new girlfriend(s) | P(~C) = 50% | ~C is True |
| D  | At least one SSC post in the second half of 2015 will get > 100,000 hits | P(D) = 70% | D is False |
| ~D | No SSC post in the second half of 2015 will get > 100,000 hits | P(~D) = 30% | ~D is True |
| E  | Ebola will kill fewer people in the second half of 2015 than in the first half | P(E) = 95% | E is True |
| ~E | Ebola will kill as many or more people in the second half of 2015 than in the first half | P(~E) = 5% | ~E is False |

You will by now have noticed that there will always be an even number of predictions, and that half of them are always true and half are always false. In most cases, as with E and ~E, that means you get a 95%-likely prediction that is true and a 5%-likely prediction that is false, which is what you would expect. A 50%-likely prediction, however, is always accompanied by another 50% prediction, and exactly one of the pair is true and one is false. As a result, it is actually not possible to make a binary prediction at 50% confidence that is out of calibration.
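A minimal sketch of this symmetrised bookkeeping in Python, using just the five example predictions from the tables above (the binning granularity is an assumption made for illustration):

```python
from collections import defaultdict

# Scott's five example predictions from the table above: (claim, confidence, outcome).
predictions = [
    ("Jeb Bush will be the top-polling Republican candidate", 0.50, False),
    ("Oil will end the year greater than $60 a barrel", 0.50, False),
    ("Scott will not get any new girlfriends", 0.50, False),
    ("At least one SSC post in the second half of 2015 will get > 100,000 hits", 0.70, False),
    ("Ebola will kill fewer people in the second half of 2015 than in the first half", 0.95, True),
]

# Every prediction also asserts its complement: P(~X) = 1 - P(X), outcome flipped.
symmetrised = predictions + [
    ("NOT: " + claim, 1 - p, not outcome) for claim, p, outcome in predictions
]

# Bin by stated confidence and compare with the observed frequency of truth.
bins = defaultdict(list)
for _, p, outcome in symmetrised:
    bins[round(p, 2)].append(outcome)

for p in sorted(bins):
    outcomes = bins[p]
    frequency = sum(outcomes) / len(outcomes)
    print(f"confidence {p:.2f}: {sum(outcomes)}/{len(outcomes)} true, observed frequency {frequency:.2f}")
```

The 50% bin necessarily comes out at exactly 0.5, since each 50% prediction brings its own complement into the same bin.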

The resulting calibration curve, applied to Scott’s predictions, looks like this:

[Figure: the full calibration curve, without error bars]


Sensitivity

By the way, this graph doesn’t tell the whole calibration story; as Scott noted it’s still sensitive to how many predictions were made in each bucket. We can add “error bars” that show what would have resulted if Scott had made one more prediction in each bucket, and whether the result of that prediction had been true or false. The result is the following graph:

[Figure: the full calibration curve, with error bars]

Note that the error bars are zero at the 0.5 point. That’s because even if one additional prediction had been added to that bucket, it would have had no effect; that point is fixed by the inherent symmetry.
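In code, the "one extra prediction" error bars amount to this kind of check (a sketch only; the bin counts passed in at the end are hypothetical, not Scott's actual tallies):

```python
def bin_with_sensitivity(n_true, n_total, confidence):
    """Observed frequency of a confidence bin, plus the range it would span if
    one more prediction at that confidence were added to the symmetrised list."""
    observed = n_true / n_total
    if confidence == 0.5:
        # A new 50% prediction brings its complement into the same bin:
        # one true and one false are added together, so the frequency cannot move.
        low = high = (n_true + 1) / (n_total + 2)
    else:
        low = n_true / (n_total + 1)          # the extra prediction turns out false
        high = (n_true + 1) / (n_total + 1)   # the extra prediction turns out true
    return observed, low, high

print(bin_with_sensitivity(3, 6, 0.5))   # (0.5, 0.5, 0.5): pinned by the symmetry
print(bin_with_sensitivity(7, 10, 0.7))  # a hypothetical 70% bin with wider bars
```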

I believe that this kind of graph does a better job of showing someone’s true calibration. But it's not the whole story.

Ramifications for scoring calibration (updated)

Clearly, it is not possible to make a binary prediction with 50% confidence that is poorly calibrated. This shouldn’t come as a surprise; a prediction at 50% between two choices represents the correct prior for the case where you have no information that discriminates between X and ~X. However, that doesn’t mean that you can improve your ability to make correct predictions just by giving them all 50% confidence and claiming impeccable calibration! An easy way to "cheat" your way into apparently good calibration is to take a large number of predictions that you are highly (>99%) confident about, negate a fraction of them, and falsely record a lower confidence for those. If we're going to measure calibration, we need a scoring method that will encourage people to write down the true probabilities they believe, rather than faking low confidence and ignoring their data. We want people to only claim 50% confidence when they genuinely have 50% confidence, and we need to make sure our scoring method encourages that.

 

A first guess would be to look at that graph and do the classic assessment of fit: sum of squared errors. We can sum the squared error of our predictions against the ideal linear calibration curve. If we did this, we would want to make sure we summed all the individual predictions, rather than the averages of the bins, so that the binning process itself doesn’t bias our score.

If we do this, then our overall prediction score can be summarized by one number:

S = \frac{1}{N}\left(\sum_{i=1}^{N}(P(X_i)-X_i)^2 \right )

Here X_i is the i-th proposition, taking the value 1 if it is true and 0 if it is false, and P(X_i) is the assigned confidence that X_i is true. S is the prediction score, and lower is better. Note that because these are binary predictions, the sum of squared errors gives an optimal score if you assign the probabilities you actually believe (i.e., there is no way to "cheat" your way to a better score by giving false confidence).
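As a quick sketch in Python (the two prediction lists at the end are hypothetical examples, not Scott's data):

```python
def squared_error_score(predictions):
    """Mean squared error between stated confidence and outcome (1 if true, 0 if false).
    Lower is better: 0 is perfect, 1 is maximally wrong."""
    return sum((p - (1.0 if outcome else 0.0)) ** 2
               for p, outcome in predictions) / len(predictions)

# Hypothetical examples:
print(squared_error_score([(0.95, True), (0.70, False), (0.50, False)]))  # roughly 0.25
print(squared_error_score([(0.50, True), (0.50, False)]))                 # the all-50% "cheat": exactly 0.25
```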

In this case, Scott's score is S=0.139, much of this comes from the 0.4/0.6 bracket. The worst score possible would be S=1, and the best score possible is S=0. Attempting to fake a perfect calibration by everything by claiming 50% confidence for every prediction, regardless of the information you actually have available, yields S=0.25 and therefore isn't a particularly good strategy (at least, it won't make you look better-calibrated than Scott).

Several of the commenters pointed out that log scoring is another scoring rule that works better in the general case. Before posting this I ran the calculus to confirm that the least-squares error did encourage an optimal strategy of honest reporting of confidence, but I did have a feeling that it was an ad-hoc scoring rule and that there must be better ones out there.

The logarithmic scoring rule looks like this:

S = \frac{1}{N}\sum_{i=1}^{N}X_i\ln(P(X_i))

Here again X_i is the i-th proposition, taking the value 1 if it is true and 0 if it is false. The base of the logarithm is arbitrary, so I've chosen base e as it makes it easier to take derivatives. This scoring method gives a negative number, and the closer to zero the better. The log scoring rule has the same honesty-encouraging properties as the sum of squared errors, plus the additional nice property that it penalizes wrong predictions of 100% or 0% confidence with an appropriate score of minus infinity. When you claim 100% confidence and are wrong, you are infinitely wrong. Don't claim 100% confidence!
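A sketch of the log rule in the same style, scored per original prediction by taking the log of the probability assigned to whichever outcome actually occurred (with this accounting, the all-50% strategy scores ln(0.5), about -0.69, matching the figure quoted below):

```python
import math

def log_score(predictions):
    """Average log of the probability assigned to what actually happened:
    ln(p) if the statement came true, ln(1 - p) if it did not. Closer to 0 is better."""
    return sum(math.log(p if outcome else 1 - p)
               for p, outcome in predictions) / len(predictions)

print(log_score([(0.50, True), (0.50, False)]))   # ln(0.5), about -0.69: the all-50% "cheat"
print(log_score([(0.95, True), (0.70, False)]))   # a hypothetical mix
# A wrong prediction at 100% confidence corresponds to ln(0), i.e. minus infinity
# (math.log(0) raises an error rather than returning it).
```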

In this case, Scott's score is calculated to be S=-0.42. For reference, the worst possible score would be minus-infinity, and claiming nothing but 50% confidence for every prediction results in a score of S=-0.69. This just goes to show that you can't win by cheating.

Example: Pretend underconfidence to fake good calibration

In an attempt to appear like I have better calibration than Scott Alexander, I am going to make the following predictions. For clarity I have included the inverse propositions in the list (as those are also predictions that I am making), but at the end of the list so you can see the point I am getting at a bit better.

|    | Proposition | Quoted Prior | Result |
|----|-------------|--------------|--------|
| A  | I will not win the lottery on Monday | P(A) = 50% | A is True |
| B  | I will not win the lottery on Tuesday | P(B) = 66% | B is True |
| C  | I will not win the lottery on Wednesday | P(C) = 66% | C is True |
| D  | I will win the lottery on Thursday | P(D) = 66% | D is False |
| E  | I will not win the lottery on Friday | P(E) = 75% | E is True |
| F  | I will not win the lottery on Saturday | P(F) = 75% | F is True |
| G  | I will not win the lottery on Sunday | P(G) = 75% | G is True |
| H  | I will win the lottery next Monday | P(H) = 75% | H is False |
| ~A | I will win the lottery on Monday | P(~A) = 50% | ~A is False |
| ~B | I will win the lottery on Tuesday | P(~B) = 34% | ~B is False |
| ~C | I will win the lottery on Wednesday | P(~C) = 34% | ~C is False |

Look carefully at this table. I've thrown in a particular mix of predictions that I will or will not win the lottery on certain days, in order to use my extreme certainty about the result to generate a particular mix of correct and incorrect predictions.

To make things even easier for me, I’m not even planning to buy any lottery tickets. Given that, an honest estimate of the odds of my winning the lottery is astronomically small. The odds of winning are about 1 in 14 million (for the Canadian 6/49 lottery), and I’d have to win by accident (one of my relatives buying me a ticket?). Not only that, but since the lottery is only drawn on Wednesdays and Saturdays, most of these scenarios are even more implausible: the lottery corporation would have to hold the draw by mistake.

I am confident I could make at least 1 billion similar statements of this exact nature and get them all right, so my true confidence must be upwards of (100% - 0.0000001%).

If I assemble 50 of these types of strategically-underconfident predictions (and their 50 opposites) and plot them on a graph, here’s what I get:

[Figure: calibration curve for the strategically underconfident lottery predictions. Looks like good calibration...? Not so fast.]

You can see that the problem with cheating doesn’t occur only at 50%. It can occur anywhere!

But here’s the trick: The log scoring algorithm rates me -0.37. If I had made the same 100 predictions all at my true confidence (99.9999999%), then my score would have been -0.000000001. A much better score! My attempt to cheat in order to make a pretty graph has only sabotaged my score.

By the way, what if I had gotten one of those wrong, and actually won the lottery one of those times without even buying a ticket? In that case my score is -0.41 (the wrong prediction had a probability of 1 in 10^9 which is about 1 in e^21, so it’s worth -21 points, but then that averages down to -0.41 due to the 49 correct predictions that are collectively worth a negligible fraction of a point).* Not terrible! The log scoring rule is pretty gentle about being very badly wrong sometimes, just as long as you aren’t infinitely wrong. However, if I had been a little less confident and said the chance of winning each time was only 1 in a million, rather than 1 in a billion, my score would have improved to -0.28, and if I had expressed only 98% confidence I would have scored -0.098, the best possible score for someone who is wrong one in every fifty times.
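For anyone who wants to check these numbers (see also the footnote below), here is a minimal sketch; the prediction mix is illustrative rather than the author's exact list of 50:

```python
import math

def log_score(predictions):
    """Average of ln(p) for true outcomes and ln(1 - p) for false ones."""
    return sum(math.log(p if outcome else 1 - p)
               for p, outcome in predictions) / len(predictions)

ONE_IN_A_BILLION = 1e-9

# 49 correct "I will not win the lottery" predictions at true confidence,
# plus one that somehow turns out wrong.
honest_once_wrong = [(1 - ONE_IN_A_BILLION, True)] * 49 + [(1 - ONE_IN_A_BILLION, False)]
print(log_score(honest_once_wrong))   # roughly ln(1e-9) / 50, about -0.41

# The same 50 outcomes with confidence understated as 98%.
understated = [(0.98, True)] * 49 + [(0.98, False)]
print(log_score(understated))         # roughly -0.098
```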

This has another important ramification: if you're going to honestly test your calibration, you shouldn't pick the predictions you'll make. It is easy to improve your score by throwing in a couple of predictions that you are very certain about, like that you won't win the lottery, and by making few predictions that you are genuinely uncertain about. It is fairer to use a list of propositions generated by somebody else, and then pick your probabilities. Scott demonstrates his honesty by making public predictions about a mix of things he was genuinely uncertain about, but if he wanted to cook his way to a better score in the future, he would avoid making any predictions in the 50% category that he wasn't forced to. 

 

Input and comments are welcome! Let me know what you think!

* This result surprises me enough that I would appreciate if someone in the comments can double-check it on their own. What is the proper score for being right 49 times with 1-1 in a billion certainty, but wrong once?

Rationality Reading Group: Part P: Reductionism 101

5 Gram_Stone 17 December 2015 03:03AM

This is part of a semi-monthly reading group on Eliezer Yudkowsky's ebook, Rationality: From AI to Zombies. For more information about the group, see the announcement post.


Welcome to the Rationality reading group. This fortnight we discuss Part P: Reductionism (pp. 887-935). This post summarizes each article of the sequence, linking to the original LessWrong post where available.

P. Reductionism 101

189. Dissolving the Question - This is where the "free will" puzzle is explicitly posed, along with criteria for what does and does not constitute a satisfying answer.

190. Wrong Questions - Where the mind cuts against reality's grain, it generates wrong questions - questions that cannot possibly be answered on their own terms, but only dissolved by understanding the cognitive algorithm that generates the perception of a question.

191. Righting a Wrong Question - When you are faced with an unanswerable question - a question to which it seems impossible to even imagine an answer - there is a simple trick which can turn the question solvable. Instead of asking, "Why do I have free will?", try asking, "Why do I think I have free will?"

192. Mind Projection Fallacy - E. T. Jaynes used the term Mind Projection Fallacy to denote the error of projecting your own mind's properties into the external world. The Mind Projection Fallacy generalizes as an error. It is in the argument over the real meaning of the word sound, and in the magazine cover of the monster carrying off a woman in the torn dress, and Kant's declaration that space by its very nature is flat, and Hume's definition of a priori ideas as those "discoverable by the mere operation of thought, without dependence on what is anywhere existent in the universe"...

193. Probability is in the Mind - Probabilities express uncertainty, and it is only agents who can be uncertain. A blank map does not correspond to a blank territory. Ignorance is in the mind.

194. The Quotation is Not the Referent - It's very easy to derive extremely wrong conclusions if you don't make a clear enough distinction between your beliefs about the world, and the world itself.

195. Qualitatively Confused - Using qualitative, binary reasoning may make it easier to confuse belief and reality; if we use probability distributions, the distinction is much clearer.

196. Think Like Reality - "Quantum physics is not "weird". You are weird. You have the absolutely bizarre idea that reality ought to consist of little billiard balls bopping around, when in fact reality is a perfectly normal cloud of complex amplitude in configuration space. This is your problem, not reality's, and you are the one who needs to change."

197. Chaotic Inversion - If a problem that you're trying to solve seems unpredictable, then that is often a fact about your mind, not a fact about the world. Also, this feeling that a problem is unpredictable can stop you from trying to actually solve it.

198. Reductionism - We build models of the universe that have many different levels of description. But so far as anyone has been able to determine, the universe itself has only the single level of fundamental physics - reality doesn't explicitly compute protons, only quarks.

199. Explaining vs. Explaining Away - Apparently "the mere touch of cold philosophy", i.e., the truth, has destroyed haunts in the air, gnomes in the mine, and rainbows. This calls to mind a rather different bit of verse:

One of these things
Is not like the others
One of these things
Doesn't belong

The air has been emptied of its haunts, and the mine de-gnomed—but the rainbow is still there!

200. Fake Reductionism - There is a very great distinction between being able to see where the rainbow comes from, and playing around with prisms to confirm it, and maybe making a rainbow yourself by spraying water droplets, versus some dour-faced philosopher just telling you, "No, there's nothing special about the rainbow. Didn't you hear? Scientists have explained it away. Just something to do with raindrops or whatever. Nothing to be excited about." I think this distinction probably accounts for a hell of a lot of the deadly existential emptiness that supposedly accompanies scientific reductionism.

201. Savannah Poets - Equations of physics aren't about strong emotions. They can inspire those emotions in the mind of a scientist, but the emotions are not as raw as the stories told about Jupiter (the god). And so it might seem that reducing Jupiter to a spinning ball of methane and ammonia takes away some of the poetry in those stories. But ultimately, we don't have to keep telling stories about Jupiter. It's not necessary for Jupiter to think and feel in order for us to tell stories, because we can always write stories with humans as their protagonists.

 


This has been a collection of notes on the assigned sequence for this fortnight. The most important part of the reading group though is discussion, which is in the comments section. Please remember that this group contains a variety of levels of expertise: if a line of discussion seems too basic or too incomprehensible, look around for one that suits you better!

The next reading will cover Part Q: Joy in the Merely Real (pp. 939-979). The discussion will go live on Wednesday, 30 December 2015, right here on the discussion forum of LessWrong.

The art of grieving well

41 Valentine 15 December 2015 07:55PM

[This is one post I've written in an upcoming sequence on what I call "yin". Yin, in short, is the sub-art of giving perception of truth absolutely no resistance as it updates your implicit world-model. Said differently, it's the sub-art of subconsciously seeking out and eliminating ugh fields and also eliminating the inclination to form them in the first place. This is the first piece I wrote, and I think it stands on its own, but it probably won't be the first post in the final sequence. My plan is to flesh out the sequence and then post a guide to yin giving the proper order. I'm posting the originals on my blog, and you can view the original of this post here, but my aim is to post a final sequence here on Less Wrong.]


In this post, I'm going to talk about grief. And sorrow. And the pain of loss.

I imagine this won't be easy for you, my dear reader. And I wish I could say that I'm sorry for that.

…but I'm not.

I think there's a skill to seeing horror clearly. And I think we need to learn how to see horror clearly if we want to end it.

This means that in order to point at the skill, I need to also point at real horror, to show how it works.

So, I'm not sorry that I will make you uncomfortable if I succeed at conveying my thoughts here. I imagine I have to.

Instead, I'm sorry that we live in a universe where this is necessary.


If you Google around, you'll find all kinds of lists of what to say and avoid saying to a grieving person. For reasons I'll aim to make clear later on, I want to focus for a moment on some of the things not to say. Here are a few from Grief.com:

  • "He is in a better place."
  • "There is a reason for everything."
  • "I know how you feel."
  • "Be strong."

I can easily imagine someone saying things like this with the best of intentions. They see someone they care about who is suffering greatly, and they want to help.

But to the person who has experienced a loss, these are very unpleasant to hear. The discomfort is often pre-verbal and can be difficult to articulate, especially when in so much pain. But a fairly common theme is something like:

"Don't heave your needs on me. I'm too tired and in too much pain to help you."

If you've never experienced agonizing loss, this might seem really confusing at first — which is why it seems tempting to say those things in the first place, I think. But try assuming that the grieving person sees the situation more clearly, and see if you can make sense of this reaction before reading on.

If you look at the bulleted statements above, there's a way of reading them that says "You're suffering. Maybe try this, to stop your suffering." There's an imposition there, telling the grieving person to add more burden to how they are in the moment. In many cases, the implicit request to stop suffering comes from the speaker's discomfort with the griever's pain, so an uncharitable (but sometimes accurate) read of those statements is "I don't like it when you hurt, so stop hurting."

Notice that the person who lost someone doesn't have to think through all this. They just see it, directly, and emotionally respond. They might not even be able to say why others' comments feel like impositions, but there's very little doubt that they do. It's just that social expectations take so much energy, and the grief is already so much to carry, that it's hard not to notice.

There's only energy for what really, actually matters.

And, it turns out, not much matters when you hurt that much.


I'd like to suggest that grieving is how we experience the process of a very, very deep part of our psyches becoming familiar with a painful truth. It doesn't happen only when someone dies. For instance, people go through a very similar process when mourning the loss of a romantic relationship, or when struck with an injury or illness that takes away something they hold dear (e.g., quadriplegia). I think we even see smaller versions of it when people break a precious and sentimental object, or when they fail to get a job or into a school they had really hoped for, or even sometimes when getting rid of a piece of clothing they've had for a few years.

In general, I think familiarization looks like tracing over all the facets of the thing in question until we intuitively expect what we find. I'm particularly fond of the example of arriving in a city for the first time: At first all I know is the part of the street right in front of where I'm staying. Then, as I wander around, I start to notice a few places I want to remember: the train station, a nice coffee shop, etc. After a while of exploring different alleyways, I might make a few connections and notice that the coffee shop is actually just around the corner from that nice restaurant I went to on my second night there. Eventually the city (or at least those parts of it) start to feel smaller to me, like the distances between familiar locations are shorter than I had first thought, and the areas I can easily think of now include several blocks rather than just parts of streets.

I'm under the impression that grief is doing a similar kind of rehearsal, but specifically of pain. When we lose someone or something precious to us, it hurts, and we have to practice anticipating the lack of the preciousness where it had been before. We have to familiarize ourselves with the absence.

When I watch myself grieve, I typically don't find myself just thinking "This person is gone." Instead, my grief wants me to call up specific images of recurring events — holding the person while watching a show, texting them a funny picture & getting a smiley back, etc. — and then add to that image a feeling of pain that might say "…and that will never happen again." My mind goes to the feeling of wanting to watch a show with that person and remembering they're not there, or knowing that if I send a text they'll never see it and won't ever respond. My mind seems to want to rehearse the pain that will happen, until it becomes familiar and known and eventually a little smaller.

I think grieving is how we experience the process of changing our emotional sense of what's true to something worse than where we started.

Unfortunately, that can feel on the inside a little like moving to the worse world, rather than recognizing that we're already here.


It looks to me like it's possible to resist grief, at least to some extent. I think people do it all the time. And I think it's an error to do so.

If I'm carrying something really heavy and it slips and drops on my foot, I'm likely to yelp. My initial instinct once I yank my foot free might be to clutch my foot and grit my teeth and swear. But in doing so, even though it seems I'm focusing on the pain, I think it's more accurate to say that I'm distracting myself from the pain. I'm too busy yelling and hopping around to really experience exactly what the pain feels like.

I could instead turn my mind to the pain, and look at it in exquisite detail. Where exactly do I feel it? Is it hot or cold? Is it throbbing or sharp or something else? What exactly is the most aversive aspect of it? This doesn't stop the experience of pain, but it does stop most of my inclination to jump and yell and get mad at myself for dropping the object in the first place.

I think the first three so-called "stages of grief" — denial, anger, and bargaining — are avoidance behaviors. They're attempts to distract oneself from the painful emotional update. Denial is like trying to focus on anything other than the hurt foot, anger is like clutching and yelling and getting mad at the situation, and bargaining is like trying to rush around and bandage the foot and clean up the blood. In each case, there's an attempt to keep the mind preoccupied so that it can't start the process of tracing the pain and letting the agonizing-but-true world come to feel true. It's as though there's a part of the psyche that believes it can prevent the horror from being real by avoiding coming to feel as though it's real.

The above might seem kind of abstract, so let me list a very few examples that I think do in fact apply to resisting grief:

  • After a breakup, someone might refuse to talk about their ex and insist that no one around them bring up their ex. They might even start dating a lot more right away (the "rebound" phenomenon, or dismissive-avoidant dating patterns). They might insist on acting like their ex doesn't exist, for months, and show flashes of intense anger when they find a lost sweater under their bed that had belonged to the ex.
  • While trying to finish a project for a major client (or an important class assignment, if a student), a person might realize that they simply don't have the time they need, and start to panic. They might pour all their time into it, even while knowing on some level that they can't finish on time, but trying desperately anyway as though to avoid looking at the inevitability of their meaningful failure.
  • The homophobia of the stereotypical gay man in denial looks to me like a kind of distraction. The painful truth for him here is that he is something he thinks it is wrong to be, so either his morals or his sense of who he is must die a little. Both are agonizing, too much for him to handle, so instead he clutches his metaphorical foot and screams.

In every case, the part of the psyche driving the behavior seems to think that it can hold the horror at bay by preventing the emotional update that the horror is real. The problem is, success requires severely distorting your ability to see what is real, and also your desire to see what's real. This is a cognitive black hole — what I sometimes call a "metacognitive blindspot" — from which it is enormously difficult to return.

This means that if we want to see reality clearly, we have to develop some kind of skill that lets us grieve well — without resistance, without flinching, without screaming to the sky with declarations of war as a distraction from our pain.

We have to be willing to look directly and unwaveringly at horror.


In 2014, my marriage died.

A friend warned me that I might go through two stages of grief: one for the loss of the relationship, and one for the loss of our hoped-for future together.

She was exactly right.

The second one hit me really abruptly. I had been feeling solemn and glum since the previous night, and while riding public transit I found myself crying. Specific imagined futures — of children, of holidays, of traveling together — would come up, as though raising the parts that hurt the most and saying "See this, and wish it farewell."

The pain was so much. I spent most of that entire week just moving around slowly, staring off into space, mostly not caring about things like email or regular meetings.

Two things really stand out for me from that experience.

First, there were still impulses to flinch away. I wanted to cry about how the pain was too much to bear and curl up in a corner — but I could tell that impulse came from a different place in my psyche than the grief did. It felt easier to do that, like I was trading some of my pain for suffering instead and could avoid being present to my own misery. I had worked enough with grief at that point to intuit that I needed to process or digest the pain, and that this slow process would go even more slowly if I tried not to experience it. It required a choice, every moment, to keep my focus on what hurt rather than on how much it hurt or how unfair things were or any other story that decreased the pain I felt in that moment. And it was tiring to make that decision continuously.

Second, there were some things I did feel were important, even in that state. At the start of this post I referenced how mourners can sometimes see others' motives more plainly than those others can. What I imagine is the same thing gave me a clear sense of how much nonsense I waste my time on — how most emails don't matter, most meetings are pointless, most curriculum design thoughts amount to rearranging deck chairs on the Titanic. I also vividly saw how much nonsense I project about who I am and what my personal story is — including the illusions I would cast on myself. Things like how I thought I needed people to admire me to feel motivated, or how I felt most powerful when championing the idea of ending aging. These stories looked embarrassingly false, and I just didn't have the energy to keep lying to myself about them.

What was left, after tearing away the dross, was simple and plain and beautiful in its nakedness. I felt like I was just me, and there were a very few things that still really mattered. And, even while drained and mourning for the lovely future that would never be, I found myself working on those core things. I could send emails, but they had to matter, and they couldn't be full of blather. They were richly honest and plain and simply directed at making the actually important things happen.

It seems to me that grieving well isn't just a matter of learning to look at horror without flinching. It also lets us see through certain kinds of illusion, where we think things matter but at some level have always known they don't.

I think skillful grief can bring us more into touch with our faculty of seeing the world plainly as we already know it to be.


I think we, as a species, dearly need to learn to see the world clearly.

A humanity that makes global warming a politicized debate, with name-calling and suspicion of data fabrication, is a humanity that does not understand what is at stake.

A world that waits until its baby boomers are doomed to die of aging before taking aging seriously has not understood the scope of the problem and is probably still approaching it with distorted thinking.

A species that has great reason to fear human-level artificial intelligence and does not pause to seriously figure out what if anything is correct to do about it (because "that's silly" or "the Terminator is just fiction") has not understood just how easily it can go horribly wrong.

Each one of these cases is bad enough — but these are just examples of the result of collectively distorted thinking. We will make mistakes this bad, and possibly worse, again and again as long as we are willing to let ourselves turn our awareness away from our own pain. As long as the world feels safer to us than it actually is, we will risk obliterating everything we care about.

There is hope for immense joy in our future. We have conquered darkness before, and I think we can do so again.

But doing so requires that we see the world clearly.

And the world has devastatingly more horror in it than most people seem willing to acknowledge.

The path of clear seeing is agonizing — but that is because of the truth, not because of the path. We are in a kind of hell, and avoiding seeing that won't make it less true.

But maybe, if we see it clearly, we can do something about it.

Grieve well, and awaken.

Non-communicable Evidence

9 adamzerner 17 November 2015 03:46AM

In this video, Douglas Crockford (JavaScript MASTER) says:

So I think programming would not be possible without System I; without the gut. Now, I have absolutely no evidence to support that statement, but my gut tells me it's true, so I believe it.

 

1

I don't think he has "absolutely no evidence". In worlds where DOUGLAS CROCKFORD has a gut feeling about something related to programming, how often does that gut feeling end up being correct? Probably a lot more than 50% of the time. So according to Bayes, his gut feeling is definitely evidence.

The problem isn't that he lacks evidence. It's that he lacks communicable evidence. He can't say "I believe A because X, Y and Z." The best he could do is say, "just trust me, I have a feeling about this".

Well, "just trust me, I have a feeling about this" does qualify as evidence if you have a good track record, but my point is that he can't communicate the rest of the evidence his brain used to produce the resulting belief.
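As a toy illustration of that kind of update (all numbers below are invented for the example, not taken from the video):

```python
def posterior(prior, p_gut_given_true, p_gut_given_false):
    """P(claim | expert reports a gut feeling it's true), by Bayes' rule."""
    numerator = p_gut_given_true * prior
    return numerator / (numerator + p_gut_given_false * (1 - prior))

# Start at 50/50 on the claim, and suppose Crockford-grade programming hunches
# show up four times as often when the claim is true as when it is false.
print(posterior(prior=0.5, p_gut_given_true=0.8, p_gut_given_false=0.2))  # 0.8
```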

 

2

How do you handle a situation where you're having a conversation with someone and they say, "I can't explain why I believe X; I just do."

Well, as far as updating beliefs, I think the best you could do is update on the track record of the person. I don't see any way around it. For example, you should update your beliefs when you hear Douglas Crockford say that he has a gut feeling about something related to programming. But I don't see how you could do any further updating of your beliefs. You can't actually see the evidence he used, so you can't use it to update your beliefs. If you do, the Bayes Police will come find you.

Perhaps it's also worth trying to dig the evidence out of the other person's subconscious.

  • If the person has a good track record, maybe you could say, "Hmm, you have a good track record so I'm sad to hear that you're struggling to recall why it is you believe what you do. I'd be happy to wait for you to spend some time trying to dig it up."
  • Maybe there are some techniques that can be used to "dig evidence out of one's subconscious". I don't know of any, but maybe they exist.

 

3

Ok, now let's talk about what you shouldn't do. You shouldn't say, "Well if you can't provide any evidence, you shouldn't believe what you do." The problem with that statement is that it assumes that the person has "no evidence". This was addressed in Section 1. It's akin to saying, "Well Douglas Crockford, you're telling me that you believe X and you have a fantastic track record, but I don't know anything about why you believe it, so I'm not going to update my beliefs at all, and you shouldn't either."

Brains are weird and fantastic thingys. They process information and produce outputs in the form of beliefs (amongst other things). Sometimes they're nice and they say, "Ok Adam - here is what you believe, and here is why you believe it". Other times they're not so nice and the conversation goes like this:

Brain: Ok Adam, here is what you think.

Adam: Awesome, thanks! But wait - why do I think that?

Brain: Fuck you, I'm not telling.

Adam: Fuck me? Fuck you!

Brain: Who the fuck do you think you're talking to?!!!

Just because brains can be mean doesn't mean they should be discounted.

The Market for Lemons: Quality Uncertainty on Less Wrong

8 signal 18 November 2015 10:06PM

Tl;dr: Articles on LW are, if unchecked (for now by you), heavily distorting a useful view (yours) on what matters.

 

[This is (though in part only) a five-year update to Patrissimo’s article Self-Improvement or Shiny Distraction: Why Less Wrong is anti-Instrumental Rationality. However, I wrote most of this article before I became aware of its predecessor. Then again, this reinforces both our articles' main critique.]

 

I claim that rational discussions in person, at conferences, and on forums, social media, and blogs suffer from adverse selection and promote unwished-for phenomena such as the availability heuristic. Bluntly stated, they (like all other discussions) have a tendency to support ever worse, unimportant, or wrong opinions and articles. More importantly, highly relevant articles on some topics are conspicuously missing. This can also be observed on Less Wrong. It is not the purpose of this article to determine the exact extent of this problem; it shall merely bring to attention that “what you get is not what you should see." However, I am afraid this effect is largely undervalued.

 

This result is by design and therefore to be expected. A rational agent will, by definition, post incorrect information, incomplete information, or nothing at all in the following instances:

  • Cost-benefit analysis: A rational agent will not post information that reduces his utility by enabling others to compete better and, more importantly, by causing him any effort unless some gain (status, monetary, happiness,…) offsets the former effect. Example: Have you seen articles by Mark Zuckerberg? But I also argue that for random John Doe the personal cost-benefit-analysis from posting an article is negative. Even more, the value of your time should approach infinity if you really drink the LW Kool-Aid, however, this shall be the topic of a subsequent article. I suspect the theme of this article may also be restated as a free-riding problem as it postulates the non-production or under-production of valuable articles and other contributions.
  • Conflicting with law: Topics like drugs (in the western world) and maybe politics or sexuality in other parts of the world are biased due to the risk of persecution, punishment, extortion, etc. And many topics such as in the spheres of rationality, transhumanism, effective altruism, are at least highly sensitive, especially when you continue arguing until you reach their moral extremes.
  • Inconvenience of disagreement: Due to the effort of posting truly anonymously (which currently requires a truly anonymous e-mail address and so forth), disagreeing posts will be avoided, particularly when the original poster is of high status and the risk to rub off on one’s other articles thus increased. This is obviously even truer for personal interactions. Side note: The reverse situation may also apply: more agreement (likes) with high status.
  • Dark knowledge: Even if I know how to acquire a sniper gun that cannot be traced, I will not share this knowledge (as for all other reasons, there are substantially better examples, but I do not want to make spreading dark knowledge a focus of this article).
  • Signaling: Seriously, would you discuss your affiliation to LW in a job interview?! Or tell your friends that you are afraid we live in a simulation? (If you don’t see my point, your rationality is totally off base, see the next point). LW user “Timtyler” commented before: “I also found myself wondering why people remained puzzled about the high observed levels of disagreement. It seems obvious to me that people are poor approximations of truth-seeking agents—and instead promote their own interests. If you understand that, then the existence of many real-world disagreements is explained: people disagree in order to manipulate the opinions and actions of others for their own benefit.”
  • WEIRD-M-LW: It is a known problem that articles on LW are going to be written by authors that are in the overwhelming majority western, educated, industrialized, rich, democratic, and male. The LW surveys show distinctly that there are most likely many further attributes in which the population on LW differs from the rest of the world. LW user “Jpet” argued in a comment very nicely: “But assuming that the other party is in fact totally rational is just silly. We know we're talking to other flawed human beings, and either or both of us might just be totally off base, even if we're hanging around on a rationality discussion board.” LW could certainly use more diversity. Personal anecdote: I was dumbfounded by the current discussion around LW T-shirts sporting slogans such as "Growing Mentally Stronger" which seemed to me intuitively highly counterproductive. I then asked my wife who is far more into fashion and not at all into LW. Her comment (Crocker's warning): “They are great! You should definitely buy one for your son if you want him to go to high school and to be all for himself for the next couple of years; that is, except for the mobbing, maybe.”
  • Genes, minds, hormones & personal history: (Even) rational agents are highly influenced by those factors. This fact seems underappreciated. Think of SSC's "What universal human experiences are you missing without realizing it?" Think of inferential distances and the typical mind fallacy. Think of slight changes in beliefs after drinking coffee, been working out, deeply in love for the first time/seen your child born, being extremely hungry, wanting to and standing on the top of the mountain (especially Mt. Everest). Russell pointed out the interesting and strong effect of Schopenhauer’s and Nietzsche’s personal history on their misogyny. However, it would be a stretch to simply call them irrational. In every discussion, you have to start somewhere, but finding a starting point is a lot more difficult when the discussion partners are more diverse. All factors may not result in direct misinformation on LW but certainly shape the conversation (see also the next point).
  • Priorities: Specific “darlings” of the LW sphere such as Newcomb’s paradox or MW are regularly discussed. One moment of not paying attention to bias, and you may assume they are really relevant. For those of us not currently programming FAI, they aren’t, and they steal attention from more important issues.
  • Other beliefs/goals: Close to selfishness, but not quite the same. If an agent’s beliefs and goals differ from most others, the discussion would benefit from your post. Even so, that by itself may not be a sufficient reason for an agent to post. Example: Imagine somebody like Ben Goertzel. His beliefs on AI, for instance, differed from the mainstream on LW. This did not necessarily result in him posting an article on LW. And to my knowledge, he won’t, at least not directly. Plus, LW may try to slow him down as he seems less concerned about the F of FAI.
  • Vanity: Considering the amount of self-help threads, nerdiness, and alike on LW, it may be suspected that some refrain from posting due to self-respect. E.g. I do not want to signal myself that I belong to this tribe. This may sound outlandish but then again, have a look at the Facebook groups of LW and other rationalists where people ask frequently how they can be more interesting, or how “they can train how to pause for two seconds before they speak to increase their charisma." Again, if this sounds perfectly fine to you, that may be bad news.
  • Barriers to entry: Your first post requires creating an account. Karma that signals the quality of your post is still absent. An aspiring author may question the relative importance of his opinion (especially for highly complex topics), his understanding of the problem, the quality of his writing, and if his research on the chosen topic is sufficient.
  • Nothing new under the sun: Writing an article requires the bold assumption that its marginal utility is significantly above zero. The likelihood of which probably decreases with the number of posts, which is, as of now, quite impressive. Patrissimo‘s article (footnote [10]) addresses the same point, others mention being afraid of "reinventing the wheel."
  • Error: I should point out that most of the reasons in this list concern deliberate misinformation. In many cases, an article will simply be wrong without the author realizing it. Examples: facts (the earth is flat), predictions (planes cannot fly), and, seriously underestimated, horizon effects (given more information, the rational agent realizes that his action did not yield the desired outcome, e.g. a ban on plastic bags).
  • Protection of the group: Opinions though being important may not be discussed to protect the group or its image to outsiders. See “is LW a c***” and Roko’s ***." This argument can also be brought forward much more subtle: an agent may, for example, hold the opinion that rationality concepts are information hazards by nature if they reduce the happiness of the otherwise blissfully unaware.
  • Topicality: This is a problem specific to LW. Many of the great posts as well as the sequences have originated about five to ten years ago. While the interest in AI has now reached mainstream awareness, the solid intellectual basis (centered around a few individuals) which LW offered seems to break away gradually and rationality topics experience their diaspora. What remains is a less balanced account of important topics in the sphere of rationality and new authors are discouraged to enter the conversation.
  • Russell’s antinomy: Is the contribution that states its futility ever expressed? Random example article title: “Writing articles on LW is useless because only nerds will read them."
  • +Redundancy: If any of the above reasons apply, I may choose not to post. However, I also expect a rational agent with sufficiently close knowledge to reach the same conclusions himself, so it is at the same time not absolutely necessary to post. An article will “only” speed up the time required to understand a new concept and reduce the likelihood of rationalists diverging due to disagreement (if Aumann is ignored) or faulty argumentation.

This list is not exhaustive. If a factor you expect to account for much of the effect is missing from it, I would appreciate a hint in the comments.

 

There are a few outstanding examples pointing in the opposite direction. They appear to provide uncensored accounts of their way of thinking and take arguments to their logical extremes when necessary. Most notably Bostrom and Gwern, but then again, feel free to read the latter’s posts on endured extortion attempts.

 

A somewhat flippant conclusion (more in a FB than LW voice): After reading the article from 2010, I cannot expect this article (or the ones possibly following that have already been written) to have a serious impact. It thus can be concluded that it should not have been written. Then again, observing our own thinking patterns, we can identify influences of many thinkers who may have suspected the same (hubris not intended). And step by step, we will be standing on the shoulders of giants. At the same time, keep in mind that articles from LW won’t get you there. They represent only a small piece of the jigsaw. You may want to read some, observe how instrumental rationality works in the “real world," and, finally, you have to draw the critical conclusions for yourself. Nobody truly rational will lay them out for you. LW is great if you have an IQ of 140 and are tired of superficial discussions with the hairstylist in your village X. But keep in mind that the instrumental rationality of your hairstylist may still surpass yours, and I don’t even need to say much about the one of your president, business leader, and club Casanova. And yet, they may be literally dead wrong, because they have overlooked AI and SENS.

 

A final personal note: Kudos to the giants for building this great website and starting point for rationalists and the real-life progress in the last couple of years! This is a rather skeptical article to start with, but it does have its specific purpose of laying out why I, and I suspect many others, almost refrained from posting.

 

 

LINK: An example of the Pink Flamingo, the obvious-yet-overlooked cousin of the Black Swan

3 polymathwannabe 05 November 2015 04:55PM

India vs. Pakistan: the nuclear option is dangerously close, and nobody seems to want to prevent it

http://qz.com/541502/a-nuclear-war-between-india-and-pakistan-is-a-very-real-possibility/

The Triumph of Humanity Chart

23 Dias 26 October 2015 01:41AM

Cross-posted from my blog here.

One of the greatest successes of mankind over the last few centuries has been the enormous amount of wealth that has been created. Once upon a time virtually everyone lived in grinding poverty; now, thanks to the forces of science, capitalism and total factor productivity, we produce enough to support a much larger population at a much higher standard of living.

EAs being a highly intellectual lot, our preferred form of ritual celebration is charts. The ordained chart for celebrating this triumph of our people is the Declining Share of People Living in Extreme Poverty Chart.

[Chart: Declining Share of People Living in Extreme Poverty]

(Source)

However, as a heretic, I think this chart is a mistake. What is so great about reducing the share? We could achieve that by killing all the poor people, but that would not be a good thing! Life is good, and poverty is not death; it is simply better to be rich.

As such, I think this is a much better chart. Here we show the world population. Those in extreme poverty are in purple – not red, for their existence is not bad. Those whom the wheels of progress have lifted into wealth unknown to our ancestors, on the other hand, are depicted in blue, rising triumphantly.

[Chart: Triumph of Humanity – world population split by extreme-poverty status]

Long may their rise continue.
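For readers who want to reproduce this kind of chart themselves, here is a minimal sketch using matplotlib. The population figures below are placeholders, not the data behind the chart above; substitute real estimates (e.g. from the source linked earlier) before drawing any conclusions.

```python
# Minimal sketch of the "absolute numbers" chart described above.
# The figures are illustrative placeholders (billions of people),
# NOT the actual data behind the chart; substitute real estimates first.
import matplotlib.pyplot as plt

years = [1820, 1870, 1920, 1970, 2015]
in_poverty = [0.9, 1.1, 1.2, 2.2, 0.7]       # living in extreme poverty (placeholder)
out_of_poverty = [0.2, 0.3, 0.7, 1.5, 6.6]   # lifted out of extreme poverty (placeholder)

plt.stackplot(years, in_poverty, out_of_poverty,
              labels=["In extreme poverty", "Not in extreme poverty"],
              colors=["purple", "tab:blue"])
plt.ylabel("World population (billions)")
plt.legend(loc="upper left")
plt.title("World population by poverty status (placeholder data)")
plt.show()
```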

 

Improving the Effectiveness of Effective Altruism Outreach

4 Gleb_Tsipursky 18 October 2015 03:38AM

Disclaimer: This post is mainly relevant to those who are interested in Effective Altruism.

 

Introduction

As a Less Wronger and Effective Altruist who is skilled at marketing, education, and outreach, I think we can do a lot of good if we improve the effectiveness of Effective Altruism outreach. I am not talking about EA pitches in particular, although these are of course valuable in the right time and place, but about broader issues of strategy. I am talking about making Effective Altruism outreach effective by relying on research-based strategies for effective outreach.

To be clear, I have been putting my money and effort where my mouth is, devoting a lot of time and energy to a project, Intentional Insights, aimed at spreading rationality and effective altruism to a broad audience, as I think I can do the most good by convincing others to do the most good, through their giving and through rational thinking. Over the last year, I devoted approximately 2,400 hours and $33,000 to this project. Here's what I found helpful in my own outreach efforts to non-EAs; many of these ideas also apply to my outreach regarding rationality more broadly.

 

Telling Stories

I found it quite helpful to focus much more on speaking to people's emotions than to their cognition. Now, this was not intuitive to me. I'm much more motivated by data than the typical person, and I bet you are too. But I think we need to remember that we suffer from a typical mind fallacy, in that most EAs are much more data-driven than the typical person. Moreover, once we are in the EA movement, we forget how weird it looks from the outside - we suffer from the curse of knowledge.

Non-EAs usually give because of the pull of their heartstrings, not because of raw data on QALYs. Telling people emotional stories is a research-based strategy for pulling at heartstrings. So I practice doing so: telling stories about the children saved from malaria, the benefits people gained from GiveDirectly, and so on. Only then do non-analytically inclined people become open to the numbers and metrics; the story is what opens them up. This story helps address the drowning child problem and similar challenges.

However, this is not sufficient if we want to get people into EA. Once they are open to the numbers and metrics through the story about a concrete and emotional example, it's very important to tell the story of Effective Altruism itself, to get people to engage with the movement. After leading with a story about children saved or something like that, I talk about how great it would be to save the most children most effectively. I paint a verbal and emotion-laden picture of how regrettable it is that the nonprofits best able to tell stories get the most money, not the nonprofits that are most effective. I talk about how people tend to give to nonprofits with the best marketing, not the ones that get the work done. This is meant to arouse negative emotions in people and place them before the essence of the problem that EA is trying to solve.

Once they are in a state of negative emotional arousal about other charities, this is the best time to sell them on EA, I find. I talk to them about how EA offers a solution to their problem. It offers a way to evaluate charities based on their outcome, not on their marketing. They can trust EA sources as rigorous and data-driven. They can be confident in their decision-making based on GiveWell and other EA-vetted sources. Even if they don't understand the data-based analytical methodology, an issue I address below, they should still trust the outcomes. I'm currently drafting an article for a broad media forum, such as Huffington Post or something like that, which uses some of these strategies, and would be glad for feedback: link here.

 

Presenting Data

A big issue that many non-EAs have when presented with Effective Altruism is the barrier to entry in understanding data. For example, let's go back to the example of saving children through malaria nets that I used earlier. What happens when I direct people to the major EA evaluation of the Against Malaria Foundation, GiveWell's write-up on it? They get hit with what is essentially a research paper. Many people whom I directed there simply get overwhelmed, as they do not have the skills to process it.

I'd suggest developing more user-friendly ways of presenting data. We know that our minds process visual information much more quickly and effectively than text. So what about having infographics, charts, and other visual methods of presenting EA analyses? These can accompany the complex research-based analyses and present their results in an easy-to-digest visual format.
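As one illustration of what such a visual might look like, here is a minimal sketch of a single-metric comparison chart using matplotlib. The intervention names and cost figures are hypothetical placeholders, not GiveWell's estimates; the point is the format, not the numbers.

```python
# Minimal sketch of an at-a-glance comparison chart for charity evaluations.
# Intervention names and costs are hypothetical placeholders, not real estimates.
import matplotlib.pyplot as plt

interventions = ["Intervention A", "Intervention B", "Intervention C"]
cost_per_outcome = [3500, 900, 60]  # hypothetical cost in USD per unit of outcome

fig, ax = plt.subplots()
ax.barh(interventions, cost_per_outcome, color="tab:blue")
ax.set_xlabel("Cost per unit of outcome (USD, placeholder values)")
ax.set_title("Comparing interventions at a glance")
ax.invert_yaxis()  # show the first intervention at the top
fig.tight_layout()
plt.show()
```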

 

Social Affiliation

Research shows that people desire social affiliation with people they like. This is part of the reason why, as part of Intentional Insights, we are focusing on secular people as our first target audience.

First, the vast majority of EAs are secular. This fact creates positive social signaling to secular people who are not currently EAs, and it is clear evidence that Effective Altruism appeals to them most. Second, network effects make it more likely that people who have already become Effective Altruists will bring others in their contact networks into EA. Therefore, it pays well and is highly effective in terms of resource investment to focus on secular people, as they can get others in their social circles to become EAs. Third, the presence of prominent secular notables who are EAs allows good promotion through people's desire to be socially affiliated with them. Here's an example of how I did it in a blog post for Intentional Insights.

There are so many secular people that getting more of them into the EA movement would be a great win! To be clear, this is not an argument against reaching out to religious EAs, which is a worthwhile project in and of itself. This is just a point about effectiveness and where to spend resources for outreach.

 

Meta-Comments About Outreach

These are just some specific strategies. I think we need to be much more intentional about our communication to non-EAs. We need to develop guidelines for how to communicate to people who are not intuitively rational about their donations. 

To do so, I think we need to focus much more effort - time and money - on developing Effective Altruist outreach and communication. This is why I am trying to fill the gap here with my own project. We haven't done nearly enough research or experimentation on how to grow the movement most effectively by communicating effectively to outsiders. Investing resources in this area would be a very low-hanging fruit with very high returns, I think. If anyone is interested in learning more about my experience here, or wants to talk about collaborating, or just has some thoughts better suited for one-on-one than for discussion comments, my email is gleb@intentionalinsights.org and my Skype is gleb.tsipursky.

In conclusion, I strongly believe we can do much better at our outreach if we apply research-based strategies of effective outreach. I'd love to hear your thoughts about it.

 

(Cross-posted on the Effective Altruism Forum)

 

Deliberate Grad School

22 Academian 04 October 2015 10:11AM

Among my friends interested in rationality, effective altruism, and existential risk reduction, I often hear: "If you want to have a real positive impact on the world, grad school is a waste of time. It's better to use deliberate practice to learn whatever you need instead of working within the confines of an institution."

While I'd agree that grad school will not make you do good for the world, if you're a self-driven person who can spend time in a PhD program deliberately acquiring skills and connections for making a positive difference, I think you can make grad school a highly productive path, perhaps more so than many alternatives. In this post, I want to share some advice that I've been repeating a lot lately for how to do this:

  1. Find a flexible program. PhD programs in mathematics, statistics, philosophy, and theoretical computer science tend to give you a great deal of free time and flexibility, provided you can pass the various qualifying exams without too much studying. By contrast, sciences like biology and chemistry can require time-consuming laboratory work that you can't always speed through by being clever.

     

  2. Choose high-impact topics to learn about. AI safety and existential risk reduction are my favorite examples, but there are others, and I won't spend more time here arguing their case. If you can't make your thesis directly about such a topic, choosing a related more popular topic can give you valuable personal connections, and you can still learn whatever you want during the spare time a flexible program will afford you.

     

  3. Teach classes. Grad programs that let you teach undergraduate tutorial classes provide a rare opportunity to practice engaging a non-captive audience. If you just want to work on general presentation skills, maybe you practice on your friends... but your friends already like you. If you want to learn to win over a crowd that isn't particularly interested in you, try teaching calculus! I've found this skill particularly useful when presenting AI safety research that isn't yet mainstream, which requires carefully stepping through arguments that are unfamiliar to the audience.

     

  4. Use your freedom to accomplish things. I used my spare time during my PhD program to cofound CFAR, the Center for Applied Rationality. Alumni of our workshops have gone on to do such awesome things as creating the Future of Life Institute and sourcing a $10MM donation from Elon Musk to fund AI safety research. I never would have had the flexibility to volunteer for weeks at a time if I'd been working at a typical 9-to-5 or a startup.

     

  5. Organize a graduate seminar. Organizing conferences is critical to getting the word out on important new research, and in fact, running a conference on AI safety in Puerto Rico is how FLI was able to bring so many researchers together on its Open Letter on AI Safety. It's also where Elon Musk made his donation. During grad school, you can get lots of practice organizing research events by running seminars for your fellow grad students. In fact, several of the organizers of the FLI conference were grad students.

     

  6. Get exposure to experts. A top 10 US school will have professors around that are world-experts on myriad topics, and you can attend departmental colloquia to expose yourself to the cutting edge of research in fields you're curious about. I regularly attended cognitive science and neuroscience colloquia during my PhD in mathematics, which gave me many perspectives that I found useful working at CFAR.

     

  7. Learn how productive researchers get their work done. Grad school surrounds you with researchers, and by getting exposed to how a variety of researchers do their thing, you can pick and choose from their methods and find what works best for you. For example, I learned from my advisor Bernd Sturmfels that, for me, quickly passing a draft back and forth with a coauthor can get a paper written much more quickly than agonizing about each revision before I share it.

     

  8. Remember you don't have to stay in academia. If you limit yourself to only doing research that will get you good post-doc offers, you might find you aren't able to focus on what seems highest impact (because often what makes a topic high impact is that it's important and neglected, and if a topic is neglected, it might not be trendy enough to land you a good post-doc). But since grad school is run by professors, becoming a professor is usually the most salient path forward for most grad students, and you might end up pressuring yourself to follow the standards of that path. When I graduated, I got my top choice of post-doc, but then I decided not to take it and to instead try earning to give as an algorithmic stock trader, and now I'm a research fellow at MIRI. In retrospect, I might have done more valuable work during my PhD itself if I'd decided in advance not to do a typical post-doc.

That's all I have for now. The main sentiment behind most of this, I think, is that you have to be deliberate to get the most out of a PhD program, rather than passively expecting it to make you into anything in particular. Grad school still isn't for everyone; far from it. But if you were seriously considering it at some point, and "do something more useful" felt like a compelling reason not to go, be sure to first consider the most useful version of grad school that you could reliably make for yourself... and then decide whether or not to do it.

Please email me (lastname@thisdomain.com) if you have more ideas for getting the most out of grad school!
