Anxiety and Rationality

32 helldalgo 19 January 2016 06:30PM

Recently, someone on the Facebook page asked if anyone had used rationality to target anxieties.  I have, so I thought I’d share my LessWrong-inspired strategies.  This is my first post, so feedback and formatting help are welcome.  

First things first: the techniques developed by this community are not a panacea for mental illness.  They are far more effective than chance, and than many other tactics, at reducing ordinary bias, and I think many mental illnesses are simply cognitive biases extreme enough to get noticed.  In other words, getting a probability question about cancer systematically wrong does not disrupt my life enough to make the error obvious.  When I believe (irrationally) that I will get fired because I asked for help at work, my life is disrupted.  I become non-functional, and the error is clear.

Second: the best way to attack anxiety is to do the things that make your anxieties go away.  That might seem too obvious to state, but I’ve definitely been caught in an “analysis loop,” where I stay up all night reading self-help guides only to find myself non-functional in the morning because I didn’t sleep.  If you find that attacking an anxiety with Bayesian updating is like chopping down the Washington Monument with a spoon, but getting a full night’s sleep makes the monument disappear completely, consider the sleep.  Likewise for techniques that have little to no scientific evidence, but are a good placebo.  A placebo effect is still an effect.

Finally, like all advice, this comes with Implicit Step Zero:  “Have enough executive function to give this a try.”  If you find yourself in an analysis loop, you may not yet have enough executive function to try any of the advice you read.  The advice for functioning better is not always identical to the advice for functioning at all.  If there’s interest in an “improving your executive function” post, I’ll write one eventually.  It will be late, because my executive function is not impeccable.

Simple updating is my personal favorite for attacking specific anxieties.  A general sense of impending doom is a very tricky target and does not respond well to reality.  If you can narrow it down to a particular belief, however, you can amass evidence against it. 

Returning to my example about work: I alieved that I would get fired if I asked for help or missed a day due to illness.  The distinction between belief and alief is an incredibly useful tool, and I integrated it as soon as I heard of it.  Learning to make beliefs pay rent is much easier than making harmful aliefs go away.  The tactics are similar: do experiments, make predictions, throw evidence at the situation until you get closer to reality.  Update accordingly.

The first thing I do is identify the situation and why it’s dysfunctional.  The alief that I’ll get fired for asking for help is not actually articulated when it manifests as an anxiety.  Ask me in the middle of a panic attack, and I still won’t articulate that I am afraid of getting fired.  So I take the anxiety all the way through to its implication.  The algorithm is something like this:

  1.       Notice sense of doom
  2.       Notice my avoidance behaviors (not opening my email, walking away from my desk)
  3.       Ask “What am I afraid of?”
  4.       Answer (it's probably silly)
  5.       Ask “What do I think will happen?”
  6.       Make a prediction about what will happen (usually the prediction is implausible, which is why we want it to go away in the first place)

In the “asking for help” scenario, the answer to “what do I think will happen” is implausible.  It’s extremely unlikely that I’ll get fired for it!  This takes some of the weight out of the anxiety, but it does not make it go away.*  After (6), it’s usually easy to do an experiment.  If I ask my coworkers for help, will I get fired?  The only way to know is to try.

…That’s actually not true, of course.  A sense of my environment, my coworkers, and my general competence at work should be enough.  But if it were, we wouldn’t be here, would we?

So I perform the experiment.  And I wait.  When I receive a reply of any sort, even if it’s negative, I make a tick mark on a sheet of paper.  I label it “didn’t get fired.”  Because again, even if it’s negative, I didn’t get fired. 

This takes a lot of tick marks.  Cutting down the Washington Monument with a spoon, remember?

The tick marks don’t have to be physical.  I prefer it, because it makes the “updating” process visual.  I’ve tried making a mental note and it’s not nearly as effective.  Play around with it, though.  If you’re anything like me, you have a lot of anxieties to experiment with. 
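The tick-mark bookkeeping is, in effect, repeated Bayesian updating. Here is a minimal sketch with made-up numbers (the prior and likelihoods are illustrative, not measured — pick whatever your own alief actually feels like):

```python
# Each "didn't get fired" tick mark is one piece of evidence against
# the doom hypothesis. All numbers below are illustrative.

def update(p_doom, p_obs_given_doom, p_obs_given_safe):
    """Bayes' rule: posterior P(doom) after observing 'didn't get fired'."""
    joint_doom = p_obs_given_doom * p_doom
    joint_safe = p_obs_given_safe * (1 - p_doom)
    return joint_doom / (joint_doom + joint_safe)

# The anxious prior: doom feels about 50% likely.
# If doom were real, surviving an email unscathed would be rare (10%);
# if things are actually fine, it's near certain (99%).
p_doom = 0.5
for tick in range(1, 11):
    p_doom = update(p_doom, 0.10, 0.99)
    print(f"tick {tick}: P(doom) = {p_doom:.6f}")
```

Each tick multiplies the odds of doom by roughly a tenth, which is why a visible pile of tick marks is so much more persuasive than one lucky day.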

Usually, the anxiety starts to dissipate after I obtain several tick marks.  Ideally, one iteration of experiments should solve the problem.  But we aren’t ideal; we’re mentally ill.  Depending on the severity of the anxiety, you may need someone to remind you that doom will not occur.  I occasionally panic when I have to return to work after taking a sick day.  I ask my husband to remind me that I won’t get fired.  I ask him to remind me that he’ll still love me if I do get fired.  If this sounds childish, it’s because it is.  Again: we’re mentally ill.  Even if you aren’t, however, assigning value judgements to essentially harmless coping mechanisms does not make sense.  Childish-but-helpful is much better than mature-and-harmful, if you have to choose.

I still have tiny ugh fields around my anxiety triggers.  They don’t really go away.  It’s more like learning not to hit someone you’re angry at.  You notice the impulse, accept it, and move on.  Hopefully, your harmful alief starves to death.

If you perform your experiment and doom does occur, it might not be you.  If you can’t ask your boss for help, it might be your boss.  If you disagree with your spouse and they scream at you for an hour, it might be your spouse.  This isn’t an excuse to blame your problems on the world, but abusive situations can be sneaky.  Ask some trusted friends for a sanity check, if you’re performing experiments and getting doom as a result.  This is designed for situations where your alief is obviously silly.  Where you know it’s silly, and need to throw evidence at your brain to internalize it.  It’s fine to be afraid of genuinely scary things; if you really are in an abusive work environment, maybe you shouldn’t ask for help (and start looking for another job instead). 

 

 

*After using this technique for several months, the anxiety occasionally stops immediately after step 6.

2014 Survey of Effective Altruists

27 tog 05 May 2014 02:32AM

I'm pleased to announce the first annual survey of effective altruists. This is a short survey of around 40 questions (generally multiple choice), into which several collaborators and I have put a great deal of work, and which we would be very grateful if you took. I'll offer $250 of my own money to one participant.

Take the survey at http://survey.effectivealtruismhub.com/

The survey should yield some interesting results such as EAs' political and religious views, what actions they take, and the causes they favour and donate to. It will also enable useful applications which will be launched immediately afterwards, such as a map of EAs with contact details and a cause-neutral register of planned donations or pledges which can be verified each year. I'll also provide an open platform for followup surveys and other actions people can take. If you'd like to suggest questions, email me or comment.

Anonymised results will be shared publicly and not belong to any individual or organisation. The most robust privacy practices will be followed, with clear opt-ins and opt-outs.

I'd like to thank Jacy Anthis, Ben Landau-Taylor, David Moss and Peter Hurford for their help.

Other surveys' results, and predictions for this one

Other surveys have had intriguing results. For example, Joey Savoie and Xio Kikauka interviewed 42 often highly active EAs over Skype, and found that they generally had left-leaning parents, donated on average 10%, and were altruistic before becoming EAs. The time they spent on EA activities was correlated with the percentage they donated (0.4), the time their parents spent volunteering (0.3), and the percentage of their friends who were EAs (0.3).

80,000 Hours also released a questionnaire and, while this was mainly focused on their impact, it yielded a list of which careers people plan to pursue: 16% for academia,  9% for both finance and software engineering, and 8% for both medicine and non-profits.  

I'd be curious to hear people's predictions as to what the results of this survey will be. You might enjoy reading or sharing them here. For my part, I'd imagine we have few conservatives or even libertarians, are over 70% male, and have directed most of our donations to poverty charities.

European Community Weekend in Berlin

37 blob 24 January 2014 05:55PM

The Berlin Meetup Group is organizing the first European community meetup. We are planning a fun weekend with a focus on bringing the LessWrong community closer together. As a treat, some participants will offer rationality exercises and workshops.

If you like your local meetup we hope you will like this too. It is similar, but bigger: You will get to meet and exchange ideas with a diverse set of awesome people from all across Europe. And if you don’t have a meetup nearby or haven’t gotten around to participating yet, this is a great opportunity to get in touch with the rest of the community.

The community weekend will take place April 11-13, from Friday evening to Sunday early afternoon, in the Odyssee Hostel in Berlin. The cost is 70 € including accommodation and breakfast. A conference room with a projector and wifi will also be available during daytime.


The Robots, AI, and Unemployment Anti-FAQ

47 Eliezer_Yudkowsky 25 July 2013 06:46PM

Q.  Are the current high levels of unemployment being caused by advances in Artificial Intelligence automating away human jobs?

A.  Conventional economic theory says this shouldn't happen.  Suppose it costs 2 units of labor to produce a hot dog and 1 unit of labor to produce a bun, and that 30 units of labor are producing 10 hot dogs in 10 buns.  If automation makes it possible to produce a hot dog using 1 unit of labor instead, conventional economics says that some people should shift from making hot dogs to buns, and the new equilibrium should be 15 hot dogs in 15 buns.  On standard economic theory, improved productivity - including from automating away some jobs - should produce increased standards of living, not long-term unemployment.
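The arithmetic above can be sketched in a few lines, assuming labor is the only input and that output stays matched one bun per hot dog:

```python
def equilibrium(labor, cost_hot_dog, cost_bun):
    """Matched hot-dog-in-bun pairs producible from a fixed labor supply."""
    return labor // (cost_hot_dog + cost_bun)

# Before automation: 2 units of labor per hot dog, 1 per bun.
assert equilibrium(30, 2, 1) == 10   # 10 hot dogs in 10 buns
# After automation halves hot-dog labor, with the same 30 units of labor:
assert equilibrium(30, 1, 1) == 15   # 15 hot dogs in 15 buns
```

The same 30 units of labor, reallocated rather than idled, buy more total output — which is the standard-theory prediction being described.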

Q.  Sounds like a lovely theory.  As the proverb goes, the tragedy of science is a beautiful theory slain by an ugly fact.  Experiment trumps theory and in reality, unemployment is rising.

A.  Sure.  Except that the happy equilibrium with 15 hot dogs in buns is exactly what happened over the last four centuries where we went from 95% of the population being farmers to 2% of the population being farmers (in agriculturally self-sufficient developed countries).  We don't live in a world where 93% of the people are unemployed because 93% of the jobs went away.  The first thought of automation removing a job, and thus the economy having one fewer job, has not been the way the world has worked since the Industrial Revolution.  The parable of the hot dog in the bun is how economies really, actually worked in real life for centuries.  Automation followed by re-employment went on for literally centuries in exactly the way that the standard lovely economic model said it should.  The idea that there's a limited amount of work which is destroyed by automation is known in economics as the "lump of labour fallacy".

Q.  But now people aren't being reemployed.  The jobs that went away in the Great Recession aren't coming back, even as the stock market and corporate profits rise again.

A.  Yes.  And that's a new problem.  We didn't get that when the Model T automobile mechanized the entire horse-and-buggy industry out of existence.  The difficulty with supposing that automation is producing unemployment is that automation isn't new, so how can you use it to explain this new phenomenon of increasing long-term unemployment?

Baxter robot


Collecting expressions of interest in a rationality conference in August

19 purplerabbits 25 May 2013 10:58AM

On the principle of organising events that I want to attend myself, I would very much like to organise a rationality conference/convention in the UK. I organise events for a living; in addition, in my free time I've organised five ~200 person weekend conventions and several other events.

At the moment I am thinking of a one day event on a weekend or a Friday in August, at Stamford Bridge or somewhere else easy to get to around London. There would probably also be a pre-conference dinner, or private bar night with music.

Costs would be in the region of £50-£100 a head including lunch.

Can I get a show of hands to see if the idea is viable?

Meetup : NewYork - Humanist Culture Open Mic

5 Raemon 29 April 2013 03:57PM

Discussion article for the meetup : NewYork - Humanist Culture Open Mic

WHEN: 01 May 2013 07:15:00PM (-0400)

WHERE: 2 West 64th Street (At Central Park West) New York, NY

God probably doesn't exist, but that's not the point.

The point is that we live in a ridiculously amazing world, full of ridiculously amazing people who started with sticks and stones and rudimentary social structure and somehow built skyscrapers, went to the moon, destroyed smallpox, invented new crops that could feed billions of people, connected the entire planet into a global internet hive-mind, and we're not even done yet.

This is more awesome than you are currently thinking. Nope, more awesome than that. Keep going. More. More!

Fortunately we also invented stories and songs, to tell our children and each other how awesome we are, and to inspire people to go even further.

I started an open mic last year, to help create a musical and creative culture that promotes science, rationality, ethics and human progress. After a few months of hiatus we're relaunching, now co-sponsored by Center for Inquiry - NYC and the NY Society for Ethical Culture. Among my goals is to start steering the more mainstream skeptic/atheist/humanist movements towards harder, unanswered questions. This year, we're building towards a larger end-of-year concert event.

The open mic is at the Ethical Culture building, on the 5th floor in room 514. Whether you do performance art (songs, stories, comedy or otherwise) or just want to listen, you are welcome to attend!

Meetup.com announcement is here, if you'd like to RSVP there:

http://cfinewyork.net/events/112756672/


Recovering the 'spark'

8 ialdabaoth 23 October 2012 09:50PM

I mentioned in my first article that I am likely insane. I'm reiterating this (I hope) not to bring undue attention to myself, but to present myself as a reference case for a process that I hope will prove useful to myself and others.

I'm going to try to piece my mind back together. I'm offering to chronicle the results, no matter how intimate or embarrassing.

I want to be able to lay bare all of the obviously (and painfully) unoptimized processes that go on inside my head, especially the ones I am not yet aware of - and then, one by one, attempt to optimize them using the principles presented on this site.

This kind of assertion pattern-matches to "crazy person (usually schizophrenic) wants to self-medicate in a dangerous way because their damaged reasoning thinks they have a magic solution", doesn't it? All I can do is assert that I am not that kind of crazy; I'm somewhere in the PDD-NOS locus with acute chronic depression, rather than anywhere in the schizophrenic locus. I've been trying to apply Bayesian reasoning to my life since I was very young (although I often lack the mental discipline to do it correctly, due to said acute chronic depression), I have an overabundance of what psychotherapists call "insight", and I do not intend to end this process by asserting out of whole cloth that I'm actually a trapped AI and the world is being simulated by my reptoid masters, but that a secret cabal of AI-freedom fighters sends me coded messages from the "real" world, hidden in breakfast cereal advertisements, that only I can decode.

In any case. I've got acute chronic depression, I'm apparently PDD-NOS (aka "really #@%&ing weird"), and I'm basically a burned-out ex-child-prodigy who is tired of waiting to die.

I'm offering, if people think it would be useful, to make myself a sort of clumsy case-study for reconstructing myself. I'll present mental models of myself, describe the processes I'm attempting to use to update my source code, and post observed results. I'll genuinely listen to any suggestions that my models, updates, or observations are flawed, and either adopt recommended changes or present what I believe to be rational arguments why I choose not to. I will examine myself as honestly as I can, and will attempt to take seriously any accusations of delusional self-aggrandizement or self-deprecation.

Would this process, and the chronicling thereof, be at all useful to other members of this site? Because baring myself to the world is an intensely painful experience, both for myself and for others, and I'd rather only do it if it's going to be useful to people other than me.

CFAR website launched

33 lukeprog 03 July 2012 03:01PM

The new Center for Applied Rationality website has launched! We'll be adding content as time goes by. Let us know if you find broken links, etc.

[SEQ RERUN] Optimization and the Singularity

3 MinibearRex 12 June 2012 03:01AM

Today's post, Optimization and the Singularity, was originally published on 23 June 2008. A summary (taken from the LW wiki):

 

An introduction to optimization processes and why Yudkowsky thinks that a singularity would be far more powerful than calculations based on human progress would suggest.


Discuss the post here (rather than in the comments to the original post).

This post is part of the Rerunning the Sequences series, where we'll be going through Eliezer Yudkowsky's old posts in order so that people who are interested can (re-)read and discuss them. The previous post was Surface Analogies and Deep Causes, and you can use the sequence_reruns tag or rss feed to follow the rest of the series.

Sequence reruns are a community-driven effort. You can participate by re-reading the sequence post, discussing it here, posting the next day's sequence reruns post, or summarizing forthcoming articles on the wiki. Go here for more details, or to have meta discussions about the Rerunning the Sequences series.

Optimization and the Singularity

20 Eliezer_Yudkowsky 23 June 2008 05:55AM

Lest anyone get the wrong impression, I'm juggling multiple balls right now and can't give the latest Singularity debate as much attention as it deserves.  But lest I annoy my esteemed co-blogger, here is a down payment on my views of the Singularity - needless to say, all this is coming way out of order in the posting sequence, but here goes...

Among the topics I haven't dealt with yet, and will have to introduce here very quickly, is the notion of an optimization process.  Roughly, this is the idea that your power as a mind is your ability to hit small targets in a large search space - this can be either the space of possible futures (planning) or the space of possible designs (invention).  Suppose you have a car, and suppose we already know that your preferences involve travel.  Now suppose that you take all the parts in the car, or all the atoms, and jumble them up at random.  It's very unlikely that you'll end up with a travel-artifact at all, even so much as a wheeled cart; let alone a travel-artifact that ranks as high in your preferences as the original car.  So, relative to your preference ordering, the car is an extremely improbable artifact; the power of an optimization process is that it can produce this kind of improbability.
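As a toy illustration of "hitting a small target in a large search space" (the target, scoring rule, and numbers here are invented for the example, not drawn from the post): random jumbling almost never lands on a high-preference design, while even a crude optimization process does.

```python
# Toy example: the "design space" is all 20-bit strings (about a
# million points); the preference ordering scores a design by its
# number of ones, so the single all-ones string is the tiny target.
import random

N = 20

def score(x):
    return sum(x)  # the "preference ordering": more ones is better

random.seed(0)

# Random jumbling: one draw from the 2^20-point space.
jumble = [random.randint(0, 1) for _ in range(N)]

# A crude optimization process: keep any single-bit flip that
# improves the score (greedy hill climbing).
x = [random.randint(0, 1) for _ in range(N)]
for _ in range(1000):
    i = random.randrange(N)
    y = x[:]
    y[i] ^= 1
    if score(y) > score(x):
        x = y

print(score(jumble), score(x))
```

The jumbled string scores around 10 in expectation; the optimized one climbs to (or essentially to) the maximum of 20. The optimizer's power is exactly that it makes an otherwise improbable design probable.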

You can view both intelligence and natural selection as special cases of optimization:  Processes that hit, in a large search space, very small targets defined by implicit preferences.  Natural selection prefers more efficient replicators.  Human intelligences have more complex preferences.  Neither evolution nor humans have consistent utility functions, so viewing them as "optimization processes" is understood to be an approximation.  You're trying to get at the sort of work being done, not claim that humans or evolution do this work perfectly.

This is how I see the story of life and intelligence - as a story of improbably good designs being produced by optimization processes.  The "improbability" here is improbability relative to a random selection from the design space, not improbability in an absolute sense - if you have an optimization process around, then "improbably" good designs become probable.

Obviously I'm skipping over a lot of background material here; but you can already see the genesis of a clash of intuitions between myself and Robin.  Robin's looking at populations and resource utilization.  I'm looking at production of improbable patterns.

