
[Link] Dominic Cummings: how the Brexit referendum was won

16 The_Jaded_One 12 January 2017 09:26PM

[Link] Rationality 101 videotaped presentation with link to slides in description (from our LessWrong meetup introductory event)

0 Gleb_Tsipursky 11 January 2017 07:07PM

Rationality Considered Harmful (In Politics)

9 The_Jaded_One 08 January 2017 10:36AM

Why you should be very careful about trying to openly seek truth in any political discussion


1. Rationality considered harmful for Scott Aaronson in the great gender debate

In 2015, complexity theorist and rationalist Scott Aaronson was foolhardy enough to step into the Gender Politics war on his blog, with a comment stating that the extreme feminism he had bought into made him hate himself and seek out ways to chemically castrate himself. The feminist blogosphere got hold of this and crucified him for it, and he has written a few followup blog posts about it. Recently I saw this comment by him on his blog:

As the comment 171 affair blew up last year, one of my female colleagues in quantum computing remarked to me that the real issue had nothing to do with gender politics; it was really just about the commitment to truth regardless of the social costs—a quality that many of the people attacking me (who were overwhelmingly from outside the hard sciences) had perhaps never encountered before in their lives. That remark cheered me more than anything else at the time

 

2. Rationality considered harmful for Sam Harris in the islamophobia war

I recently heard a very angry, exasperated two-hour podcast by the new atheist and political commentator Sam Harris about how badly he has been straw-manned, misrepresented and trash-talked by his intellectual rivals (whom he collectively refers to as the "regressive left"). Sam Harris likes to tackle hard questions such as when torture is justified, which religions are more or less harmful than others, the defence of freedom of speech, etc. Several times, Harris goes to the meta-level and sees clearly what is happening:

Rather than a searching and beautiful exercise in human reason to have conversations on these topics [ethics of torture, military intervention, Islam, etc], people are making it just politically so toxic, reputationally so toxic to even raise these issues that smart people, smarter than me, are smart enough not to go near these topics

Everyone on the left at the moment seems to be a mind reader.. no matter how much you try to take their foot out of your mouth, the mere effort itself is going to be counted against you - you're someone who's in denial, or you don't even understand how racist you are, etc

 

3. Rationality considered harmful when talking to your left-wing friends about genetic modification

In the SlateStarCodex comments, I posted a complaint that many left-wing people were responding very personally (and negatively) to my political views.

One long-term friend openly and pointedly asked whether we should still be friends over the subject of eugenics and genetic engineering, for example altering the human germline via genetic engineering to permanently cure a genetic disease. When I made a rational argument for why some modifications of the human germline may in fact be a good thing, this friend responded by saying that "(s)he was beginning to wonder whether we should still be friends".

A large comment thread ensued, but the best comment I got was this one:

One of the useful things I have found when confused by something my brain does is to ask what it is *for*. For example: I get angry, the anger is counterproductive, but recognizing that doesn’t make it go away. What is anger *for*? Maybe it is to cause me to plausibly signal violence by making my body ready for violence or some such.

Similarly, when I ask myself what moral/political discourse among friends is *for* I get back something like “signal what sort of ally you would be/broadcast what sort of people you want to ally with.” This makes disagreements more sensible. They are trying to signal things about distribution of resources, I am trying to signal things about truth value, others are trying to signal things about what the tribe should hold sacred etc. Feeling strong emotions is just a way of signaling strong precommitments to these positions (i.e. I will follow the morality I am signaling now because I will be wracked by guilt if I do not. I am a reliable/predictable ally.) They aren’t mad at your positions. They are mad that you are signaling that you would defect when push came to shove about things they think are important.

Let me repeat that last one: moral/political discourse among friends is for "signalling what sort of ally you would be/broadcasting what sort of people you want to ally with". Moral/political discourse probably activates specially evolved brainware in human beings; that brainware has a purpose, and it isn't truthseeking. Politics is not about policy.

 

4. Takeaways

This post is already getting too long so I deleted the section on lessons to be learned, but if there is interest I'll do a followup. Let me know what you think in the comments!

[Link] Applied Rationality Exercises

2 SquirrelInHell 07 January 2017 06:13PM

Actually Practicing Rationality and the 5-Second Level

5 lifelonglearner 06 January 2017 06:50AM

[I first posted this as a link to my blog post, but I'm reposting it as a focused article here that trims some of the fat from the original post, which was less accessible]


I think a lot about heuristics and biases, and I admit that many of my ideas on rationality and debiasing get lost in the sea of my own thoughts.  They’re accessible, if I’m specifically thinking about rationality-esque things, but often invisible otherwise.  

That seems highly sub-optimal, considering that the whole point of having usable mental models isn’t to write fancy posts about them, but to, you know, actually use them.

To that end, I’ve been thinking about finding some sort of systematic way to integrate all of these ideas into my actual life.  

(If you’re curious, here’s the actual picture of what my internal “concept-verse” (w/ associated LW and CFAR memes) looks like)

 

[Image: MLU Mind Map v1.png (open in a new tab for full detail)]

So I have all of these ideas, all of which look really great on paper and in thought experiments.  Some of them even have some sort of experimental backing.  Given this, how do I put them together into a kind of coherent notion?

Equivalently, what does it look like if I successfully implement these mental models?  What sorts of changes might I expect to see?  Then, knowing the end product, what kind of process can get me there?

One way of looking at it would be to say that if I implemented the techniques well, then I'd be better able to tackle my goals and get things done. Maybe my productivity would go up. That sort of makes sense. But this tells us nothing about how I'd actually go about using such skills.

We want to know how to implement these skills and then actually utilize them.

Yudkowsky gives a highly useful abstraction when he talks about the five-second level.  He gives some great tips on breaking down mental techniques into their component mental motions.  It’s a step-by-step approach that really goes into the details of what it feels like to undergo one of the LessWrong epistemological techniques.  We’d like our mental techniques to be actual heuristics that we can use in the moment, so having an in-depth breakdown makes sense.

Here’s my attempt at a 5-second-level breakdown for Going Meta, or "popping" out of one's head to stay mindful of the moment:

  1. Notice the feeling that you are being mentally “dragged” towards continuing an action.
    1. (It can feel like an urge, or your mind automatically making a plan to do something.  Notice your brain simulating you taking an action without much conscious input.)
  2. Remember that you have a 5-second-level series of steps to do something about it.
  3. Feel aversive towards continuing the loop.  Mentally shudder at the part of you that tries to continue.
  4. Close your eyes.  Take in a breath.
  5. Think about what 1-second action you could take to instantly cut off the stimulus from whatever loop you’re stuck in. (EX: Turning off the display, closing the window, moving to somewhere else).
  6. Tense your muscles and clench, actually doing said action.
  7. Run a search through your head, looking for an action labeled “productive”.  Try to remember things you’ve told yourself you “should probably do” lately.  
    1. (If you can’t find anything, pattern-match to find something that seems “productive-ish”.)
  8. Take note of what time it is.  Write it down.
  9. Do the new thing.  Finish.
  10. Note the end time.  Calculate how long you did work.

Next, the other part is actually accessing the heuristic in the situations where you want it.  We want it to be habitual.

Some quick searches of the existing research on habits suggest that many of the links go to Charles Duhigg, author of The Power of Habit, or BJ Fogg of Tiny Habits. Both models focus on two things: identifying the Thing you want to do, then setting triggers so you actually do it. (There's some similarity to CFAR's Trigger Action Plans.)

BJ Fogg's approach focuses on scaffolding new habits onto existing routines, like brushing your teeth, which are already automatic. Duhigg appears to be focused more on reinforcement and rewards, with several nods to Skinner. CFAR views actions as self-reinforcing, so the reward isn't even necessary; they see repetition as building automaticity.

Overlearning the material also seems to be useful in some contexts, for skills like acquiring procedural knowledge. And these mental skills do seem to be more like procedural knowledge.

For these mental skills specifically, we'd want them to go off irrespective of the time of day, so anchoring them to an existing routine might not be best. Having them fire in response to an internal state (EX: "When I notice myself being 'dragged' into a spiral, or automatically making plans to do a thing") may be more useful.
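
To make the trigger-then-action structure concrete, here is a minimal sketch in Python of how one might represent such if-then habits and log when they fire. The trigger and action strings are hypothetical examples of my own; this is an illustration of the general idea, not CFAR's or Fogg's actual method.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class TriggerActionPlan:
    """A single if-then habit: a cue (internal or external) paired with a small action."""
    trigger: str                     # e.g. an internal state you can learn to notice
    action: str                      # the small action you want to follow it
    fired_at: list = field(default_factory=list)

    def fire(self):
        """Record that the trigger was noticed and the action performed."""
        self.fired_at.append(datetime.now())

# Hypothetical plans, for illustration only
plans = [
    TriggerActionPlan(
        trigger="urge to keep scrolling after finishing an article",
        action="turn off the display and write down the current time",
    ),
    TriggerActionPlan(
        trigger="sitting down at my desk in the morning",
        action="open the day's top task before opening email",
    ),
]

plans[0].fire()  # pretend the first trigger was noticed once today

# Reviewing the log at the end of the week shows which triggers you actually notice,
# which is the feedback loop that both the Fogg-style and CFAR-style models rely on.
for p in plans:
    print(f"{p.trigger!r}: fired {len(p.fired_at)} times")
```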


(Follow-up post forthcoming on concretely trying to apply habit research to implementing heuristics.)

 

 

 

[Link] Rationality 101 (An Intro Post to the Rationalist-Sphere for Friends: Kahneman, LW, etc.)

8 lifelonglearner 17 December 2016 10:13PM

[Link] Ozy's Thoughts on CFAR's Mission Statement

2 Raemon 14 December 2016 04:25PM

[Link] Take the Rationality Test to determine your rational thinking style

1 Gunnar_Zarncke 09 December 2016 11:10PM

Measuring the Sanity Waterline

4 moridinamael 06 December 2016 08:38PM

I've always appreciated the motto, "Raising the sanity waterline." Intentionally raising the ambient level of rationality in our civilization strikes me as a very inspiring and important goal.

It occurred to me some time ago that the "sanity waterline" could be more than just a metaphor, that it could be quantified. What gets measured gets managed. If we have metrics to aim at, we can talk concretely about strategies to effectively promulgate rationality by improving those metrics. A "rationality intervention" that effectively improves a targeted metric can be said to be effective.

It is relatively easy to concoct or discover second-order metrics. You would expect a variety of metrics to respond to the state of ambient sanity. For example, I would expect that, all else being equal, preventable deaths should decrease when overall sanity increases, because a sane society acts effectively to prevent the kinds of things that lead to preventable deaths. But of course other factors may also cause these contingent measures to fluctuate one way or the other, so it's important to remember that these are only indirect measures of sanity.

The UN collects a lot of different types of data. Perusing their database, it becomes obvious that there are a lot of things that are probably worth caring about but which have only a very indirect relationship with what we could call "sanity". For example, one imagines that GDP would increase under conditions of high sanity, but that'd be a pretty noisy measure.

Take five minutes to think about how one might measure global sanity, and maybe brainstorm some potential metrics. Part of the prompt, of course, is to consider what we could mean by "sanity" in the first place.

~~~ THINK ABOUT THE PROBLEM FOR FIVE MINUTES ~~~

This is my first pass at brainstorming metrics which may more-or-less directly indicate the level of civilizational sanity:

  • (+) Literacy rate
  • (+) Enrollment rates in primary/secondary/tertiary education
  • (-) Deaths due to preventable disease
  • (-) QALYs lost due to preventable causes
  • (+) Median level of awareness about world events
  • (-) Religiosity rate
  • (-) Fundamentalist religiosity rate
  • (-) Per-capita spent on medical treatments that have not been proven to work
  • (-) Per-capita spent on medical treatments that have been proven not to work
  • (-) Adolescent fertility rate
  • (+) Human development index
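
As a toy illustration of how the (+)/(-) directions might be combined into a single number, here is a minimal sketch, assuming each metric has already been normalized to a 0-1 scale. The metric values and weights below are invented placeholders for illustration, not real data.

```python
# Toy "sanity index": each metric is assumed pre-normalized to [0, 1],
# with sign +1 if higher means saner and -1 if higher means less sane.
# All values and weights below are invented placeholders, not real data.
metrics = {
    "literacy_rate":              {"value": 0.86, "sign": +1, "weight": 1.0},
    "preventable_death_rate":     {"value": 0.20, "sign": -1, "weight": 1.5},
    "fundamentalist_religiosity": {"value": 0.15, "sign": -1, "weight": 1.0},
    "human_development_index":    {"value": 0.72, "sign": +1, "weight": 1.0},
}

def sanity_index(metrics):
    """Weighted average in which negatively-signed metrics are flipped (1 - value)."""
    total_weight = sum(m["weight"] for m in metrics.values())
    score = sum(
        m["weight"] * (m["value"] if m["sign"] > 0 else 1 - m["value"])
        for m in metrics.values()
    )
    return score / total_weight

print(f"Toy sanity index: {sanity_index(metrics):.2f}")  # higher = saner on this toy scale
```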

It's potentially more productive (and probably more practically difficult) to talk concretely about how best to improve one or two of these metrics via specific rationality interventions, than it is to talk about popularizing abstract rationality concepts.

Sidebar: The CFAR approach may yield something like "trickle down rationality", where the top 0.0000001% of rational people are selected and taught to be even more rational, and maybe eventually good thinking habits will infect everybody in the world from the top down. But I wouldn't bet on that being the most efficient path to raising the global sanity waterline.

As to the question of the meaning of "sanity", it seems to me that this indicates a certain basic package of rationality.

In Eliezer's original post on the topic, he seems to suggest a platform that boils down to a comprehensive embrace of probability-based reasoning and reductionism, with enough caveats and asterisks applied to that summary that you might as well go back and read his original post to get his full point. The idea was that with a high enough sanity waterline, obvious irrationalities like religion would eventually "go underwater" and cease to be viable. I see no problem with any of the "curricula" Eliezer lists in his post.

It has become popular within the rationalsphere to push back against reductionism, positivism, Bayesianism, etc. While such critiques of "extreme rationality" have an important place in the discourse, I think for the sake of this discussion, we should remember that the median human being really would benefit from more rationality in their thinking, and that human societies would benefit from having more rational citizens. Maybe we can all agree on that, even if we continue to disagree on, e.g., the finer points of positivism.

"Sanity" shouldn't require dogmatic adherence to a particular description of rationality, but it must include at least a basic inoculation of rationality to be worthy of the name. The type of sanity that I would advocate for promoting is this more "basic" kind, where religion ends up underwater, but people are still socially allowed to be contrarian in certain regards. After all, a sane society is aware of the power of conformity, and should actively promote some level of contrarianism within its population to promote a diversity of ideas and therefor avoid letting itself become stuck on local maxima.

Epistemic Effort

29 Raemon 29 November 2016 04:08PM

Epistemic Effort: Thought seriously for 5 minutes about it. Thought a bit about how to test it empirically. Spelled out my model a little bit. I'm >80% confident this is worth trying and seeing what happens. Spent 45 min writing post.

I've been pleased to see "Epistemic Status" hit a critical mass of adoption - I think it's a good habit for us to have. In addition to letting you know how seriously to take an individual post, it sends a signal about what sort of discussion you want to have, and helps remind other people to think about their own thinking.

I have a suggestion for an evolution of it - "Epistemic Effort" instead of status. Instead of "how confident you are", it's more of a measure of "what steps did you actually take to make sure this was accurate?" with some examples including:

  • Thought about it musingly
  • Made a 5 minute timer and thought seriously about possible flaws or refinements
  • Had a conversation with other people you epistemically respect and who helped refine it
  • Thought about how to do an empirical test
  • Thought about how to build a model that would let you make predictions about the thing
  • Did some kind of empirical test
  • Did a review of relevant literature
  • Ran a Randomized Control Trial
[Edit: the intention with these examples is for it to start with things that are fairly easy to do to get people in the habit of thinking about how to think better, but to have it quickly escalate to "empirical tests, hard to fake evidence and exposure to falsifiability"]

A few reasons I think this is worth doing (most of these reasons are "things that seem likely to me" but which I haven't made any formal effort to test - they come from some background in game design and reading some books on habit formation, most of which weren't very well cited):
  • People are more likely to put effort into being rational if there's a relatively straightforward, understandable path to do so
  • People are more likely to put effort into being rational if they see other people doing it
  • People are more likely to put effort into being rational if they are rewarded (socially or otherwise) for doing so.
  • It's not obvious that people will get _especially_ socially rewarded for doing something like "Epistemic Effort" (or "Epistemic Status"), but there are mild social rewards just for doing something you see other people doing, and a mild personal reward simply for doing something you believe to be virtuous (I wanted to say a "dopamine" reward, but then realized I honestly don't know if that's the mechanism - call it a "small internal brain happy feeling")
  • Less Wrong etc is a more valuable project if more people involved are putting more effort into thinking and communicating "rationally" (i.e. making an effort to make sure their beliefs align with the truth, and making sure to communicate so other people's beliefs align with the truth)
  • People range in their ability / time to put a lot of epistemic effort into things, but if there are easily achievable, well-established "low end" efforts that are easy to remember and do, this reduces the barrier for newcomers to start building good habits. Having a nice range of recommended actions can provide a pseudo-gamified structure where there's always another slightly harder step available to you.
  • In the process of writing this very post, I actually went from planning a quick, 2 paragraph post to the current version, when I realized I should really eat my own dogfood and make a minimal effort to increase my epistemic effort here. I didn't have that much time so I did a couple simpler techniques. But even that I think provided a lot of value.
Results of thinking about it for 5 minutes.

  • It occurred to me that explicitly demonstrating the results of putting epistemic effort into something might be motivational both for me and for anyone else thinking about doing this, hence this entire section. (This is sort of stream of conscious-y because I didn't want to force myself to do so much that I ended up going 'ugh I don't have time for this right now I'll do it later.')
  • One failure mode is that people end up putting minimal, token effort into things (i.e. randomly trying something on a couple of double-blinded people and calling it a Randomized Control Trial).
  • Another is that people might end up defaulting to whatever the "common" sample efforts are, instead of thinking more creatively about how to refine their ideas. I think the benefit of providing a clear path to people who weren't thinking about this at all outweighs the cost of some people ending up less agenty about their epistemology, but it seems like something to be aware of.
  • I don't think it's worth the effort to run a "serious" empirical test of this, but I do think it'd be worth the effort, if a number of people started doing this on their posts, to run an informal followup survey asking "did you do this? Did it work out for you? Do you have feedback?"
  • A neat nice-to-have, if people actually started adopting this and it proved useful, might be for it to automatically appear at the top of new posts, along with a link to a wiki entry that explained what the deal was.

Next actions, if you found this post persuasive:


Next time you're writing any kind of post intended to communicate an idea (whether on Less Wrong, Tumblr or Facebook), try adding "Epistemic Effort: " to the beginning of it. If it was intended to be a quick, lightweight post, just write it in its quick, lightweight form.

After the quick, lightweight post is complete, think about whether it'd be worth doing something as simple as "set a 5 minute timer and think about how to refine/refute the idea". If not, just write "thought about it musingly" after Epistemic Effort. If so, start thinking about it more seriously and see where it leads.

While thinking about it for 5 minutes, some questions worth asking yourself:
  • If this were wrong, how would I know?
  • What actually led me to believe this was a good idea? Can I spell that out? In how much detail?
  • Where might I check to see if this idea has already been tried/discussed?
  • What pieces of the idea might you peel away or refine to make the idea stronger? Are there individual premises you might be wrong about? Do they invalidate the idea? Does removing them lead to a different idea? 

[Link] Video using humor to spread rationality

-8 Gleb_Tsipursky 23 November 2016 02:18AM

[Link] Irrationality is the worst problem in politics

-14 Gleb_Tsipursky 21 November 2016 04:53PM

[Link] Major Life Course Change: Making Politics Less Irrational

-8 Gleb_Tsipursky 11 November 2016 03:30AM

[Link] Psychological skills

0 arunbharatula 10 November 2016 10:02AM

[Link] Raising the sanity waterline in politics

-15 Gleb_Tsipursky 08 November 2016 04:10PM

[Link] Voting is like donating hundreds of thousands to charity

-6 Gleb_Tsipursky 02 November 2016 09:22PM

[Link] Trying to make politics less irrational by cognitive bias-checking the US presidential debates

-6 Gleb_Tsipursky 22 October 2016 02:32AM

June Outreach Thread

-7 Gleb_Tsipursky 06 June 2016 01:47PM

Please share about any outreach that you have done to convey rationality and effective altruism-themed ideas broadly, whether recent or not, which you have not yet shared on previous Outreach threads. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.

Review and Thoughts on Current Version of CFAR Workshop

11 Gleb_Tsipursky 06 June 2016 01:44PM

Outline: I will discuss my background and how I prepared for the workshop, and how I would prepare differently in retrospect; then my experience at the CFAR workshop, and what I would have done differently; then my take-aways from the workshop and how I am integrating CFAR strategies into my life; and finally my assessment of its benefits and what other folks who attend the workshop might expect to get out of it.


 

Acknowledgments: Thanks to fellow CFAR alumni and CFAR staff for feedback on earlier versions of this post


 

Introduction

 

Many aspiring rationalists have heard about the Center for Applied Rationality, an organization devoted to teaching applied rationality skills to help people improve their thinking, feeling, and behavior patterns. This nonprofit does so primarily through its intense workshops, and is funded by donations and revenue from its workshop. It fulfills its social mission through conducting rationality research and through giving discounted or free workshops to those people its staff judge as likely to help make the world a better place, mainly those associated with various Effective Altruist cause areas, especially existential risk.

 

To be fully transparent: even before attending the workshop, I already had a strong belief that CFAR is a great organization and have been a monthly donor to CFAR for years. So keep that in mind as you read my description of my experience (you can become a donor here).


Preparation

 

First, some background about myself, so you know where I’m coming from in attending the workshop. I’m a professor specializing in the intersection of history, psychology, behavioral economics, sociology, and cognitive neuroscience. I discovered the rationality movement several years ago through a combination of my research and attending a LessWrong meetup in Columbus, OH, and so come from a background of both academic and LW-style rationality. Since discovering the movement, I have become an activist in the movement as the President of Intentional Insights, a nonprofit devoted to popularizing rationality and effective altruism (see here for our EA work). So I came to the workshop with some training and knowledge of rationality, including some CFAR techniques.

 

To help myself prepare for the workshop, I reviewed existing posts about CFAR materials, while being careful not to assume that the actual techniques would match their descriptions in those posts.

 

I also delayed a number of tasks until after the workshop, tying up loose ends. In retrospect, I wish I had not left myself some ongoing tasks to do during the workshop. As part of my leadership of InIn, I coordinate about 50ish volunteers, and I wish I had placed those responsibilities on someone else during the workshop.

 

Before the workshop, I worked intensely on finishing up some projects. In retrospect, it would have been better to get some rest and come to the workshop as fresh as possible.

 

There were some communication snafus with logistics details before the workshop. It all worked out in the end, but in retrospect I would have hammered out the logistics in advance so as not to experience anxiety about how to get there.


Experience

 

The classes were well put together, had interesting examples, and provided useful techniques. FYI, my experience was that reading about these techniques in advance was not harmful, but the techniques taught in the CFAR classes were quite a bit better than the existing posts about them, so don't assume you can get the same benefits from reading posts as from attending the workshop. So while I was aware of the techniques, the versions in the classes were definitely optimized - maybe because the posts suffer from a "broken telephone" effect, or maybe because CFAR has optimized the techniques across previous workshops; I'm not sure. I was glad to learn that CFAR considers the workshop they gave us in May satisfactory enough to scale up their workshops, while still improving the content over time.

 

Just as useful as the classes were the conversations held in between and after the official classes ended. Talking about them with fellow aspiring rationalists and seeing how they were thinking about applying these to their lives was helpful for sparking ideas about how to apply them to my life. The latter half of the CFAR workshop was especially great, as it focused on pairing off people and helping others figure out how to apply CFAR techniques to themselves and how to address various problems in their lives. It was especially helpful to have conversations with CFAR staff and trained volunteers, of whom there were plenty - probably about 20 volunteers/staff for the 50ish workshop attendees.

 

Another super-helpful aspect of the conversations was networking and community building. Now, this may have been more useful to some participants than others, so YMMV. As an activist in the movement, I talked to many folks at the CFAR workshop about promoting EA and rationality to a broad audience. I was happy to introduce some people to EA, with my most positive conversation there being encouraging someone to switch his x-risk efforts from nuclear disarmament to AI safety research as a means of addressing long/medium-term risk, and to promoting rationality as a means of addressing short/medium-term risk. Others who were already familiar with EA were interested in ways of promoting it broadly, while some aspiring rationalists expressed enthusiasm over becoming rationality communicators.

 

Looking back at my experience, I wish I had been more aware of the benefits of these conversations. I went to sleep early the first couple of nights; in retrospect, I would have taken supplements to stay awake and have conversations instead.


Take-Aways and Integration

 

The aspects of the workshop that I think will help me most were what CFAR staff called "5-second" strategies - brief tactics and techniques that can be executed in 5 seconds or less to address various problems. The techniques we learned at the workshop that I was already familiar with, such as Trigger Action Plans, Goal Factoring, Murphyjitsu, and Pre-Hindsight, require some time to learn and practice, often with pen and paper as part of the work. However, with sufficient practice, one can develop brief techniques that mimic various aspects of the more thorough ones, and apply them quickly to in-the-moment decision-making.

 

Now, this doesn't mean that the longer techniques are not helpful. They are very important, but they are things I was already generally familiar with and already practice. The 5-second versions were more of a revelation for me, and I anticipate they will be more helpful, as I did not know about them previously.

 

Now, CFAR does a very nice job of helping people integrate the techniques into daily life, since going home and not practicing the techniques is a common failure mode for CFAR attendees. So they hold 6 Google Hangouts with CFAR staff and all attendees who want to participate, offer 4 one-on-one sessions with CFAR-trained volunteers or staff, and also pair you with another attendee for post-workshop conversations. I plan to take advantage of all of these, although my pairing did not work out.

 

For integrating CFAR techniques into my life, I found the CFAR strategy of "Overlearning" especially helpful. Overlearning refers to applying a single technique intensely for a while to all aspects of one's activities, so that it gets internalized thoroughly. I will first focus on overlearning Trigger Action Plans, following the advice of CFAR.

 

I also plan to teach CFAR techniques in my local rationality dojo, as teaching is a great way to learn, naturally.

 

Finally, I plan to integrate some CFAR techniques into Intentional Insights content, at least the simpler techniques that are a good fit for the broad audience with which InIn communicates.


Benefits

 

I have a strong probabilistic belief that having attended the workshop will improve my capacity to be a person who achieves my goals for doing good in the world. I anticipate I will be able to figure out better whether the projects I am taking on are the best uses of my time and energy. I will be more capable of avoiding procrastination and other forms of akrasia. I believe I will be more capable of making better plans, and acting on them well. I will also be more in touch with my emotions and intuitions, and be able to trust them more, as I will have more alignment among different components of my mind.

 

Another benefit is meeting the many other people at CFAR who have similar mindsets. Here in Columbus, we have a flourishing rationality community, but it’s still relatively small. Getting to know 70ish people, attendees and staff/volunteers, passionate about rationality was a blast. It was especially great to see people who were involved in creating new rationality strategies, something that I am engaged in myself in addition to popularizing rationality - it’s really heartening to envision how the rationality movement is growing.

 

These benefits should resonate strongly with aspiring rationalists, but they are really important for EA participants as well. I think one of the best things that EA movement members can do is to study rationality, and it's something we promote to the EA movement as part of InIn's work. What we offer is articles and videos, but coming to a CFAR workshop is a much more intense and cohesive way of getting these benefits. Imagine all the good you can do for the world if you are better at planning, organizing, and enacting EA-related tasks. Rationality is what has helped me and other InIn participants make the major impact that we have been able to make, and there are a number of EA movement members who have rationality training and who report similar benefits. Remember, as an EA participant, you can likely get a scholarship covering part or all of the regular $3900 price of the workshop, as I did myself when attending, and you are highly likely to be able to save more lives over time as a result of attending the workshop, even if you have to pay some costs upfront.

 

Hope these thoughts prove helpful to you all, and please contact me at gleb@intentionalinsights.org if you want to chat with me about my experience.

 

rationalfiction.io - publish, discover, and discuss rational fiction

7 rayalez 31 May 2016 12:02PM

Hey, everyone! I want to share with you a project I've been working on for a while - http://rationalfiction.io.

I want it to become the perfect place to publish, discover, and discuss rational fiction.

We already have a lot of awesome stories, and I invite you to join and post more! =)

May Outreach Thread

-2 Gleb_Tsipursky 06 May 2016 08:02PM

Please share about any outreach that you have done to convey rationality-style ideas broadly, whether recent or not, which you have not yet shared on previous Outreach threads. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.

Collaborative Truth-Seeking

11 Gleb_Tsipursky 04 May 2016 11:28PM

Summary: We frequently use debates to resolve differences of opinion about the truth. However, debates are not always the best way of figuring out the truth. In some situations, the technique of collaborative truth-seeking may work better.

 

Acknowledgments: Thanks to Pete Michaud, Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Szun S. Tay, Alfredo Parra, Michael Estes, Aaron Thoma, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.

 

The Problem with Debates

 

Aspiring rationalists generally aim to figure out the truth, and often disagree about it. The usual method of hashing out such disagreements in order to discover the truth is through debates, in person or online.

 

Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than prioritizing truth discovery. Indeed, research suggests that debates have a specific evolutionary function: not to discover the truth, but to ensure that our perspective prevails within a tribal social context. No wonder debates are often compared to wars.

 

We may hope that as aspiring rationalists, we would strive to discover the truth during debates. Yet given that we are not always fully rational and strategic in our social engagements, it is easy to slip up within debate mode and orient toward winning instead of uncovering the truth. Heck, I know that I sometimes forget in the midst of a heated debate that I may be the one who is wrong – I’d be surprised if this didn’t happen with you. So while we should certainly continue to engage in debates, we should also use additional strategies – less natural and intuitive ones. These strategies could put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such solution is a mode of engagement called collaborative truth-seeking.


Collaborative Truth-Seeking

 

Collaborative truth-seeking is one way of describing a more intentional approach in which two or more people with different opinions engage in a process that focuses on finding out the truth. Collaborative truth-seeking is a modality that should be used among people with shared goals and a shared sense of trust.

 

Some important features of collaborative truth-seeking, which are often not present in debates, are: focusing on a desire to change one's own mind toward the truth; a curious attitude; being sensitive to others' emotions; striving to avoid arousing emotions that will hinder belief updating and truth discovery; and a trust that all other participants are doing the same. These can contribute to increased social sensitivity, which, together with other attributes, correlates with higher group performance on a variety of activities.

 

The process of collaborative truth-seeking starts with establishing trust, which will help increase social sensitivity, lower barriers to updating beliefs, increase willingness to be vulnerable, and calm emotional arousal. The following techniques are helpful for establishing trust in collaborative truth-seeking:

  • Share weaknesses and uncertainties in your own position

  • Share your biases about your position

  • Share your social context and background as relevant to the discussion

    • For instance, I grew up poor after my family immigrated to the US when I was 10, and this naturally influences me to care about poverty more than some other issues, and to have some biases around it - this is one reason I prioritize poverty in my Effective Altruism engagement

  • Vocalize curiosity and the desire to learn

  • Ask the other person to call you out if they think you're getting emotional or engaging in emotive debate instead of collaborative truth-seeking, and consider using a safe word



Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:

  • Self-signal: signal to yourself that you want to engage in collaborative truth-seeking, instead of debating

  • Empathize: try to empathize with the other perspective that you do not hold by considering where their viewpoint came from, why they think what they do, and recognizing that they feel that their viewpoint is correct

  • Keep calm: be prepared with emotional management to calm your emotions and those of the people you engage with when a desire for debate arises

    • watch out for defensiveness and aggressiveness in particular

  • Go slow: take the time to listen fully and think fully

  • Consider pausing: have an escape route for complex thoughts and emotions if you can’t deal with them in the moment by pausing and picking up the discussion later

    • say “I will take some time to think about this,” and/or write things down

  • Echo: paraphrase the other person’s position to indicate and check whether you’ve fully understood their thoughts

  • Be open: orient toward improving the other person’s points to argue against their strongest form

  • Stay the course: be passionate about wanting to update your beliefs, maintain the most truthful perspective, and adopt the best evidence and arguments, no matter whether they are yours or those of others

  • Be diplomatic: when you think the other person is wrong, strive to avoid saying "you're wrong because of X" but instead to use questions, such as "what do you think X implies about your argument?"

  • Be specific and concrete: go down levels of abstraction

  • Be clear: make sure the semantics are clear to all by defining terms

  • Be probabilistic: use probabilistic thinking and probabilistic language, to help get at the extent of disagreement and be as specific and concrete as possible

    • For instance, avoid saying that X is absolutely true, but say that you think there's an 80% chance it's the true position

    • Consider adding what evidence and reasoning led you to believe so, for both you and the other participants to examine this chain of thought

  • When people whose perspective you respect fail to update their beliefs in response to your clear chain of reasoning and evidence, update somewhat toward their position, since that is evidence that your position is not as convincing as you thought

  • Confirm your sources: look up information when it's possible to do so (Google is your friend)

  • Charity mode: strive to be more charitable to others and their expertise than seems intuitive to you

  • Use the reversal test to check for status quo bias

    • If you are discussing whether to change some specific numeric parameter - say increase by 50% the money donated to charity X - state the reverse of your positions, for example decreasing the amount of money donated to charity X by 50%, and see how that impacts your perspective

  • Use CFAR’s double crux technique

    • In this technique, two parties who hold different positions on an argument each write down the fundamental reason for their position (the crux of their position). This reason has to be the key one, such that if it were proven incorrect, they would change their perspective. Then, look for experiments that can test each crux. Repeat as needed. If a person identifies more than one reason as crucial, you can go through each in turn (see the sketch below). More details are here.
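
As a rough sketch of the shape of that loop (just one way to model it, not CFAR's actual procedure), the structure might look like the following, where the participant objects and their methods are hypothetical names invented for illustration:

```python
def double_crux(a, b, find_and_run_test, max_rounds=5):
    """Toy model of the double crux loop, not CFAR's actual procedure.
    `a` and `b` are hypothetical participant objects with:
      .position        -- the conclusion they currently hold
      .crux()          -- the belief that, if overturned, would change that position
      .update(result)  -- revise position and crux in light of a test result
    `find_and_run_test(crux_a, crux_b)` looks for an observation or experiment
    bearing on either crux and returns its result, or None if no test is found.
    """
    for _ in range(max_rounds):
        if a.position == b.position:
            return a.position      # agreement reached
        result = find_and_run_test(a.crux(), b.crux())
        if result is None:
            return None            # no test available; pause and revisit later
        a.update(result)
        b.update(result)
    return None
```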


Of course, not all of these techniques are necessary for high-quality collaborative truth-seeking. Some are easier than others, and different techniques apply better to different kinds of truth-seeking discussions. You can apply some of these techniques during debates as well, such as double crux and the reversal test. Try some out and see how they work for you.


Conclusion

 

Engaging in collaborative truth-seeking goes against our natural impulses to win in a debate, and is thus more cognitively costly. It also tends to take more time and effort than just debating. It is also easy to slip into debate mode even when using collaborative truth-seeking, because of the intuitive nature of debate mode.

 

Moreover, collaborative truth-seeking need not replace debates at all times. This non-intuitive mode of engagement can be chosen when discussing issues that relate to deeply-held beliefs and/or ones that risk emotional triggering for the people involved. Because of my own background, I would prefer to discuss poverty in collaborative truth-seeking mode rather than debate mode, for example. On such issues, collaborative truth-seeking can provide a shortcut to resolution, in comparison to protracted, tiring, and emotionally challenging debates. Likewise, using collaborative truth-seeking to resolve differing opinions on all issues holds the danger of creating a community oriented excessively toward sensitivity to the perspectives of others, which might result in important issues not being discussed candidly. After all, research shows the importance of having disagreement in order to make wise decisions and to figure out the truth. Of course, collaborative truth-seeking is well suited to expressing disagreements in a sensitive way, so if used appropriately, it might permit even people with triggers around certain topics to express their opinions.

 

Taking these caveats into consideration, collaborative truth-seeking is a great tool to use to discover the truth and to update our beliefs, as it can get past the high emotional barriers to altering our perspectives that have been put up by evolution. Rationality venues are natural places to try out collaborative truth-seeking.

 

 

 

Monthly Outreach Thread

0 Gleb_Tsipursky 17 April 2016 11:18PM

Please share about any outreach that you have done to convey rationality-style ideas broadly, whether recent or not, which you have not yet shared on previous Outreach threads. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.

[Link] Op-Ed on Brussels Attacks

-6 Gleb_Tsipursky 02 April 2016 05:38PM

Trigger warning: politics is hard mode.


"How to you make America safer from terrorists" is the title of my op-ed published in Sun Sentinel, a very prominent newspaper in Florida, one of the most swingiest of the swing states in the US for the presidential election, and the one with the most votes. The maximum length of the op-ed was 450 words, and it was significantly edited by the editor, so it doesn't convey the full message I wanted with all the nuances, but such is life. My primary goal with the piece was to convey methods of thinking more rationally about politics, such as to use probabilistic thinking, evaluating the full consequences of our actions, and avoiding attention bias. I used the example of the proposal to police heavily Muslim neighborhoods as a case study. Hope this helps Floridians think more rationally and raises the sanity waterline regarding politics!

 

 

EDIT: To be totally clear, I used guesstimates for the numbers I suggested. Following Yvain/Scott Alexander's advice, I prefer to use guesstimates rather than vague statements.

[Video] The Essential Strategies To Debiasing From Academic Rationality

1 Gleb_Tsipursky 27 March 2016 03:04AM
A lifetime of work by a world expert in debiasing boiled down into four broad strategies in this video. A nice approach to this topic from the academic side of rationality.

Disclosure - the academic is Dr. Hal Arkes, a personal friend and Advisory Board Member of Intentional Insights, which I run.

EDIT: Seems like the sound quality is low. Anyone willing to do a transcript of this video as a volunteer activity for the rationality community? We can then subtitle the video.

Outreach Thread

6 Gleb_Tsipursky 06 March 2016 10:18PM

Based on an earlier suggestion, here's an outreach thread where you can leave comments about any recent outreach that you have done to convey rationality-style ideas broadly. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.


 

Religious and Rational?

3 Gleb_Tsipursky 09 February 2016 08:12PM

Reverend Caleb Pitkin, an aspiring rationalist and United Methodist Minister, wrote an article about combining religion and rationality which was recently published on the Intentional Insights blog. He's the only Minister I know who is also an aspiring rationalist, so I thought it would be an interesting piece for Less Wrong as well. Besides, it prompted an interesting discussion on the Less Wrong Facebook group, so I thought some people here who don't look at the Facebook group might be interested in checking it out as well. Caleb does not have enough karma to post, so I am posting it on his behalf, but he will engage with the comments.

______________________________________________________________________________

 

Religious and Rational?

 

“Wisdom shouts in the street; in the public square she raises her voice.”

Proverbs 1:20 Common English Bible

The Biblical book of Proverbs is full of imagery of wisdom personified as a woman calling and exhorting people to come to her and listen. The wisdom contained in Proverbs is not just spiritual; it also includes a large amount of practical wisdom and advice. What might the wisdom of Proverbs and rationality have in common? The wisdom literature in scripture was meant to help people make better and more effective decisions. In today's complex and rapidly changing world we have the same need for tools and resources to help us make good decisions. One great source of wisdom is methods of better thinking that are informed by science.

Now, not everyone would agree with comparing the wisdom of Proverbs with scientific insights. Doing so may not sit well with some in the secular rationality community who view all religion as inherently irrational and as hindering clear thinking. It also might not sit well with some in my own religious community who are suspicious of scientific thinking as undermining traditional faith. While it would take a much longer piece to completely defend either religion or secular rationality, I'm going to try to demonstrate some ways that rationality is useful for a religious person.

The first way that rationality can be useful for a religious person is in the living of our daily lives. We are faced with tasks and decisions each day in which we try to do our best. Learning to recognize common logical fallacies or other biases, like those that cause us to fail to understand other people, will improve our decision making as much as it improves the thinking of non-religious people. For example, a mother driving her kids to Sunday School might benefit from avoiding the thought that the person who cuts her off is definitely a jerk, one common type of thinking error. Someone doing volunteer work for their church could be more effective if they avoid problematic communication with other volunteers. This use of rationality to lead our daily lives in the best way is one that most would find fairly unobjectionable. It's easy to say that the way we all achieve our personal goals and objectives could be improved, and that we can all gain greater agency.

Rationality can also be of use in theological commentary and discourse. Many of the theological and religious greats used the available philosophical and intellectual tools of their day to examine their faith. Examples of this include John Wesley, Thomas Aquinas, and even the Apostle Paul when he debated Epicurean and Stoic philosophers. They also made sure that their theologies were internally rational and logical. This means that, from the perspective of a religious person, keeping up with rationality can help with the pursuit of a deeper understanding of our faith. For a secular person, acknowledging the ways in which religious people use rationality within their worldview may be difficult, but it can help to build common ground. The starting point is different. Secular people start with the faith that they can trust their sensory experience. Religious people start with conceptions of the divine. Yet, after each starting point, both seek to proceed in a rational, logical manner.

It is not just our personal lives that can be improved by rationality; it's also the ways in which we interact with communities. One of the goals of many religious communities is to make a positive impact on the world around them. When we work to do good in community, we want that work to be as effective as possible. Often when we work in community we find that we are not meeting our goals or having the kind of significant impact that we wish to have. In my experience this is often due to a failure to really examine and gather the facts on the ground. We set off full of good intentions but with limited resources and time. Rational examination helps us to figure out how to match our good intentions with our limited resources in the most effective way possible. For example, as the Pastor of two small churches, I find that money and people power can be in short supply. So when we examine all the needs of our community, we have to acknowledge that we cannot begin to meet all or even most of them. So we take one issue, hunger, and devote our time and resources to having one big impact on that issue, as opposed to trying to do a little bit to alleviate a lot of problems.

One other way that rationality can inform our work in the community is in recognizing that part of what a scarcity of resources means is that we need to work together with others in our community. The inter-faith movement has done a lot of good work in bringing together people of faith to work on common goals. This has meant setting aside traditional differences for the sake of shared goals. Let us examine the world we live in today, though. The number of nonreligious people is on the rise, and there is every indication that it will continue to grow. On the other hand, religion does not seem to be going anywhere either, which is good news for a pastor. Looking at this situation, the rational thing to do is to work together, for religious people to build bridges toward the non-religious and vice versa.

Wisdom still stands on the street, calling and imploring us to be improved, not in the form of rationalist street preachers (though that idea has a certain appeal), but in the form of the growing number of tools being offered to help us improve our capacity for logic and reasoning and to take part in the world we live in.

Everyone wants to make good decisions.  This means that everyone tries to make rational decisions.  We all try but we don’t always hit the mark.  Religious people seek to achieve their goals and make good decisions.  Secular people seek to achieve their goals and make good decisions.  Yes, we have different starting points and it’s important to acknowledge that.  Yet, there are similarities in what each group wants out of their lives and maybe we have more in common than we think we do.

On a final note it is my belief that what religious people and what non-religious people fear about each other is the same thing.  The non-religious look at the religious and say God could ask them to do anything... scary.  The religious look at the non-religious and say without God they could do anything... scary.  If we remember though that most people are rational and want to live a good life we have less to be scared of, and are more likely to find common ground.

____________________________________________________________________________________________________________

 

Bio: Caleb Pitkin is a Provisional Elder with the United Methodist Church, appointed to Signal Mountain United Methodist Church. Caleb is a huge fan of the theology of John Wesley, which asks that Christians use reason in their faith journey. This helped lead Caleb to Rationality and to participation in Columbus Rationality, a Less Wrong meetup that is part of the Humanist Community of Central Ohio. Through that, Caleb got involved with Intentional Insights. Caleb spends his time trying to live a faithful and rational life.

Conveying rational thinking about long-term goals to youth and young adults

8 Gleb_Tsipursky 07 February 2016 01:54AM
More than a year ago, I discussed here how we at Intentional Insights intended to convey rationality to young adults through our collaboration with the Secular Student Alliance. This international organization unites over 270 clubs at colleges and high schools in English-speaking countries, mainly the US, with its clubs spanning from a few students to a few hundred students. The SSA's Executive Director is an aspiring rationalist and CFAR alum who is on our Advisory Board.

Well, we've been working on a project with the SSA for the last 8 months to create and evaluate an event aimed at helping its student members figure out and orient toward the long term, thus both fighting Moloch on a societal level and helping them become more individually rational (the long-term perspective is couched in the language of finding purpose using science). It's finally done, and here is the link to the event packet. The SSA will be distributing this packet broadly, but in the meantime, if you have any connections to secular student groups, consider encouraging them to hold this event. The event would also fit well for adult secular groups with minor editing, in case any of you are involved with them. It's also easy to strip the secular language from the packet, and just have it as an event for a philosophy/science club of any sort, at any level from youth to adult. Although I would prefer that you cite Intentional Insights when you do it, I'm comfortable with you not doing so if circumstances don't permit it for some reason.

We're also working on similar projects with the SSA, focusing on being rational in the area of giving and thus promoting Effective Altruism. I'll post them here when they're ready.

[Link] How I Escaped The Darkness of Mental Illness

5 Gleb_Tsipursky 04 February 2016 11:08PM
A deeply personal account by aspiring rationalist Agnes Vishnevkin, who shares a broad overview of how she used rationality-informed strategies to recover from mental illness. She will also appear on the Unbelievers Radio podcast today, live at 10:30 PM EST (UTC-5), together with JT Eberhard, to speak about mental illness and recovery.

**EDIT** Based on feedback from gjm below, I want to clarify that Agnes is my wife and fellow co-founder of Intentional Insights.


[Link] Huffington Post article about dual process theory

9 Gleb_Tsipursky 06 January 2016 01:44AM

Published a piece in The Huffington Post popularizing dual-process theory in layman's language.

 

P.S. I know some don't like using terms like Autopilot and Intentional to describe System 1 and System 2, but I find from long experience that these terms resonate well with a broad audience. Also, I know dual process theory is criticized by some, but we have to start somewhere, and just explaining dual process theory is a way to start bridging the inference gap to higher meta-cognition.

Forecasting and recursive inhibition within a decision cycle

1 [deleted] 20 December 2015 05:37AM

When we anticipate the future, we have the opportunity to inhibit behaviours which we anticipate will lead to counterfactual outcomes. Those of us with sufficiently short decision-cycle latencies may recursively anticipate the consequences of our counterfactuating (a neologism) interventions, and then intervene against those interventions in turn.

This may be difficult for some. Try modelling that decision cycle as a nano-scale approximation of time travel. One relevant paradox from popular culture is the "farther future" paradox depicted in the TV cartoon Family Guy.

Watch this clip: https://www.youtube.com/watch?v=4btAggXRB_Q

Relating the satire back to our abstraction of the decision cycle, one may ponder:

What is a satisfactory stopping rule for the far anticipation of self-referential consequence?

That is:

(1) What are the inherent harms of the inhibiting actions in and of themselves (e.g. stress)?

(2) What are their inherent merits (e.g. self-determination)?

and (3) What are the favourable and unfavourable consequences x points into the future, given y points of self-reference at points z, a, b and c?

I see no ready solution to this problem in terms of human rationality, and no treatment of the corresponding problem in artificial intelligence, where it would also apply. This seems relevant to MIRI (since CFAR doesn't seem to work on open problems in the same way).
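
To make the stopping-rule question concrete, here is a minimal toy sketch of one possible answer (my own illustration, not from the post; the benefit and cost functions and all the numbers are invented assumptions): keep adding layers of anticipation only while the marginal expected benefit of another layer exceeds a fixed per-layer cost such as stress.

```python
# Toy sketch of a stopping rule for recursive anticipation (illustrative only).
# Assumptions: each extra layer of anticipation yields a diminishing benefit
# and carries a fixed "stress" cost; both functions and numbers are made up.

def value_of_anticipation(depth, gain_per_level=1.0, decay=0.5):
    """Expected benefit of anticipating one level deeper (diminishing returns)."""
    return gain_per_level * (decay ** depth)

def cost_of_anticipation(depth, stress_per_level=0.2):
    """Assumed inherent cost of each extra layer of self-referential inhibition."""
    return stress_per_level

def depth_to_stop(max_depth=20):
    """Add layers of anticipation while the marginal benefit exceeds the cost."""
    depth = 0
    while depth < max_depth and value_of_anticipation(depth) > cost_of_anticipation(depth):
        depth += 1
    return depth

if __name__ == "__main__":
    # With these numbers the rule stops after 3 levels of self-reference.
    print("Stop anticipating after", depth_to_stop(), "levels of self-reference")
```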

I would also like to take this opportunity to open this as an experimental thread for the community to generate a list of "open problems" in human rationality that are otherwise scattered across the community blog and wiki.

[Link] Video of a presentation by Hal Arkes, one of the top world experts in debiasing, on dealing with the hindsight bias and overconfidence

1 Gleb_Tsipursky 14 December 2015 06:16PM

Here's a video of a presentation by Hal Arkes, one of the top world experts in debiasing, Emeritus Professor at Ohio State, and Intentional Insights Advisory Board member, on dealing with hindsight bias and overconfidence. This was at a presentation hosted by Intentional Insights and the Columbus, OH Less Wrong group. It received high marks from local Less Wrongers, so I thought I'd share it here.


Rationalist Magic: Initiation into the Cult of Rationatron

7 wizard 07 December 2015 09:39PM

I am curious about the perspective of a rationalist discussion board (this seems like a good start) on the practice of magic. I introduce the novel, genius concept of "rationalist magic", i.e. magic practiced by rationalists.


Why would rationalists practice magic? That makes no sense!!

It's the logical conclusion to Making Peace with Belief, Engineering Religion and the self-help threads.

What would that look like?

Good question. Here are some possible considerations to make:

  • It's given low probability that magic is more than purely mental phenomena. The practice is called "placebomancy" to make it clear that such an explanation is favoured.
  • It is practiced as a way to gain placebons.
  • A cult of rationalist magic, the Cult of Rationatron, should be formed to compete against worse (anti-rationality, anti-science, violent) cults.
  • Rationalist groups can retain more members due to abundance of placebons.
  • The ultimate goal is to use rationality to devise a logically optimal system of magic with which to build the Philosopher's Stone and fix the world like in HPMOR. (Just kidding, magic isn't real.)

I looked into magical literature and compiled a few placebo techniques/exercises, along with their informal instructions. These might be used as a starting point. If there are any scientific errors these can eventually be corrected. I favoured techniques that can be done with little to no preparation and provide some results. Of course, professional assistance (e.g. yoga classes) can also be helpful.


1. Mindfulness Meditation

  • (Optional) Do 1-3 deep mouth-exhales to relax.
  • Find a good position.
  • Begin by being aware of breath.
  • (Optional) Move on to calmly observing different parts of the body, vision, the other senses, thoughts, mandala visualization, and so on.
  • (Optional) Compare the experience to teachings of buddhism.
  • (Optional) Say "bud-" for each inhale and "-dho" for each exhale; alternatively, count them from 1 to 10 and reset each time.

Note that trying to focus on not focusing isn't always helpful. Hence the common technique is that of focusing on a single thing, or rather, simply being passively aware of it. The goal is to discipline the mind and develop one-pointedness.

2. Astral Projection (OBE)

  • Lay on a bed, relax.
  • Try out some of the tips from the previous exercise.
  • Stay there for 30-60 minutes until you reach the hypnagogic state (a state where the mind is awake but the body is sleeping) and try to (a) feel your astral body and grab a rope, (b) feel vibrations in your body, (c) roll out of the body, (d) (...).

Astral Projection can be thought of as a very vivid stage of dreaming. Some authors have more detailed exercises related to this[7]. It might take many tries to do this exercise right.

3. Mantras

You can borrow an eastern mantra such as "om mani padme hum", "om namah shivaya" and "hare kṛiṣhṇa hare kṛiṣhṇa / kṛiṣhṇa kṛiṣhṇa hare hare / hare rāma hare rāma / rāma rāma hare hare" or make up some phrase. Whatever works for you.

Chanting mantras is a form of sensory excitation. Both sensory excitation and deprivation can induce trance. This can be used along with exercise 1.

(Optional) Find one of these.

4. Contemplation

  • Take a moment to contemplate the harmony of the universe and/or have love towards some/all beings.

5. Idol Worship

  • Make a shrine dedicated to Rationatron, god of rationality.

This and this are possible forms of Rationatron.

6. Minimal Spellcasting

  • Make a wish.
  • Clear your mind.
  • Take a deep, long breath imagining that as you exhale the wish is being registered into the universe by Rationatron.
  • Forget about the wish.

Magicians claim it's more magically effective to forget about the wish after casting the spell (fourth step) and let your subconscious act than to use the repeat-your-wish-every-day method.

This is, as far as I know, the simplest spellcasting technique. It can be complicated further by the addition of rituals, sigils, poses[6], and stronger methods of inducing trance in the second step.

7. Deity Generator

  • Take any arbitrary concept or amalgam of concepts.
  • (Optional) Associate it to a colour.
  • (Optional) Make it a new deity.

For one reason or another, spiritualists love doing this.

This exercise can make arbitrary "powers" or "deities" for use in other exercises. For example, the association "yellow - wealth" is a "power" for exercise 6, or you might imagine yourself as a "deity" in that same exercise to induce some emotion.

8. Tulpa Making

This is a technique found in Tibetan Buddhist lore[5] as an ability held by bodhisattvas and used by the Buddha to multiply himself. It was adopted by communities of Westerners on the internet who generally don't attribute mystical properties to the practice and have made detailed tutorials (tulpa.info).

The technique uses your subconscious to create a companion. It consists in visualizing and talking to a being in your imagination until it eventually produces unexpected thoughts.

You might be asking, "can I model this companion after a cartoon?" The answer is yes.


Note: some of the following techniques might require further examination.

9. Aura Sight

Some authors[1,2,4] give exercises attributed to peripheral vision or meditation. If anyone finds out how to see auras, please confirm.

10. Invoking and Banishing Ritual of Truth

  • Imagine a circle of protection surrounding you.
  • (Invoking) "I open/invoke the powers of p, q, ¬p, ¬q."
  • (Banishing) "I close/revoke/banish the powers of ¬q, ¬p, q, p."
  • (Optional) This can be performed solely in the imagination through visualization or with more realism added to different degrees inbetween (e.g. by making a real circle, pointing a sword or your hand to the four directions).

This has been used for different purposes; as an introduction to ritual work or simply as routine.

The common formulation of this exercise uses a pentagram, holy names, the elements, planets and a bunch of other nonsense. Why do I have to remember all this roleplaying? Do they think this is D&D? Therefore, I designed a more efficient version of the technique that also replaces the magical symbolism with superior logic symbolism.

Note: these are roughly analogous[3] to the simpler placebo techniques known as shielding (imagining a shield), centering (regaining focus by being aware of the solar plexus/heart area) and grounding (putting your feet on the ground to receive/release energy from/to the ground).

11. Demonic Mirror Summoning

Call upon Dark Lord Voldemort and say the "avada kedavra" mantra 7+ times in front of a mirror in a dimly lit room until a demon pops up and/or your appearance gets distorted.

Some individuals report holding a conversation with their mirror self through mirror exercises.


FAQ

What is the purpose of this?

It's about time someone made an atheist religion.

Why not follow the Flying Spaghetti Monster religion instead?

It doesn't provide placebo techniques. It only functions as a point in argumentation.

Do I have to do all of the exercises?

No, only those that you personally deem helpful. However, the first exercise (meditation) is generally recommended by health research. It's also a prerequisite to many other exercises. Note: although meditation is generally recommended, some caution, common sense and preparation are advised (especially for exercises 2-3).

What are the teachings?

It's acknowledged that rational people can sometimes reach different conclusions. Therefore, there are no mandatory teachings. However, it uses "rationality" as a starting point to distinguish it from other cults, meaning that "placebo" is used as the default model of magic and that both logic and the use of such techniques are encouraged. It can serve as a gathering of placebo techniques for atheists and as a blank slate, free from the dogma of already existing cults on the nature of magic.

What is the pantheon of this religion?

The "official" pantheon is that of the universe itself (Einstein's pantheism; it's used in exercises 4 and 6), Rationatron (a deity of rationality) and Dark Lord Voldemort (the opposer). They fulfill different god-roles. More gods can be created with the Deity Generator exercise or borrowed.

Can I worship Eris, Cthulhu or Horus/Isis/Odin...?

Yes, see above answer.

Wait, Dark Lord Voldemort? Really?

Christianity had lazier ways to come up with their demons and nobody noticed. Zing.

Aren't some of those techniques irrational?

Only when used by superstitious people. Once used by rationalists, they become super-rational.

What about black magic? Can I cast hexes?

They aren't going to work because magic is not real.


References

1. Frater U.D. High Magick. A good overview of different kinds of magic.

2. Hine, Phil. Spirit Guides. Another overview.

3. Hine, Phil. Modern Shamanism, parts 1-2. An overview for shamans.

4. Sagan, Samuel. Awakening the Third Eye.

5. David-Neel, Alexandra. Magic and Mystery in Tibet. A book on Buddhist lore.

6. Crowley, Aleister. Liber O.

7. Bruce, Robert. Mastering Astral Projection.

Engineering Religion

2 KevinGrant 07 December 2015 01:34PM

This topic is vague and open-ended.  I'm leaving it that way deliberately.  Perhaps some interesting, better defined topics will grow out of it.  Or perhaps it's too far afield from the concept of less wrong cognition to be of interest here.  So I view this topic as exploratory rather than as an attempt to solve a specific problem.

What useful purposes does religion serve?  Are any of these purposes non-supernaturalistic in nature?  What is success for a religion, and what elements of a religion tend to cause it to become successful?  How would you design a "rational religion", if such an entity is possible?  How and why would a religion with that design become successful and serve a useful purpose?  What are the relationships between aspects of a religion, and outcomes involving that religion?  For example, Catholicism discourages birth control.  Lack of birth control encourages higher birthrates among Catholics.  This encourages there to be a larger number of Catholics in the next generation than would otherwise be the case.  Surely there are other relationships like this?  How do aspects of religions cause them to evolve differently over time?
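
As a toy illustration of the last relationship (a minimal sketch with invented numbers, not data about any real denomination), a simple generational projection shows how a structural feature like discouraging birth control compounds into membership differences:

```python
# Toy projection of membership growth (illustrative only; all numbers invented).

def members_after(generations, start=1000, children_per_member=1.2, retention=0.8):
    """Project membership assuming each member has `children_per_member` children
    and a fraction `retention` of those children stay in the religion."""
    members = start
    for _ in range(generations):
        members *= children_per_member * retention
    return members

if __name__ == "__main__":
    # Hypothetical comparison: higher birthrate vs. lower birthrate, 5 generations.
    print(round(members_after(5, children_per_member=1.5)))  # ~2488
    print(round(members_after(5, children_per_member=1.0)))  # ~328
```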

Playing offense

-4 artemium 30 November 2015 01:59PM

There is a pattern I have noticed that appears whenever some new interesting idea or technology gains media traction above a certain threshold. Those who are considered opinion-makers (journalists, intellectuals) more often than not write about this new movement/idea/technology in a way that is somewhere between cautious and negative. As this happens, those who adopted these new ideas/technologies become somewhat wary of promoting them and, fearing low status, decide to retreat from their public advocacy.


I wonder whether, in some circumstances, the right move for those getting the negative attention is not to defend themselves but to go on the offensive. One of the most interesting offensive tactics might be to reverse the framing of the subject and put the burden of argument on the critics in a way that requires them to seriously reconsider their position:

  • Those who are critical of this idea are actually doing something wrong and they are unable to see the magnitude of their mistake 
  • The fact that they are not adopting this idea/product has a big cost that they are not aware of, and in reality they are the ones making a weird choice  
  • They have already adopted that idea/position but don't notice it, because it is not framed in a context they understand or find comfortable
In all of these cases the critic is usually stuck in a status competition that prevents them from analysing the situation objectively, and additionally they feel safety in numbers, since a lot of people are criticising the idea in the same way.

So let's start with Facebook.

When Facebook was expanding rapidly and was predicted to dominate the social media market (2008-2010), it became one of the most talked-about subjects in the public sphere. The usual attitude towards Facebook from the "intellectual" press, and from almost everyone who considered himself an independent thinker, was negative. Facebook was this massive corporate behemoth assimilating people into its creepy virtual world full of pokes, likes and Farmvilles. It didn't help that its CEO was the guy who had "I'm CEO, bitch" written on his business card and walked to business meetings in his flip-flops.

I remember endless articles with titles like "Facebook is killing real-life friendships", "Facebook is a creepy corporate product that wants to diminish your identity", "Why I am not on Facebook", "I left Facebook and it was the best decision ever!" At that time, saying "Actually, I am not on Facebook" was a sure way to gain status, as someone who refuses to become another FB drone. And those who were on Facebook always felt the need to apologize for their decision: "Yeah, Facebook sucks, I am only there to stay in touch with my high school pals."

The climax was reached with The Social Network, a movie which presented Mark Zuckerberg and the founding of Facebook as some kind of Batman villain origin story (and which was grossly inaccurate to anyone who actually knows the facts).

But then Farhad Manjoo published an article asking a simple question: "Why are you not on Facebook?" In it he reversed the framing of the story and presented Facebook as the new normal, with being a holdout the weird thing that should get strange looks. His message was something like: "Actually, it is you people who are not on Facebook who have to explain yourselves. Facebook won; it is a convenient tool for communication, almost everyone is there, and you should get off your high horse and join the civilized world."

The site has crossed a threshold—it is now so widely trafficked that it's fast becoming a routine aid to social interaction, like e-mail and antiperspirant.


In other words, if you are not on Facebook in 2010, you are not a brave intellectual maverick standing up against an evil empire. You are like a cranky old man from the 1950s who refuses to own a telephone because he is scared of the demon-machine, which makes it very inconvenient for his relatives to contact him.

There is probably a lot wrong with Manjoo's approach; some of it falls under the "dark arts" arsenal. And to be fair, a lot of the criticism of Facebook has a point, especially after the Snowden affair. But I really like Manjoo's subversive thinking on this issue and the way he pierced the suffocating smugness with a brazen narrative reversal.

I wonder whether this tactic might be useful for other ideas that are slowly entering the public space and similarly getting nasty looks from the "intellectual elite".

Let's look at transhumanism and its media portrayal.

It is important to note that there is a difference between ordinary "SF technology is cool" enthusiasm and transhumanism. Everyone loves imagining a future world with cool gadgets and flying cars. However, once you start messing with the human genome or with cybernetic implants, things get creepy for a lot of people. When you talk about laser pistols you get heroic rebels fighting stormtroopers with their blasters. When you talk about teleporters and warp drives you get a brave Starfleet captain exploring the galaxy. But when you talk about cybernetic implants you get the Borg, when you talk about genetic enhancement you get Gattaca, and when you talk about immortality you get Voldemort. For the average viewer, technology is good as long as it doesn't change what is perceived as the "normal" state of the human being.

You can have Star Wars movies where families watch massive starships destroy entire planets with billions of people, but you are not supposed to ask why Han Solo is so old in the new episode. They solved faster-than-light travel and invented planet-killing lasers but got stuck on longevity research, or at least good anti-aging cosmetics? (Yes, I know it would be expensive to make Harrison Ford look younger, but you get my point.)

Basically, the mainstream view of the future is "old-fashioned humans + jetpacks", and you had better not touch the "old-fashioned" adjective or you will get pattern-matched into a creepy dude trying to create utopia, which, as we learned from movies and literature, always makes you a bad guy.

But then, in the real world, you have a group of smart people who seriously argue that changing human biology with various weird technologies would actually be a good thing, and that we should work on reliable ways to increase longevity, intelligence and other abilities and remove any regulations that would stop it. And in response you have a much larger group of intellectuals, journalists, politicians and other "deep thinkers" who are repulsed by this idea and will immediately start to bludgeon anyone who argues that we should improve our natural state. (I am purposely not mentioning those who question the feasibility of transhumanist ideas, such as whether genetic enhancement is even possible, as this is not relevant here.)

From the political right and religious groups you will instantly hear standard chants of "people playing God" and "destroying the fabric of society"; from the political left you will hear shouting about "rich Silicon Valley libertarians trying to recreate feudalism through cognitive inequality and eugenics"; and even from the political center you will get someone like Francis Fukuyama calling transhumanism "the world's most dangerous idea", one that might destroy liberal society. Finally, you have an entire class of people called "bioethicists" whose job description seems to be "bashing transhumanism".

At best, transhumanists are presented as "well-intentioned geeks who are unaware of the bad consequences of their ideas"; at worst they get labeled "rich, entitled Silicon Valley nerds with a bizarre and dangerous pseudo-religious cult", or they are dismissed from serious conversation altogether and turned into a punchline on The Big Bang Theory.

So when transhumanists receive this kind of criticism, they naturally try to soften their arguments and backtrack on their positions. After all, many people who are optimistic about the future and sometimes talk positively about human enhancement don't label themselves transhumanists. But should they do that? What if they "owned" their label and went on the offensive? What would narrative reversal look like in that case?

Well, they could make an exact copy of Manjoo's Facebook method and throw "Why are you not a transhumanist?" at their critics. But they could also use the third method above and confidently say: "You are a transhumanist too. Actually, the majority of people are transhumanists, but they are not aware of it."

It might sound crazy at first; after all, the majority of people find transhumanist ideas weird and uncanny when they are presented in the usual form. "Designer babies? Nanotech robots inside my body? Memory chips in the brain? Mind uploading??? That is crazy talk!!"

But let's stop for a moment and try to understand the basis of transhumanism, to reduce it to its core idea. One of the best articles ever written about transhumanism is Eliezer Yudkowsky's short essay "Transhumanism as Simplified Humanism". The beauty of the essay is its simplicity: there are no complicated moral theories or deep analyses of various disruptive technologies, just a common-sense argument about the fundamental values humans should strive for and how to achieve them.
 
As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases. 

And it is hard to argue with that. I can't imagine a normal person arguing with this statement. But then it gets more interesting:

You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet. 

And at this point we reach the crux of the issue. It is not the values that are the problem, but our familiarity with the tools we use to protect those values.

So we can finally define the difference between a transhumanist and a non-transhumanist. A transhumanist is a person who believes that science and technology should be used to make humans happier, smarter and able to live as long as possible. A non-transhumanist is usually a person who believes the same, except that the technology used in the process should not be too strange compared to what he is already used to.

Using this definition, the pool of people who fall into the transhumanist group, whether they admit it or not, rises significantly.

But actually, we can do better than that. How many people would honestly define themselves as non-transhumanists in this case?

Imagine that you have a lovely 5-year-old daughter who is struck by a horrible medical condition. The doctors tell you that there is no cure, and that your daughter will suffer severe pain for several months before she finally dies in agony.

While you are contemplating this horrible twist of fate, another doctor approaches you and says:

"Well, there is potential cure. We just finished the series of successful trials that cured this condition by using revolutionary medical nanobots. They are injected in the patient bloodstream and then they are able to physically destroy the cancer. If we have your approval we can start the therapy next week.  

Oh... before you answer, there are some serious side effects we should mention. For reasons we don't completely understand, the nanobots will significantly increase the host's IQ, and they are able to rejuvenate old cells, so in addition to curing your daughter's disease they will also make her super-smart and allow her to live several hundred years. Now, many people have ethical issues with that, because it will give her an unfair advantage over her peers, and once she is older it might create feelings of guilt at being superior to everyone else and might make her socially awkward. I know this is a difficult decision. So what will it be: a horrible death in pain after a few months, or a long, fulfilling life with the occasional bout of existential angst about being superhuman, and some unease at being unfairly celebrated for getting all those Nobel prizes and solving world hunger?"

Do you honestly know a single person who would choose to let their child die in pain instead of the nanobot therapy? I admit this example is super-contrived, but it still represents the general idea of transhumanism clearly. The point is that EVERYONE is a transhumanist, and will quickly drop any intellectual posturing when push comes to shove, when they or their loved ones face the dark spectre of death and suffering.

And don't forget: go back just a few generations, and compared to the people then, we are already the transhuman beings from a utopian future. Just a few centuries ago the average human lifespan was only half of what it is now, and on almost any objective measure of well-being, humans of the past lived horrible and miserable lives compared to the life we now take for granted. Anyone willing to argue against transhumanism should in principle also be against all the advances that made us healthier, more intelligent and wealthier than our ancestors, using technologies that, from our ancestors' point of view, would have looked crazier than many SF transhumanist technologies look from ours.

So next time someone starts criticising transhumanist beliefs, don't defend yourself by retreating from your position and trying to avoid looking weird. Ask him to prove that he isn't a transhumanist himself by presenting the transhumanist idea in its basic form, as stated in the Yudkowsky essay. Ask him why trying to use technology to improve the human condition should be considered a bad thing, and let him try to define at which point it becomes a bad thing.

Playing offense might also work in other domains. 

Effective altruism

Criticism: You are using your nerdy math in a field that should be guided by passion and strong moral convictions.

Response: Actually, it is you who should explain why you are not an effective altruist. EA has a proven track record of using the most effective tools to improve outcomes in charity work at a level that surpasses traditional charities. How do you explain your decision not to use this kind of systematic analysis, which would result in better outcomes and more lives saved by your charity?


Smartphones

Criticism: You are using a smartphone only as a status symbol. They are unnecessary.

Answer: On the contrary, I use it as a useful tool that helps me in my everyday activities. Why are you not using a smartphone when everyone else has recognised their obvious value? Are you aware of the opportunity costs of not using one, like being unable to use Google Maps and Translate when you travel?


We on LW and in the larger rationalist community are used to adopting a defensive posture when faced with broad public scrutiny. In many cases that is the correct approach, as we should avoid unnecessary hubris. But we should recognise the circumstances in which we are coming from a position of strength, where we can play a more offensive game in order to defeat bad arguments that might still damage our cause if not met with a strong response.

The Winding Path

6 OrphanWilde 24 November 2015 09:23PM

The First Step

The first step on the path to truth is superstition.  We all start there, and should acknowledge that we start there.

Superstition is, contrary to our immediate feelings about the word, the first stage of understanding.  Superstition is the attribution of unrelated events to a common (generally unknown or unspecified) cause - it could be called pattern recognition.  The "supernatural" component generally included in the definition is superfluous, because supernatural merely refers to that which isn't part of nature - which is to say, reality - and is thus an elaborate way of saying something whose relationship to nature is not yet understood, or else nonexistent.  If we discovered that ghosts are real, and identified an explanation - overlapping entities in a many-worlds universe, say - they'd cease to be supernatural and merely be natural.

Just as the supernatural refers to unexplained or imaginary phenomena, superstition refers to unexplained or imaginary relationships, without the necessity of cause.  If you designed an AI in a game which, after five rounds of being killed whenever it went into rooms with green-colored walls, started avoiding rooms with green-colored walls, you've developed a good AI.  It is engaging in superstition; it has developed an incorrect understanding of the issue.  But it hasn't gone down the wrong path - there is no wrong path in understanding, there is only the mistake of stopping.  Superstition, like all belief, is only useful if you're willing to discard it.
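
A minimal sketch of that kind of agent (my own illustration; the post contains no code, and the five-death threshold simply mirrors the example above):

```python
# Toy "green walls" agent: it tallies bad outcomes by room colour and starts
# avoiding colours associated with death, with no causal model at all.

from collections import defaultdict

class SuperstitiousAgent:
    def __init__(self, avoid_after=5):
        self.deaths_by_colour = defaultdict(int)
        self.avoid_after = avoid_after

    def observe_death(self, room_colour):
        """Record a death and the colour of the room it happened in."""
        self.deaths_by_colour[room_colour] += 1

    def willing_to_enter(self, room_colour):
        """Refuse a colour once it has co-occurred with enough deaths."""
        return self.deaths_by_colour[room_colour] < self.avoid_after

if __name__ == "__main__":
    agent = SuperstitiousAgent()
    for _ in range(5):
        agent.observe_death("green")
    print(agent.willing_to_enter("green"))  # False - the correlation alone suffices
    print(agent.willing_to_enter("blue"))   # True - no bad associations yet
```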

The Next Step

Incorrect understanding is the first - and necessary - step to correct understanding.  It is, indeed, every step towards correct understanding.  Correct understanding is a path, not an achievement, and it is pursued, not by arriving at the correct conclusion in the first place, but by testing your ideas and discarding those which are incorrect.

No matter how intelligent you are, you cannot skip the "incorrect understanding" step of knowledge, because that is every step of knowledge.  You must come up with wrong ideas in order to get at the right ones - which will always be one step further.  You must test your ideas.  And again, the only mistake is stopping, in assuming that you have it right now.

Intelligence is never your bottleneck.  The ability to think faster isn't necessarily the ability to arrive at the right answer faster, because the right answer requires many wrong ones and, more importantly, requires identifying which answers are indeed wrong - which is the slow part of the process.

Better answers are arrived at by the process of invalidating wrong answers.

The Winding Path

The process of becoming Less Wrong is the process of being, in the first place, wrong.  It is the state of realizing that you're almost certainly incorrect about everything - but working on getting incrementally closer to an unachievable "correct".  It is a state of anti-hubris, and requires a delicate balance between the idea that one can be closer to the truth, and the idea that one cannot actually achieve it.

The art of rationality is the art of walking this narrow path.  If ever you think you have the truth - discard that hubris, for three steps from here you'll see it for superstition, and if you cannot see that, you cannot progress, and there your search for truth will end.  That is the path of the faithful.

But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking.  If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end.  That is the path of the crank.

The path of rationality is winding and directionless.  It may head towards beauty, then towards ugliness; towards simplicity, then complexity.  The correct direction isn't the aesthetic one; those who head towards beauty may create great art, but do not find truth.  Those who head towards simplicity might open new mathematical doors and find great and useful things inside - but they don't find truth, either.  Truth is its own path, found only by discarding what is wrong.  It passes through simplicity, it passes through ugliness; it passes through complexity, and also beauty.  It doesn't belong to any one of these things.

The path of rationality is a path without destination.


Written as an experiment in the aesthetic of Less Wrong.  I'd appreciate feedback on the aesthetic interpretation of Less Wrong, rather than on the sense of deep wisdom emanating from it (unless the deep wisdom damages the aesthetic).

[Link] A rational response to the Paris attacks and ISIS

-1 Gleb_Tsipursky 23 November 2015 01:47AM

Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, consider the alternative, reaching our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer​, a major newspaper (16th in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.

[Link] Less Wrong Wiki article with very long summary of Daniel Kahneman's Thinking, Fast and Slow

6 Gleb_Tsipursky 22 November 2015 04:32PM

I've made very extensive notes, along with my assessment, of Daniel Kahneman's Thinking, Fast and Slow, and have passed them around to aspiring rationalist friends who found the notes very useful. So I thought I would share them with the Less Wrong community by creating a Less Wrong Wiki article with these notes. Feel free to optimize the article based on your own notes as well. Hope this proves as helpful to you as it did to those others with whom I shared my notes.


[Link] Lifehack Article Promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies

9 Gleb_Tsipursky 14 November 2015 08:34PM

Nice to get this list-style article promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies, as part of a series of strategies for growing mentally stronger, published on Lifehack, a very popular self-improvement website. It's part of my broader project of promoting rationality and effective altruism to a broad audience, Intentional Insights.

 

EDIT: To be clear, based on my exchange with gjm below, the article does not promote these heavily and links more to Intentional Insights. I was excited to be able to get links to LessWrong, Rationality Dojo, and Rationality: From AI to Zombies included in the Lifehack article, as previously editors had cut out such links. I pushed back against them this time, and made a case for including them as a way of growing mentally stronger, and thus was able to get them in.
