June Outreach Thread
Please share about any outreach that you have done to convey rationality and effective altruism-themed ideas broadly, whether recent or not, which you have not yet shared on previous Outreach threads. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.
Review and Thoughts on Current Version of CFAR Workshop
Outline: I will discuss my background and how I prepared for the workshop, and how I would have prepared differently in hindsight; my experience at the CFAR workshop, and what I would have done differently there; my take-aways from the workshop, and what I am doing to integrate CFAR strategies into my life; and finally, my assessment of its benefits and what others who attend the workshop might expect to get out of it.
Acknowledgments: Thanks to fellow CFAR alumni and CFAR staff for feedback on earlier versions of this post.
Introduction
Many aspiring rationalists have heard about the Center for Applied Rationality, an organization devoted to teaching applied rationality skills that help people improve their thinking, feeling, and behavior patterns. This nonprofit does so primarily through its intensive workshops, and is funded by donations and workshop revenue. It fulfills its social mission by conducting rationality research and by giving discounted or free workshops to people its staff judge as likely to help make the world a better place, mainly those associated with various Effective Altruist cause areas, especially existential risk.
To be fully transparent: even before attending the workshop, I already had a strong belief that CFAR is a great organization and have been a monthly donor to CFAR for years. So keep that in mind as you read my description of my experience (you can become a donor here).
Preparation
First, some background about myself, so you know where I’m coming from in attending the workshop. I’m a professor specializing in the intersection of history, psychology, behavioral economics, sociology, and cognitive neuroscience. I discovered the rationality movement several years ago through a combination of my research and attending a LessWrong meetup in Columbus, OH, and so come from a background of both academic and LW-style rationality. Since discovering the movement, I have become an activist in the movement as the President of Intentional Insights, a nonprofit devoted to popularizing rationality and effective altruism (see here for our EA work). So I came to the workshop with some training and knowledge of rationality, including some CFAR techniques.
To help myself prepare for the workshop, I reviewed existing posts about CFAR materials, being careful not to assume that the actual techniques would match their descriptions in those posts.
I also postponed a number of tasks until after the workshop, tying up loose ends. In retrospect, I wish I had not left myself some ongoing tasks to do during the workshop. As part of my leadership of InIn, I coordinate about 50 volunteers, and I wish I had handed those responsibilities to someone else for the duration of the workshop.
Before the workshop, I worked intensely on finishing up some projects. In retrospect, it would have been better to get some rest and come to the workshop as fresh as possible.
There were some communication snafus over logistics details before the workshop. It all worked out in the end, but in retrospect I would have hammered out the logistics in advance, to avoid pre-workshop anxiety about how to get there.
Experience
The classes were well put together, had interesting examples, and provided useful techniques. FYI, my experience was that reading about these techniques in advance was not harmful, but the versions taught in the CFAR classes were quite a bit better than the existing posts about them, so don't assume you can get the same benefits from reading posts as from attending the workshop. So while I was aware of the techniques, the classes definitely presented optimized versions of them - maybe because of a "broken telephone" effect in the posts, or maybe because CFAR has refined them over previous workshops; I'm not sure. I was glad to learn that CFAR considers the workshop they gave us in May polished enough to scale up their workshops, while still improving the content over time.
Just as useful as the classes were the conversations held between and after the official sessions. Talking through the techniques with fellow aspiring rationalists, and seeing how they were thinking about applying them to their lives, helped spark ideas about how to apply them to my own. The latter half of the CFAR workshop was especially great, as it focused on pairing people off and helping each other figure out how to apply CFAR techniques to themselves and to various problems in their lives. It was especially helpful to have conversations with CFAR staff and trained volunteers, of whom there were plenty - about 20 volunteers/staff for the roughly 50 workshop attendees.
Another super-helpful aspect of the conversations was networking and community building. Now, this may have been more useful to some participants than others, so YMMV. As an activist in the movement, I talked with many folks at the CFAR workshop about promoting EA and rationality to a broad audience. I was happy to introduce some people to EA; my most positive conversation there encouraged someone to shift his x-risk efforts from nuclear disarmament to AI safety research as a means of addressing long- and medium-term risk, and to promote rationality as a means of addressing short- and medium-term risk. Others who were already familiar with EA were interested in ways of promoting it broadly, while some aspiring rationalists expressed enthusiasm about becoming rationality communicators.
Looking back at my experience, I wish I had been more aware of the benefits of these conversations. I went to sleep early the first couple of nights, when I would rather have taken supplements to stay awake and keep the conversations going.
Take-Aways and Integration
The aspects of the workshop that I think will help me most are what CFAR staff called "5-second" strategies - brief tactics and techniques that can be executed in 5 seconds or less to address various problems. The techniques we learned at the workshop that I was already familiar with - such as Trigger Action Plans, Goal Factoring, Murphyjitsu, and Pre-Hindsight - require some time to learn and practice, often with pen and paper as part of the work. However, with sufficient practice, one can develop brief techniques that mimic various aspects of the more thorough ones, and apply them quickly to in-the-moment decision-making.
Now, this doesn’t mean that the longer techniques are not helpful. They are very important, but they are things I was already generally familiar with, and already practice. The 5-second versions were more of a revelation for me, and I anticipate will be more helpful for me as I did not know about them previously.
Now, CFAR does a very nice job of helping people integrate the techniques into daily life, since a common failure mode of attendees is going home and not practicing the techniques. So they hold 6 Google Hangouts with CFAR staff and all attendees who want to participate, offer 4 one-on-one sessions with CFAR-trained volunteers or staff, and pair you with another attendee for post-workshop conversations. I plan to take advantage of all of these, although my pairing did not work out.
For integrating CFAR techniques into my life, I found the CFAR strategy of "overlearning" especially helpful. Overlearning refers to applying a single technique intensely for a while to all aspects of one's activities, so that it gets internalized thoroughly. Following CFAR's advice, I will first focus on overlearning Trigger Action Plans.
I also plan to teach CFAR techniques in my local rationality dojo, as teaching is a great way to learn, naturally.
Finally, I plan to integrate some CFAR techniques into Intentional Insights content, at least the more simple techniques that are a good fit for the broad audience with which InIn is communicating.
Benefits
I have a strong probabilistic belief that having attended the workshop will improve my capacity to achieve my goals for doing good in the world. I anticipate being better able to figure out whether the projects I take on are the best uses of my time and energy. I will be more capable of avoiding procrastination and other forms of akrasia. I believe I will be more capable of making better plans and acting on them well. I will also be more in touch with my emotions and intuitions, and able to trust them more, as I will have more alignment among the different components of my mind.
Another benefit is meeting the many other people at CFAR who have similar mindsets. Here in Columbus, we have a flourishing rationality community, but it’s still relatively small. Getting to know 70ish people, attendees and staff/volunteers, passionate about rationality was a blast. It was especially great to see people who were involved in creating new rationality strategies, something that I am engaged in myself in addition to popularizing rationality - it’s really heartening to envision how the rationality movement is growing.
These benefits should resonate strongly with aspiring rationalists, but they are really important for EA participants as well. I think one of the best things EA movement members can do is study rationality, and it's something we promote to the EA movement as part of InIn's work. What we offer are articles and videos, but coming to a CFAR workshop is a much more intense and cohesive way of getting these benefits. Imagine all the good you can do for the world if you are better at planning, organizing, and carrying out EA-related tasks. Rationality is what has helped me and other InIn participants make the impact we have been able to make, and a number of EA movement members with rationality training report similar benefits. Remember, as an EA participant you can likely get a scholarship covering part or all of the regular $3900 price of the workshop, as I did when attending, and over time you are highly likely to be able to save more lives as a result of attending, even if you have to pay some costs upfront.
Hope these thoughts prove helpful to you all, and please contact me at gleb@intentionalinsights.org if you want to chat with me about my experience.
rationalfiction.io - publish, discover, and discuss rational fiction
Hey, everyone! I want to share with you a project I've been working on for a while - http://rationalfiction.io.
I want it to become the perfect place to publish, discover, and discuss rational fiction.
We already have a lot of awesome stories, and I invite you to join and post more! =)
May Outreach Thread
Please share about any outreach that you have done to convey rationality-style ideas broadly, whether recent or not, which you have not yet shared on previous Outreach threads. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.
Collaborative Truth-Seeking
Summary: We frequently use debates to resolve differing opinions about the truth. However, debates are not always the best way to figure out the truth. In some situations, the technique of collaborative truth-seeking may work better.
Acknowledgments: Thanks to Pete Michaud, Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Szun S. Tay, Alfredo Parra, Michael Estes, Aaron Thoma, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.
The Problem with Debates
Aspiring rationalists generally aim to figure out the truth, and often disagree about it. The usual method of hashing out such disagreements in order to discover the truth is through debates, in person or online.
Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than prioritizing truth discovery. Indeed, research suggests that debates have a specific evolutionary function - not to discover the truth, but to ensure that our perspective prevails within a tribal social context. No wonder debates are often compared to wars.
We may hope that as aspiring rationalists, we would strive to discover the truth during debates. Yet given that we are not always fully rational and strategic in our social engagements, it is easy to slip up within debate mode and orient toward winning instead of uncovering the truth. Heck, I know that I sometimes forget in the midst of a heated debate that I may be the one who is wrong – I’d be surprised if this didn’t happen with you. So while we should certainly continue to engage in debates, we should also use additional strategies – less natural and intuitive ones. These strategies could put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such solution is a mode of engagement called collaborative truth-seeking.
Collaborative Truth-Seeking
Collaborative truth-seeking is one way of describing a more intentional approach in which two or more people with different opinions engage in a process that focuses on finding out the truth. Collaborative truth-seeking is a modality that should be used among people with shared goals and a shared sense of trust.
Some important features of collaborative truth-seeking, which are often not present in debates, are: focusing on a desire to change one's own mind toward the truth; a curious attitude; being sensitive to others' emotions; striving to avoid arousing emotions that will hinder updating beliefs and truth discovery; and a trust that all other participants are doing the same. These can contribute to increased social sensitivity, which, together with other attributes, correlates with higher group performance on a variety of activities.
The process of collaborative truth-seeking starts with establishing trust, which will help increase social sensitivity, lower barriers to updating beliefs, increase willingness to be vulnerable, and calm emotional arousal. The following techniques are helpful for establishing trust in collaborative truth-seeking:
- Share weaknesses and uncertainties in your own position
- Share your biases about your position
- Share your social context and background as relevant to the discussion
  - For instance, I grew up poor once my family immigrated to the US when I was 10, and this naturally influences me to care about poverty more than some other issues, and have some biases around it - this is one reason I prioritize poverty in my Effective Altruism engagement
- Vocalize curiosity and the desire to learn
- Ask the other person to call you out if they think you're getting emotional or engaging in emotive debate instead of collaborative truth-seeking, and consider using a safe word
Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:
- Self-signal: signal to yourself that you want to engage in collaborative truth-seeking, instead of debating
- Empathize: try to empathize with the perspective you do not hold by considering where the other person's viewpoint came from, why they think what they do, and recognizing that they feel their viewpoint is correct
- Keep calm: be prepared with emotional management to calm your emotions and those of the people you engage with when a desire for debate arises
  - Watch out for defensiveness and aggressiveness in particular
- Go slow: take the time to listen fully and think fully
- Consider pausing: have an escape route for complex thoughts and emotions if you can't deal with them in the moment by pausing and picking up the discussion later
  - Say "I will take some time to think about this," and/or write things down
- Echo: paraphrase the other person's position to indicate and check whether you've fully understood their thoughts
- Be open: orient toward improving the other person's points to argue against their strongest form
- Stay the course: be passionate about wanting to update your beliefs, maintain the most truthful perspective, and adopt the best evidence and arguments, no matter if they are yours or those of others
- Be diplomatic: when you think the other person is wrong, strive to avoid saying "you're wrong because of X" but instead use questions, such as "what do you think X implies about your argument?"
- Be specific and concrete: go down levels of abstraction
- Be clear: make sure the semantics are clear to all by defining terms
  - Consider tabooing terms if some are emotionally arousing, and make sure you are describing the same territory of reality
- Be probabilistic: use probabilistic thinking and probabilistic language, to help get at the extent of disagreement and be as specific and concrete as possible (see the short sketch after this list)
  - For instance, avoid saying that X is absolutely true; instead say that you think there's an 80% chance it's the true position
  - Consider adding what evidence and reasoning led you to believe so, for both you and the other participants to examine this chain of thought
- When people whose perspective you respect fail to update their beliefs in response to your clear chain of reasoning and evidence, update somewhat toward their position, since that presents evidence that your position is not very convincing
- Confirm your sources: look up information when it's possible to do so (Google is your friend)
- Charity mode: strive to be more charitable to others and their expertise than seems intuitive to you
- Use the reversal test to check for status quo bias
  - If you are discussing whether to change some specific numeric parameter - say, increasing the money donated to charity X by 50% - state the reverse of your position, for example decreasing the amount donated to charity X by 50%, and see how that impacts your perspective
- Use CFAR's double crux technique
  - In this technique, two parties who hold different positions each write down the fundamental reason for their position (the crux of their position). This reason has to be the key one, such that if it were proven incorrect, they would change their perspective. Then look for experiments that can test the crux. Repeat as needed. If a person identifies more than one reason as crucial, you can go through each as needed. More details are here.
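As a worked illustration of the probabilistic item above, here is a minimal sketch in Python of what "update somewhat toward their position" can look like as a Bayesian update. All the numbers, including the likelihoods, are invented for illustration; they are not part of the technique itself.

```python
# A toy Bayesian update, with all numbers invented for illustration.

def bayes_update(prior, p_evidence_if_true, p_evidence_if_false):
    """Return P(claim | evidence) via Bayes' rule."""
    p_evidence = (p_evidence_if_true * prior
                  + p_evidence_if_false * (1 - prior))
    return p_evidence_if_true * prior / p_evidence

# "I think there's an 80% chance X is true."
credence = 0.80

# Someone whose judgment I respect hears my full chain of reasoning and
# does not update. Suppose that is twice as likely if I'm wrong as if
# I'm right (a made-up assumption).
credence = bayes_update(credence,
                        p_evidence_if_true=0.3,
                        p_evidence_if_false=0.6)

print(f"Updated credence: {credence:.2f}")  # ~0.67: a modest shift, not a collapse
```

The point of the sketch is only that the update is partial: the evidence moves the credence from 0.80 to about 0.67, rather than flipping it.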
Of course, not all of these techniques are necessary for high-quality collaborative truth-seeking. Some are easier than others, and different techniques apply better to different kinds of truth-seeking discussions. You can apply some of these techniques during debates as well, such as double crux and the reversal test. Try some out and see how they work for you.
Conclusion
Engaging in collaborative truth-seeking goes against our natural impulse to win a debate, and is thus more cognitively costly. It tends to take more time and effort than simply debating, and it is easy to slip back into debate mode even while using collaborative truth-seeking, because debate mode is so much more intuitive.
Moreover, collaborative truth-seeking need not replace debates at all times. This non-intuitive mode of engagement can be chosen when discussing issues that relate to deeply-held beliefs and/or ones that risk emotional triggering for the people involved. Because of my own background, I would prefer to discuss poverty in collaborative truth-seeking mode rather than debate mode, for example. On such issues, collaborative truth-seeking can provide a shortcut to resolution, in comparison to protracted, tiring, and emotionally challenging debates. At the same time, using collaborative truth-seeking to resolve differing opinions on all issues holds the danger of creating a community oriented excessively toward sensitivity to the perspectives of others, which might result in important issues not being discussed candidly. After all, research shows the importance of having disagreement in order to make wise decisions and to figure out the truth. Of course, collaborative truth-seeking is well suited to expressing disagreements in a sensitive way, so if used appropriately, it might permit even people with triggers around certain topics to express their opinions.
Taking these caveats into consideration, collaborative truth-seeking is a great tool to use to discover the truth and to update our beliefs, as it can get past the high emotional barriers to altering our perspectives that have been put up by evolution. Rationality venues are natural places to try out collaborative truth-seeking.
Monthly Outreach Thread
Please share about any outreach that you have done to convey rationality-style ideas broadly, whether recent or not, which you have not yet shared on previous Outreach threads. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline, a form of cognitive altruism that contributes to creating a flourishing world. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.
[Link] Op-Ed on Brussels Attacks
Trigger warning: politics is hard mode.
"How to you make America safer from terrorists" is the title of my op-ed published in Sun Sentinel, a very prominent newspaper in Florida, one of the most swingiest of the swing states in the US for the presidential election, and the one with the most votes. The maximum length of the op-ed was 450 words, and it was significantly edited by the editor, so it doesn't convey the full message I wanted with all the nuances, but such is life. My primary goal with the piece was to convey methods of thinking more rationally about politics, such as to use probabilistic thinking, evaluating the full consequences of our actions, and avoiding attention bias. I used the example of the proposal to police heavily Muslim neighborhoods as a case study. Hope this helps Floridians think more rationally and raises the sanity waterline regarding politics!
EDIT: To be totally clear, I used guesstimates for the numbers I suggested. Following Yvain/Scott Alexander's advice, I prefer to use guesstimates rather than vague statements.
[Video] The Essential Strategies To Debiasing From Academic Rationality
How It Feels to Improve My Rationality
Note: this has started as a comment reply, but I thought it got interesting (and long) enough to deserve its own post.
Important note: this post is likely to spark some extreme reactions, because of how human brains are built. I'm including warnings, so please read this post carefully and in the order written, or don't read it at all.
I'm going to attempt to describe my subjective experience of progress in rationality.
Important edit: I learned from the responses to this post that there's a group of people with whom this resonates pretty well, and there's also a substantial group with whom it does not resonate at all, to the degree that they don't know if what I'm saying even makes sense or is correlated to rationality in any meaningful way. If you find yourself in the second group, please note that trying to verify whether I'm doing "real rationality" or not is not a way to resolve your doubts. There is no reason why you would need to feel the same. It's OK to have different experiences. How you experience things is not a test of your rationality. It's also not a test of my rationality. All in all, through publishing this and reading the comments, I've found out some interesting things about how some clusters of people tend to think about this :)
Also, I need to mention that I am not an advanced rationalist, and my rationality background is mostly reading Eliezer's sequences and self-experimentation.
I'm still going to give this a shot, because I think it's going to be a useful reference for a certain level in rationality progress.
I even expect myself to find all that I write here silly and stupid some time later.
But that's the whole point, isn't it?
What I can say about how rationality feels to me now, is going to be pretty irrelevant pretty soon.
I also expect a significant part of readers to be outraged by it, one way or the other.
If you think this has no value, maybe try to imagine a rationality-beginner version of you that would find a description such as this useful. If only as a reference that says: yes, there is a difference. No, rationality does not feel like a lot of abstract knowledge that you remember from a book. Yes, it does change you deeply, probably deeper than you suspect.
In case you want to downvote this, please do me a favour and write a private message to me, suggesting how I could change this so that it stops offending you.
Please stop any feeling of wanting to compare yourself to me or anyone else, or to prove anyone's superiority or inferiority.
If you can't do this please bookmark this post and return to it some other time.
...
...
Ready?
So, here we go. If you are free from againstness and competitiveness, please be welcome to read on, and feel free to tell me how this resonates, and how different it feels inside your own head and on your own level.
Part 1. Pastures and fences
Let's imagine a vast landscape, full of vibrant greenery of various sorts.
Now, my visualization of object-level rationality is staking out territories, like small parcels of a pasture surrounded by fences.
Inside the fences, I tend to have more neat grass than anything else. It's never perfect, but when I keep working on an area, it slowly improves. If neglected, weeds will start growing back sooner or later.
Let's also imagine that the ideas and concepts I generalize as I go about my work become seeds of grass, carried by the wind.
What the work feels like, is that I'm running back and forth between object level (my pastures) and meta-level (scattering seeds).
As a result of this running back and forth, I'm able to stake out new territories, or improve previous ones to have better coverage and fewer weeds.
The progress I make in my pastures feeds back into interesting meta-level insights (more seeds carried by the wind), which in turn tend to spread to new areas even when I'm not helping with this process on purpose.
My pastures tend to concentrate in clusters, in areas that I have worked on the most.
When I have lots of action in one area, the large amounts of seeds generated (meta techniques) are more often carried to other places, and at those times I experience the most change happening in other, especially new and unexplored, areas.
However, even if I can reuse some of my meta-ideas (seeds), to have a nice and clear territory I still need to go over there and put in the manual work of clearing it up.
As I'm getting better and more efficient at this, it becomes less work to gain new territories and improve old ones.
But there's always some amount of manual labor involved.
Part 2. Tells of epistemic high ground
Disclaimer: not using this for the Dark Side requires a considerable amount of self-honesty. I'm only posting this because I believe most of you folks reading this are advanced enough not to shoot yourself in the foot by e.g. using this in arguments.
Note: If you feel the slightest urge to flaunt your rationality level, pause and catch it. (You are welcome.) Please do not start any discussion motivated by this.
So, what clues do I tend to notice when my rationality level is going up, relative to other people?
Important note: This is not the same as "how do I notice if I'm mistaken" or "how do I know if I'm on the right path". These are things I notice after the fact, that I judge to be correlates, but they are not to be used to choose direction in learning or sorting out beliefs. I wrote the list below exactly because it is the less talked about part, and it's fun to notice things. Somehow everyone seems to have thought this is more than I meant it to be.
Edit: check Viliam's comment for some concrete examples that make this list better.
In a particular field:
- My language becomes more precise. Where others use one word, I now use two, or six.
- I see more confusion all around.
- Polarization in my evaluations increases. E.g. two sensible sounding ideas become one great idea and one stupid idea.
- I start getting strong impulses that tell me to educate people who I now see are clearly confused, and could be saved from their mistake in one minute if I could tell them what I know... (spoiler alert, this doesn't work).
Rationality level in general:
- I stop having problems in my life that seem to be common all around, and that I used to have in the past.
- I forget how it is to have certain problems, and I need to remind myself constantly that what seems easy to me is not easy for everyone.
- Writings of other people move forward on the path from intimidating to insightful to sensible to confused to pitiful.
- I start to intuitively discriminate between rationality levels of more people above me.
- Intuitively judging someone's level requires less and less data, from reading a book to reading ten articles to reading one article.
Important note: although I am aware that my mind automatically estimates rationality levels of various people, I very strongly discourage anyone (including myself) from ever publishing such scores/lists/rankings. If you ever have an urge to do this, especially in public, think twice, and then think again, and then shut up. The same applies to ever telling your estimates to the people in question.
Note: Growth mindset!
Now let's briefly return to the post I started out replying to. Gram_Stone suggested that:
You might say that one possible statement of the problem of human rationality is obtaining a complete understanding of the algorithm implicit in the physical structure of our brains that allows us to generate such new and improved rules.
Now, after everything I've seen so far, my intuition suggests that Gram_Stone's idealized method wouldn't work from inside a human brain.
A generalized meta-technique could become one of the many seeds that help me in my work, or even a very important one that would spread very widely, but it still wouldn't magically turn raw territory into perfect grassland.
Part 3. OK or Cancel?
The closest I've come to Gram_Stone's ideal is when I witnessed a whole cycle of improving in a certain area being executed subconsciously.
It was only brought to my full attention when an already polished solution in verbal form popped into my head when I was taking a shower.
It felt like a popup on a computer screen that had "Cancel" and "OK" buttons, and after I chose OK the rest continued automatically.
After this single short moment, I found a subconscious habit was already in place that ensured changing my previous thought patterns, and it proved to work reliably long after.
That's it! I hope I've left you better off reading this, than not reading this.
Meta-note about my writing agenda: I've developed a few useful (I hope) and unique techniques and ideas for applied rationality, which I don't (yet) know how to share with the community. To get that chunk of data birthed out of me, I need some continued engagement from readers who would give me feedback and generally show interest (this needs to be done slowly and in the right order, so I would have trouble persisting otherwise). So for now I'm writing separate posts noncommittally, to test reactions and (hopefully) gather some folks that could support me in the process of communicating my more developed ideas.
Outreach Thread
Based on an earlier suggestion, here's an outreach thread where you can leave comments about any recent outreach that you have done to convey rationality-style ideas broadly. The goal of having this thread is to organize information about outreach and provide community support and recognition for raising the sanity waterline. Likewise, doing so can help inspire others to emulate some aspects of these good deeds through social proof and network effects.
Religious and Rational?
Reverend Caleb Pitkin, an aspiring rationalist and United Methodist Minister, wrote an article about combining religion and rationality which was recently published on the Intentional Insights blog. He's the only Minister I know who is also an aspiring rationalist, so I thought it would be an interesting piece for Less Wrong as well. Besides, it prompted an interesting discussion on the Less Wrong Facebook group, so I thought some people here who don't look at the Facebook group might be interested in checking it out as well. Caleb does not have enough karma to post, so I am posting it on his behalf, but he will engage with the comments.
______________________________________________________________________________
Religious and Rational?
“Wisdom shouts in the street; in the public square she raises her voice.”
Proverbs 1:20 Common English Bible
The Biblical book of Proverbs is full of imagery of wisdom personified as a woman calling and exhorting people to come to her and listen. The wisdom contained in Proverbs is not just spiritual wisdom; it also contains a large amount of practical wisdom and advice. What might the wisdom of Proverbs and rationality have in common? The wisdom literature in scripture was meant to help people make better and more effective decisions. In today's complex and rapidly changing world we have the same need for tools and resources to help us make good decisions. One great source of wisdom is methods of better thinking that are informed by science.
Now, not everyone would agree with comparing the wisdom of Proverbs with scientific insights. Doing so may not sit well with some in the secular rationality community who view all religion as inherently irrational and hindering clear thinking. It also might not sit well with some in my own religious community who are suspicious of scientific thinking as undermining traditional faith. While it would take a much longer piece to completely defend either religion or secular rationality, I'm going to try to demonstrate some ways that rationality is useful for a religious person.
The first way that rationality can be useful for a religious person is in the living of our daily lives. We are faced with tasks and decisions each day that we try to do our best in. Learning to recognize common logical fallacies or other biases, like those that cause us to fail to understand other people, will improve our decision making as much as it improves the thinking of non-religious people. For example, a mother driving her kids to Sunday School might benefit from avoiding thinking that the person who cuts her off is definitely a jerk, one common type of thinking error. Someone doing volunteer work for their church could be more effective by avoiding problematic communication with other volunteers. This use of rationality to lead our daily lives in the best way is one that most would find fairly unobjectionable. It's easy to say that the way we all achieve our personal goals and objectives could be improved, and we can all gain greater agency.
Rationality can also be of use in theological commentary and discourse. Many of the theological and religious greats used the available philosophical and intellectual tools of their day to examine their faith. Examples of this include John Wesley, Thomas Aquinas, and even the Apostle Paul when he debated Epicurean and Stoic philosophers. They also made sure that their theologies were internally rational and logical. This means that, from the perspective of a religious person, keeping up with rationality can help with the pursuit of a deeper understanding of our faith. For a secular person, acknowledging the ways in which religious people use rationality within their worldview may be difficult, but it can help to build common ground. The starting point is different. Secular people start with the faith that they can trust their sensory experience. Religious people start with conceptions of the divine. Yet, after each starting point, both seek to proceed in a rational, logical manner.
It is not just our personal lives that can be improved by rationality; it's also the ways in which we interact with communities. One of the goals of many religious communities is to make a positive impact on the world around them. When we work to do good in community, we want that work to be as effective as possible. Often when we work in community, we find that we are not meeting our goals or having the kind of significant impact that we wish to have. In my experience this often stems from a failure to really examine and gather the facts on the ground. We set off full of good intentions but with limited resources and time. Rational examination helps us to figure out how to match our good intentions with our limited resources in the most effective way possible. For example, as the Pastor of two small churches, money and people power can be in short supply. So when we examine all the needs of our community, we have to acknowledge that we cannot begin to meet all or even most of them. So we take one issue, hunger, and devote our time and resources to having one big impact on that issue, as opposed to trying to do a little bit to alleviate a lot of problems.
One other way that rationality can inform our work in the community is to recognize that part of what a scarcity of resources means is that we need to work together with others in our community. The inter-faith movement has done a lot of good work in bringing together people of faith to work on common goals. This has meant setting aside traditional differences for the sake of shared goals. Let us examine the world we live in today, though. The number of nonreligious people is on the rise, and there is every indication that it will continue to grow. On the other hand, religion does not seem to be going anywhere either - which is good news for a pastor. Looking at this situation, the rational thing to do is to work together, for religious people to build bridges toward the non-religious and vice versa.
Wisdom still stands on the street calling and imploring us to be improved - not in the form of rationalist street preachers, though that idea has a certain appeal - but in the form of the growing number of tools being offered to help us improve our capacity for logic and reasoning, tools that will enable us to take part in the world we live in.
Everyone wants to make good decisions. This means that everyone tries to make rational decisions. We all try but we don’t always hit the mark. Religious people seek to achieve their goals and make good decisions. Secular people seek to achieve their goals and make good decisions. Yes, we have different starting points and it’s important to acknowledge that. Yet, there are similarities in what each group wants out of their lives and maybe we have more in common than we think we do.
On a final note it is my belief that what religious people and what non-religious people fear about each other is the same thing. The non-religious look at the religious and say God could ask them to do anything... scary. The religious look at the non-religious and say without God they could do anything... scary. If we remember though that most people are rational and want to live a good life we have less to be scared of, and are more likely to find common ground.
____________________________________________________________________________________________________________
Bio: Caleb Pitkin is a Provisional Elder with the United Methodist Church appointed to Signal Mountain United Methodist Church. Caleb is a huge fan of the theology of John Wesley, which asks that Christians use reason in their faith journey. This helped lead Caleb to Rationality and participation in Columbus Rationality, a Less Wrong meetup that is part of the Humanist Community of Central Ohio. Through that, Caleb got involved with Intentional Insights. Caleb spends his time trying to live a faithful and rational life.
Conveying rational thinking about long-term goals to youth and young adults
[Link] How I Escaped The Darkness of Mental Illness
[Link] Huffington Post article about dual process theory
Published a piece in The Huffington Post popularizing dual-process theory in layman's language.
P.S. I know some don't like using terms like Autopilot and Intentional to describe System 1 and System 2, but I find from long experience that these terms resonate well with a broad audience. Also, I know dual process theory is criticized by some, but we have to start somewhere, and just explaining dual process theory is a way to start bridging the inference gap to higher meta-cognition.
Forecasting and recursive Inhibition within a decision cycle
When we anticipate the future, we have the opportunity to inhibit behaviours that we anticipate will lead to counterfactual outcomes. Those of us with sufficiently low latencies in our decision cycles may recursively anticipate the consequences of counterfactuating (a neologism) interventions, and recursively intervene against our own interventions.
This may be difficult for some. Try modelling that decision cycle as a nano-scale approximation of time travel. One relevant paradox from popular culture is the farther-future paradox described in the TV cartoon Family Guy.
Watch this clip: https://www.youtube.com/watch?v=4btAggXRB_Q
Relating the satire back to our abstraction of the decision cycle, one may ponder:
What is a satisfactory stopping rule for the far anticipation of self-referential consequence?
That is:
(1) what are the inherent harmful implications of inhibiting actions in and of themselves: stress?
(2) what are their inherent merits: self-determination?
and (3) what are the favourable and unfavourable consequences at x points into the future, given y points of self-reference at points z, a, b, and c?
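To make the recursion concrete, here is a minimal toy model in Python. It is entirely my own sketch: the depth cap, the actions, and all the numbers are invented assumptions, not claims about real agents. Each level of reflection may intervene against the decision proposed by the level below it, and a fixed depth cap serves as one crude candidate stopping rule.

```python
# Toy model of recursive inhibition within a decision cycle.
# Every value here is an invented assumption for illustration only.

MAX_DEPTH = 3  # one crude stopping rule: cap the levels of self-reference

def anticipated_value(action, depth):
    """Stand-in forecast of an action's outcome at a given reflection depth.
    Acting looks good at the object level, but each extra level of
    inhibition adds a stress cost, echoing question (1) above."""
    base = {"act": 1.0, "inhibit": 0.8}[action]
    return base - 0.1 * depth

def decide(depth=0):
    """Recursively reconsider the decision until the stopping rule fires."""
    if depth >= MAX_DEPTH:
        return "act"  # stop anticipating and just act
    proposal = decide(depth + 1)  # what does deeper reflection propose?
    # This level may intervene against the deeper level's proposal.
    if anticipated_value("inhibit", depth) > anticipated_value(proposal, depth):
        return "inhibit"
    return proposal

print(decide())  # with these toy numbers: "act"
```

Even in this toy form, the stopping-rule question remains: MAX_DEPTH is arbitrary, which is precisely the open problem.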
I see no ready solution to this problem in terms of human rationality, and no corresponding solution in artificial intelligence, where it would also apply. Given the relevance to MIRI (since CFAR doesn't seem to work on open problems in the same way), I would like to take this opportunity to open this as an experimental thread for the community to generate a list of "open problems" in human rationality that are otherwise scattered across the community blog and wiki.
[Link] Video of a presentation by Hal Arkes, one of the top world experts in debiasing, on dealing with the hindsight bias and overconfidence
Here's a video of a presentation by Hal Arkes, one of the top world experts in debiasing, Emeritus Professor at Ohio State, and Intentional Insights Advisory Board member, on dealing with hindsight bias and overconfidence. This was at a presentation hosted by Intentional Insights and the Columbus, OH Less Wrong group. It received high marks from local Less Wrongers, so I thought I'd share it here.
Rationalist Magic: Initiation into the Cult of Rationatron
I am curious about the perspective of a rationalist discussion board (this seems like a good start) on the practice of magic. I introduce the novel, genius concept of "rationalist magic", i.e. magic practiced by rationalists.
Why would rationalists practice magic? That makes no sense!!
It's the logical conclusion to Making Peace with Belief, Engineering Religion and the self-help threads.
What would that look like?
Good question. Here are some possible considerations to make:
- It's given low probability that magic is more than purely mental phenomena. The practice is called "placebomancy" to make it clear that such an explanation is favoured.
- It is practiced as a way to gain placebons.
- A cult of rationalist magic, the Cult of Rationatron, should be formed to compete against worse (anti-rationality, anti-science, violent) cults.
- Rationalist groups can retain more members due to abundance of placebons.
- The ultimate goal is to use rationality to devise a logically optimal system of magic with which to build the Philosopher's Stone and fix the world like in HPMOR. (Just kidding, magic isn't real.)
I looked into magical literature and compiled a few placebo techniques/exercises, along with their informal instructions. These might be used as a starting point. If there are any scientific errors these can eventually be corrected. I favoured techniques that can be done with little to no preparation and provide some results. Of course, professional assistance (e.g. yoga classes) can also be helpful.
1. Mindfulness Meditation
- (Optional) Do 1-3 deep mouth-exhales to relax.
- Find a good position.
- Begin by being aware of breath.
- (Optional) Move on to calmly observing different parts of the body, vision, the other senses, thoughts, mandala visualization, and so on.
- (Optional) Compare the experience to teachings of buddhism.
- (Optional) Say "bud-" for each inhale and "-dho" for each exhale; alternatively, count them from 1 to 10 and reset each time.
Note that trying to focus on not focusing isn't always helpful. Hence the common technique is that of focusing on a single thing, or rather, simply being passively aware of it. The goal is to discipline the mind and develop one-pointedness.
2. Astral Projection (OBE)
- Lay on a bed, relax.
- Try out some of the tips from the previous exercise.
- Stay there for 30-60 minutes until you reach the hypnagogic state (a state where the mind is awake but the body is sleeping) and try to (a) feel your astral body and grab a rope, (b) feel vibrations in your body, (c) roll out of the body, (d) (...).
Astral Projection can be thought of as a very vivid stage of dreaming. Some authors have more detailed exercises related to this[7]. It might take many tries to do this exercise right.
3. Mantras
You can borrow an eastern mantra such as "om mani padme hum", "om namah shivaya" and "hare kṛiṣhṇa hare kṛiṣhṇa / kṛiṣhṇa kṛiṣhṇa hare hare / hare rāma hare rāma / rāma rāma hare hare" or make up some phrase. Whatever works for you.
Chanting mantras is a form of sensory excitation. Both sensory excitation and deprivation can induce trance. This can be used along with exercise 1.
(Optional) Find one of these.
4. Contemplation
- Take a moment to contemplate the harmony of the universe and/or have love towards some/all beings.
5. Idol Worship
- Make a shrine dedicated to Rationatron, god of rationality.
This and this are possible forms of Rationatron.
6. Minimal Spellcasting
- Make a wish.
- Clear your mind.
- Take a deep, long breath imagining that as you exhale the wish is being registered into the universe by Rationatron.
- Forget about the wish.
Magicians claim it's more magically effective to forget about the wish after casting the spell (fourth step) and let your subconscious act than to use the repeat-your-wish-every-day method.
This is, as far as I know, the simplest spellcasting technique. It can be complicated further by the addition of rituals, sigils, poses[6], and stronger methods of inducing trance in the second step.
7. Deity Generator
- Take any arbitrary concept or amalgam of concepts.
- (Optional) Associate it to a colour.
- (Optional) Make it a new deity.
For one reason or another, spiritualists love doing this.
This exercise can make arbitrary "powers" or "deities" for use in other exercises. For example, the association "yellow - wealth" is a "power" for exercise 6, or you might imagine yourself as a "deity" in that same exercise to induce some emotion.
8. Tulpa Making
This is a technique found in Tibetan Buddhist lore[5] as an ability held by bodhisattvas and used by the Buddha to multiply himself. It was adopted by communities of Westerners on the internet who generally don't attribute mystical properties to the practice and have made detailed tutorials (tulpa.info).
The technique uses your subconscious to create a companion. It consists in visualizing and talking to a being in your imagination until it eventually produces unexpected thoughts.
You might be asking, "can I model this companion after a cartoon?" The answer is yes.
Note: some of the following techniques might require further examination.
9. Aura Sight
Some authors[1,2,4] give exercises attributed to peripheral vision or meditation. If anyone finds out how to see auras, please confirm.
10. Invoking and Banishing Ritual of Truth

- Imagine a circle of protection surrounding you.
- (Invoking) "I open/invoke the powers of p, q, ¬p, ¬q."
- (Banishing) "I close/revoke/banish the powers of ¬q, ¬p, q, p."
- (Optional) This can be performed solely in the imagination through visualization, or with more realism added to different degrees in between (e.g. by making a real circle, pointing a sword or your hand to the four directions).
This has been used for different purposes; as an introduction to ritual work or simply as routine.
The common formulation of this exercise uses a pentagram, holy names, the elements, planets and a bunch of other nonsense. Why do I have to remember all this roleplaying? Do they think this is D&D? Therefore, I designed a more efficient version of the technique that also replaces the magical symbolism with superior logic symbolism.
Note: these are roughly analogous[3] to the simpler placebo techniques known as shielding (imagining a shield), centering (regaining focus by being aware of the solar plexus/heart area) and grounding (putting your feet on the ground to receive/release energy from/to the ground).
11. Demonic Mirror Summoning
Call upon Dark Lord Voldemort and say the "avada kedavra" mantra 7+ times in front of a mirror in a dimly lit room until a demon pops up and/or your appearance gets distorted.
Some individuals report holding a conversation with their mirror self through mirror exercises.
FAQ
What is the purpose of this?
It's about time someone made an atheist religion.
Why not follow the Flying Spaghetti Monster religion instead?
It doesn't provide placebo techniques. It only functions as a point in argumentation.
Do I have to do all of the exercises?
No, only those that you personally deem helpful. However, the first exercise (meditation) is generally recommended by health research. It's also a prerequisite to many other exercises. Note: although meditation is generally recommended, some caution, common sense, and preparation are advised (especially for exercises 2-3).
What are the teachings?
It's acknowledged that rational people can sometimes come to different conclusions. Therefore, there are no mandatory teachings. However, it uses "rationality" as a starting point to distinguish it from other cults, meaning that "placebo" is used as the default model of magic and that both logic and the use of such techniques are encouraged. It can be used as a gathering of placebo techniques for atheists and as a blank slate, free from the dogma of existing cults on the nature of magic.
What is the pantheon of this religion?
The "official" pantheon is that of the universe itself (Einstein's pantheism; it's used in exercises 4 and 6), Rationatron (a deity of rationality) and Dark Lord Voldemort (the opposer). They fulfill different god-roles. More gods can be created with the Deity Generator exercise or borrowed.
Can I worship Eris, Cthullu or Horus/Isis/Odin...?
Yes, see above answer.
Wait, Dark Lord Voldemort? Really?
Christianity had lazier ways to come up with their demons and nobody noticed. Zing.
Aren't some of those techniques irrational?
Only when used by superstitious people. Once used by rationalists, they become super-rational.
What about black magic? Can I cast hexes?
They aren't going to work because magic is not real.
References
1. Frater U.D. High Magick. A good overview of different kinds of magic.
2. Hine, Phil. Spirit Guides. Another overview.
3. Hine, Phil. Modern Shamanism, pts. 1-2. An overview for shamans.
4. Sagan, Samuel. Awakening the Third Eye.
5. David-Neel, Alexandra. Magic and Mystery in Tibet. A book on Buddhist lore.
6. Crowley, Aleister. Liber O.
7. Bruce, Robert. Mastering Astral Projection.
Engineering Religion
This topic is vague and open-ended. I'm leaving it that way deliberately. Perhaps some interesting, better defined topics will grow out of it. Or perhaps it's too far afield from the concept of less wrong cognition to be of interest here. So I view this topic as exploratory rather than as an attempt to solve a specific problem.
What useful purposes does religion serve? Are any of these purposes non-supernaturalistic in nature? What is success for a religion and what elements of a religion tend to cause it to become successful? How would you design a "rational religion", if such an entity is possible? How and why would a religion with that design become successful and serve a useful purpose? What are the relationships between aspects of a religion, and outcomes involving that religion? For example, Catholicism discourages birth control. Lack of birth control encourages higher birthrates among Catholics. This encourages there to be a larger number of Catholics in the next generation than would otherwise be the case, Surely there are other relationships like this? How do aspects of religion cause them to evolve differently over time?
Playing offense
There is a pattern I have noticed that appears whenever some new interesting idea or technology gains media traction above a certain threshold. Those who are considered opinion-makers (journalists, intellectuals) more often than not write about the new movement/idea/technology in a way that is somewhere between cautious and negative. As this happens, those who adopted these new ideas/technologies become somewhat wary of promoting them and, fearing a loss of status, decide to retreat from their public advocacy.
I was wondering whether, in some circumstances, the right move for those getting the negative attention is not to defend themselves but instead to go on the offense. And one of the most interesting offensive tactics might be to try to reverse the framing of the subject matter and put the burden of argument on the critics, in a way that requires them to seriously reconsider their position:
- Those who are critical of this idea are actually doing something wrong and they are unable to see the magnitude of their mistake
- The fact that they are not adopting this idea/product has a big cost that they are not aware of, and in reality they are the ones making a weird choice
- They have already adopted that idea/position but don't notice it, as it is not framed in a context they understand or find comfortable
The site has crossed a threshold—it is now so widely trafficked that it's fast becoming a routine aid to social interaction, like e-mail and antiperspirant.
As far as a transhumanist is concerned, if you see someone in danger of dying, you should save them; if you can improve someone’s health, you should. There, you’re done. No special cases.
You also don’t ask whether the remedy will involve only “primitive” technologies (like a stretcher to lift the six-year-old off the railroad tracks); or technologies invented less than a hundred years ago (like penicillin) which nonetheless seem ordinary because they were around when you were a kid; or technologies that seem scary and sexy and futuristic (like gene therapy) because they were invented after you turned 18; or technologies that seem absurd and implausible and sacrilegious (like nanotech) because they haven’t been invented yet.
The Winding Path
The First Step
The first step on the path to truth is superstition. We all start there, and should acknowledge that we start there.
Superstition is, contrary to our immediate feelings about the word, the first stage of understanding. Superstition is the attribution of unrelated events to a common (generally unknown or unspecified) cause - it could be called pattern recognition. The "supernatural" component generally included in the definition is superfluous, because supernatural merely refers to that which isn't part of nature - which means reality - which is an elaborate way of saying something whose relationship to nature is not yet understood, or else nonexistent. If we discovered that ghosts are real, and identified an explanation - overlapping entities in a many-worlds universe, say - they'd cease to be supernatural and merely be natural.
Just as the supernatural refers to unexplained or imaginary phenomena, superstition refers to unexplained or imaginary relationships, without the necessity of cause. If you designed an AI in a game which, after five rounds of being killed whenever it went into rooms with green-colored walls, started avoiding rooms with green-colored walls, you've developed a good AI. It is engaging in superstition: it has developed an incorrect understanding of the issue. But it hasn't gone down the wrong path - there is no wrong path in understanding, there is only the mistake of stopping. Superstition, like all belief, is only useful if you're willing to discard it.
The Next Step
Incorrect understanding is the first - and necessary - step to correct understanding. It is, indeed, every step towards correct understanding. Correct understanding is a path, not an achievement, and it is pursued, not by arriving at the correct conclusion in the first place, but by testing your ideas and discarding those which are incorrect.
No matter how intelligent you are, you cannot skip the "incorrect understanding" step of knowledge, because that is every step of knowledge. You must come up with wrong ideas in order to get at the right ones - which will always be one step further. You must test your ideas. And again, the only mistake is stopping - assuming that you have it right now.
Intelligence is never your bottleneck. The ability to think faster isn't necessarily the ability to arrive at the right answer faster, because the right answer requires many wrong ones, and more importantly, identifying which answers are indeed wrong, which is the slow part of the process.
Better answers are arrived at by the process of invalidating wrong answers.
The Winding Path
The process of becoming Less Wrong is the process of being, in the first place, wrong. It is the state of realizing that you're almost certainly incorrect about everything - but working on getting incrementally closer to an unachievable "correct". It is a state of anti-hubris, and requires a delicate balance between the idea that one can be closer to the truth, and the idea that one cannot actually achieve it.
The art of rationality is the art of walking this narrow path. If ever you think you have the truth - discard that hubris, for three steps from here you'll see it for superstition, and if you cannot see that, you cannot progress, and there your search for truth will end. That is the path of the faithful.
But worse, the path is not merely narrow, but winding, with frequent dead ends requiring frequent backtracking. If ever you think you're closer to the truth - discard that hubris, for it may inhibit you from leaving a dead end, and there your search for truth will end. That is the path of the crank.
The path of rationality is winding and directionless. It may head towards beauty, then towards ugliness; towards simplicity, then complexity. The correct direction isn't the aesthetic one; those who head towards beauty may create great art, but do not find truth. Those who head towards simplicity might open new mathematical doors and find great and useful things inside - but they don't find truth, either. Truth is its own path, found only by discarding what is wrong. It passes through simplicity, it passes through ugliness; it passes through complexity, and also beauty. It doesn't belong to any one of these things.
The path of rationality is a path without destination.
Written as an experiment in the aesthetic of Less Wrong. I'd appreciate feedback on the aesthetic interpretation of Less Wrong, rather than on the sense of deep wisdom emanating from the piece (unless the deep wisdom damages the aesthetic).
[Link] A rational response to the Paris attacks and ISIS
Here's my op-ed that uses long-term orientation, probabilistic thinking, numeracy, considering the alternative, reaching our actual goals, avoiding intuitive emotional reactions and attention bias, and other rationality techniques to suggest more rational responses to the Paris attacks and the ISIS threat. It's published in the Sunday edition of The Plain Dealer, a major newspaper (16th in the US). This is part of my broader project, Intentional Insights, of conveying rational thinking, including about politics, to a broad audience to raise the sanity waterline.
[Link] Less Wrong Wiki article with very long summary of Daniel Kahneman's Thinking, Fast and Slow
I've made very extensive notes, along with my assessment, of Daniel Kahneman's Thinking, Fast and Slow, and have passed them around to aspiring rationalist friends who found them very useful. So I thought I would share these with the Less Wrong community by creating a Less Wrong Wiki article with these notes. Feel free to optimize the article based on your own notes as well. Hope this proves as helpful to you as it did to those I shared my notes with.
[Link] Lifehack Article Promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies
Nice to get this list-style article promoting LessWrong, Rationality Dojo, and Rationality: From AI to Zombies, as part of a series of strategies for growing mentally stronger, published on Lifehack, a very popular self-improvement website. It's part of my broader project of promoting rationality and effective altruism to a broad audience, Intentional Insights.
EDIT: To be clear, based on my exchange with gjm below, the article does not promote these heavily and links more to Intentional Insights. I was excited to be able to get links to LessWrong, Rationality Dojo, and Rationality: From AI to Zombies included in the Lifehack article, as previously editors had cut out such links. I pushed back against them this time, and made a case for including them as a way of growing mentally stronger, and thus was able to get them in.
Optimizing Rationality T-shirts
Thanks again for all the feedback on the first set of Rationality slogan t-shirts, which Intentional Insights developed as part of our broader project of promoting rationality to a wide audience. As a reminder, the t-shirts are meant for aspiring rationalists to show their affiliation with rationality, to remind themselves and other aspiring rationalists to improve, and to spread positive memes broadly. All profits go to promoting rationality widely.
For the first set, we went with a clear and minimal style that conveyed the messages clearly and had an institutional affiliation, based on the advice Less Wrongers gave earlier. While some liked and bought these, plenty wanted something more stylish and designed. As an aspiring rationalist, I am glad to update my beliefs. So we are going back to the drawing board, and trying to design something more stylish.
Now, we are facing the limitations of working with a print-on-demand service. We need to go with POD because we can't afford to buy shirts up front and then sell them; it would cost far too much to do so. We decided on CafePress as the most popular and well-known service with the greatest variety of options. It does limit our ability to design things, though.
So for the next step, we got some aspiring rationalist volunteers for Intentional Insights to find a number of t-shirt designs they liked, and we will create t-shirts that use designs of that style, but with rationality slogans. I'd like to poll fellow Less Wrongers for which designs they like most among the ones found by our volunteers. I will list links below associated with numbers, and in comments, please indicate the t-shirt numbers that you liked best, so that we can make those. Also please link to other shirts you like, or make any other comments on t-shirt designs and styles.
1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17
Thanks all for collaborating on optimizing rationality t-shirts!
[Link] Mainstreaming Tell Culture
Mainstreaming Tell Culture and other rational relationship strategies in this listicle for Lifehack, a very popular self-improvement website, as part of my broader project, Intentional Insights, of promoting rationality and science-based thinking to a broad audience. What are your thoughts about this piece?
Proposal for increasing instrumental rationality value of the LessWrong community
There have been some concerns here (http://lesswrong.com/lw/2po/selfimprovement_or_shiny_distraction_why_less/) regarding the value of the LessWrong community from the perspective of instrumental rationality.
In the discussion of that topic, I've seen a story about how a community can help from this perspective: http://lesswrong.com/lw/2p5/humans_are_not_automatically_strategic/2l73
I think it's a great thing that a local community can help people in various ways to achieve their goals, and it's not the first time I've heard about this kind of community being helpful for achieving personal goals.
Local LessWrong meetups and communities are great, but they have a somewhat different focus. And a lot of people live in places where there is no local community, or where it's not active or regular.
So I propose to form small groups (4-8 people). Initially, each group would meet (using whatever means are convenient for the particular group) and discuss the goals of each participant in the long and short term (life/year/month/etc.). They would collectively analyze proposed strategies for achieving these goals, discuss how short-term goals align with long-term goals, and determine whether the particular tactics for achieving a stated goal are optimal, and whether there is any way to improve on them.
Afterwards, the group would meet weekly to:
Set their short-term goals and retrospect on the goals set for the previous period: discuss how successfully they were achieved, what problems people encountered, and what alterations to the overall strategy follow. They would also analyze how newly set short-term goals fit with long-term goals.
In this way, each member of the group would receive helpful feedback on their goals and on their approach to attaining them. They would also feel accountable, in a way, for the goals they have stated before the group, and this could be an additional boost to productivity.
I also expect that the group would be helpful for overcoming different kinds of fallacies and gaining more accurate beliefs about the world, because it's easier for people to spot errors in the beliefs and judgments of others. I hope the groups would be able to develop a friendly environment, so it would be easier for people to learn about their errors and change their minds. Truth springs from argument amongst friends.
The group would reflect on its effectiveness and procedures every month(?) and incrementally improve itself. Obviously, if somebody has a great idea about group proceedings, it makes sense to discuss it after a usual meeting and implement it right away. But I think a regular in-depth retrospective on the group's internal workings is also important.
If several groups form, they would be able to share the insights each group has learned during its operation. (I'm not sure how many such insights would be generated, but maybe it would make sense to publish a post once in a while summing up the groups' collective insights.)
There are some things that I'm not sure about:
- I think it would be worth discussing the possibility of shuffling group members (or at least exchanging members in some manner) once in a while, to provide fresh insight into the goals and problems people are facing and to make the flow of ideas between groups more agile.
- How should the groups be initially formed? Just random assignment, or is it reasonable to devise some criteria (goal alignment/diversity/geography/etc.)?
I think each group's initial rules of procedure should be developed by the group itself, though I guess it's reasonable to discuss some general recommendations.
So what do you think?
If you're interested, fill out this Google form:
https://docs.google.com/forms/d/1IsUQTp_6pGyNglBiPOGDuwdGTBOolAKfAfRrQloYN_o/viewform?usp=send_form
[Link] Rationality and Willpower in LifeHack
Diversifying the genres for spreading rationality as part of my broader project: I published a listicle in Lifehack, a major online publication. I included a link to Less Wrong in it, and am guessing this is the first time there's been a link to LessWrong in a listicle. What do you think about the article and this genre of spreading rationality?
"How To Become Less Wrong" - Feedback on Article Request
Would appreciate feedback on this article I plan to submit to a broad media publication as part of my broader project of promoting rationality and raising the sanity waterline. Can't make it much longer as I'm at word limit, so if you suggest adding something, also suggest taking something away. The article is below the black line and thanks for any feedback!
____________________________________________________________________________________________________________
Article - How I Became Less Wrong
On a sunny day in early August, my wife Agnes Vishnevkin and I came to a Rationality Dojo in Columbus, OH. Run by Max Harms, this group is devoted to growing mentally stronger through mental fitness practices. That day, the dojo’s activities focused on probabilistic thinking, a practice of assigning probabilities to our intuitive predictions about the world to improve our ability to evaluate reality accurately and make wise decisions to reach our goals. After learning the principles of probabilistic thinking, we discussed how to apply this strategy to everyday life.
We were so grateful for this practice in early September, when my wife and I started shopping for our new house. We discussed in advance the specific goals we had for the house, enabling us to save a lot of time by narrowing our options. We then spent one day visiting a number of places we liked, rating each aspect of the house important to us on a numerical scale. After visiting all these places, we sat down and discussed the probabilities that each house would best meet our goals. The math made it much easier to overcome our individual aesthetic preferences and focus on what would make us happiest in the long run. We settled on our top choice, made a bid, and signed our contract.
This sounds like a dry and not very exciting process. Well, we were very excited!
Why? Because we were confident that we made the best decision with the information available to us. The decision to get a new house is one of the biggest financial decisions we will make in our lifetime. It felt great to know that we could not have done any better than we did through applying the principles of probabilistic thinking and other rationality-informed strategies. Of course, we could still be wrong, there are no guarantees in life. Yet we know we did the best we could - we grew less wrong.
These strategies are vital for improving our thinking because our brains are inherently irrational. Research in psychology, cognitive neuroscience, behavioral economics, and other fields from the middle of the twentieth century has discovered hundreds of thinking errors, called cognitive biases. These thinking errors cause us to make flawed decisions – in finances, relationships, health and well-being, politics, etc.
Recently, popular books by Daniel Kahneman, Dan Ariely, Chip and Dan Heath, and other scholars have brought these problems from the halls of academia to the attention of the broad public. However, these books have not focused on how we can address these problems in everyday life.
So far, the main genre dedicated to popularizing strategies to improve our patterns of thinking, feeling, and behavior has been self-improvement. Unfortunately, self-improvement is rarely informed by science, and instead relies on personal experience and inspiring stories. While such self-improvement activities certainly help many, it is hard to tell whether the impact comes from the actual effectiveness of the specific activities or from a placebo effect due to people being inspired to work on improving themselves.
The lack of scientific popularization of strategies for dealing with thinking errors resulted in large part from the fact that early scholarly efforts to address thinking errors at the individual level did not lead to lasting improvement. Consequently, the bulk of scholarship and subsequent efforts to address these problems focused on organizations and government policy creating nudges and incentives to get people to “do the right thing.” A recent example is Barack Obama issuing an Executive Order for the federal government to use behavioral science insights in all aspects of its work.
However, research in the last decade from Keith Stanovich, Hal Arkes, and others has revealed that we can fix our thinking, sometimes with a single training. For example, my own research and writing show how people can learn to reach their long-term goals and find their life meaning and purpose using science-based strategies. This scientific approach does not guarantee the right decision, but it is the best method we currently have, and it will improve in the future with more research.
This science is mostly trapped in academic books and articles. I teach on this topic to my college students, and they find it enriching: as one student stated, the class "helped me to see some of the problems I may be employing in my thinking about life and other people." Yet most people do not have university library access, and even if they did, would not be interested in making their way through dense academic writing.
Yet a budding movement called Rationality has been going through the complex academic materials and adapting them to everyday life, as exemplified by Rationality Dojo. This small movement has relatively few public outlets. The website LessWrong is dedicated to high-level discussions of strategies to improve thinking patterns and ClearerThinking offers some online courses on improving decision making. The Center for Applied Rationality offers intense in-person workshops for entrepreneurs and founders. Effective Altruism brings insights from rationality to philanthropy. Intentional Insights is a new nonprofit devoted to popularizing rationality-informed strategies to a broad public through blogs, videos, books, apps, and in-person workshops.
Right now, scholars such as myself are testing the strategies developed by Rationality. My probabilistic estimate is that these studies will show that this science-based form of self-improvement is more effective than self-improvement based on personal experience.
In the meantime, I encourage you to consider science-based strategies adapted to everyday life such as probabilistic thinking. You do not have to be nudged by policy makers and CEOs. Instead, you can be intentional and use rationality to make the best decisions for your own goals!
EDIT: Edited based on comments by Lumifer, NancyLebovitz, Romashka, ChristianKl, Vaniver, RichardKennaway
[Link] Less Wrong and Agency in the Huffington Post
I'm guessing this is the first time that links to Less Wrong appeared in a Huffington Post article, and three links at that! Please correct me if I'm wrong on this one. I'm also guessing this is the first time that the concept of agency in the rationalist sense was discussed and promoted on the Huffington Post. Another gain for raising the sanity waterline as part of my broader project to promote rationality to the masses! If you want to help, email me at gleb@intentionalinsights.org
The trouble with Bayes (draft)
Prerequisites
This post requires some knowledge of Bayesian and Frequentist statistics, as well as probability. It is intended to explain one of the more advanced concepts in statistical theory--Bayesian non-consistency--to non-statisticians, and although the level required is much lower than what's needed to read some of the original papers on the topic [1], considerable background is still assumed.
The Bayesian dream
Bayesian methods are enjoying a well-deserved growth of popularity in the sciences. However, most practitioners of Bayesian inference, including most statisticians, see it as a practical tool. Bayesian inference has many desirable properties for a data analysis procedure: it allows for intuitive treatment of complex statistical models, which include models with non-iid data, random effects, high-dimensional regularization, covariance estimation, outliers, and missing data. Problems which have been the subject of Ph. D. theses and entire careers in the Frequentist school, such as mixture models and the many-armed bandit problem, can be satisfactorily handled by introductory-level Bayesian statistics.
A more extreme point of view, the flavor of subjective Bayes best exemplified by Jaynes' famous book [2], and also by a sizable contingent of philosophers of science, elevates Bayesian reasoning to the methodology for probabilistic reasoning, in every domain, for every problem. One merely needs to encode one's beliefs as a prior distribution, and Bayesian inference will yield the optimal decision or inference.
To a philosophical Bayesian, the epistemological grounding of most statistics (including "pragmatic Bayes") is abysmal. The practice of data analysis is either dictated by arbitrary tradition and protocol on the one hand, or consists of users creatively employing a diverse "toolbox" of methods justified by a diverse mixture of incompatible theoretical principles like the minimax principle, invariance, asymptotics, maximum likelihood or *gasp* "Bayesian optimality." The result: a million possible methods exist for any given problem, and a million interpretations exist for any data set, all depending on how one frames the problem. Given one million different interpretations for the data, which one should *you* believe?
Why the ambiguity? Take the textbook problem of determining whether a coin is fair or weighted, based on the data obtained from, say, flipping it 10 times. Keep in mind, a principled approach to statistics decides the rule for decision-making before you see the data. So, what rule would you use for your decision? One rule is, "declare it's weighted if either 10/10 flips are heads or 0/10 flips are heads." Another rule is, "always declare it to be weighted." Or, "always declare it to be fair." All in all, there are 11 possible outcomes (supposing we only care about the total number of heads, which can be 0 through 10) and therefore 2^11 possible decision rules. We can probably rule out most of them as nonsensical, like "declare it to be weighted if 5/10 are heads, and fair otherwise," since 5/10 seems like the fairest outcome possible. But among the remaining possibilities, there is no obvious way to choose the "best" rule. After all, the performance of the rule, defined as the probability you will make the correct conclusion from the data, depends on the unknown state of the world, i.e. the true probability of flipping heads for that particular coin.
The Bayesian approach "cuts" the Gordian knot of choosing the best rule by assuming a prior distribution over the unknown state of the world. Under this prior distribution, one can compute the average performance of any decision rule, and choose the best one. For example, suppose your prior is that with probability 99.9999%, the coin is fair. Then the best decision rule would be "always declare it to be fair!"
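To make this concrete, here is a minimal sketch (my own illustration, not from any referenced paper) of the Bayesian "cut": enumerate all 2^11 decision rules for the 10-flip problem and pick the one with the best prior-averaged accuracy. The prior used here--fair with probability 0.999, else a heads-probability of 0.9--is an assumption made purely for this example.

```python
# Enumerate every decision rule for the 10-flip coin problem and pick the
# one with the best accuracy averaged over a (hypothetical) prior.
from itertools import product
from scipy.stats import binom

P_FAIR = 0.999           # prior probability that the coin is fair (assumed)
P_HEADS_WEIGHTED = 0.9   # heads-probability of a weighted coin (assumed)

pmf_fair = [binom.pmf(k, 10, 0.5) for k in range(11)]
pmf_weighted = [binom.pmf(k, 10, P_HEADS_WEIGHTED) for k in range(11)]

best_rule, best_acc = None, -1.0
# A rule is the set of head-counts k on which we declare "weighted".
for rule in product([False, True], repeat=11):
    acc = sum(P_FAIR * pmf_fair[k] * (not rule[k])
              + (1 - P_FAIR) * pmf_weighted[k] * rule[k]
              for k in range(11))
    if acc > best_acc:
        best_rule, best_acc = rule, acc

print("declare weighted on totals:", [k for k in range(11) if best_rule[k]])
print("prior-averaged accuracy: %.6f" % best_acc)
```

With a prior this lopsided, the search returns the empty set--"always declare it to be fair"--matching the intuition above.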
The Bayesian approach gives you the optimal decision rule for the problem, as soon as you come up with a model for the data and a prior for your model. But when you are looking at data analysis problems in the real world (as opposed to a probability textbook), the choice of model is rarely unambiguous. Hence, for me, the standard Bayesian approach does not go far enough--if there are a million models you could choose from, you still get a million different conclusions as a Bayesian.
Hence, one could argue that a "pragmatic" Bayesian who thinks up a new model for every problem is just as epistemologically suspect as any Frequentist. Only in the strongest form of subjective Bayesianism can one escape this ambiguity. The subjective Bayesian dream is to start out in life with a single model. A single prior. For the entire world. This "world prior" would encode the entirety of one's own life experience and the grand total of human knowledge. Surely, writing out this prior is impossible. But the point is that a true Bayesian must behave (at least approximately) as if they were driven by such a universal prior. In principle, having such a universal prior (at least conceptually) solves the problem of choosing models and priors for problems: the priors and models you choose for particular problems are determined by the posterior of your universal prior. For example, why did you decide on a linear model for your economics data? Because according to your universal posterior, your particular economic data is well-described by such a model with high probability.
The main practical consequence of the universal prior is that your inferences in one problem should be consistent with your inferences in another, related problem. Even if the subjective Bayesian never writes out a "grand model", their integrated approach to data analysis for related problems still distinguishes their approach from the piecemeal approach of frequentists, who tend to treat each data analysis problem as if it occurs in an isolated universe. (So I claim, though I cannot point to any real example of such a subjective Bayesian.)
Yet, even if the subjective Bayesian ideal could be realized, many philosophers of science (e.g. Deborah Mayo) would consider it just as ambiguous as non-Bayesian approaches, since even with an unambiguous procedure for forming personal priors, your priors are still going to differ from mine. I don't consider this a defect, since my worldview necessarily does differ from yours. My ultimate goal is to make the best decision for myself. That said, such egocentrism, even if rationally motivated, may indeed be poorly suited for a collaborative enterprise like science.
For me, a far more troublesome objection to the "Bayesian dream" is the question, "How would you actually go about constructing this prior that represents all of your beliefs?" Looking in the Bayesian literature, one does not find any convincing examples of a user of Bayesian inference managing to actually encode all (or even a tiny portion) of their beliefs in the form of a prior--in fact, for the most part, we see alarmingly little thought or justification being put into the construction of priors.
Nevertheless, I myself remained one of these "hardcore Bayesians," at least from a philosophical point of view, ever since I started learning about statistics. My faith in the "Bayesian dream" persisted even after spending three years in the Ph.D. program at Stanford (a department with a heavy bias towards Frequentism) and even after I personally started doing research on frequentist methods. (I see frequentist inference as a poor man's approximation to ideal Bayesian inference.) Though I was aware of the Bayesian non-consistency results, I largely dismissed them as mathematical pathologies. And while we were still a long way from achieving universal inference, I held the optimistic view that improved technology and theory might one day finally make the "Bayesian dream" achievable. However, I could not find a way to ignore one particular example on Wasserman's blog [3], due to its relevance to very practical problems in causal inference. Eventually I thought of an even simpler counterexample, which devastated my faith in the possibility of constructing a universal prior. Perhaps a fellow Bayesian can find a solution to this quagmire, but I am not holding my breath.
The root of the problem is the extreme degree of ignorance we have about our world, the degree of surprisingness of many true scientific discoveries, and the relative ease with which we accept these surprises. If we consider this behavior rational (which I do), then the subjective Bayesian is obligated to construct a prior which captures this behavior. Yet, the diversity of possible surprises the model must be able to accommodate makes it practically impossible (if not mathematically impossible) to construct such a prior. The alternative is to reject all possibility of surprise, and refuse to update any faster than a universal prior would (extremely slowly), which strikes me as a rather poor epistemological policy.
In the rest of the post, I'll motivate my example, sketch out a few mathematical details (explaining them as best I can to a general audience), then discuss the implications.
Introduction: Cancer classification
Biology and medicine are currently adapting to the wealth of information we can obtain by using high-throughput assays: technologies which can rapidly read the DNA of an individual, measure the concentration of messenger RNA, metabolites, and proteins. In the early days of this "large-scale" approach to biology which began with the Human Genome Project, some optimists had hoped that such an unprecedented torrent of raw data would allow scientists to quickly "crack the genetic code." By now, any such optimism has been washed away by the overwhelming complexity and uncertainty of human biology--a complexity which has been made clearer than ever by the flood of data--and replaced with a sober appreciation that in the new "big data" paradigm, making a discovery becomes a much easier task than understanding any of those discoveries.
Enter the application of machine learning to this large-scale biological data. Scientists take these massive datasets containing patient outcomes, demographic characteristics, and high-dimensional genetic, neurological, and metabolic data, and analyze them using algorithms like support vector machines, logistic regression and decision trees to learn predictive models to relate key biological variables, "biomarkers", to outcomes of interest.
To give a specific example, take a look at this abstract from the Shipp et al. paper on predicting survival for cancer patients [4]:
Diffuse large B-cell lymphoma (DLBCL), the most common lymphoid malignancy in adults, is curable in less than 50% of patients. Prognostic models based on pre-treatment characteristics, such as the International Prognostic Index (IPI), are currently used to predict outcome in DLBCL. However, clinical outcome models identify neither the molecular basis of clinical heterogeneity, nor specific therapeutic targets. We analyzed the expression of 6,817 genes in diagnostic tumor specimens from DLBCL patients who received cyclophosphamide, adriamycin, vincristine and prednisone (CHOP)-based chemotherapy, and applied a supervised learning prediction method to identify cured versus fatal or refractory disease. The algorithm classified two categories of patients with very different five-year overall survival rates (70% versus 12%). The model also effectively delineated patients within specific IPI risk categories who were likely to be cured or to die of their disease. Genes implicated in DLBCL outcome included some that regulate responses to B-cell−receptor signaling, critical serine/threonine phosphorylation pathways and apoptosis. Our data indicate that supervised learning classification techniques can predict outcome in DLBCL and identify rational targets for intervention.
The term "supervised learning" refers to any algorithm for learning a predictive model for predicting some outcome Y(could be either categorical or numeric) from covariates or features X. In this particular paper, the authors used a relatively simple linear model (which they called "weighted voting") for prediction.
A linear model is fairly easy to interpret: it produces a single "score variable" via a weighted average of a number of predictor variables. Then it predicts the outcome (say "survival" or "no survival") based on a rule like, "Predict survival if the score is larger than 0." Yet, far more advanced machine learning models have been developed, including "deep neural networks" which are winning all of the image recognition and machine translation competitions at the moment. These "deep neural networks" are especially notorious for being difficult to interpret. Along with similarly complicated models, neural networks are often called "black box models": although you can get miraculously accurate answers out of the "box", peering inside won't give you much of a clue as to how it actually works.
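In contrast, the simple score-based model described above is easy to sketch in a few lines of code; the weights and features below are made up purely for illustration:

```python
# A score-based linear classifier of the kind described above: a weighted
# sum of features, thresholded at zero. All numbers are hypothetical.
import numpy as np

weights = np.array([0.8, -1.2, 0.5])   # per-gene weights (made up)
patient = np.array([1.1, 0.3, 2.0])    # one patient's expression features (made up)

score = weights @ patient               # the "weighted voting" score
print("survival" if score > 0 else "no survival")
```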
Now it is time for the first thought experiment. Suppose a follow-up paper to the Shipp paper reports dramatically improved prediction for survival outcomes of lymphoma patients. The authors of this follow-up paper trained their model on a "training sample" of 500 patients, then used it to predict the five-year outcome of chemotherapy patients, on a "test sample" of 1000 patients. It correctly predicts the outcome ("survival" vs "no survival") on 990 of the 1000 patients.
Question 1: what is your opinion on the predictive accuracy of this model on the population of chemotherapy patients? Suppose that publication bias is not an issue (the authors of this paper designed the study in advance and committed to publishing) and suppose that the test sample of 1000 patients is "representative" of the entire population of chemotherapy patients.
Question 2: does your judgment depend on the complexity of the model they used? What if the authors used an extremely complex and counterintuitive model, and cannot even offer any justification or explanation for why it works? (Nevertheless, their peers have independently confirmed the predictive accuracy of the model.)
A Frequentist approach
The Frequentist answer to the thought experiment is as follows. The accuracy of the model is a probability p which we wish to estimate. The number of successes on the 1000 test patients is Binomial(1000, p). Based on the data, one can construct a confidence interval: say, we are 99% confident that the accuracy is above 83%. What does 99% confident mean? I won't try to explain, but simply say that in this particular situation, "I'm pretty sure" that the accuracy of the model is above 83%.
A Bayesian approach
The Bayesian interjects, "Hah! You can't explain what your confidence interval actually means!" He puts a uniform prior on the probability p. The posterior distribution of p, conditional on the data, is Beta(991, 11). This gives a 99% credible interval for p of [0.978, 0.995]. You can actually interpret this interval in probabilistic terms, and it is much tighter as well. Seems like a Bayesian victory...?
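(If you want to check this yourself, a minimal sketch: a uniform Beta(1, 1) prior updated on 990 successes and 10 failures gives the Beta(991, 11) posterior, and its central 99% credible interval can be read off numerically.)

```python
# Verify the Beta(991, 11) posterior and its 99% credible interval.
from scipy.stats import beta

posterior = beta(1 + 990, 1 + 10)   # uniform prior + 990 successes, 10 failures
lo, hi = posterior.interval(0.99)   # central 99% credible interval
print("[%.3f, %.3f]" % (lo, hi))    # approximately the interval quoted above
```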
A subjective Bayesian approach
As I have argued before, a Bayesian approach which comes up with a model after hearing about the problem is bound to suffer from the same inconsistency and arbitrariness as any non-Bayesian approach. You might assume a uniform distribution for p in this problem... but then another paper comes along with a similar prediction model? You would need a joint distribution for the current model and the new model. What if a theory comes along that could help explain the success of the current method? The parameter p might take on a new meaning in that context.
So as a subjective Bayesian, I argue that slapping a uniform prior on the accuracy is the wrong approach. But I'll stop short of actually constructing a Bayesian model of the entire world: let's say we restrict our attention to this particular issue of cancer prediction. We want to model the dynamics behind cancer and cancer treatment in humans. Needless to say, the model is still ridiculously complicated. However, I don't think it's out of reach for a well-funded, large collaborative effort of scientists.
Roughly speaking, the model can be divided into a distribution over theories of human biology and, conditional on the theory of biology, a coarse-grained model of an individual patient. The model would not include every cell, every molecule, etc., but it would contain many latent variables in addition to the variables measured in any particular cancer study. Let's call the variables actually measured in the study X, and the survival outcome Y.
Now here is the epistemologically correct way to answer the thought experiment. Take a look at the X's and Y's of the patients in the training and test sets. Update your probabilistic model of human biology based on the data. Then take a look at the actual form of the classifier: it's a function f() mapping X's to Y's. The accuracy of the classifier is no longer a parameter: it's a quantity Pr[f(X) = Y] which has a distribution under your posterior. That is, for any given "theory of human biology", Pr[f(X) = Y] has a fixed value: now, over the distribution of possible theories of human biology (based on the data of the current study as well as all previous studies and your own beliefs), Pr[f(X) = Y] has a distribution, and therefore an average. But what will this posterior give you? Will you get something similar to the interval [0.978, 0.995] you got from the "pragmatic Bayes" approach?
Who knows? But I would guess in all likelihood not. My guess is that you would get a very different interval from [0.978, 0.995], because in this complex model there is no direct link between the empirical success rate of prediction and the quantity Pr[f(X) = Y]. But my intuition for this comes from the following simpler framework.
A non-parametric Bayesian approach
Instead of reasoning about a grand Bayesian model of biology, I now take a middle ground, and suggest that while we don't need to capture the entire latent dynamics of cancer, we should at the very least try to include the X's and the Y's in the model, instead of merely abstracting the whole experiment as a Binomial trial (as did the frequentist and the pragmatic Bayesian). Hence we need a prior over joint distributions of (X, Y). And yes, I do mean a prior distribution over probability distributions: we are saying that (X, Y) has some unknown joint distribution, which we treat as being drawn at random from a large collection of distributions. This is therefore a non-parametric Bayes approach: the term non-parametric means that the number of parameters in the model is not finite.
Since in this case Y is a binary outcome, a joint distribution can be decomposed into a marginal distribution over X and a function g(x) giving the conditional probability that Y=1 given X=x. The marginal distribution is not so interesting or important for us, since it simply reflects the composition of the population of patients. For the purpose of this example, let us say that the marginal is known (e.g., a finite distribution over the population of US cancer patients). What we want to know is the probability of patient survival, and this is given by the function g(X) for the particular patient's X. Hence, we will mainly deal with constructing a prior over g(x).
To construct a prior, we need to think of intuitive properties of the survival probability function g(x). If x is similar to x', then we expect the survival probabilities to be similar. Hence the prior on g(x) should be over random, smooth functions. But we need to choose the smoothness so that the prior does not consist of almost-constant functions. Suppose for now that we choose a particular class of smooth functions (e.g. functions with a certain Lipschitz norm) and choose our prior to be uniform over functions of that smoothness. We could go further and put a prior on the smoothness hyperparameter, but for now we won't.
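As a rough numerical illustration of this trade-off (my own sketch, using a Gaussian-process prior as a convenient stand-in for "uniform over smooth functions," with draws squashed through a logistic link to keep them in (0, 1)):

```python
# Prior draws of g(x) at two smoothness levels, and the induced prior
# distribution of E[g(X)] under a uniform X on a grid.
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 200)                 # grid standing in for the X space

def gp_draws(length_scale, n_draws=2000):
    # RBF-kernel Gaussian-process draws, squashed to (0, 1)
    K = np.exp(-0.5 * (x[:, None] - x[None, :]) ** 2 / length_scale ** 2)
    K += 1e-8 * np.eye(len(x))             # jitter for numerical stability
    f = rng.multivariate_normal(np.zeros(len(x)), K, size=n_draws)
    return 1 / (1 + np.exp(-f))            # g(x) = logistic(f(x))

for ls in (0.01, 1.0):                      # rough prior vs. very smooth prior
    means = gp_draws(ls).mean(axis=1)       # E[g(X)] for each prior draw
    print("length scale %.2f: sd of E[g(X)] = %.3f" % (ls, means.std()))
```

With the rough prior, the prior distribution of E[g(X)] is already glued near 0.5; with the very smooth prior, the draws of g are nearly constant and E[g(X)] spreads out. This is exactly the dilemma described below.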
Now, although I assert my faithfulness to the Bayesian ideal, I still want to think about how whatever prior we choose would allow us to answer some simple thought experiments. Why is that? I hold that ideal Bayesian inference should capture and refine what I take to be "rational behavior." Hence, if a prior produces irrational outcomes, I reject that prior as not reflecting my beliefs.
Take the following thought experiment: we simply want to estimate the expected value of Y, E[Y]. Hence, we draw 100 patients independently with replacement from the population and record their outcomes: suppose the sum is 80 out of 100. The Frequentist (and the pragmatic Bayesian) would end up concluding that with high probability/confidence/whatever, the expected value of Y is around 0.8, and I hold that an ideal rationalist would come up with a similar belief. But what would our non-parametric model say? We draw a random function g(x) conditional on our particular observations; we get a quantity E[g(X)] for each instantiation of g(x); and the distribution of E[g(X)] over the posterior allows us to make credible intervals for E[Y].
But what do we end up getting? One of two things happens. Either you choose too little smoothness, and E[g(X)] ends up concentrating around 0.5, no matter what data you put into the model. This is the phenomenon of Bayesian non-consistency, and a detailed explanation can be found in several of the listed references; but to put it briefly, sampling at a few isolated points gives you too little information about the rest of the function. This example is not as pathological as the ones used in the literature: if you sample infinitely many points, you will eventually get the posterior to concentrate around the true value of E[Y], but all the same, the convergence is ridiculously slow.
Alternatively, use a super-high smoothness, and the posterior of E[g(X)] has a nice interval around the sample value, just like in the Binomial example. But now if you look at your posterior draws of g(x), you'll notice the functions are basically constants. Putting a prior on smoothness doesn't change things: the posterior on smoothness doesn't change, since you don't actually have enough data to determine the smoothness of the function. The posterior average of E[g(X)] is no longer always 0.5: it gets a little bit affected by the data, since within the 10% mass of the posterior corresponding to the smooth prior, the average of E[g(X)] responds to the data. But you are still almost as slow as before in converging to the truth.
At the time I started thinking about the above "uniform sampling" example, I was still convinced of a Bayesian resolution. Obviously, using a uniform prior over smooth functions is too naive: you can tell by seeing that the prior distribution of E[g(X)] is already highly concentrated around 0.5. How about a hierarchical model, where first we draw a parameter p from the uniform distribution, and then draw g(x) from the uniform distribution over smooth functions with mean value equal to p? This gets you non-constant g(x) in the posterior, while your posteriors of E[g(X)] converge to the truth as quickly as in the Binomial example. Arguing backwards, I would say that such a prior comes closer to capturing my beliefs.
But then I thought, what about problems more complicated than computing E[Y]? What if you have to compute the expectation of Y conditional on some complicated function of X taking on a certain value, i.e. E[Y|f(X) = 1]? In the frequentist world, you can easily estimate E[Y|f(X)=1] by rejection sampling: get a sample of individuals, and average the Y's of the individuals whose X's satisfy f(X) = 1. But how could you formulate a prior that has the same property? For a finite collection of functions {f1,...,f100}, say, you might be able to construct a prior for g(x) so that the posterior for E[g(X)|fi(X) = 1] converges to the truth for every i in {1,...,100}. I don't know how to do so, but perhaps you do. But the frequentist intervals work for every function f! Can you construct a prior which can do the same?
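(The frequentist computation referred to here really is trivial; a minimal sketch, with a made-up f and synthetic data:)

```python
# Rejection sampling for E[Y | f(X) = 1]: keep only individuals whose X
# satisfies f(X) = 1 and average their Y's. Data and f are synthetic.
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(100_000, 3))                          # covariates
Y = rng.random(100_000) < 0.5 + 0.1 * np.tanh(X[:, 0])     # binary outcomes

f = lambda x: x[:, 0] > 1.0     # an arbitrary selection function (made up)
keep = f(X)
print("estimate of E[Y | f(X)=1]:", Y[keep].mean())
```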
I am happy to argue that a true Bayesian would not need consistency for every possible f in the mathematical universe. It is cool that frequentist inference works for such a general collection, but it may well be unnecessary for the world we live in. In other words, there may be functions f which are so ridiculous that even if you showed me empirically that E[Y|f(X)=1] = 0.9, based on data from 1 million patients, I would not believe that E[Y|f(X)=1] was close to 0.9. It is a counterintuitive conclusion, but one that I am prepared to accept.
Yet the set of f's which are not so ridiculous--which in fact I might accept as reasonable based on conventional science--may be so large as to render impossible the construction of a prior which could accommodate them all. And the Bayesian dream makes the far stronger demand that our prior capture not just our current understanding of science but match the flexibility of rational thought. I hold that, given the appropriate evidence, rationalists can be persuaded to accept truths which they could not even imagine beforehand. Thinking about how we could possibly construct a prior to mimic this behavior, the Bayesian dream seems distant indeed.
Discussion
To be updated later... perhaps responding to some of your comments!
[1] Diaconis and Freedman, "On the Consistency of Bayes Estimates"
[2] E. T. Jaynes, Probability Theory: The Logic of Science
[3] https://normaldeviate.wordpress.com/2012/08/28/robins-and-wasserman-respond-to-a-nobel-prize-winner/
[4] Shipp et al., "Diffuse large B-cell lymphoma outcome prediction by gene-expression profiling and supervised machine learning," Nature Medicine
[Link] arguman.org, an argument analysis platform
I recently found out about arguman. It's an online tool for dissecting arguments and structuring agreement and refutation.
It seems like something that has been discussed on LW a few times in the past.
[Link] Rationality and Mental Illness in the Huffington Post
Just published an article in The Huffington Post about using rationality-informed strategies to manage my mental illness. Hope this helps people think more rationally about this topic.
Emotional tools for the beginner rationalist
Something that I haven't seen really discussed is what kind of emotional tools would be good for beginner rationalists. I'm especially interested in this topic since as part of my broader project of spreading rationality to a wide audience and thus raising the sanity waterline, I come across a lot of people who are interested in becoming more rational, but have difficulty facing the challenges of the Valley of Bad Rationality. In other words, they have trouble acknowledging their own biases and faults, facing the illusions within their moral systems and values, letting go of cached patterns, updating their beliefs, etc. Many thus abandon their aspiration toward rationality before they get very far. I think this is a systematic failure mode of many beginner aspiring rationalists, and so I wanted to start a discussion about what we can do about it as a community.
Note that this emotional danger does not feel intuitive to me, or likely to many of you. In a Facebook discussion with Viliam Bur, he pointed out that he did not experience the Valley. I personally did not experience it much either. However, based on the evidence from the Intentional Insights outreach efforts, assuming others won't experience it is a typical mind fallacy: many, though far from all, aspiring rationalists do. So we should make an effort to address it in order to raise the sanity waterline effectively.
I'll start by sharing what I found effective in my own outreach efforts. First, I found it helpful to frame the aspiration toward rationality not as a search for a perfect and unreachable ideal, but as a way of constant improvement from the baseline where all humans are to something better. I highlight the benefits people get from this improved mode of thinking, to prime people to focus on their current self and detach themselves from their past selves. I highlight the value of self-empathy and self-forgiveness toward oneself for holding mistaken views, and encourage people to think of themselves as becoming more right, rather than less wrong :-)
Another thing that I found helpful was to provide new aspiring rationalists with a sense of community and social belonging. Joining a community of aspiring rationalists who are sensitive toward a newcomer's emotions, and who help that newcomer deal with the challenges they experience, is invaluable for overcoming the emotional strains of the Valley. Especially useful is having people who are trained in coaching or counseling serve as mentors for new members, guiding their intellectual and emotional development alike. I'd suggest that every LW meetup group consider instituting a system of mentors who can provide emotional and intellectual support for new members.
Now I'd like to hear about your experiences traveling the Valley, and what tools you and others you know used to manage it. Also, what are your ideas about useful tools for that purpose in general? Look forward to hearing your thoughts!
Ideas for rationality slogans?
As part of my broader project of promoting rationality widely, I'm going to work on making rationality-themed merchandise with slogans. I'd appreciate any ideas for slogans that are short (5 words or less), engaging, accessible, and appealing both to aspiring rationalists and to smart youth and young adults who are just starting to learn about rationality. As an example, slogans like "Growing Mentally Stronger" or "Updating My Beliefs" are good, but "Tsuyoku Naritai!" is not, however much I personally like that slogan.
[Link] Using mindkillers to promote rationality
As part of my broader project of promoting rationality to a wide audience, I published an article in Salon entitled "Get Donald Trump out of my brain: The neuroscience that explains why he’s running away with the GOP." I'd welcome your thoughts on the article itself, and also meta-comments on the strategy of using mindkillers such as politics to raise the sanity waterline by smuggling rationality memes into such popular and populist venues.
Feedback on popularizing rationality-informed strategies for making major financial decisions
As part of my broader project of popularizing rationality and raising the sanity waterline, I'm writing a blog post about how to make a major financial decision more rationally. The audience we're targeting is educated people into self-improvement, so the post, like all of our other content, is couched in that language and style. Any feedback on how to make the post clearer and more emotionally evocative, and thus better suited to spread rationality among a broad audience, would be helpful, as would specific comments on the methodology described. The draft itself is below the solid line. Thanks!
P.S. The blog was inspired by this earlier LW discussion post.
___________________________________________________________________________________________________________
Avoid Emotional Traps for Your Happiness!
That backyard was simply gorgeous. Entering it was like going into a magic grove. Lush and shady trees spread their branches around you and protect you from the summer’s heat. Oh, and how beautiful the leaves would get in the fall. Can you imagine all the range of colors that would emerge – different shades of red, yellow, and orange?
The image of that backyard was my single most vivid experience looking for a new house after my wife and I decided to move. It was the strongest impression left after our day of intense house shopping when we were looking at the finalists on our list. I imagined myself lounging in a hammock in the shade of the trees all day, experiencing the calm of a majestic forest, except in the middle of a city. Yet unlike a forest or a public park, it was private, and could be all ours! Exhausted and excited at the end of that long day, my wife and I discussed our top choices, and the backyard was the clincher for both of us. We told our realtor to put in a bid on that house, and couldn’t wait to move in. Little did we know, the backyard was a trap!
Ok, so that might have been a bit overly dramatic. We weren’t going to be swarmed by the Empire’s starfighters in that house. However, it was indeed a trap for our decision-making processes.
Why is that? Here’s an example of a similar trap; see if you can spot it.
Doesn’t that Toyota FJ Cruiser look great going into the rugged peaks of the San Juan mountains? Yeah, it’s perfect there. Indeed, Toyota promoted it as the ideal car for that purpose. So if you live in the mountains and drive only there, it’s the car for you!
But let’s be honest. The vast majority of customers do not live in the mountains; they spend the large majority of their time driving in the city or on highways, and the car had a number of problems for everyday driving. Toyota’s marketing appealed to people who want to feel like they could go to the mountains, but in actuality, how often are you going to go there?
So now can you guess what is the parallel between the car and the house? If you guessed the actual usage of the backyard, you’re right! Just like taking the car on an off-road trip, using the backyard for lounging around all day is a relatively rare experience. On my days off, I’m much more likely to go visit my friends or go out with my wife than lounge around. I was excessively motivated by my emotional thinking system’s attachment to one aspect of the house at the expense of everything else. This is a classic thinking error, called attentional bias, caused by our brain’s tendency to focus on emotionally dominant information in our environment. Such emotional traps could really undermine long-term happiness with big decisions, such as getting a new car and especially a new home!
Fortunately, my wife and I avoided this trap. The next day after we told our agent to make the offer, we decided to re-evaluate our decision by applying the tool of probabilistic thinking to our estimated likelihood of happiness with our new home.
Below is a photo of our calculations. We compared our first-choice house (170) to our second choice (450). To avoid excessive emotional attachment to any one part of the house, we wrote out the various parts of the house (first column). We then gave each a quality rating on a 1-3 scale, from low to moderate to high. Then, to account for the actual usage of each part of the house, we gave the same kind of rating for usage. We multiplied the two numbers together to get the total value (only the total value is included in the chart). Each of us gave our own ratings for each category to account for our different intuitive valuations of quality and usage, as you can see from the separate columns for A and G, Agnes and Gleb. Finally, we added them all up at the bottom, and included a couple of small fudge factors for things like price difference.
Both of us were really surprised by the result. Our second-choice house beat out our first-choice house, and by a lot, 95 to 67.5. We were way off base in our initial decision-making due to our attentional bias on the backyard, which turned out to be much less significant than we originally anticipated once we accounted for actual usage. I shared about my experience with others, and many had similar stories. We quickly called our realtor and asked her to make the bid on the second house. We were so excited when it was accepted!
From that episode, I learned that this type of calculation is incredibly valuable when making any significant financial decisions that can impact your long-term happiness. So how can you use this method to avoid emotional traps for your own happiness?
Let’s go back to the car as an example. Before making a decision, sit down and assign numbers to various components of the car. First, consider how you plan to use the car – city driving, highway driving, road trips, driving in the mountains, driving by yourself, driving with family and friends, driving your date, etc. How much of your time will you use the car for each activity and how important is each activity for you? Assign a numerical value to each activity based on a combination of usage and importance. For instance, you might not be taking family road trips often, but it might be important for the car to be really well suited for those times, so give a higher number for that area.
Then, based on your usage ratings, consider what aspects of the car are important to you – safety, gas mileage, comfort for the driver and passengers, trunk space, off-road capacity, coolness factor, etc.? For example, it might be important to you to impress your dates and friends with your car, so give a higher rating to the coolness factor if that’s the case. Or it might be very valuable to have comfort for yourself and good trunk space if you are taking long car trips around the state for your job. Assign a numerical value to each based on your personal evaluation.
Now, you have a great list to look for in a new car! You know what aspects are most important for you, and are much less likely to be led astray by attentional bias due to test-driving a fun car when you actually need a family-friendly one.
Apply this method to any significant financial decision: car, furniture, vacation, computer, house, and so on. A smart investment of less than half an hour could lead to a much happier future for you. Moreover, with a little imagination, this method can be applied to any important decision, not only financial ones. In future posts, I will discuss how to quantify less tangible values to make optimal decisions for your long-term happiness.
Questions to consider
- What are your strategies for making big decisions wisely?
- Has attentional bias ever led you astray in big decisions? If so, how could you have applied what you just learned to your previous decisions to make better ones?
- What kind of significant financial decisions do you have coming up? What kind of factors might inspire attentional bias in these decisions? What specific steps can you take to avoid these problems?
[Link] Rationality-informed approaches in the media
As part of a broader project of promoting rationality, Raelifin and I have had some luck getting media coverage of rationality-informed approaches to probabilistic thinking (1, 2), mental health (1, 2), and reaching life goals through finding purpose and meaning (1, 2). The outlets include mainstream media, such as the main newspaper in Cleveland, OH; reason-oriented media, such as Unbelievers Radio; student-oriented media, such as the main newspaper of Ohio State University; and self-improvement-oriented media, such as the Purpose Revolution.
This is part of our strategy of reaching out both to mainstream audiences and to niche groups interested in a specific spin on rationality-informed approaches to winning at life. I wanted to share these here and see if any of you have suggestions for optimizing our outreach, connections with other media channels both mainstream and niche, or any other thoughts on improving it. Thanks!
Calling references: Rational or irrational?
Over the past couple of decades, I've sent out a few hundred resumes (maybe, I don't know, 300 or 400--my spreadsheet for 2013-2015 lists 145 applications). Out of those I've gotten at most two dozen interviews and a dozen job offers.
Throughout that time I've maintained a list of references on my resume. The rest of the resume is, to my mind, not very informative. The list of job titles and degrees says little about how competent I was.
Now and then, I check with one of my references to see if anyone called them. I checked again yesterday with the second reference on my list. The answer was the same: Nope. No one has ever, as far as I can recall, called any of my references. Not the people who interviewed me; not the people who offered me jobs.
When the US government did a background check on me, they asked me for a list of references to contact. My uncertain recollection is that they ignored it and interviewed my neighbors and other contacts instead, as if what I had given them was a list of people not to bother contacting because they'd only say good things about me.
Is this rational or irrational? Why does every employer ask for a list of references, then not call them?
Personal story about benefits of Rationality Dojo and shutting up and multiplying
My wife and I have been going to the Ohio Rationality Dojo, started by Raelifin, who has substantial expertise in probabilistic thinking and Bayesian reasoning, for a few months now, and I wanted to share how the dojo helped us make a rational decision about house shopping. We were comparing two houses. We had an intuitive favorite (170 on the image), but decided to compare it to our second favorite (450) by actually shutting up and multiplying, based on exercises we did as part of the dojo.
We compared each part of the house mathematically, multiplying the value of that part by how much we would actually use it, with separate ratings for the two of us (A for my wife, Agnes Vishnevkin, and G for me, Gleb Tsipursky, on the image). By the numbers, 450 came out way ahead. It was hard to update our beliefs, but we did it, and we are now orienting toward that house as our primary choice. Rationality for the win!
Here is the image of our back-of-the-napkin calculations.
Pro-Con-lists of arguments and onesidedness points
Follow-up to Reverse Engineering of Belief Structures
Pro-con lists of arguments such as ProCon.org and BalancedPolitics.org serve a useful purpose. They give an overview of complex debates and arguably foster nuance. My network for evidence-based policy is currently constructing a similar site in Swedish.
I'm thinking it might be interesting to add more features to such a site. You could let people create a profile on the site. Then you would let them fill in whether they agree or disagree with the theses under discussion (cannabis legalization, GM foods legalization, etc.), and also whether they agree or disagree with the different arguments for and against these theses (alternatively, you could let them rate the arguments from 1 to 5).
Once you have this data, you could use it to give people different kinds of statistics. The most straightforward statistic would be their degree of "onesidedness". If you think that all of the arguments for the theses you believe in are good, and all the arguments against them are bad, then you're defined as onesided. If, on the other hand, you believe that some of your own side's arguments are bad, whereas some of the opponents' arguments are good, you're defined as not being onesided. (The exact mathematical function you would choose could be discussed; one candidate is sketched below.)
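Since the post leaves the exact function open, here is one possible metric as a sketch. It assumes arguments are rated 1-5 and labeled by side; the scaling and the treatment of missing sides are arbitrary choices, not a settled design.

```python
# One possible onesidedness metric; the exact function is up for discussion.
# Each argument is a pair: ("pro" or "con", the user's rating from 1 to 5).
# For a user who supports the thesis, onesidedness measures how strongly
# their ratings favor pro arguments over con arguments, scaled to [0, 1].

def onesidedness(supports_thesis: bool, arguments: list[tuple[str, int]]) -> float:
    pro = [r for side, r in arguments if side == "pro"]
    con = [r for side, r in arguments if side == "con"]
    if not pro or not con:
        return 0.0  # can't measure onesidedness with only one side listed
    gap = sum(pro) / len(pro) - sum(con) / len(con)  # ranges from -4 to 4
    if not supports_thesis:
        gap = -gap  # favoring your own side means favoring the con arguments
    return max(0.0, gap) / 4  # 0 = even-handed, 1 = maximally onesided

print(onesidedness(True, [("pro", 5), ("pro", 4), ("con", 1), ("con", 2)]))  # 0.75
print(onesidedness(True, [("pro", 3), ("con", 3)]))                          # 0.0
```

Averaging this score across many debates would give the consistent-onesidedness figure discussed further down.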
Once you've told people how onesided they are, according to the test, you would discuss what might explain onesidedness. My hunch is that the most plausible explanation is normally some form of bias. Instead of reviewing new arguments impartially, people treat arguments for their views more leniently than arguments against them. Hence they end up being onesided, according to the test.
There are other possible explanations, though. One is that all of the arguments against the thesis in question actually are bad. That might happen occasionally, but I don't think it's very common. As Eliezer Yudkowsky says in "Policy Debates Should Not Appear One-Sided":
On questions of simple fact (for example, whether Earthly life arose by natural selection) there's a legitimate expectation that the argument should be a one-sided battle; the facts themselves are either one way or another, and the so-called "balance of evidence" should reflect this. Indeed, under the Bayesian definition of evidence, "strong evidence" is just that sort of evidence which we only expect to find on one side of an argument.
But there is no reason for complex actions with many consequences to exhibit this onesidedness property.
Instead, the reason why people end up with one-sided beliefs is bias, Yudkowsky argues:
Why do people seem to want their policy debates to be one-sided?
Politics is the mind-killer. Arguments are soldiers. Once you know which side you're on, you must support all arguments of that side, and attack all arguments that appear to favor the enemy side; otherwise it's like stabbing your soldiers in the back. If you abide within that pattern, policy debates will also appear one-sided to you—the costs and drawbacks of your favored policy are enemy soldiers, to be attacked by any means necessary.
Especially if you're consistently onesided in lots of different debates, it's hard to see what hypothesis besides bias is plausible. It depends a bit on what kinds of arguments you include in the list, though. In our lists we haven't really checked the quality of the arguments (our purpose is to summarize the debate rather than to judge it), but you could of course do that too.
My hope is that such a test would make people more aware both of their own biases, and of the problem of political bias in general. I'm thinking that is the first step towards debiasing. I've also constructed a political bias test with similar methods and purposes together with ClearerThinking, which should be released soon.
You could also add other features to a pro-con list. For instance, you could classify arguments in different ways: ad hominem arguments, consequentialist arguments, rights-based arguments, etc. (Some arguments might be hard to classify, and then you just wouldn't classify them; you wouldn't necessarily have to classify every argument.) Using this info, you could give people a profile: e.g., what kinds of arguments do they find most persuasive? That could make them reflect more on which kinds of arguments really are valid.
You could also combine these two features. For instance, some people might accept ad hominem arguments when they support their views, but not when they contradict them. That would make their use of ad hominem arguments onesided.
Yet another feature that could be added is a standard political compass. Since people fill in which theses they believe in (cannabis legalization, GM foods legalization, etc.), you could calculate which party is closest to them, based on the parties' stances on these issues. That could potentially make the test more attractive to take.
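As a rough sketch of that matching, one could count agreements between the user's stances and each party's; the parties and stances below are entirely hypothetical, and a real compass would likely use weighted or multi-dimensional distances instead.

```python
# A hypothetical sketch of the party-matching feature: pick the party
# whose stances agree with the user's on the most theses.

user = {"cannabis legalization": True, "GM foods legalization": False}

parties = {
    "Party X": {"cannabis legalization": True, "GM foods legalization": True},
    "Party Y": {"cannabis legalization": True, "GM foods legalization": False},
}

def agreement(stances: dict[str, bool]) -> int:
    # Count the theses on which the user and the party take the same stance
    return sum(user[thesis] == stances.get(thesis) for thesis in user)

closest = max(parties, key=lambda p: agreement(parties[p]))
print(closest)  # "Party Y" in this made-up example
```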
Suggestions of more possible features are welcome, as well as general comments - especially about implementation.