Collaborative Truth-Seeking
Summary: We frequently use debates to resolve differing opinions about the truth. However, debate is not always the best way to figure out the truth. In some situations, the technique of collaborative truth-seeking may serve us better.
Acknowledgments: Thanks to Pete Michaud, Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Szun S. Tay, Alfredo Parra, Michael Estes, Aaron Thoma, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.
The Problem with Debates
Aspiring rationalists generally aim to figure out the truth, and often disagree about it. The usual method of hashing out such disagreements in order to discover the truth is through debates, in person or online.
Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than prioritizing truth discovery. Indeed, research suggests that debates serve a specific evolutionary function: not to discover the truth, but to ensure that our perspective prevails within a tribal social context. No wonder debates are often compared to wars.
We may hope that as aspiring rationalists, we would strive to discover the truth during debates. Yet given that we are not always fully rational and strategic in our social engagements, it is easy to slip up within debate mode and orient toward winning instead of uncovering the truth. Heck, I know that I sometimes forget in the midst of a heated debate that I may be the one who is wrong – I’d be surprised if this didn’t happen to you. So while we should certainly continue to engage in debates, we should also use additional strategies – less natural and intuitive ones – that could put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such solution is a mode of engagement called collaborative truth-seeking.
Collaborative Truth-Seeking
Collaborative truth-seeking is one way of describing a more intentional approach in which two or more people with different opinions engage in a process that focuses on finding out the truth. Collaborative truth-seeking is a modality that should be used among people with shared goals and a shared sense of trust.
Some important features of collaborative truth-seeking, which are often not present in debates, are: focusing on a desire to change one’s own mind toward the truth; a curious attitude; being sensitive to others’ emotions; striving to avoid arousing emotions that will hinder updating beliefs and truth discovery; and trust that all other participants are doing the same. These can contribute to increased social sensitivity, which, together with other attributes, correlates with higher group performance on a variety of activities.
The process of collaborative truth-seeking starts with establishing trust, which will help increase social sensitivity, lower barriers to updating beliefs, increase willingness to be vulnerable, and calm emotional arousal. The following techniques are helpful for establishing trust in collaborative truth-seeking:
- Share weaknesses and uncertainties in your own position
- Share your biases about your position
- Share your social context and background as relevant to the discussion
  - For instance, I grew up poor after my family immigrated to the US when I was 10, and this naturally influences me to care about poverty more than some other issues, and to have some biases around it - this is one reason I prioritize poverty in my Effective Altruism engagement
- Vocalize curiosity and the desire to learn
- Ask the other person to call you out if they think you're getting emotional or engaging in emotive debate instead of collaborative truth-seeking, and consider using a safe word
Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:
- Self-signal: signal to yourself that you want to engage in collaborative truth-seeking, instead of debating
- Empathize: try to empathize with the perspective you do not hold by considering where the other person's viewpoint came from, why they think what they do, and recognizing that they feel their viewpoint is correct
- Keep calm: be prepared with emotional management to calm your emotions and those of the people you engage with when a desire for debate arises
  - Watch out for defensiveness and aggressiveness in particular
- Go slow: take the time to listen fully and think fully
- Consider pausing: have an escape route for complex thoughts and emotions if you can’t deal with them in the moment by pausing and picking up the discussion later
  - Say “I will take some time to think about this,” and/or write things down
- Echo: paraphrase the other person’s position to indicate and check whether you’ve fully understood their thoughts
- Be open: orient toward improving the other person’s points to argue against their strongest form
- Stay the course: be passionate about wanting to update your beliefs, maintain the most truthful perspective, and adopt the best evidence and arguments, no matter if they are yours or those of others
- Be diplomatic: when you think the other person is wrong, strive to avoid saying "you're wrong because of X"; instead, use questions, such as "what do you think X implies about your argument?"
- Be specific and concrete: go down levels of abstraction
- Be clear: make sure the semantics are clear to all by defining terms
  - Consider tabooing terms if some are emotionally arousing, and make sure you are describing the same territory of reality
- Be probabilistic: use probabilistic thinking and probabilistic language, to help get at the extent of disagreement and be as specific and concrete as possible
  - For instance, avoid saying that X is absolutely true; say instead that you think there's an 80% chance it's the true position
  - Consider adding what evidence and reasoning led you to believe so, for both you and the other participants to examine this chain of thought
  - When people whose perspective you respect fail to update their beliefs in response to your clear chain of reasoning and evidence, update somewhat toward their position, since their response is evidence that your position is not as convincing as you thought
- Confirm your sources: look up information when it's possible to do so (Google is your friend)
- Charity mode: strive to be more charitable to others and their expertise than seems intuitive to you
- Use the reversal test to check for status quo bias
  - If you are discussing whether to change some specific numeric parameter - say, increasing by 50% the money donated to charity X - state the reverse of your position, for example decreasing the amount of money donated to charity X by 50%, and see how that impacts your perspective
- Use CFAR’s double crux technique (a toy sketch follows below)
  - In this technique, two parties who hold different positions on an issue each write down the fundamental reason for their position (the crux of their position). This reason has to be the key one, such that if it were proven incorrect, they would change their perspective. Then, look for experiments that can test each crux. Repeat as needed. If a person identifies more than one reason as crucial, you can go through each in turn. More details are here.
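To make the shape of the procedure concrete, here is a toy sketch in Python. This is my own illustrative framing, not CFAR's materials; the parties, the cruxes, and the test function are all hypothetical stand-ins for what would really be a conversation or an experiment.

```python
# A toy sketch of the double crux loop. In real use, "test" is a
# conversation or experiment the parties agree would settle a crux.
from dataclasses import dataclass

@dataclass
class Party:
    name: str
    position: str
    cruxes: list  # beliefs that, if disproven, would flip the position

def double_crux(a: Party, b: Party, test) -> None:
    """Each party names the crux of their position; test each crux,
    and whoever's crux fails updates. Repeat while cruxes remain."""
    while a.cruxes or b.cruxes:
        for party in (a, b):
            if not party.cruxes:
                continue
            crux = party.cruxes.pop(0)
            if not test(crux):  # the agreed-upon test went against the crux
                print(f"{party.name} updates away from: {party.position!r}")
                return
    print("No decisive crux found; look for finer-grained cruxes.")

# Hypothetical usage: the evidence table stands in for whatever test
# the two parties agree would settle a crux.
evidence = {"remote work lowers productivity": False}
alice = Party("Alice", "our team should return to the office",
              ["remote work lowers productivity"])
bob = Party("Bob", "our team should stay remote",
            ["commutes burn two hours a day"])
double_crux(alice, bob, test=lambda crux: evidence.get(crux, True))
```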
Of course, not all of these techniques are necessary for high-quality collaborative truth-seeking. Some are easier than others, and different techniques apply better to different kinds of truth-seeking discussions. You can apply some of these techniques during debates as well, such as double crux and the reversal test. Try some out and see how they work for you.
Conclusion
Engaging in collaborative truth-seeking goes against our natural impulses to win in a debate, and is thus more cognitively costly. It also tends to take more time and effort than just debating. It is also easy to slip into debate mode even when using collaborative truth-seeking, because of the intuitive nature of debate mode.
Moreover, collaborative truth-seeking need not replace debates at all times. This non-intuitive mode of engagement can be chosen when discussing issues that relate to deeply-held beliefs and/or ones that risk emotional triggering for the people involved. Because of my own background, I would prefer to discuss poverty in collaborative truth-seeking mode rather than debate mode, for example. On such issues, collaborative truth-seeking can provide a shortcut to resolution, in comparison to protracted, tiring, and emotionally challenging debates. On the other hand, using collaborative truth-seeking to resolve differing opinions on all issues holds the danger of creating a community oriented excessively toward sensitivity to the perspectives of others, which might result in important issues not being discussed candidly. After all, research shows the importance of disagreement for making wise decisions and figuring out the truth. Of course, collaborative truth-seeking is well suited to expressing disagreements in a sensitive way, so if used appropriately, it might permit even people with triggers around certain topics to express their opinions.
Taking these caveats into consideration, collaborative truth-seeking is a great tool to use to discover the truth and to update our beliefs, as it can get past the high emotional barriers to altering our perspectives that have been put up by evolution. Rationality venues are natural places to try out collaborative truth-seeking.
LessWrong 2016 Survey
It’s time for a new survey!
The details of the last survey can be found here. And the results can be found here.
I posted a few weeks back asking for suggestions for questions to include on the survey. As much as we’d like to include more of them, we all know what happens when we have too many questions. The following graph is from the last survey.
http://i.imgur.com/KFTn2Bt.png
(Source: JD’s analysis of 2014 survey data)
Two factors seem to predict whether a question will get an answer:
- The question's position in the survey
- Whether people want to answer it (obviously)
People answer fewer questions as they approach the end, and they also skip tricky questions. The least answered question on the last survey was “what is your favourite lw post, provide a link”, which I assume was mostly skipped because of the effort required either in generating a favourite or in finding a link to it. The second most skipped questions were the digit-ratio questions, which require more work (get out a ruler and measure) compared to the others. This is unsurprising.
This year’s survey is almost the same size as the last one (though just a wee bit smaller). Preliminary estimates suggest you should put aside 25 minutes to take the survey, however you can pause at any time and come back to the survey when you have more time. If you’re interested in helping process the survey data please speak up either in a comment or a PM.
We’re focusing this year particularly on getting a glimpse of the size and shape of the LessWrong diaspora. With that in mind, please make sure, if possible, that your friends (who might be less connected but still hang around in associated circles) get a chance to see that the survey exists, and if you’re up to it, encourage them to fill out a copy of the survey.
The survey is hosted and managed by the team at FortForecast, you’ll be hearing more from them soon. The survey can be accessed through http://lesswrong.com/2016survey.
Survey responses are anonymous in that you’re not asked for your name. At the end we plan to do an opt-in public dump of the data. Before publication the row order will be scrambled; datestamps, IP addresses, and any other non-survey-question information will be stripped; and certain questions which are marked private, such as the (optional) sign-up for our mailing list, will not be included. Opting in helps the most, but we understand if you don’t.
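For readers curious what that anonymization pass might look like mechanically, here is a minimal sketch, assuming the raw responses land in a pandas DataFrame. All column names below are hypothetical; the post doesn't spell out the actual survey schema.

```python
# A minimal sketch of the anonymization pass described above.
# Column names are hypothetical stand-ins for the real schema.
import pandas as pd

def anonymize(raw: pd.DataFrame) -> pd.DataFrame:
    df = raw.copy()
    # Strip datestamps, IP addresses, and other non-survey-question metadata,
    # plus questions marked private (e.g. the optional mailing-list signup).
    drop_cols = ["datestamp", "ip_address", "session_id", "mailing_list_email"]
    df = df.drop(columns=[c for c in drop_cols if c in df.columns])
    # Keep only respondents who opted in to the public dump.
    df = df[df["public_dump_opt_in"] == "yes"]
    # Scramble the row order so rows can't be matched to submission order.
    return df.sample(frac=1).reset_index(drop=True)
```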
Thanks to Namespace (JD) and the FortForecast team, the Slack, the #lesswrong IRC on freenode, and everyone else who offered help in putting the survey together, special thanks to Scott Alexander whose 2014 survey was the foundation for this one.
When answering the survey, I ask that you be helpful with the format of your answers if you want them to be useful. For example, if a question asks for a number, please reply with “4”, not “four”. Going by the last survey we may very well get thousands of responses, and cleaning them all by hand will cost a fortune on Mechanical Turk. (And that’s for the ones we can put on Mechanical Turk!) Thanks for your consideration.
The survey will be open until the 1st of May 2016.
Addendum from JD at FortForecast: During user testing we’ve encountered reports of an error some users get when they try to take the survey which erroneously reports that our database is down. We think we’ve finally stamped it out but this particular bug has proven resilient. If you get this error and still want to take the survey here are the steps to mitigate it:
1. Refresh the survey; it will still be broken. You should see a screen with question titles but no questions.
2. Press the “Exit and clear survey” button; this will reset your survey responses and allow you to try again fresh.
3. Rinse and repeat until you manage to successfully answer the first two questions and move on. It usually doesn’t take more than one or two tries. We haven’t received reports of the bug occurring past this stage.
If you encounter this please mail jd@fortforecast.com with details. Screenshots would be appreciated but if you don’t have the time just copy and paste the error message you get into the email.
Meta: this took 2 hours to write and was reviewed by the Slack.
My Table of contents can be found here.
LessWrong 2.0
Alternate titles: What Comes Next?, LessWrong is Dead, Long Live LessWrong!
You've seen the articles and comments about the decline of LessWrong. Why pay attention to this one? Because this time, I've talked to Nate at MIRI and Matt at Trike Apps about development for LW, and they're willing to make changes and fund them. (I've even found a developer willing to work on the LW codebase.) I've also talked about the decline of LW with many of the prominent posters who've left, and pointed out that the coordination problem could be deliberately solved if everyone decided to come back at once. Everyone who responded expressed displeasure that LW had faded and interest in a coordinated return, and often had some material that they thought they could prepare and have ready.
But before we leap into action, let's review the problem.
Marketing Rationality
What is your opinion on rationality-promoting articles by Gleb Tsipursky / Intentional Insights? Here is what I think:
Personal story about benefits of Rationality Dojo and shutting up and multiplying
My wife and I have been going to Ohio Rationality Dojo, started by Raelifin, who has substantial expertise in probabilistic thinking and Bayesian reasoning, for a few months now, and I wanted to share how the dojo helped us make a rational decision about house shopping. We were comparing two houses. We had an intuitive favorite house (170 on the image) but decided to compare it to our second favorite (450) by actually shutting up and multiplying, based on exercises we did as part of the dojo.
For each part of each house, we multiplied the value of that part to us by how much we would use it, with separate figures for the two of us (A for my wife, Agnes Vishnevkin, and G for me, Gleb Tsipursky, on the image). When we compared the totals, 450 came out way ahead. It was hard to update our beliefs, but we did it, and we are now orienting toward that one as our primary choice. Rationality for the win!
Here is the image of our back-of-the-napkin calculations.
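For readers who want to try this themselves, here is a minimal sketch of the calculation in Python. Every room name and number below is a hypothetical stand-in; the real figures were on the napkin and aren't reproduced in the post.

```python
# A minimal sketch of the "shut up and multiply" house comparison.

def score(ratings):
    """Sum over parts of the house of (value of that part) x (how much it gets used)."""
    return sum(value * use for value, use in ratings.values())

# Each person rates each part of each house as (value, expected use), 1-10.
house_450 = {
    "Agnes": {"kitchen": (8, 7), "yard": (5, 3), "office": (9, 8)},
    "Gleb":  {"kitchen": (6, 4), "yard": (5, 5), "office": (9, 9)},
}
house_170 = {
    "Agnes": {"kitchen": (6, 7), "yard": (9, 3), "office": (4, 8)},
    "Gleb":  {"kitchen": (5, 4), "yard": (8, 5), "office": (4, 9)},
}

for label, house in [("450", house_450), ("170", house_170)]:
    total = sum(score(person) for person in house.values())
    print(f"House {label}: {total}")  # House 450: 273, House 170: 197
```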
You Are A Brain - Intro to LW/Rationality Concepts [Video & Slides]
Here's a 32-minute presentation I made to provide an introduction to some of the core LessWrong concepts for a general audience:
You Are a Brain [Google Slides] - public domain
I already posted this here in 2009 and some commenters asked for a video, so I immediately recorded one six years later. This time the audience isn't teens from my former youth group, it's employees who work at my software company where we have a seminar series on Thursday afternoons.
Optimizing the Twelve Virtues of Rationality
At the Less Wrong Meetup in Columbus, OH over the last couple of months, we discussed optimizing the Twelve Virtues of Rationality. In doing so, we were inspired by what Eliezer himself said in the essay:
> Perhaps your conception of rationality is that it is rational to believe the words of the Great Teacher, and the Great Teacher says, “The sky is green,” and you look up at the sky and see blue. If you think: “It may look like the sky is blue, but rationality is to believe the words of the Great Teacher,” you lose a chance to discover your mistake.
So we first decided on the purpose of optimizing, and settled on yielding virtues that would be most impactful and effective for motivating people to become more rational - in other words, optimizations that would produce the most utilons and hedons for the purpose of winning. There were a bunch of different suggestions. I tried to apply them to myself over the last few weeks and want to share my findings.
First Suggestion
Replace Perfectionism with Improvement
Motivation for Replacement
Perfectionism, both in how it pattern matches and in its actual description in the essay, orients toward focusing on defects and errors in oneself. By depicting the self as always flawed, and portraying the aspiring rationalist's job as seeking to find the flaws, the virtue of Perfectionism is framed negatively, and is bound to result in negative reinforcement. Finding a flaw feels bad, and in many people that creates ugh fields around actually doing that search, as reported by participants at the Meetup. Instead, a positive framing of this virtue would be Improvement. Then, the aspiring rationalist can feel OK about where s/he is right now, but orient toward improving and growing mentally stronger - Tsuyoku Naritai! All improvement would be about gaining more hedons, and thus use the power of positive reinforcement. Generally, research suggests that positive reinforcement is effective in motivating the repetition of a behavior, whereas punishment works best to stop people from doing a certain behavior. No wonder that Meetup participants reported that Perfectionism was not very effective in motivating them to grow more rational. So to get both more hedons, and thereby more utilons in the sense of the utility of seeking to grow more rational, Improvement might be a better term and virtue than Perfectionism.
Self-Report
I've been orienting myself toward improvement instead of perfectionism for the last few weeks, and it's been a really noticeable difference. I've become much more motivated to seek ways that I can improve my ability to find the truth. I've been more excited and enthused about finding flaws and errors in myself, because they are now an opportunity to improve and grow stronger, not become less weak and imperfect. It's the same outcome as the virtue of Perfectionism, but deploying the power of positive reinforcement.
Second Suggestion
Replace Argument with Community
Motivation for Replacement
Argument is an important virtue, and a vital way of getting ourselves to see the truth is to rely on others to help us see it through debates, highlight mistaken beliefs, and help us update on them, as the virtue describes. Yet orienting toward a rationalist Community has additional benefits besides the benefits of argument, which is only one part of a rationalist Community. Such a community would help provide an external perspective that research suggests would be especially beneficial for pointing out flaws and biases in one's ability to evaluate reality rationally, even without an argument. A community can help provide wise advice on making decisions, and it’s especially beneficial to have a community of diverse and intelligent people of all sorts in order to get the benefits of a wide variety of private information that one can aggregate to help make the best decisions. Moreover, a community can provide systematic ways to improve, through giving each other systematic feedback, through compensating for each other's weaknesses in rationality, through learning difficult things together, and through other ways of supporting each other's pursuit of ever-greater rationality. Likewise, a community can collaborate, with different people fulfilling different functions in supporting everyone else in growing mentally stronger - not everybody has to be the "hero," after all, and different people can specialize in various tasks related to supporting others, gaining comparative advantage as a result. Studies show that social relationships impact us powerfully in numerous ways, that they contribute to our mental and physical wellbeing, and that we become more like our social network over time (1, 2, 3). This further highlights the benefits of developing a rationalist-oriented community of diverse people around ourselves to help us grow mentally stronger and get to the correct answer, and to gain hedons and utilons alike for the purpose of winning.
Self-Report
After I updated my beliefs toward Community from Argument, I've been working more intentionally to create a systematic way for other aspiring rationalists in my LW meetup, and even non-rationalists, to point out my flaws and biases to me. I've noticed that by taking advantage of outside perspectives, I've been able to make quite a bit more headway on uncovering my own false beliefs and biases. I asked friends, both fellow aspiring rationalists and other wise friends not currently in the rationalist movement, to help me by pointing out when my biases might be at play, and they were happy to do so. For example, I tend to have an optimism bias, and I have told people around me to watch for me exhibiting this bias. They pointed out a number of times when this occurred, and I was able to improve gradually my ability to notice and deal with this bias.
Third Suggestion
Expand Empiricism to include Experimentation
Motivation for Expansion
This would not be a replacement of a virtue, but an expansion of the definition of Empiricism. As currently stated, Empiricism focuses on observation and prediction, and implicitly on making beliefs pay rent in anticipated experience. This is a very important virtue, and fundamental to rationality. It can be improved, however, by adding experimentation to the description of Empiricism. By experimentation I mean going beyond simple observation, as the essay currently describes it, to actually running experiments and testing things out in order to update our maps, both of ourselves and of the world around us. This would help us take the initiative in gathering data about the world, rather than simply relying passively on observing the world around us. My perspective on this topic was further strengthened by this recent discussion post, which caused me to further update my beliefs toward experimentation as a really valuable part of empiricism. Thus, including experimentation as part of Empiricism would get us more utilons for getting at the correct answer and winning.
Self-Report
I have been running experiments on myself and the world around me long before this discussion took place. The discussion itself helped me connect the benefits of experimentation to the virtue of Empiricism, and also see the gap currently present in that virtue. I strengthened my commitment to experimentation, and have been running more concrete experiments, where I both predict the results in advance in order to make my beliefs pay rent, and then run an experiment to test whether my beliefs actually correlated to the outcome of the experiments. I have been humbled several times and got some great opportunities to update my beliefs by combining prediction of anticipated experience with active experimentation.
Conclusion
The Twelve Virtues of Rationality can be optimized to be more effective and impactful for getting at the correct answer and thus winning. There are many ways of doing so, but we need to be careful to choose optimizations that would work best for the most people, based on the research on how our minds actually work. The suggestions I shared above are just some ways of doing so. What do you think of these suggestions? What are your ideas for optimizing the Twelve Virtues of Rationality?
16 types of useful predictions
How often do you make predictions (either about future events, or about information that you don't yet have)? If you're a regular Less Wrong reader you're probably familiar with the idea that you should make your beliefs pay rent by saying, "Here's what I expect to see if my belief is correct, and here's how confident I am," and that you should then update your beliefs accordingly, depending on how your predictions turn out.
And yet… my impression is that few of us actually make predictions on a regular basis. Certainly, for me, there has always been a gap between how useful I think predictions are, in theory, and how often I make them.
I don't think this is just laziness. I think it's simply not a trivial task to find predictions to make that will help you improve your models of a domain you care about.
At this point I should clarify that there are two main goals predictions can help with:
- Improved Calibration (e.g., realizing that I'm only correct about Domain X 70% of the time, not 90% of the time as I had mistakenly thought).
- Improved Accuracy (e.g., going from being correct in Domain X 70% of the time to being correct 90% of the time)
If your goal is just to become better calibrated in general, it doesn't much matter what kinds of predictions you make. So calibration exercises typically grab questions with easily obtainable answers, like "How tall is Mount Everest?" or "Will Don Draper die before the end of Mad Men?" See, for example, the Credence Game, Prediction Book, and this recent post. And calibration training really does work.
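If you want to track your own calibration, a minimal sketch of such a log might look like the following. The example predictions are hypothetical; being well calibrated simply means that about 90% of your "90% confident" claims turn out true, and so on down the scale.

```python
# A minimal sketch of a calibration log, assuming you record each prediction
# with a stated confidence and later mark whether it came true.
from collections import defaultdict

predictions = [
    # (claim, stated confidence, turned out correct?) -- hypothetical entries
    ("It will rain tomorrow", 0.9, True),
    ("Bob will like this gift", 0.9, False),
    ("This meeting will run over", 0.75, True),
    ("The package arrives by Friday", 0.75, True),
    ("I'll finish the draft today", 0.6, False),
]

# Group outcomes by the confidence level you stated at prediction time.
buckets = defaultdict(list)
for _, confidence, correct in predictions:
    buckets[confidence].append(correct)

# Compare stated confidence against the actual hit rate in each bucket.
for confidence in sorted(buckets, reverse=True):
    outcomes = buckets[confidence]
    hit_rate = sum(outcomes) / len(outcomes)
    print(f"Stated {confidence:.0%} -> actually right {hit_rate:.0%} "
          f"({len(outcomes)} predictions)")
```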
But even though making predictions about trivia will improve my general calibration skill, it won't help me improve my models of the world. That is, it won't help me become more accurate, at least not in any domains I care about. If I answer a lot of questions about the heights of mountains, I might become more accurate about that topic, but that's not very helpful to me.
So I think the difficulty in prediction-making is this: The set {questions whose answers you can easily look up, or otherwise obtain} is a small subset of all possible questions. And the set {questions whose answers I care about} is also a small subset of all possible questions. And the intersection between those two subsets is much smaller still, and not easily identifiable. As a result, prediction-making tends to seem too effortful, or not fruitful enough to justify the effort it requires.

But the intersection's not empty. It just requires some strategic thought to determine which answerable questions have some bearing on issues you care about, or -- approaching the problem from the opposite direction -- how to take issues you care about and turn them into answerable questions.
I've been making a concerted effort to hunt for members of that intersection. Here are 16 types of predictions that I personally use to improve my judgment on issues I care about. (I'm sure there are plenty more, though, and hope you'll share your own as well.)
1. Predict how long a task will take you. This one's a given, considering how common and impactful the planning fallacy is.
Examples: "How long will it take to write this blog post?" "How long until our company's profitable?"
2. Predict how you'll feel in an upcoming situation. Affective forecasting – our ability to predict how we'll feel – has some well known flaws.
Examples: "How much will I enjoy this party?" "Will I feel better if I leave the house?" "If I don't get this job, will I still feel bad about it two weeks later?"
3. Predict your performance on a task or goal. One thing this helps me notice is when I've been trying the same kind of approach repeatedly without success. Even just the act of making the prediction can spark the realization that I need a better game plan.
Examples: "Will I stick to my workout plan for at least a month?" "How well will this event I'm organizing go?" "How much work will I get done today?" "Can I successfully convince Bob of my opinion on this issue?"
4. Predict how your audience will react to a particular social media post (on Facebook, Twitter, Tumblr, a blog, etc.). This is a good way to hone your judgment about how to create successful content, as well as your understanding of your friends' (or readers') personalities and worldviews.
Examples: "Will this video get an unusually high number of likes?" "Will linking to this article spark a fight in the comments?"
5. When you try a new activity or technique, predict how much value you'll get out of it. I've noticed I tend to be inaccurate in both directions in this domain. There are certain kinds of life hacks I feel sure are going to solve all my problems (and they rarely do). Conversely, I am overly skeptical of activities that are outside my comfort zone, and often end up pleasantly surprised once I try them.
Examples: "How much will Pomodoros boost my productivity?" "How much will I enjoy swing dancing?"
6. When you make a purchase, predict how much value you'll get out of it. Research on money and happiness shows two main things: (1) as a general rule, money doesn't buy happiness, but also that (2) there are a bunch of exceptions to this rule. So there seems to be lots of potential to improve your prediction skill here, and spend your money more effectively than the average person.
Examples: "How much will I wear these new shoes?" "How often will I use my club membership?" "In two months, will I think it was worth it to have repainted the kitchen?" "In two months, will I feel that I'm still getting pleasure from my new car?"
7. Predict how someone will answer a question about themselves. I often notice assumptions I've been making about other people, and I like to check those assumptions when I can. Ideally I get interesting feedback both about the object-level question, and about my overall model of the person.
Examples: "Does it bother you when our meetings run over the scheduled time?" "Did you consider yourself popular in high school?" "Do you think it's okay to lie in order to protect someone's feelings?"
8. Predict how much progress you can make on a problem in five minutes. I often have the impression that a problem is intractable, or that I've already worked on it and have considered all of the obvious solutions. But then when I decide (or when someone prompts me) to sit down and brainstorm for five minutes, I am surprised to come away with a promising new approach to the problem.
Example: "I feel like I've tried everything to fix my sleep, and nothing works. If I sit down now and spend five minutes thinking, will I be able to generate at least one new idea that's promising enough to try?"
9. Predict whether the data in your memory supports your impression. Memory is awfully fallible, and I have been surprised at how often I am unable to generate specific examples to support a confident impression of mine (or how often the specific examples I generate actually contradict my impression).
Examples: "I have the impression that people who leave academia tend to be glad they did. If I try to list a bunch of the people I know who left academia, and how happy they are, what will the approximate ratio of happy/unhappy people be?" "It feels like Bob never takes my advice. If I sit down and try to think of examples of Bob taking my advice, how many will I be able to come up with?"
10. Pick one expert source and predict how they will answer a question. This is a quick shortcut to testing a claim or settling a dispute.
Examples: "Will Cochrane Medical support the claim that Vitamin D promotes hair growth?" "Will Bob, who has run several companies like ours, agree that our starting salary is too low?"
11. When you meet someone new, take note of your first impressions of him. Predict how likely it is that, once you've gotten to know him better, you will consider your first impressions of him to have been accurate. A variant of this one, suggested to me by CFAR alum Lauren Lee, is to make predictions about someone before you meet him, based on what you know about him ahead of time.
Examples: "All I know about this guy I'm about to meet is that he's a banker; I'm moderately confident that he'll seem cocky." "Based on the one conversation I've had with Lisa, she seems really insightful – I predict that I'll still have that impression of her once I know her better."
12. Predict how your Facebook friends will respond to a poll.
Examples: I often post social etiquette questions on Facebook. For example, I recently did a poll asking, "If a conversation is going awkwardly, does it make things better or worse for the other person to comment on the awkwardness?" I confidently predicted most people would say "worse," and I was wrong.
13. Predict how well you understand someone's position by trying to paraphrase it back to him. The illusion of transparency is pernicious.
Examples: "You said you think running a workshop next month is a bad idea; I'm guessing you think that's because we don't have enough time to advertise, is that correct?" "I know you think eating meat is morally unproblematic; is that because you think that animals don't suffer?"
14. When you have a disagreement with someone, predict how likely it is that a neutral third party will side with you after the issue is explained to her. For best results, don't reveal which of you is on which side when you're explaining the issue to your arbiter.
Example: "So, at work today, Bob and I disagreed about whether it's appropriate for interns to attend hiring meetings; what do you think?"
15. Predict whether a surprising piece of news will turn out to be true. This is a good way to hone your bullshit detector and improve your overall "common sense" models of the world.
Examples: "This headline says some scientists uploaded a worm's brain -- after I read the article, will the headline seem like an accurate representation of what really happened?" "This viral video purports to show strangers being prompted to kiss; will it turn out to have been staged?"
16. Predict whether a quick online search will turn up any credible sources supporting a particular claim.
Example: "Bob says that watches always stop working shortly after he puts them on – if I spend a few minutes searching online, will I be able to find any credible sources saying that this is a real phenomenon?"
I have one additional, general thought on how to get the most out of predictions:
Rationalists tend to focus on the importance of objective metrics. And as you may have noticed, a lot of the examples I listed above fail that criterion. For example, "Predict whether a fight will break out in the comments? Well, there's no objective way to say whether something officially counts as a 'fight' or not…" Or, "Predict whether I'll be able to find credible sources supporting X? Well, who's to say what a credible source is, and what counts as 'supporting' X?"
And indeed, objective metrics are preferable, all else equal. But all else isn't equal. Subjective metrics are much easier to generate, and they're far from useless. Most of the time it will be clear enough, once you see the results, whether your prediction basically came true or not -- even if you haven't pinned down a precise, objectively measurable success criterion ahead of time. Usually the result will be a common sense "yes," or a common sense "no." And sometimes it'll be "um...sort of?", but that can be an interestingly surprising result too, if you had strongly predicted the results would point clearly one way or the other.
Along similar lines, I usually don't assign numerical probabilities to my predictions. I just take note of where my confidence falls on a qualitative "very confident," "pretty confident," "weakly confident" scale (which might correspond to something like 90%/75%/60% probabilities, if I had to put numbers on it).
There's probably some additional value you can extract by writing down quantitative confidence levels, and by devising objective metrics that are impossible to game, rather than just relying on your subjective impressions. But in most cases I don't think that additional value is worth the cost you incur from turning predictions into an onerous task. In other words, don't let the perfect be the enemy of the good. Or in other other words: the biggest problem with your predictions right now is that they don't exist.
The Galileo affair: who was on the side of rationality?
Introduction
A recent survey showed that the LessWrong discussion forums mostly attract readers who are predominantly either atheists or agnostics, and who lean towards the left or far left in politics. As one of the main goals of LessWrong is overcoming bias, I would like to raise a topic which I think has a high probability of challenging some biases held by at least some members of the community. It's easy to fight against biases when the biases belong to your opponents, but much harder when you yourself might be the one with biases. It's also easy to cherry-pick arguments which prove your beliefs and ignore those which would disprove them. It's also common in such discussions that the side calling itself rationalist makes exactly the same mistakes it accuses its opponents of making. Far too often have I seen people (sometimes even Yudkowsky himself) who are very good rationalists but who can quickly become irrational and commit several fallacies when arguing about history or religion. This most commonly manifests when we take the dumbest and most fundamentalist young Earth creationists as an example, win easily against them, and then claim that we have disproved all arguments ever made by any theist. No, this article will not be about whether God exists or not, or whether any real-world religion is fundamentally right or wrong. I strongly discourage any discussion of these two topics.
This article has two main purposes:
1. To show an interesting example where the scientific method can lead to wrong conclusions
2. To overcome a certain specific bias, namely, the belief that the pre-modern Catholic Church opposed the concept of the Earth orbiting the Sun with the deliberate purpose of hindering scientific progress and keeping the world in ignorance. I hope this will also prove to be an interesting challenge for your rationality, because it is easy to fight against bias in others, but not so easy to fight against bias in yourself.
The basis of my claims is that I have read the book written by Galilei himself, and that I am very interested in early modern history, especially that of the 16th-17th centuries (I am not a professional, but I am well read).
Geocentrism versus Heliocentrism
I assume every educated person knows the name of Galileo Galilei. I won't waste the space of the site and the time of the readers by presenting a full biography of his life; there are plenty of on-line resources where you can find more than enough biographic information about him.
The controversy?
What is interesting about him is how many people have severe misconceptions about him. Far too often he is celebrated as the one sane man in an era of ignorance, the sole propagator of science and rationality at a time when the powers of the era suppressed any scientific thought and ridiculed everyone who tried to challenge the accepted theories about the physical world. Some even go as far as claiming that people believed the Earth was flat. Although the flat Earth theory was not held at all, it's true that the heliocentric view of the Solar System (the Earth revolving around the Sun) was not yet accepted.
However, the claim that the Church was suppressing evidence about heliocentrism "to maintain its power over the ignorant masses" can be disproved easily:
- The common people didn't go to school where they could have learned about it, and those commoners who did go to school just learned to read and write, not much more, so they couldn't have cared less about what orbits what. This differs from 20th-21st century fundamentalists who want to teach young Earth creationism in schools - back in the 17th century, there were no classes where either the geocentric or the heliocentric view could have been taught to the masses.
- Heliocentrism was not discovered by Galilei. It was first proposed by Nicolaus Copernicus almost 100 years before Galilei. Copernicus had no trouble with the Inquisition. His theories didn't gain wide acceptance, but he and his followers weren't persecuted either.
- Galilei was only sentenced to house arrest, and mostly for insulting the pope and doing other unwise things. The political climate in 17th century Italy was quite messy, and Galilei made quite a few unfortunate choices regarding his alliances. Actually, Galilei was the one who brought religion into the debate: his opponents were citing Aristotle, not the Bible, in their arguments. Galilei, however, wanted to reinterpret Scripture based on his (unproven) beliefs, and insisted that he should have the authority to push his own views about how people interpret the Bible. Of course this pissed quite a few people off, and his case was not helped by his publicly calling the pope an idiot.
- For a long time Galilei was a good friend of the pope, while holding heliocentric views. So were a couple of other astronomers. The heliocentrism-geocentrism debates were common among astronomers of the day, and were not hindered, but even encouraged by the pope.
- The heliocentrism-geocentrism debate was never an atheism-theism debate. The heliocentrists were committed theists, just like the defenders of geocentrism. The Church didn't suppress science, but actually funded the research of most scientists.
- The defenders of geocentrism didn't use the Bible as a basis for their claims. They used Aristotle and, for the time, good scientific reasoning. The heliocentrists were much more prone to use the "God did it" argument when they couldn't defend the gaps in their proofs.
The birth of heliocentrism.
By the 16th century, astronomers had plotted the movements of the most important celestial bodies in the sky. Observing the motion of the Sun, the Moon, and the stars, it would seem obvious that the Earth is motionless and everything orbits around it. This model (called geocentrism) had only one minor flaw: the planets would sometimes make a loop in their motion, "moving backwards". Modeling their motions therefore required a lot of very complicated formulas. Thus, by virtue of Occam's razor, a theory was born which could better explain the motion of the planets: what if the Earth and everything else orbited around the Sun? However, this new theory (heliocentrism) had a lot of issues, because while it could explain the looping motion of the planets, there were a lot of things which it either couldn't explain at all or which the geocentric model could explain much better.
The proofs, advantages and disadvantages
The heliocentric view had only a single advantage over the geocentric one: it could describe the motion of the planets with a much simpler formula.
However, it had a number of severe problems:
- Gravity. Why do objects have weight, and why are they all pulled towards the center of the Earth? Why don't objects fall off the Earth on the other side of the planet? Remember, Newton wasn't even born yet! The geocentric view had a very simple explanation, dating back to Aristotle: it is the nature of all objects that they strive towards the center of the world, and the center of the spherical Earth is the center of the world. The heliocentric theory couldn't counter this argument.
- Stellar parallax. If the Earth is not stationary, then the relative positions of the stars should change as the Earth orbits the Sun. No such change was observable with the instruments of that time (a back-of-the-envelope calculation after this list shows why). Only in the first half of the 19th century did we succeed in measuring it, and only then was the movement of the Earth around the Sun finally proven.
- Galilei tried to use the tides as a proof. The geocentrists argued that the tides are caused by the Moon, even if they didn't know by what mechanism, but Galilei said that this was just a coincidence and the tides are not caused by the Moon: just as water in a barrel on a cart would be still if the cart were stationary and would slosh around if the cart were pulled by a horse, so the tides are caused by the water sloshing around as the Earth moves. If you read Galilei's book, you will discover quite a number of such silly arguments, and you'll see that Galilei was anything but a rationalist. Instead of changing his views in the face of overwhelming proof, he used every possible fallacy to push his view through.
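To see why the absence of parallax carried such weight, here is a back-of-the-envelope calculation of my own, using modern values that were of course unknown at the time. The parallax angle $p$ in arcseconds of a star at distance $d$ in parsecs is

$$ p = \frac{1''}{d/\mathrm{pc}}, \qquad \text{so for the nearest star system: } p \approx \frac{1''}{1.34} \approx 0.75'' $$

Even Tycho Brahe's best pre-telescopic instruments were only accurate to roughly one arcminute (60''), about eighty times coarser than the largest stellar parallax in the sky. Given the instruments available, the absence of observable parallax was a genuinely strong argument; it took until Bessel's 1838 measurement of 61 Cygni (about 0.3'') to finally detect the effect.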
Actually, the most interesting author on this topic was Riccioli. If you study his writings you will get definite proof that the heliocentrism-geocentrism debate was handled with scientific accuracy and rationality, and that it was not a religious debate at all. He defended geocentrism and presented 126 arguments on the topic (49 for heliocentrism, 77 against), and only two of them (both for heliocentrism) had any religious connotations - and he gave valid responses to both. This means that he, as a rationalist, presented both sides of the debate in a neutral way, and used reasoning instead of appeal to authority or faith in all cases. Actually, this was what the pope expected of Galilei, and such a book was what he commissioned from Galilei. Galilei instead wrote a book in which he caricatured the pope as a strawman, and instead of presenting arguments for and against both world-views in a neutral way, he wrote a book which can be called anything but scientific.
By the way, Riccioli was a Catholic priest. And a scientist. And, it seems to me, also a rationalist. Studying the works of people like him, you might want to change your mind if you perceive a conflict between science and religion - a perception that is part of today's public consciousness only because of a small number of very loud religious fundamentalists, helped by some committed atheists trying to suggest that all theists are like those fundamentalists.
Finally, I would like to copy a short summary about this book:
> In 1651 the Italian astronomer Giovanni Battista Riccioli published within his Almagestum Novum, a massive 1500 page treatise on astronomy, a discussion of 126 arguments for and against the Copernican hypothesis (49 for, 77 against). A synopsis of each argument is presented here, with discussion and analysis. Seen through Riccioli's 126 arguments, the debate over the Copernican hypothesis appears dynamic and indeed similar to more modern scientific debates. Both sides present good arguments as point and counter-point. Religious arguments play a minor role in the debate; careful, reproducible experiments a major role. To Riccioli, the anti-Copernican arguments carry the greater weight, on the basis of a few key arguments against which the Copernicans have no good response. These include arguments based on telescopic observations of stars, and on the apparent absence of what today would be called "Coriolis Effect" phenomena; both have been overlooked by the historical record (which paints a picture of the 126 arguments that little resembles them). Given the available scientific knowledge in 1651, a geo-heliocentric hypothesis clearly had real strength, but Riccioli presents it as merely the "least absurd" available model - perhaps comparable to the Standard Model in particle physics today - and not as a fully coherent theory. Riccioli's work sheds light on a fascinating piece of the history of astronomy, and highlights the competence of scientists of his time.
The full article can be found at this link. I recommend it to everyone interested in the topic. It shows that the geocentrists of that time had real scientific proofs and real experiments supporting their theories, and that for most of them the heliocentrists had no meaningful answers.
Disclaimers:
- I'm not a Catholic, so I have no reason to defend the historic Catholic church due to "justifying my insecurities" - a very common accusation against someone perceived to be defending theists in a predominantly atheist discussion forum.
- Any discussion about any perceived proofs for or against the existence of God would be off-topic here. I know it's tempting to show off your best proofs against your carefully constructed straw-men yet again, but this is just not the place for it, as it would detract from the main purpose of this article, as summarized in its introduction.
- English is not my native language. Nevertheless, I hope that what I wrote is comprehensible. If there is any part of my article which you find ambiguous, feel free to ask.
I have great hopes and expectations that the LessWrong community is suitable for discussing such ideas. I have experience presenting these ideas on other, predominantly atheist internet communities, and most often the reaction was outright flaming, a hurricane of unexplained downvotes, and prejudicial ad hominem attacks based on what affiliations people assumed I subscribed to. It is common for people to decide whether they believe a claim or not based solely on whether the claim suits their ideological affiliations. The best quality of rationalists, however, should be the ability to change their views when confronted with overwhelming proof, instead of coming up with more and more convoluted explanations. In the time I have spent in the LessWrong community, I have come to respect that the people here can argue in a civil manner, listening to the arguments of others instead of discarding them outright.
Explaining “map and territory” and “fundamental attribution error” to a broad audience
I am working on a blog post that aims to convey the concepts of “map and territory” and the “fundamental attribution error” to a broad audience in an engaging and accessible way. Since many people here focus on these subjects, I think it would be really valuable to get your feedback on what I’ve written.
For a bit of context, the blog post is part of the efforts of Intentional Insights to promote rational thinking to a broad audience and thus raise the sanity waterline, as described here. The target audience for the blog post is reason-minded youth and young adults who are either not engaged with rationality or are at the beginning stage of becoming aspiring rationalists. Our goal is to get such people interested in exploring rationality more broadly, eventually getting them turned on to more advanced rationality, such as found on Less Wrong itself, in CFAR workshops, etc. The blog post is written in a style aimed to create cognitive ease, with a combination of personal stories and an engaging narrative, along with citations of relevant research and descriptions of strategies to manage one’s mind more effectively.
This is part of our broader practice of asking for feedback from fellow Less Wrongers on our content (this post for example). We are eager to hear from you and revise our drafts (and even published content offerings) based on your thoughtful comments, and we did so previously, as you see in the Edit to this post.
Below the line is the draft post itself. After we get your suggestions, we will find an appropriate graphic to illustrate this article and post it on the Intentional Insights website. Any and all suggestions are welcomed, and thanks for taking the time to engage with us and give your feedback – much appreciated!
_______________________________________________________________________________________________________________________
Where Do Our Mental Maps Lead Us Astray?
So imagine you are driving on autopilot, as we all do much of the time. Suddenly the car in front of you cuts you off quite unexpectedly. You slam your brakes and feel scared and indignant. Maybe you flash your lights or honk your horn at the other car. What’s your gut feeling about the other driver? I know my first reaction is that the driver is rude and obnoxious.
Now imagine a different situation. You’re driving on autopilot, minding your own business, and you suddenly realize you need to turn right at the next intersection. You quickly switch lanes and suddenly hear someone behind you honking their horn. You now realize that there was someone in your blind spot and you forgot to check it in the rush to switch lanes. So you cut them off pretty badly. Do you feel that you are a rude driver? The vast majority of us do not. After all, we did not deliberately cut that car off, we just failed to see the driver. Or let’s imagine another situation: say your friend hurt herself and you are rushing her to the emergency room. You are driving aggressively, cutting in front of others. Are you a rude driver? Not generally. You’re merely doing the right thing for the situation.
So why do we give ourselves a pass, while attributing an obnoxious status to others? Why does our gut always make us out to be the good guys, and other people the bad guys? Clearly, there is a disconnect between our gut reaction and reality here. It turns out that this pattern is not a coincidence. Basically, our immediate gut reaction attributes the behavior of others to their personality and not to the situation in which the behavior occurs. The scientific name for this type of error in thinking and feeling is the fundamental attribution error, also called the correspondence bias. So if we see someone behaving rudely, we immediately and intuitively feel that this person IS rude. We don't automatically stop to consider whether an unusual situation may have caused the person to act this way. With the driver example, maybe the person who cut you off did not see you. Or maybe they were driving their friend to the emergency room. But that's not what our automatic reaction tells us. On the other hand, we attribute our own behavior to the situation, and not to our personality. Much of the time we feel that we have valid explanations for our actions.
Learning about the fundamental attribution error helped me quite a bit. I became less judgmental about others. I realized that the people around me were not nearly as bad as my gut feelings immediately and intuitively assumed. This decreased my stress levels, and I gained more peace and calm. Moreover, I became more humble. I realized that my intuitive self-evaluation is excessively positive and that in reality I am not quite the good guy my gut reaction tells me I am. Additionally, I realized that those around me who are unaware of this thinking and feeling error are more judgmental of me than my intuition suggested. So I am striving to be more mindful and thoughtful about the impression I make on others.
The fundamental attribution error is one of many problems in our natural thinking and feeling patterns. It is certainly very helpful to learn about all of these errors, but it's hard to focus on avoiding all of them in our daily life. A more effective strategy for evaluating reality more intentionally, to gain more clarity and thus greater agency, is known as "map and territory." This strategy involves recognizing the difference between the mental map of the world that we have in our heads and the reality of the actual world as it exists - the territory.
For myself, internalizing this concept has not been easy. It’s been painful to realize that my understanding of the world is by definition never perfect, as my map will never match the territory. At the same time, this realization was strangely freeing. It made me recognize that no one is perfect, and that I do not have to strive for perfection in my view of the world. Instead, what would most benefit me is to try to refine my map to make it more accurate. This more intentional approach made me more willing to admit to myself that though I intuitively and emotionally feel something is right, I may be mistaken. At the same time, the concept of map and territory makes me really optimistic, because it provides a constant opportunity to learn and improve my assessment of the situation.
Now, what are the strategies for most effectively learning this information, and internalizing the behaviors and mental patterns that can help you succeed? Well, educational psychology research illustrates that engaging with this information actively, personalizing it to your life, linking it to your goals, and deciding on a plan and specific next steps you will take are the best practices for this purpose. So take the time to answer the questions below to gain long-lasting benefit from reading this article:
- What do you think of the concept of map and territory?
- How can it be used to address the fundamental attribution error?
- Where can the notion of map and territory help you in your life?
- What challenges might arise in applying this concept, and how can these challenges be addressed?
- What plan can you make and what specific steps can you take to internalize these strategies?