Diaspora roundup thread, 15th June 2016
This is a new experimental weekly thread.
Guidelines: Top-level comments here should be links to things written by members of the rationalist community, preferably that would be interesting specifically to this community. Self-promotion is totally fine. Including a very brief summary or excerpt is great, but not required. Generally stick to one link per top-level comment. Recent links are preferred.
Rule: Do not link to anyone who does not want to be linked to. In particular, Scott Alexander has asked people not to link to specific posts on his tumblr. As far as I know he's never rescinded that. Do not link to posts on his tumblr.
Collaborative Truth-Seeking
Summary: We frequently use debates to resolve differing opinions about the truth. However, debates are not always the best way to figure out the truth. In some situations, the technique of collaborative truth-seeking may work better.
Acknowledgments: Thanks to Pete Michaud, Michael Dickens, Denis Drescher, Claire Zabel, Boris Yakubchik, Szun S. Tay, Alfredo Parra, Michael Estes, Aaron Thoma, Alex Weissenfels, Peter Livingstone, Jacob Bryan, Roy Wallace, and other readers who prefer to remain anonymous for providing feedback on this post. The author takes full responsibility for all opinions expressed here and any mistakes or oversights.
The Problem with Debates
Aspiring rationalists generally aim to figure out the truth, and often disagree about it. The usual method of hashing out such disagreements in order to discover the truth is through debates, in person or online.
Yet more often than not, people on opposing sides of a debate end up seeking to persuade rather than to discover the truth. Indeed, research suggests that debate has a specific evolutionary function: not to discover the truth, but to ensure that our perspective prevails within a tribal social context. No wonder debates are so often compared to wars.
We may hope that, as aspiring rationalists, we would strive to discover the truth during debates. Yet given that we are not always fully rational and strategic in our social engagements, it is easy to slip into debate mode and orient toward winning instead of uncovering the truth. Heck, I know that I sometimes forget, in the midst of a heated debate, that I may be the one who is wrong; I'd be surprised if this didn't happen to you. So while we should certainly continue to engage in debates, we should also use additional strategies, less natural and intuitive ones, that put us in a better mindset for updating our beliefs and improving our perspective on the truth. One such strategy is a mode of engagement called collaborative truth-seeking.
Collaborative Truth-Seeking
Collaborative truth-seeking is one way of describing a more intentional approach in which two or more people with different opinions engage in a process that focuses on finding out the truth. It is a modality best used among people with shared goals and a shared sense of trust.
Some important features of collaborative truth-seeking, often absent from debates, are: a genuine desire to change one's own mind toward the truth; a curious attitude; sensitivity to others' emotions; striving to avoid arousing emotions that hinder belief updating and truth discovery; and trust that all other participants are doing the same. These features contribute to increased social sensitivity, which, together with other attributes, correlates with higher group performance on a variety of activities.
The process of collaborative truth-seeking starts with establishing trust, which will help increase social sensitivity, lower barriers to updating beliefs, increase willingness to be vulnerable, and calm emotional arousal. The following techniques are helpful for establishing trust in collaborative truth-seeking:
- Share weaknesses and uncertainties in your own position
- Share your biases about your position
- Share your social context and background as relevant to the discussion
  - For instance, I grew up poor after my family immigrated to the US when I was 10. This naturally leads me to care about poverty more than some other issues and to have some biases around it; it is one reason I prioritize poverty in my Effective Altruism engagement.
- Vocalize curiosity and the desire to learn
- Ask the other person to call you out if they think you're getting emotional or slipping into emotive debate instead of collaborative truth-seeking, and consider using a safe word
Here are additional techniques that can help you stay in collaborative truth-seeking mode after establishing trust:
- Self-signal: signal to yourself that you want to engage in collaborative truth-seeking, not in debate
- Empathize: try to empathize with the perspective you do not hold by considering where the other person's viewpoint came from and why they think what they do, and by recognizing that they feel their viewpoint is correct
- Keep calm: be prepared to manage your own emotions, and those of the people you engage with, when a desire for debate arises
  - Watch out for defensiveness and aggressiveness in particular
- Go slow: take the time to listen fully and think fully
- Consider pausing: if you can't deal with complex thoughts and emotions in the moment, give yourself an escape route by pausing and picking up the discussion later
  - Say "I will take some time to think about this," and/or write things down
- Echo: paraphrase the other person's position to show, and check, that you have fully understood their thoughts
- Be open: orient toward improving the other person's points so that you argue against their strongest form
- Stay the course: be passionate about wanting to update your beliefs, maintain the most truthful perspective, and adopt the best evidence and arguments, no matter whether they are yours or those of others
- Be diplomatic: when you think the other person is wrong, strive to avoid saying "you're wrong because of X"; instead use questions, such as "what do you think X implies about your argument?"
- Be specific and concrete: go down levels of abstraction
- Be clear: make sure the semantics are clear to all by defining terms
  - Consider tabooing terms that are emotionally arousing, and make sure you are describing the same territory of reality
- Be probabilistic: use probabilistic thinking and probabilistic language to help gauge the extent of disagreement and to stay as specific and concrete as possible
  - For instance, avoid saying that X is absolutely true; say instead that you think there's an 80% chance it is the true position
  - Consider adding what evidence and reasoning led you to that estimate, so that both you and the other participants can examine the chain of thought
- When people whose perspective you respect fail to update their beliefs in response to your clear chain of reasoning and evidence, update somewhat toward their position, since their response is evidence that your position is not as convincing as you thought (see the toy calculation after this list)
- Confirm your sources: look up information when it's possible to do so (Google is your friend)
- Charity mode: strive to be more charitable to others and their expertise than seems intuitive to you
- Use the reversal test to check for status quo bias
  - If you are discussing whether to change some specific numeric parameter - say, increasing the money donated to charity X by 50% - state the reverse of your position, for example decreasing the money donated to charity X by 50%, and see how that affects your perspective
- Use CFAR's double crux technique
  - In this technique, two parties who hold different positions each write down the fundamental reason for their position (the crux of their position). This reason has to be the key one, such that if it were proven incorrect, they would change their perspective. Then they look for experiments that can test the cruxes, and repeat as needed. If a person identifies more than one crucial reason, you can go through each in turn. More details are here.
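As a toy illustration of the probabilistic framing above, here is a minimal sketch (my own, not part of the original technique list) of how a Bayesian update might look when a respected interlocutor hears your full argument and still disagrees. The specific probabilities are assumptions chosen purely for illustration:

```python
# Toy Bayesian update: how much should a respected peer's continued
# disagreement shift your confidence? (Illustrative numbers only.)

def update(prior: float, p_evidence_if_true: float, p_evidence_if_false: float) -> float:
    """Posterior probability of a binary hypothesis via Bayes' rule."""
    numerator = prior * p_evidence_if_true
    return numerator / (numerator + (1 - prior) * p_evidence_if_false)

prior = 0.80  # you start 80% confident in your position

# Assumption: a thoughtful peer's continued disagreement is twice as
# likely if you are wrong as if you are right.
p_disagree_if_right = 0.3
p_disagree_if_wrong = 0.6

posterior = update(prior, p_disagree_if_right, p_disagree_if_wrong)
print(f"Confidence after their pushback: {posterior:.0%}")  # ~67%
```

The point is not the particular numbers but the direction and rough size of the shift: disagreement from someone you respect should move you somewhat, neither not at all nor all the way.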
Of course, not all of these techniques are necessary for high-quality collaborative truth-seeking. Some are easier than others, and different techniques apply better to different kinds of truth-seeking discussions. You can apply some of these techniques during debates as well, such as double crux and the reversal test. Try some out and see how they work for you.
Conclusion
Engaging in collaborative truth-seeking goes against our natural impulse to win debates, and is thus more cognitively costly. It also tends to take more time and effort than simply debating. And it is easy to slip back into debate mode even while attempting collaborative truth-seeking, because debate is the more intuitive mode of engagement.
Moreover, collaborative truth-seeking need not replace debates at all times. This non-intuitive mode of engagement is best chosen for issues that involve deeply-held beliefs and/or risk emotionally triggering the people involved. Because of my own background, I would prefer to discuss poverty in collaborative truth-seeking mode rather than debate mode, for example. On such issues, collaborative truth-seeking can provide a shortcut to resolution, compared to protracted, tiring, and emotionally challenging debates. On the other hand, using collaborative truth-seeking to resolve differing opinions on all issues risks creating a community oriented excessively toward sensitivity to the perspectives of others, in which important issues are not discussed candidly. After all, research shows the importance of disagreement for making wise decisions and figuring out the truth. Of course, collaborative truth-seeking is well suited to expressing disagreements in a sensitive way, so if used appropriately, it might permit even people with triggers around certain topics to express their opinions.
Taking these caveats into consideration, collaborative truth-seeking is a great tool for discovering the truth and updating our beliefs, as it can get past the high emotional barriers to changing our perspectives that evolution has put up. Rationality venues are natural places to try it out.
What is up with carbon dioxide and cognition? An offer
One or two research groups have published work on carbon dioxide and cognition. The state of the published literature is confusing.
Here is one paper on the topic. The authors experimentally manipulate carbon dioxide levels (without affecting other measures of air quality) and measure performance on a proprietary cognitive benchmark. They find implausibly large effects from increased carbon dioxide concentrations.
If the reported effects are real and the suggested interpretation is correct, I think it would be a big deal. To put this in perspective, carbon dioxide concentrations in my room vary between 500 and 1500 ppm depending on whether I open the windows. The experiment reports cognitive effects from moving from 600 to 1000 ppm, and finds effects that are significant compared to interindividual differences.
I haven't spent much time looking into this (maybe 30 minutes, and another 30 minutes to write this post). I expect that if we spent some time looking into indoor CO2 we could have a much better sense of what was going on, by some combination of better literature review, discussion with experts, looking into the benchmark they used, and just generally thinking about it.
So, here's a proposal:
- If someone looks into this and writes a post that improves our collective understanding of the issue, I will be willing to buy part of an associated certificate of impact, at a price of around $100*N, where N is my own totally made up estimate of how many hours of my own time it would take to produce a similarly useful writeup. I'd buy up to 50% of the certificate at that price.
- Whether or not they want to sell me some of the certificate, on May 1 I'll give a $500 prize to the author of the best publicly-available analysis of the issue. If the best analysis draws heavily on someone else's work, I'll use my discretion: I may split the prize arbitrarily, and may give it to the earlier post even if it is not quite as excellent.
Some clarifications:
- The metric for quality is "how useful it is to Paul." I hope that's a useful proxy for how useful it is in general, but no guarantees. I am generally a pretty skeptical person. I would care a lot about even a modest but well-established effect on performance.
- These don't need to be new analyses, either for the prize or the purchase.
- I reserve the right to resolve all ambiguities arbitrarily, and in the end to do whatever I feel like. But I promise I am generally a nice guy.
- I posted this 2 weeks ago on the EA forum and haven't had serious takers yet.
A Second Year of Spaced Repetition Software in the Classroom
This is a follow-up to last year's report. Here, I will talk about my successes and failures using Spaced Repetition Software (SRS) in the classroom for a second year. The year's not over yet, but I have reasons for reporting early that should become clear in a subsequent post. A third post will then follow, and together these will constitute a small sequence exploring classroom SRS and the adjacent ideas that bubble up when I think deeply about teaching.
Summary
I experienced net negative progress this year in my efforts to improve classroom instruction via spaced repetition software. While this is mostly attributable to shifts in my personal priorities, I have also identified a number of additional failure modes for classroom SRS, as well as further shortcomings of Anki for this use case. My experiences also showcase some fundamental challenges to teaching in general, which SRS depressingly spotlights while being no less susceptible to them. Regardless, I am more bullish than ever about the potential of classroom SRS, and will lay out a detailed vision for what it can be in the next post.
Look for Lone Correct Contrarians
Related to: The Correct Contrarian Cluster, The General Factor of Correctness
(Content note: Explicitly about spreading rationalist memes, increasing the size of the rationalist movement, and proselytizing. I also regularly use the word 'we' to refer to the rationalist community/subculture. You might prefer not to read this if you don't like that sort of thing and/or you don't think I'm qualified to write about that sort of thing and/or you're not interested in providing constructive criticism.)
I've tried to introduce a number of people to this culture and the ideas within it, but it takes some finesse to get a random individual from the world population to keep thinking about these things and apply them. My personal efforts have been very hit-or-miss. Others have told me that they've been more successful. But I think there are many people that share my experience. This is unfortunate: we want people to be more rational and we want more rational people.
At any rate, this is not about the art of raising the sanity waterline, but the more general task of spreading rationalist memes. Some people naturally arrive at these ideas, but they usually have to find them through other people first. This is really about all of the people in the world who are like you probably were before you found this culture; the people who would care about it, and invest in it, as it is right now, if only they knew it existed.
I'm going to be vague for the sake of anonymity, but here goes:
I was reading a book review on Amazon, and I really liked it. The writer felt like a kindred spirit. I immediately saw that they were capable of coming to non-obvious conclusions, so I kept reading. Then I checked their review history in the hope that I would find other good books and reviews. And it was very strange.
They did a bunch of stuff that very few humans do. They realized that nuclear power has risks but that the benefits heavily outweigh the risks given the appropriate alternative, and they realized that humans overestimate the risks of nuclear power for silly reasons. They noticed when people were getting confused about labels and pointed out the general mistake, as well as pointing out what everyone should really be talking about. They acknowledged individual and average IQ differences and realized the correct policy implications. They really understood evolution, they took evolutionary psychology seriously, and they didn't care if it was labeled as sociobiology. They used the word 'numerate.'
And the reviews ranged over more than a decade of time. These were persistent interests.
I don't know what other people do when they discover that a stranger like this exists, but the first thing that I try to do is talk to them. It's not like I'm going to run into them on the sidewalk.
Amazon had no messaging feature that I could find, so I looked for a website, and I found one. There I found even more evidence that this was a correct contrarian. They were interested in altruism, including how it goes wrong; computer science; statistics; psychology; ethics; coordination failures; failures of academic and scientific institutions; educational reform; cryptocurrency; etc. At this point I considered it more likely than not that they already knew everything I wanted to tell them, and that they already self-identified as a rationalist, or had a contrarian reason for not identifying as such.
So I found their email address. I told them that they were a great reviewer, that I was surprised that they had come to so many correct contrarian conclusions, and that, if they didn't already know, there was a whole culture of people like them.
They replied in ten minutes. They were busy, but they liked what I had to say, and as a matter of fact, a friend had already convinced them to buy Rationality: From AI to Zombies. They said they hadn't read much of it yet because the book is so large, but they loved what they had read so far and wanted to keep reading.
(You might postulate that I only found a review by a user like this because the book was recommended to me on the basis of our shared interest in Rationality: From AI to Zombies. However, the first review I read by this user was for a book on unusual gardening methods, which I found in a search for books about gardening methods. For the sake of anonymity, my unusual gardening methods must remain a secret. It is reasonable to postulate some sort of sampling bias like the one I have described, but given what I know, that is likely not what happened here. You certainly could still postulate a correlation by way of books about unusual gardening methods, however.)
Maybe that extra push made the difference. Maybe if there hadn't been a friend, I would've made the difference.
Who knew that's how my morning would turn out?
As I've said in some of my other posts, but not in so many words, maybe we should start doing this accidentally effective thing deliberately!
I know there's probably controversy about whether or not rationalists should proselytize, but I've been in favor of it for a while. And if you're like me, then I don't think this is a very special effort to make. I'm sure sometimes you see a little thread, and you think, "Wow, they're a lot like me; they're a lot like us, in fact; I wonder if there are other things too. I wonder if they would care about this."
Don't just move on! That's Bayesian evidence!
I dare you to follow that path to its destination. I dare you to reach out. It doesn't cost much.
And obviously there are ways to make yourself look creepy or weird or crazy. But I said to reach out, not to reach out badly. If you can figure out how to do it right, it could have a large impact. And these people are likely to be pretty reasonable. You should keep a lookout in the future.
Speaking of the future, it's worth noting that I ended up reading the first review because of an automated Amazon book recommendation and subsequent curiosity. You know we're in the data. We are out there and there are ways to find us. In a sense, we aren't exactly low-hanging fruit. But in another sense, we are.
I've never read a word of the Methods of Rationality, but I have to shoehorn this in: we need to write the program that sends a Hogwarts acceptance letter to witches and wizards on their eleventh birthday.
Newsjacking for Rationality and Effective Altruism
Summary: This post describes the steps I took to newsjack a breaking story to promote Rationality and Effective Altruism ideas in an op-ed piece, so that anyone can take similar steps to newsjack a relevant story.
Introduction
Newsjacking is the art and science of injecting your ideas into a breaking news story. It should be done as early as possible in the life cycle of a news story for maximum impact in drawing people's attention to your ideas.

Some of you may have heard about the Wounded Warrior Project scandal that came to light five days ago or so. This nonprofit, which helps wounded veterans, fired its top staff for excessively lavish spending and for building Potemkin village-style programs that served as marketing showpieces but did little to help wounded veterans.
I scan the news regularly, and was lucky enough to see the story just as it was breaking, on the evening of March 10th. I decided to try to newsjack it for the sake of Rationality and Effective Altruist ideas. With the help of some timely editing by EA and Rationality enthusiasts other than myself - props to Agnes Vishnevkin, Max Harms, Chase Roycraft, Rhema Hokama, Jacob Bryan, and Yaacov Tarko - TIME just published my piece. This is a big deal: one of the first news stories people now see when they type "wounded warrior" into Google, as you can see from the screenshot below, is a story promoting Rationality and EA-themed ideas. Regarding Rationality proper, I talk about the horns effect and scope neglect, citing Eliezer's piece on the latter in the post itself, probably the first link to Less Wrong from TIME. Regarding EA, I talk about effective giving, and about EA organizations such as GiveWell, The Life You Can Save, and Animal Charity Evaluators, as well as effective direct-action charities such as the Against Malaria Foundation and GiveDirectly. Many people are searching for "wounded warrior" now that the scandal is emerging, and are getting exposed to Rationality and EA ideas.
Newsjacking a story like this and getting published in TIME may seem difficult, but it's doable. I hope that the story of how I did it and the steps I lay out, as well as the template of the actual article I wrote, will encourage you to try to do so yourself.
Specific Steps
1) The first step is to be mentally prepared to newsjack a story, and to be vigilant about scanning the headlines for any story relevant to Rationality or EA causes. The story I newsjacked was about a scandal in the nonprofit sector, a type of breaking news story that occurs at regular intervals. But a news story about mad cow disease spreading from factory farms might be a good opportunity to write about Animal Charity Evaluators, and a news story about the Zika virus might be a good opportunity to write about how we still haven't killed off malaria (hint hint for any potential authors). While those are specifically EA-related, you can inject Rationality into almost any news story by pointing out biases, etc.
2) Once you find a story, decide what angle you want to take, write a great first draft, and get it edited. You are welcome to use my TIME piece as an inspiration and template. I can't stress getting it edited strongly enough; the first draft is always going to be only the first draft. You can get friends to help out, but also tap EA resources such as the EA Editing and Review FB group and the .impact Writing Help Slack channel. You can also get feedback on the LW Open Thread. Get multiple sets of eyes on it, and quickly. Ask more people than you anticipate you need, as some may drop out. For this piece, for example, I wrote it over the morning and early afternoon of Friday, March 11th, and was lucky enough to have 6 people review it by the evening - but 10 people had committed to reviewing it, so don't rely on everyone to come through.
3) Decide what venues you will submit to, and send the piece to as many appropriate venues as you think reasonable. Here is an incomplete but pretty good list of places that accept op-eds. When you have decided on venues, write a pitch that you will use to introduce the article to editors. Your pitch should start by stating that you think the readers of the specific venue will be interested in the piece, so that the editor knows this is not a copy-pasted email but something you customized for them. Then continue with 3-5 sentences summarizing the article's main points and any unique angle you bring to it. Your second paragraph should describe your credentials for writing the piece. Here's my successful pitch to TIME:
_______________________________________________________________________________________________
Good day,
I think TIME readers will be interested in my timely piece, “Why The Wounded Warrior Fiasco Hurts Everyone (And How To Prevent It).” It analyzes the problems in the nonprofit sector that lead systematically to the kind of situation seen with Wounded Warrior. Unlike other writings on this topic, the article provides a unique angle by relying on neuroscience to clarify these challenges. The piece then gives clear suggestions for how your readers as individual donors can address these kinds of problems and avoid suffering the same kind of grief that Wounded Warrior supporters are dealing with. Finally, it talks about a nascent movement to reform and improve the nonprofit sector, Effective Altruism.
My expertise for writing the piece comes from my leadership of a nonprofit dedicated to educating people in effective giving, Intentional Insights. I also serve as a professor at Ohio State, working at the intersection of history, psychology, neuroscience, and altruism, enabling me to have credibility as a scholar of these issues. I have written for many popular venues, such as The Huffington Post, Salon, The Plain Dealer, Alternet, and others, which leads me to believe your readership will enjoy my writing style.
Hope you can use this piece!
____________________________________________________________________________________________________
4) I bet I know what at least some of you are thinking: my credentials make it much easier for me to publish in TIME than someone without them. Well, trust me, you can get published somewhere :-) Your hometown paper or university paper is desperately looking for good content about breaking stories, and if you can be the one who provides that content, you can get EA and Rationality ideas out there. Then you can slowly build up a base of publications that will take you to the next level.
Do you think I started by publishing in The Huffington Post? No, I started with my own blog, then guest blogging for other people, then writing op-eds for smaller local venues that I don't even list anymore, and slowly over time built the kind of prominence that gets me considered for TIME. And it's still a crapshoot even for me: I sent out more than 30 pitches to editors at different prominent venues, and a number turned down the piece, before TIME accepted it. Once a piece is accepted, you have to let any editors who get back to you later and express interest know that it has already been published; most op-ed venues prefer original content, so they may still publish it, but likely won't. So the fourth step is to be confident in yourself and to try and keep trying, if you feel this type of writing is a skill you can contribute to spreading Rationality/EA.
5) There's a fifth step: repurpose your content at venues that allow republication. For instance, I wrote a version of this piece for The Life You Can Save blog, the Intentional Insights blog, and The Huffington Post, all of which allow republication of content published elsewhere. Don't let your efforts go to waste :-)
Conclusion
I hope this step-by-step guide to newsjacking a breaking story for Rationality or EA will encourage you to try it. It's not as hard as it seems, though it requires effort and dedication. It helps to know how to write for a broad public audience when promoting Rationality and EA ideas, which is what we do at Intentional Insights, so email me at gleb@intentionalinsights.org if you want training in that or want to discuss any other aspect of marketing such ideas broadly. You're also welcome to get in touch if you'd like editing help on such a newsjacking effort. Good luck spreading these ideas!
P.S. To amplify the signal and get more people into EA and Rationality modes of thinking, you are welcome to share the story I wrote for TIME.
[moderator action] The_Lion and The_Lion2 are banned
Accounts "The_Lion" and "The_Lion2" are banned now. Here is some background, mostly for the users who weren't here two years ago:
User "Eugine_Nier" was banned for retributive downvoting in July 2014. He keeps returning to the website using new accounts, such as "Azathoth123", "Voiceofra", "The_Lion", and he keeps repeating the behavior that got him banned originally.
The original ban was permanent. It will be enforced on all future known accounts of Eugine. (At random moments, because moderators sometimes feel too tired to play whack-a-mole.) This decision is not open to discussion.
Please note that the moderators of LW are the opposite of trigger-happy. Not counting spam, fewer than one account per year gets banned, on average. I am writing this explicitly to avoid possible misunderstanding among new users: just because you have read about someone being banned doesn't mean that you are now at risk.
Most of the time, LW discourse is regulated by the community voting on articles and comments. Stupid or offensive comments get downvoted; you lose some karma, then everyone moves on. In rare cases, moderators may remove specific content that goes against the rules. The account ban is only used in the extreme cases (plus for obvious spam accounts). Specifically, on LW people don't get banned for merely not understanding something or disagreeing with someone.
What does "retributive downvoting" mean? Imagine that in a discussion you write a comment that someone disagrees with. Then in a few hours you will find that your karma has dropped by hundreds of points, because someone went through your entire comment history and downvoted all comments you ever wrote on LW; most of them completely unrelated to the debate that "triggered" the downvoter.
Such behavior is damaging to the debate and the community. Unlike downvoting a specific comment, this kind of mass downvoting isn't used to correct a faux pas, but to drive a person away from the website. It has an especially strong impact on new users, who don't know what is going on and may mistake it for a reaction of the whole community. But even for experienced users it creates an "ugh field" around certain topics known to invoke the reaction. Thus a single user has achieved disproportionate control over the content and the user base of the website. This is not desired, and will be punished by the site owners and the moderators.
To avoid rules lawyering, there is no exact definition of how much downvoting breaks the rules. The rule of thumb is that you should upvote or downvote each comment based on the value of that specific comment. You shouldn't vote on the comments regardless of their content merely because they were written by a specific user.
The correct response to uncertainty is *not* half-speed
Related to: Half-assing it with everything you've got; Wasted motion; Say it Loud.
Once upon a time (true story), I was on my way to a hotel in a new city. I knew the hotel was many miles down this long, branchless road. So I drove for a long while.

After a while, I began to worry I had passed the hotel.

So, instead of proceeding at 60 miles per hour the way I had been, I continued in the same direction for several more minutes at 30 miles per hour, wondering if I should keep going or turn around.

Of course, this was the worst of both worlds: if the hotel was still ahead of me, I should have kept going at full speed, and if I had already passed it, I should have turned around. Driving at 30 miles per hour served neither possibility. I've since noticed the same "half-speed" response to uncertainty in many other situations:

- I wasn't sure if I was a good enough writer to write a given doc myself, or if I should try to outsource it. So, I sat there kind-of-writing it while also fretting about whether the task was correct.
- (Solution: Take a minute out to think through heuristics. Then, either: (1) write the post at full speed; or (2) try to outsource it; or (3) write full force for some fixed time period, and then pause and evaluate.)
- I wasn't sure (back in early 2012) that CFAR was worthwhile. So, I kind-of worked on it.
- An old friend came to my door unexpectedly, and I was tempted to hang out with her, but I also thought I should finish my work. So I kind-of hung out with her while feeling bad and distracted about my work.
- A friend of mine, when teaching me math, seems to mumble specifically those words that he doesn't expect me to understand (in a sort of compromise between saying them and not saying them)...
- Duncan reports that novice Parkour students are unable to safely undertake certain sorts of jumps, because they risk aborting the move mid-stream, after the actual last safe stopping point (apparently kind-of-attempting these jumps is more dangerous than either attempting or not attempting them)
- It is said that start-up founders need to be irrationally certain that their startup will succeed, lest they be unable to do more than kind-of work on it...

Why CFAR's Mission?
Related to:
---
Q: Why not focus exclusively on spreading altruism? Or else on "raising awareness" for some particular known cause?
Briefly put: because historical roads to hell have been powered in part by good intentions; because the contemporary world seems bottlenecked more by its ability to figure out what to do and how to do it (i.e. by ideas/creativity/capacity) than by folks' willingness to sacrifice; and because rationality skill and epistemic hygiene may distinguish actually useful ideas from ineffective or harmful ones in a way that "good intentions" cannot.
Q: Even given the above -- why focus extra on sanity, or true beliefs? Why not focus instead on, say, competence/usefulness as the key determinant of how much do-gooding impact a motivated person can have? (Also, have you ever met a Less Wronger? I hear they are annoying and have lots of problems with “akrasia”, even while priding themselves on their high “epistemic” skills; and I know lots of people who seem “less rational” than Less Wrongers on some axes who would nevertheless be more useful in many jobs; is this “epistemic rationality” thingy actually the thing we need for this world-impact thingy?...)
This is an interesting one, IMO.
Basically, it seems to me that epistemic rationality, and skills for forming accurate explicit world-models, become more useful the more ambitious and confusing a problem one is tackling.
For example:
Why startup founders have mood swings (and why they may have uses)
(This post was collaboratively written together with Duncan Sabien.)
Startup founders stereotypically experience some pretty serious mood swings. One day, their product seems destined to be bigger than Google, and the next, it’s a mess of incoherent, unrealistic nonsense that no one in their right mind would ever pay a dime for. Many of them spend half of their time full of drive and enthusiasm, and the other half crippled by self-doubt, despair, and guilt. Often this rollercoaster ride goes on for years before the company either finds its feet or goes under.
Well, sure, you might say. Running a startup is stressful. Stress comes with mood swings.
But that’s not really an explanation—it’s like saying stuff falls when you let it go. There’s something about the “launching a startup” situation that induces these kinds of mood swings in many people, including plenty who would otherwise be entirely stable.