It's fairly common for a LessWrong meetup group to have people attend for a week or two and then never show up again. Most of the time there's probably no very interesting reason for that. But if someone did have a bad experience at a meetup, that would be valuable information they'd be unlikely to volunteer to the organizers.
Thus, I've created a form to collect meetup feedback. The primary purpose is: if you have a local LessWrong meetup that you don't currently attend, we'd like to know why. However, any other feedback is also appreciated: good feedback, bad-but-not-dealbreaking feedback, and feedback from people who do currently attend. "Currently" is left up to your own interpretation.
Please fill in the form now. It should only take a couple of minutes. There are three short-answer questions and three longer ones, but all questions are optional. Better to give a quick response now than to indefinitely postpone writing a longer one.
I intend to publish the responses, both raw and with some appropriate-seeming amount of aggregation. But I'm going to strip out the "where is your meetup" field from the public data. This is so that you can give feedback to a group without worrying about embarrassing them publicly. I'll tell the organizers which responses applied to them, so that the feedback still reaches the right place. If you identify the meetup in a long-form response, I won't strip that out. I'll also strip out the "anonymous identifier" field, naturally.
If you do currently attend a meetup, but want to give feedback anyway, please do also fill in the form.
If you think your answer seems boring, don't let that stop you: for example, we'd like to know relative numbers of "came once, had a bad time" versus "came once, but it's usually not convenient", and we can't do that if the second group doesn't reply.
Once again: please fill in the form now! If you comment that you have done so, I will reward you with an upvote.
Less Wrong currently represents a tiny, tiny, tiny segment of the population. In its current form, it might only appeal to a tiny, tiny segment of the population. Basically, the people who have a strong need for cognition, who are INTx on the Myers-Briggs (65% of us as per 2012 survey data), etc.
Raising the sanity waterline seems like a generally good idea. Smart people who believe stupid things, and go on to invest resources in stupid ways because of it, are frustrating. Trying to learn rationality skills in my 20s, when a bunch of thought patterns are already overlearned, is even more frustrating.
I have an intuition that a better future would be one where the concept of rationality (maybe called something different, but the same idea) is normal. Where it's as obvious as the idea that you shouldn't spend more money than you earn, or that you should live a healthy lifestyle, etc. The point isn't that everyone currently lives debt-free, eats decently well and exercises; that isn't the case; but they are normal things to do if you're a minimally proactive person who cares a bit about your future. No one has ever told me that doing taekwondo to stay fit is weird and culty, or that keeping a budget will make me unhappy because I'm overthinking things.
I think the questions of "whether we should try to do this" and "if so, how do we do it in practice?" are both valuable to discuss, and interesting.
Is making rationality general-interest a good goal?
My intuitions are far from 100% reliable. I can think of a few reasons why this might be a bad idea:
1. A little bit of rationality can be damaging; it might push people in the direction of too much contrarianism, or something else I haven't thought of. Since introspection is imperfect, knowing a bit about cognitive biases and the mistakes that other people make might make people actually less likely to change their mind–they see other people making those well-known mistakes, but not themselves. Likewise, rationality taught only as a tool or skill, without any kind of underlying philosophy of why you should want to believe true things, might cause problems of a similar nature to martial art skills taught without the traditional, often non-violent philosophies–it could result in people abusing the skill to win fights/debates, making the larger community worse off overall. (Credit to Yan Zhang for martial arts metaphor).
2. Making the concepts general-interest, or just growing too fast, might involve watering them down or changing them in some way that the value of the LW microcommunity is lost. This could be worse for the people who currently enjoy LW even if it isn't worse overall. I don't know how easy this would be to avoid.
3. It turns out that rationalists don't actually win, and x-rationality, as Yvain terms it, just isn't that amazing over-and-above already being proactive and doing stuff like keeping a budget. Yeah, you can say stuff like "the definition of rationality is that it helps you win", but if in real life, all the people who deliberately try to increase their rationality end up worse off overall, by their own standards (or even equally well, but with less time left over for other fun pursuits), than the people who aim for their life goals directly, I want to know that.
4. Making rationality general-interest is a good idea, but not the best thing to be spending time and energy on right now because of Mysterious Reasons X, Y, Z. Maybe I only think it is because of my personal bias towards liking community stuff (and wishing all of my friends were also friends with each other and liked the same activities, which would simplify my social life, but probably shouldn't happen for good reasons).
Obviously, if any of these are the case, I want to know about it. I also want to know about it if there are other reasons, off my radar, why this is a terrible idea.
What has to change for this to happen?
I don't really know, or I would be doing those things already (maybe, akrasia allowing). I have some ideas, though.
1. The jargon thing. I'm currently trying to compile a list of LW/CFAR jargon as a project for CFAR, and there are lots of terms I don't know. There are terms that I've realized in retrospect that I was using incorrectly all along. This presents both a large initial effort for someone interested in learning about rationality via the LW route, and also might contribute to the looking-like-a-cult thing.
2. The gender ratio thing. This has been discussed before, and it's a controversial thing to discuss, and I don't know how much arguing about it in comments will present any solutions. It seems pretty clear that if you want to appeal to the whole population, and a group that represents 50% of the general population only represents 10% of your participants (also as per 2012 survey data, see link above), there's going to be a problem somewhere down the road.
My data point: as a female on LW, I haven't experienced any discrimination, and I'm a bit baffled as to why the gender ratio is so skewed in the first place. Then again, I've already been through the filter of not caring if I'm the only girl at a meetup group. And I do hang out in female-dominated groups (i.e. the entire field of nursing), and fit in okay, but I'm probably not all that good as a typical example to generalize from.
3. LW currently appeals to intelligent people, or at least people who self-identify as intelligent; according to the 2012 survey data, the self-reported IQ median is 138. This wouldn't be surprising, and isn't a problem until you want to appeal to more than 1% of the population. But intelligence and rationality are, in theory, orthogonal, or at least not the same thing. If I suffered a brain injury that reduced my IQ significantly but didn't otherwise affect my likes and dislikes, I expect I would still be interested in improving my rationality and think it was important, perhaps even more so, but I also think I would find it frustrating. And I might feel horribly out of place.
4. Rationality in general has a bad rap; specifically, the Spock thing. And this isn't just affecting whether or not people think Less Wrong the site is weird; it's affecting whether they want to think about their own decision-making.
This is only what I can think of in 5 minutes...
What's already happening?
Meetup groups are happening. CFAR is happening. And there are groups out there practicing skills similar or related to rationality, whether or not they call it the same thing.
Rationality, Less Wrong and CFAR have, gradually over the last 2-3 years, become a big part of my life. It's been fun, and I think it's made me stronger, and I would prefer a world where as many other people as possible have that. I'd like to know if people think that's a) a good idea, b) feasible, and c) how to do it practically.
I recently had the privilege of being a CFAR alumni volunteering at a later workshop, which is a fascinating thing to do, and put me in a position both to evaluate how much of a difference the first workshop actually made in my life, and to see how the workshops themselves have evolved.
Exactly a year ago, I attended one of the first workshops, back when they were still inexplicably called “minicamps”. I wasn't sure what to expect, and I especially wasn't sure why I had been accepted. But I bravely bullied the nursing faculty staff until they reluctantly let me switch a day of clinical around, and later stumbled off my plane into the San Francisco airport in a haze of exhaustion. The workshop spat me out three days later, twice as exhausted, with teetering piles of ideas and very little time or energy to apply them. I left with a list of annual goals, which I had never bothered to have before, and a feeling that more was possible–this included the feeling that more would have been possible if the workshop had been longer and less chaotic, if I had slept more the week before, if I hadn't had to rush out on Sunday evening to catch a plane and miss the social.
Like I frequently do on Less Wrong the website, I left the minicamp feeling a bit like an outsider, but also a bit like I had come home. As well as my written goals, I made an unwritten pre-commitment to come back to San Francisco later, for longer, and see whether I could make the "more is possible" in my head more specific. Of my thirteen written goals on my list, I fully accomplished only four and partially accomplished five, but I did make it back to San Francisco, at the opportunity cost of four weeks of sacrificed hospital shifts.
A week or so into my stay, while I shifted around between different rationalist shared houses and attempted to max out interesting-conversations-per-day, I found out that CFAR was holding another May workshop. I offered to volunteer, proved my sincerity by spending 6 hours printing and sticking nametags, and lived on site for another 4-day weekend of delightful information overload and limited sleep.
Before the May 2012 workshop, I had a low prior that any four-day workshop could be life-changing in a major way. A four-year nursing degree, okay–I've successfully retrained my social skills and my ability to react under pressure by putting myself in particular situations over and over and over and over again. Four days? Nah. Brains don't work that way.
In my experience, it's exceedingly hard for the human brain to do anything deliberately. In Kahneman-speak, habits are System 1, effortless and automatic. Doing things on purpose involves System 2, effortful and a bit aversive. I could have had a much better experience in my final intensive care clinical if I'd thought to open up my workshop notes and tried to address the causes of aversions, or use offline time to train habits, or, y'know, do anything on purpose instead of floundering around trying things at random until they worked.
(Then again, I didn't apply concepts like System 1 and System 2 to myself a year ago. I read 'Thinking Fast and Slow' by Kahneman and 'Rationality and the Reflective Mind' by Stanovich as part of my minicamp goal 'read 12 hard nonfiction books this year', most of which came from the CFAR recommended reading list. If my preceptor had had any idea what I was saying when I explained to her that she was running particular nursing skills on System 1, because they were ingrained on the level of habit, and I was running the same tasks on System 2 in working memory because they were new and confusing to me, and that was why I appeared to have poor time management, because System 2 takes forever to do anything, this terminology might have helped. Oh, for the world where everyone knows all jargon!)
...And here I am, setting aside a month of my life to think only about rationality. I can't imagine that my counterfactual self-who-didn't-attend-in-May-2012 would be here. I can't imagine that being here now will have zero effect on what I'm doing in a year, or ten years. Bingo. I did one thing deliberately!
So what was the May 2013 workshop actually like?
The curriculum has shifted around a lot in the past year, and I think with 95% probability that it's now more concretely useful. (Speaking of probabilities, the prediction markets during the workshop seemed to flow better and be more fun and interesting this time, although this may just show that I was more averse to games in general and betting in particular. In that case, yay for partly-cured aversions!)
The classes are grouped in an order that allows them to build on each other usefully, and they've been honed by practice into forms that successfully teach skills, instead of just putting words in the air and on flipcharts. For example, having a personal productivity system like GTD came across as a culturally prestigious thing at the last workshop, but there wasn't a lot of useful curriculum on it. Of course, I left on this trip wanting to spend my offline month creating a GTD system better than paper to-do lists taped to walls, so I have both motivation and a low threshold for improvement.
There are also some completely new classes, including "Againstness training" by Valentine, which seems to relate to some of the 'reacting under pressure' stuff in interesting ways, and gave me vocabulary and techniques for something I've been doing inefficiently by trial and error for a good part of my life.
In general, there are more classes about emotions, both how to deal with them when they're in the way and how to use them when they're the best tool available. Given that none of us are Spock, I think this is useful.
Rejection therapy has morphed into a less terrifying and more helpful form with the awesome name of CoZE (Comfort Zone Expansion). I didn't personally find the original rejection therapy all that awful, but some people did, and that problem is largely solved.
The workshops are vastly more orderly and organized. (I like to think I contributed to this slightly with my volunteer skills of keeping the fridge stocked with water bottles and calling restaurants to confirm orders and make sure food arrived on time.) Classes began and ended on time. The venue stayed tidy. The food was excellent. It was easier to get enough sleep. Etc. The May 2012 venue had a pool, and this one didn't, which made exercise harder for addicts like me. CFAR staff are talking about solving this.
The workshops still aren't an easy environment for introverts. The negative parts of my experience in May 2012 were mostly because of this. It was easier this time, because as a volunteer I could skip classes if I started to feel socially overloaded, but periods of quiet alone time had to be effortfully carved out of the day, and at an opportunity cost of missing interesting conversations. I'm not sure if this problem is solvable without either making the workshops longer, in order to space the material out, and thus less accessible for people with jobs, or by cutting out curriculum. Either would impose a cost on the extroverts who don't want an hour at lunch to meditate or go running alone or read a sci-fi book, etc.
In general, I found the May 2012 workshop too short and intense–we had material thrown at us at a rate far exceeding the usual human idea-digestion rate. Keeping in touch via Skype chats with other participants helped. CFAR now does official followups with participants for six weeks following the workshop.
Meeting the other participants was, as usual, the best part of the weekend. The group was quite diverse, although I was still the only health care professional there. (Whyyy???? The health care system needs more rationality so badly!) The conversations were engaging. Many of the participants seem eager to stay in touch. The May 2012 workshop has a total of six people still on the Skype chats list, which is a 75% attrition rate. CFAR is now working on strategies to help people who want to stay in touch do it successfully.
I thought the May 2012 workshop was awesome. I thought the May 2013 workshop was about an order of magnitude more awesome. I would say that now is a great time to attend a CFAR workshop...except that the organization is financially stable and likely to still be around in a year and producing even better workshops. So I'm not sure. Then again, rationality skills have compound interest–the value of learning some new skills now, even if they amount more to vocab words and mental labels than superpowers, compounds over the year that you spend seeing all the books you read and all the opportunities you have in that framework. I'm glad I went a year ago instead of this May. I'm even more glad I had the opportunity to see the new classes and meet the new participants a year later.
What Are We Doing?
We've had considerable interest and uptake on the Less Wrong Study Hall, especially with informal timed Pomodoro sessions for everyone to synchronize on. Working together with a number of other visible faces, and your own face visible to them, does seem effective. Keeping the social chat to the 5 off minutes prevents this from turning into just another chatroom.
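The Study Hall's rhythm is the standard Pomodoro protocol: a fixed block of focused work, then a short synchronized break for chat, repeating. A minimal sketch of that schedule (the function name and the 25/5 defaults are illustrative assumptions, not anything the Study Hall specifies):

```python
from datetime import datetime, timedelta

def pomodoro_schedule(start, cycles, work_min=25, break_min=5):
    """Return (work_start, work_end, break_end) tuples for each cycle.

    Everyone synchronizing on the same start time gets the same
    work blocks and the same 'off' minutes for social chat.
    """
    slots = []
    t = start
    for _ in range(cycles):
        work_end = t + timedelta(minutes=work_min)
        break_end = work_end + timedelta(minutes=break_min)
        slots.append((t, work_end, break_end))
        t = break_end  # next cycle begins when the break ends
    return slots

# Example: two synchronized cycles starting at 9:00.
start = datetime(2013, 5, 1, 9, 0)
for work_start, work_end, break_end in pomodoro_schedule(start, 2):
    print(f"work {work_start:%H:%M}-{work_end:%H:%M}, chat until {break_end:%H:%M}")
# work 09:00-09:25, chat until 09:30
# work 09:30-09:55, chat until 10:00
```

The point of computing the whole schedule up front, rather than starting each timer ad hoc, is exactly the synchronization the post describes: everyone's "off" minutes line up, so chat stays confined to the breaks.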
In a world where 85% of doctors can't solve simple Bayesian word problems...
In a world where only 20.9% of reported results that a pharmaceutical company tries to investigate for development purposes fully replicate...
...and where there are all sorts of amazing technologies and techniques which nobody at your hospital has ever heard of...
...there's also MetaMed. Instead of just having “evidence-based medicine” in journals that doctors don't actually read, MetaMed will provide you with actual evidence-based healthcare. Their Chairman and CTO is Jaan Tallinn (cofounder of Skype, major funder of xrisk-related endeavors), one of their major VCs is Peter Thiel (major funder of MIRI), their management includes some names LWers will find familiar, and their researchers know math and stats and in many cases have also read LessWrong. If you have a sufficiently serious problem and can afford their service, MetaMed will (a) put someone on reading the relevant research literature who understands real statistics and can tell whether the paper is trustworthy; and (b) refer you to a cooperative doctor in their network who can carry out the therapies they find.
MetaMed was partially inspired by the case of a woman who had her fingertip chopped off, was told by the hospital that she was screwed, and then read through an awful lot of literature on her own until she found someone working on an advanced regenerative therapy that let her actually grow the fingertip back. The idea behind MetaMed isn't just that they will scour the literature to find how the best experimentally supported treatment differs from the average wisdom - people who regularly read LW will be aware that this is often a pretty large divergence - but that they will also look for this sort of very recent technology that most hospitals won't have heard about.
This is a new service and it has to interact with the existing medical system, so they are currently expensive, starting at $5,000 for a research report. (Keeping in mind that a basic report involves a lot of work by people who must be good at math.) If you have a sick friend who can afford it - especially if the regular system is failing them, and they want (or you want) their next step to be more science instead of "alternative medicine" or whatever - please do refer them to MetaMed immediately. We can’t all have nice things like this someday unless somebody pays for it while it’s still new and expensive. And the regular healthcare system really is bad enough at science (especially in the US, but science is difficult everywhere) that there's no point in condemning anyone to it when they can afford better.
I also got my hands on a copy of MetaMed's standard list of citations that they use to support points to reporters. What follows isn't nearly everything on MetaMed's list, just the items I found most interesting.
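The "simple Bayesian word problems" doctors famously fail are usually of one shape: given a base rate, a test's sensitivity, and its false-positive rate, what's the chance a positive result means disease? A minimal sketch of the calculation, using the classic illustrative mammography numbers (not figures from the study cited above):

```python
def posterior(prior, sensitivity, false_positive_rate):
    """P(condition | positive test) via Bayes' theorem."""
    true_positives = prior * sensitivity
    false_positives = (1 - prior) * false_positive_rate
    return true_positives / (true_positives + false_positives)

# Classic textbook numbers: 1% base rate, 80% sensitivity,
# 9.6% false-positive rate.
p = posterior(0.01, 0.80, 0.096)
print(f"{p:.1%}")  # 7.8% -- far lower than the intuitive guess of ~80%
```

The counterintuitive part is the denominator: with a 1% base rate, the false positives from the healthy 99% swamp the true positives, which is exactly the structure these word problems are testing.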
The Center for Applied Rationality is running two more four-day workshops: Jan 25-28 and March 1-4 in the SF bay area. Like the previous workshop, these sessions are targeted at ambitious, analytic people who have broad intellectual interests, and who care about making real-world projects work. Less Wrong veterans and Less Wrong newcomers alike are welcome: as discussed below, we are intentionally bringing together folks with varied backgrounds and skill bases.
This is the second post of the 2012 Ritual Sequence. The Introduction post is here.
This is... the extended version, I suppose, of a speech I gave at the Solstice.
The NYC Solstice celebration begins bright and loud, and gradually becomes somber and poignant. Our opening songs are about the end of the world, but in a funny, boisterous manner that gets people excited and ready to sing. We gradually wind down, dimming lights, extinguishing flames. We turn to songs that aren’t sad but are more quiet and pretty.
And then things get grim. We read Beyond the Reach of God. We sing songs about a world where we are alone, where there is nothing protecting us, and where we somehow need to survive and thrive, even when it looks like the light is failing.
We extinguish all but a single candle, and read an abridged version of the Gift We Give to Tomorrow, which ends like this:
Once upon a time,
far away and long ago,
there were intelligent beings who were not themselves intelligently designed.
Once upon a time,
there were lovers, created by something that did not love.
Once upon a time,
when all of civilization was a single galaxy,
A single star.
A single planet.
A place called Earth.
Once upon a time.
And then we extinguish that candle, and sit for a moment in the darkness.
This year, I took that time to tell a story.
It’s included in the 2012 Ritual Book. I was going to post it at the end of the sequence. But I realized that it’s actually pretty important to the “What Exactly is the Point of Ritual?” discussion. So I’m writing a more fleshed out version now, both for easy reference and for people who don’t feel like hunting through a large pdf to find it.
It’s a bit longer, in this version - it’s what I might have said, if time wasn’t a constraint during the ceremony.
A year ago, I started planning for tonight. In particular, for this moment, after the last candle is snuffed out and we’re left alone in the dark with the knowledge that our world is unfair and that we have nobody to help us but each other.
I wanted to talk about death.
My grandmother died two years ago. The years leading up to her death were painful. She slowly lost her mobility, until all she could do was sit in her living room and hope her family would come by to visit and talk to her.