Center for Modern Rationality currently hiring: executive assistants, teachers, research assistants, consultants.
Hi there,
We are still looking for:
A second executive assistant -- preferably someone who lives in the SF Bay Area or is willing to relocate here, but remote work will also be considered. Apply here.
Teachers / curriculum designers. This *does* need to be someone who can relocate to the SF Bay Area, and who has the legal ability to work in the US. Apply here. Especially apply if:
- Rationality, or similar changes in your skill set, have made a big difference in your life;
- You enjoy teaching, and helping others change their lives; you have strong interpersonal skills;
- You have exceptional analytic skills, and want to help us figure out what sort of "rationality" and "rationality training" can actually work -- by being skeptical, trying things out, measuring outcomes, etc.
Remote curriculum designers: as above, except that you don't need the interpersonal/teaching skills, and do need to be extra-exceptional in other respects. Apply here.
Programmers -- folks who can whip up simple prototype web apps quickly, to help with rationality training. Apply here.
Consultants -- folks who have relevant experience, and can spend a few hours offering suggestions for how to structure our workshops, or how to structure rationality groups more generally (after watching us teach, or by giving advice over the phone). If you've run successful workshops for adults before, of any sort (e.g., on Italian cooking), consider applying to help us organize our program. Apply here.
If you live in the SF Bay Area, you are also very welcome to come on a Saturday and help us test out draft lessons (by being a participant as we present them): email stephenpcole at gmail dot com to be added to that email list.
Do err on the side of applying; we hope to hear from you soon!
(These application forms take the place of the previous ones; but if you've already applied with a previous form, you're still golden. I'm just a bit behind on processing the applications.)
Minicamps on Rationality and Awesomeness: May 11-13, June 22-24, and July 21-28
“I do not say this lightly... but if you're looking for superpowers, this is the place to start.”
--Michael Curzi, summer 2011 minicamp participant
Who: You and a class full of other aspiring rationalists and world-optimizers, from around the world.
What: Two 3-day weekend minicamps and one 8-day minicamp, filled with hands-on activities for applying rationality to your life, your goals, and the making of a better world. (See details in the FAQ.)
When and where: We're running three camps, so that we can do this for three sets of participants: May 11-13 and June 22-24 for the 3-day camps, and July 21-28 for the eight-day camp, all in the San Francisco Bay Area.
Why: Because you’re a social primate, and the best way to jump into a new way of thinking, make friends, and accomplish your goals is often to spend time with other primates who are doing just that.
- Hang out and explore the Bay Area with two dozen other people like you who are smart, interesting, and passionate about rationality.
- Attend bonus sessions about style, body language, and confidence-building.
- Get help charting out career paths; and, entirely optionally for those interested, connect with folks at the Singularity Institute about optimal philanthropy.
Instructors:
Eliezer Yudkowsky, Anna Salamon, Julia Galef, Andrew Critch, Luke Muehlhauser, Michael Smith
Cost: $650 for the three-day programs; $1500 for the week-long program. This includes lodging[1], meals, and tuition.
(Note that this *still* isn't quite enough to make running minicamps sustainable in the long run: lodging and meals at retreat centers start at around $90 per person per night, the "three-day camps" include four nights, and each workshop takes a staff of about 5 full-time people for over a month beforehand, most of us at $3k/month, counting curriculum development time (plus miscellaneous expenses). We are trying to strike a compromise between charging enough that we can run more camps and staying affordable, especially for our start-up phase; costs will probably go up in following years.)
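To see roughly why, here is a back-of-the-envelope sketch for one three-day camp (assuming about 25 participants per camp and about one month of staff time; both numbers are rough guesses based on the figures above, not exact budget figures):

\[
\underbrace{4 \text{ nights} \times \$90}_{\text{lodging + meals } = \, \$360}
\;+\;
\underbrace{\frac{5 \text{ staff} \times \$3000/\text{month} \times 1 \text{ month}}{\sim 25 \text{ participants}}}_{\text{staff time } \approx\, \$600 \text{ per participant}}
\;\approx\; \$960 \;>\; \$650 .
\]

Even before miscellaneous expenses, that estimated per-participant cost comes out well above the $650 fee.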
Three days (or a week) isn’t long enough to learn rationality, but it's long enough to learn how to learn rationality, and to get some momentum toward doing so.
Come meet us, and see what you can do.
How do you notice when you're rationalizing?
How do you notice when you're rationalizing? Like, what *actually* tips you off, in real life?
I've listed my cues below; please add your own (one idea per comment), and upvote the comments that you either: (a) use; or (b) will now try using.
I'll be using this list in a trial rationality seminar on Wednesday; it also sounds useful in general.
Urges vs. Goals: The analogy to anticipation and belief
Partially in response to: The curse of identity
Related to: Humans are not automatically strategic, That other kind of status, Approving reinforces low-effort behaviors.
Joe studies long hours, and often prides himself on how driven he is to make something of himself. But in the actual moments of his studying, Joe often looks out the window, doodles, or drags his eyes over the text while his mind wanders. Someone sent him a link to an article on which college majors lead to the greatest lifetime earnings, and he didn't get around to reading that, either. Shall we say that Joe doesn't really care about making something of himself?
The Inuit may not have 47 words for snow, but Less Wrongers do have at least two words for belief. We find it necessary to distinguish between:
- Anticipations, what we actually expect to see happen;
- Professed beliefs, the set of things we tell ourselves we “believe”, based partly on deliberate/verbal thought.
This distinction helps explain how an atheistic rationalist can still get spooked in a haunted house; how someone can “believe” they’re good at chess while avoiding games that might threaten that belief [1]; and why Eliezer had to actually crash a car before he viscerally understood what his physics books tried to tell him about stopping distance going up with the square of driving speed. (I helped Anna revise this - EY.)
A lot of our community technique goes into either (1) dealing with "beliefs" being an evolutionarily recent system, such that our "beliefs" often end up far screwier than our actual anticipations; or (2) trying to get our anticipations to align with more evidence-informed beliefs.
And analogously - this analogy is arguably obvious, but it's deep, useful, and easy to overlook in its implications - there seem to be two major kinds of wanting:
- Urges: concrete emotional pulls, produced in System 1's perceptual/autonomic processes (my urge to drink the steaming hot cocoa in front of me; my urge to avoid embarrassment by having something to add to my accomplishments log);
- Goals: things we tell ourselves we're aiming at, within deliberate/verbal thought and planning (I have a goal to exercise three times a week; I have a goal to reduce existential risk).
Implication 1: You can import a lot of technique for "checking for screwy beliefs" into "checking for screwy goals".
Poll results: LW probably doesn't cause akrasia
Test of: Decision Fatigue, Rationality, and Akrasia.
Shortly before the Summit, Alexandros posted a short discussion post wondering whether rationality training might cause akrasia by prompting folks to make more decisions using deliberate, conscious "System 2" reasoning (instead of rapid, automatic "System 1" heuristics), thereby causing decision fatigue.
This conjecture sounded interesting to me, and I'd wondered similar things myself, so I put up a poll to gather data.
Meetup : Talk on Singularity scenarios and optimal philanthropy, followed by informal meet-up
I'll be giving a 35-minute talk to some folks from Rutgers philosophy and Giving What We Can, followed by Q&A, informal discussion, and hopefully conversation that continues over pizza or something. Come meet me, Carl Shulman (another SingInst research fellow; also, my husband :)), and other LW-ers, discuss some ideas, and have fun!

* * *

The abstract that was sent out to Rutgers folks:

In 1965, I.J. Good proposed that machines would one day be smart enough to make themselves smarter. Having made themselves smarter, they would spot still further opportunities for improvement, quickly leaving human intelligence far behind. He called this the "intelligence explosion". I review the argument for an intelligence explosion, how one might seek to influence its outcome, and how one might reduce the odds of catastrophe should such an event occur. I also try, very briefly, to situate such interventions in relation to other paths for doing good, such as third-world poverty interventions and interventions to reduce nuclear risk.
[Question] Do you know a good game or demo for demonstrating sunk costs?
I'm hoping to find something that can be done in 5 minutes or so, as a classroom demonstration (for the rationality curricula).
I find sunk costs have a large effect in the board game Go (so that beginners are instructed "not to throw good stones after bad"), and I assume they do in poker as well, but both of those games are too long and too full of distractions to be used in a simple demo.
Thanks for any suggestions!
[LINK] How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects
If you're interested in evolution, anthropics, and AI timelines -- or in what the Singularity Institute has been producing lately -- you might want to check out this new paper, by SingInst research fellow Carl Shulman and FHI professor Nick Bostrom.
The paper:
How Hard is Artificial Intelligence? The Evolutionary Argument and Observation Selection Effects
The abstract:
Several authors have made the argument that because blind evolutionary processes produced human intelligence on Earth, it should be feasible for clever human engineers to create human-level artificial intelligence in the not-too-distant future. This evolutionary argument, however, has ignored the observation selection effect that guarantees that observers will see intelligent life having arisen on their planet no matter how hard it is for intelligent life to evolve on any given Earth-like planet. We explore how the evolutionary argument might be salvaged from this objection, using a variety of considerations from observation selection theory and analysis of specific timing features and instances of convergent evolution in the terrestrial evolutionary record. We find that a probabilistic version of the evolutionary argument emerges largely intact once appropriate corrections have been made.
I'd be interested to hear LW-ers' takes on the content; Carl, too, would much appreciate feedback.
Upcoming meet-ups
There are upcoming irregularly scheduled meet-ups in:
- Bangalore: Sunday June 19th at 4pm
- Cambridge, MA: Tuesday June 14th at 7pm
- DC: Sunday June 12th at 1pm
- Edinburgh: Saturday June 11 (I think; the announcement doesn't specify) at 2pm
- Fort Collins: Wednesday June 15th at 7pm
- Houston: Sunday June 12th at 4pm
- Logan: Saturday June 18th at 4pm
- Ottawa: Thursday June 16th at 7pm (+ a Bayes study group)
- Paris: Sunday June 25th around 2pm
Cities with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Irvine, Mountain View, New York, San Francisco, Seattle, Toronto.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!
Upcoming meet-ups:
There are upcoming irregularly scheduled meet-ups in:
- DC: Sunday June 5 at 1pm
- Edinburgh: Saturday June 4 at 2pm
- Fort Collins, Colorado: Wednesday June 8 at 7pm
- Houston: Saturday June 4 at 2pm
- London: Sunday June 5 at 2pm
- Ottawa: Thursday June 9 at 7pm (+ an Ottawa Bayesian statistics group)
- West LA: Wednesday June 8th at 7pm
Cities with regularly scheduled meetups: Austin, Berkeley, Cambridge, MA, Irvine, Mountain View, New York, San Francisco, Seattle, Toronto.
If you'd like to talk with other LW-ers face to face, and there is no meetup in your area, consider starting your own meetup; it's easy (more resources here). Check one out, stretch your rationality skills, and have fun!