Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF bay area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.
Now, the new and better version has arrived. We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long term life paths. Working with this crowd transformed my world; it felt like I was learning to think. I wouldn’t be surprised if it can transform yours.
A representative sample of current projects:
- Research and writing on decision theory, anthropic inference, and other non-dangerous aspects of the foundations of AI;
- The Peter Platzer Popular Book Planning Project;
- Editing and publicizing theuncertainfuture.com;
- Improving the LW wiki, and/or writing good LW posts;
- Getting good popular writing and videos on the web, of sorts that improve understanding of AI risks among key groups;
- Writing academic conference/journal papers to seed academic literatures on questions around AI risks (e.g., takeoff speed, economics of AI software engineering, genie problems, what kinds of goal systems can easily arise and what portion of such goal systems would be foreign to human values; theoretical compsci knowledge would be helpful for many of these questions).
Interested, but not sure whether to apply?
Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI because they weren’t sure they were “good enough”. That kind of timidity destroys the world, by failing to save it. So if that’s your situation, send us an email. Let us be the ones to say “no”. Glancing at an extra application is cheap, and losing out on a capable applicant is expensive.
And if you’re seriously interested in risk reduction, but at a later time or in another capacity, send us an email anyway. Coordinated groups accomplish more than uncoordinated groups; and if you care about risk reduction, we want to know.
What we’re looking for
At bottom, we’re looking for anyone who:
- Is capable (strong ability to get things done);
- Seriously aspires to rationality; and
- Is passionate about reducing existential risk.
Bonus points for any of the following traits (you don’t need them all):
- Experience with management, for example in a position of responsibility in a large organization;
- Good interpersonal and social skills;
- Extraversion, or interest in other people, and in forming strong communities;
- Dazzling brilliance at math or philosophy;
- A history of successful academic paper-writing; strategic understanding of journal submission processes, grant application processes, etc.
- Strong general knowledge of science or social science, and the ability to read rapidly and/or to quickly pick up new fields;
- Great writing skills and/or marketing skills;
- Organization, strong ability to keep projects going without much supervision, and the ability to get mundane stuff done in a reliable manner;
- Skill at implementing (non-AI) software projects, such as web apps for interactive technological forecasting, rapidly and reliably;
- Web programming skill, or website design skill;
- Legal background;
- A history of successfully pulling off large projects or events;
- Unusual competence of some other sort, in some domain we need but haven’t realized we need;
- Cognitive diversity: any respect in which you're different from the typical LW-er, and in which you're more likely than average to notice something we're missing.
If you think this might be you, send a quick email to jasen@intelligence.org. Include:
- Why you’re interested;
- What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume or c.v.);
- Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.
Our application process is fairly informal, so send us a quick email as initial inquiry and we can decide whether or not to follow up with more application components.
As to logistics: we cover room, board, and, if you need it, airfare, but no other stipend.
Looking forward to hearing from you,
Anna
ETA (as of 3/25/10): We are still accepting applications, for summer and in general. Also, you may wish to check out http://www.singinst.org/grants/challenge#grantproposals for a list of some current projects.
It will come as a surprise to few people that I disagree strongly with Eliezer here; Wei should not take his word for the claim that Wei is so much more rational than all the folks he might disagree with that he can ignore their differing opinions. Where is this robust rationality test used to compare Wei to the rest of the intellectual world? Where is the evidence for this supposed mental health risk of considering the important evidence of the opinions of others? If the world is crazy, then very likely so are you. Yes, it is a good sign if you can show some of your work, but you can almost never show all of your relevant work. So we must make inferences about the thinking we have not seen.
Well, I think we both agree on the dangers of a wide variety of cheap talk - or to put it more humbly, you taught me on the subject. Though even before then, I had developed the unfortunate personal habit of calling people's bluffs.
So while we can certainly interpret talk about modesty and immodesty in terms of rhetoric, isn't the main testable prediction at stake the degree to which Wei Dai should often find, on further investigation, that people who disagree with him turn out to have surprisingly good reasons for doing so?