Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF Bay Area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.
Now, the new and better version has arrived. We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long-term life paths. Working with this crowd transformed my world; it felt like I was learning to think. I wouldn’t be surprised if it could transform yours.
A representative sample of current projects:
- Research and writing on decision theory, anthropic inference, and other non-dangerous aspects of the foundations of AI;
- The Peter Platzer Popular Book Planning Project;
- Editing and publicizing theuncertainfuture.com;
- Improving the LW wiki, and/or writing good LW posts;
- Getting good popular writing and videos on the web, of the sort that improves understanding of AI risks among key groups;
- Writing academic conference/journal papers to seed academic literatures on questions around AI risks (e.g., takeoff speed, economics of AI software engineering, genie problems, what kinds of goal systems can easily arise and what portion of such goal systems would be foreign to human values; theoretical compsci knowledge would be helpful for many of these questions).
Interested, but not sure whether to apply?
Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI because they weren’t sure they were “good enough”. That kind of timidity destroys the world, by failing to save it. So if that’s your situation, send us an email. Let us be the ones to say “no”. Glancing at an extra application is cheap, and losing out on a capable applicant is expensive.
And if you’re seriously interested in risk reduction, but at a later time or in another capacity, send us an email anyway. Coordinated groups accomplish more than uncoordinated groups; and if you care about risk reduction, we want to know.
What we’re looking for
At bottom, we’re looking for anyone who:
- Is capable (strong ability to get things done);
- Seriously aspires to rationality; and
- Is passionate about reducing existential risk.
Bonus points for any of the following traits (you don’t need them all):
- Experience with management, for example in a position of responsibility in a large organization;
- Good interpersonal and social skills;
- Extraversion, or interest in other people, and in forming strong communities;
- Dazzling brilliance at math or philosophy;
- A history of successful academic paper-writing; strategic understanding of journal submission processes, grant application processes, etc.
- Strong general knowledge of science or social science, and the ability to read rapidly and/or to quickly pick up new fields;
- Great writing skills and/or marketing skills;
- Organization, strong ability to keep projects going without much supervision, and the ability to get mundane stuff done in a reliable manner;
- Skill at implementing (non-AI) software projects, such as web apps for interactive technological forecasting, rapidly and reliably;
- Web programming skill, or website design skill;
- Legal background;
- A history of successfully pulling off large projects or events;
- Unusual competence of some other sort, in some domain we need but haven’t yet realized we need;
- Cognitive diversity: any respect in which you're different from the typical LW-er, and in which you're more likely than average to notice something we're missing.
If you think this might be you, send a quick email to jasen@intelligence.org. Include:
- Why you’re interested;
- What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume or c.v.);
- Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.
Our application process is fairly informal, so send us a quick email as an initial inquiry, and we can decide whether or not to follow up with more application components.
As to logistics: we cover room, board, and, if you need it, airfare, but no other stipend.
Looking forward to hearing from you,
Anna
ETA (as of 3/25/10): We are still accepting applications, for summer and in general. Also, you may wish to check out http://www.singinst.org/grants/challenge#grantproposals for a list of some current projects.
Is it just me, or does this seem a bit backwards? SIAI is trying to make FAI, yet so much of its time is spent on the risks and benefits of an FAI that doesn’t exist. For a task estimated to be so dangerous and so world-changing, would it not behoove SIAI to be the first to make FAI? If that is the case, then I am a bit confused about the strategy SIAI is employing to accomplish the goal of FAI.
Also, if FAI is the primary goal here, then it seems to me that one should be looking not at Less Wrong but at gathering people from places like Google, Intel, IBM, and DARPA... Why would you choose to pull from a predominantly amateur talent pool like LW (sorry to say, but there it is)?
That's my end of the problem.
Not nearly high-end enough. International Math Olympiad, programming olympiads, young superstars of other types, older superstars with experience, and as much diversity of genius as I can manage to pack into a very small group. T...