Last summer, 15 Less Wrongers, under the auspices of SIAI, gathered in a big house in Santa Clara (in the SF Bay Area), with whiteboards, existential risk-reducing projects, and the ambition to learn and do.
Now, the new and better version has arrived. We’re taking folks on a rolling basis to come join in our projects, learn and strategize with us, and consider long term life paths. Working with this crowd transformed my world; it felt like I was learning to think. I wouldn’t be surprised if it can transform yours.
A representative sample of current projects:
- Research and writing on decision theory, anthropic inference, and other non-dangerous aspects of the foundations of AI;
- The Peter Platzer Popular Book Planning Project;
- Editing and publicizing theuncertainfuture.com;
- Improving the LW wiki, and/or writing good LW posts;
- Getting good popular writing and videos on the web, of sorts that improve understanding of AI risks among key groups;
- Writing academic conference/journal papers to seed academic literatures on questions around AI risks (e.g., takeoff speed, economics of AI software engineering, genie problems, what kinds of goal systems can easily arise and what portion of such goal systems would be foreign to human values; theoretical compsci knowledge would be helpful for many of these questions).
Interested, but not sure whether to apply?
Past experience indicates that more than one brilliant, capable person refrained from contacting SIAI because they weren’t sure they were “good enough”. That kind of timidity destroys the world, by failing to save it. So if that’s your situation, send us an email. Let us be the ones to say “no”. Glancing at an extra application is cheap, and losing out on a capable applicant is expensive.
And if you’re seriously interested in risk reduction, but at a later time or in another capacity, send us an email anyway. Coordinated groups accomplish more than uncoordinated groups; and if you care about risk reduction, we want to know.
What we’re looking for
At bottom, we’re looking for anyone who:
- Is capable (strong ability to get things done);
- Seriously aspires to rationality; and
- Is passionate about reducing existential risk.
Bonus points for any of the following traits (you don’t need them all):
- Experience with management, for example in a position of responsibility in a large organization;
- Good interpersonal and social skills;
- Extraversion, or interest in other people, and in forming strong communities;
- Dazzling brilliance at math or philosophy;
- A history of successful academic paper-writing; strategic understanding of journal submission processes, grant application processes, etc.
- Strong general knowledge of science or social science, and the ability to read rapidly and/or to quickly pick up new fields;
- Great writing skills and/or marketing skills;
- Organization, strong ability to keep projects going without much supervision, and the ability to get mundane stuff done in a reliable manner;
- Skill at implementing (non-AI) software projects, such as web apps for interactive technological forecasting, rapidly and reliably;
- Web programming skill, or website design skill;
- Legal background;
- A history of successfully pulling off large projects or events;
- Unusual competence of some other sort, in some domain we need, but haven’t realized we need;
- Cognitive diversity: any respect in which you're different from the typical LW-er, and in which you're more likely than average to notice something we're missing.
If you think this might be you, send a quick email to jasen@intelligence.org. Include:
- Why you’re interested;
- What particular skills you would bring, and what evidence makes you think you have those skills (you might include a standard resume or c.v.);
- Optionally, any ideas you have for what sorts of projects you might like to be involved in, or how your skillset could help us improve humanity’s long-term odds.
Our application process is fairly informal, so send us a quick email as an initial inquiry, and we can decide whether or not to follow up with more application components.
As to logistics: we cover room, board, and, if you need it, airfare, but no other stipend.
Looking forward to hearing from you,
Anna
ETA (as of 3/25/10): We are still accepting applications, for summer and in general. Also, you may wish to check out http://www.singinst.org/grants/challenge#grantproposals for a list of some current projects.
I'm not so sure. You don't seem to be being downvoted for criticizing Eliezer's strategy or sparse publication record: you got upvoted earlier, as did CronoDAS for making similar points. But the hostile and belligerent tone of many of your comments does come off as kind of, well, trollish.
Incidentally, I can't help but notice that the subject and style of your writing are remarkably similar to those of DS3618. Is that just a coincidence?
Not to mention mormon1 and psycho.
The same complaints and vitriol about Eliezer and LW, unsupported claims of technical experience convenient to conversational gambits (CMU graduate degree with no undergrad degree, AI and DARPA experience), and support for Intelligent Design creationism.
Plus sadly false claims of being done with Less Wrong because of his contempt for its participants.