Now is the very last minute to apply for a Summer 2010 Visiting Fellowship. If you’ve been interested in SIAI for a while, but haven’t quite managed to make contact -- or if you’re just looking for a good way to spend a week or more of your summer -- drop us a line. See what an SIAI summer might do for you and the world.
(SIAI’s Visiting Fellow program brings volunteers to SIAI for anywhere from a week to three months, to learn, teach, and collaborate. Flights and room and board are covered. We’ve been rolling since June of 2009, with good success.)
Apply because:
- SIAI is tackling the world’s most important task -- the task of shaping the Singularity. The task of averting human extinction. We aren’t the only people tackling this, but the total set of people working on it is frighteningly small.
- When numbers are this small, it’s actually plausible that you can tip the balance.
- SIAI has some amazing people to learn from -- many report learning and growing more here than in any other period of their lives.
- SIAI also has major gaps, and much that desperately needs doing -- some of it we haven’t noticed yet, some we’ve noticed but haven’t managed to fix. These are gaps where your own skills, talents, and energy can come into play.
Apply if:
- You have start-up experience or are otherwise an instigator: someone who can walk into an unstructured environment and create useful projects for yourself and others;
- You’re skilled at creating community; you have an open heart; you can learn rapidly, and create contexts for others to learn; you have a serious interest in pioneering more effective ways of thinking;
- You care about existential risk, and are searching for long-term career paths that might help;
- You have high analytic intelligence, a tendency to win math competitions, or background and thinking skill around AI, probability, anthropics, simulation scenarios, rationality, existential risk, and related topics (a math, compsci, physics, or analytic philosophy background is also a plus);
- You have a specific background that is likely to prove helpful: academic research experience; teaching or writing skill; strong personal productivity; programming fluency; a cognitive profile that differs from the usual LW mold; or strong talent of some other sort, in an area we need, that we may not have realized we need.
(You don’t need all of the above; some is fine.)
Don’t be intimidated -- SIAI contains most of the smartest people I’ve ever met, but we’re also a very open community. Err on the side of sending in an application; then, at least we’ll know each other. (Applications for fall and beyond are also welcome; we’re taking Fellows on a rolling basis.)
If you’d like a better idea of what SIAI is, and what we’re aimed at, check out:
1. SIAI’s Brief Introduction;
2. The Challenge projects;
3. Our 2009 accomplishments;
4. Videos from past Singularity Summits (the 2010 Summit will happen during this summer’s program, Aug 14-15 in SF; visiting Fellows will assist);
5. Comments from our last Call for Visiting Fellows; and/or
6. Bios of the 2009 Summer Fellows.
Or just drop me a line. Our application process is informal -- just send me an email at anna at singinst dot org with: (1) a resume/c.v. or similar information; and (2) a few sentences on why you’re applying. And we’ll figure out where to go from there.
Looking forward to hearing from you.
As things stand, there is no guarantee that SIAI will get to make a difference, just as you have no guarantee that you will be alive in a week’s time. The real question is: do you even believe that unfriendly AI is a threat to the human race, and if so, is there anyone else tackling the problem in even a semi-competent way? If you don’t even think unfriendly AI is an issue, that’s one sort of discussion, a back-to-basics discussion. But if you do agree it’s a potentially terminal problem, then who else is there? Everyone else in AI is a dilettante on this question; AI ethics is always a problem to be solved swiftly and in passing, a distraction from the more exciting business of making machines that can think. SIAI perceive the true seriousness of the issue, and at least have a sensible plan of attack, even if they are woefully under-resourced when it comes to making it happen.
I suspect that in fact you’re playing devil’s advocate a bit, trying to encourage the articulation of a new and better argument in favor of SIAI, but the sort of argument you want doesn’t work. SIAI can of course guarantee that there will continue to be Singularity Summits and Visiting Fellows, and it is reasonable to think that informed people discussing the issue make it more likely to turn out for the best, but they simply cannot guarantee that theoretically and pragmatically they will be ready in time. Perhaps I can put it this way: SIAI getting on with the job is not sufficient to guarantee a friendly Singularity, but for such an outcome to be anything but blind luck, it is necessary that someone take responsibility, and no one else comes close to doing that.
I have to admit that I should have read the "Brief Introduction" link. That answered a lot of my objections.
In the end, all I can say is that I got a misleading idea about the aspirations of SIAI, and that this was my fault. With this better understanding of SIAI’s goals, though (which are implied to be limited to mitigating accidents caused by commercially developed AIs), I have to say that I remain unconvinced that FAI is a high-priority matter. I am particularly unimpressed by Yudkowsky’s cynical opinion of their motivations behind...