Now is the very last minute to apply for a Summer 2010 Visiting Fellowship. If you’ve been interested in SIAI for a while, but haven’t quite managed to make contact -- or if you’re just looking for a good way to spend a week or more of your summer -- drop us a line. See what an SIAI summer might do for you and the world.
(SIAI’s Visiting Fellow program brings volunteers to SIAI for anywhere from a week to three months, to learn, teach, and collaborate. Flights and room and board are covered. We’ve been rolling since June of 2009, with good success.)
Apply because:
- SIAI is tackling the world’s most important task -- the task of shaping the Singularity. The task of averting human extinction. We aren’t the only people tackling this, but the total set is frighteningly small.
- When numbers are this small, it’s actually plausible that you can tip the balance.
- SIAI has some amazing people to learn from -- many report learning and growing more here than in any other period of their lives.
- SIAI also has major gaps, and much that desperately needs doing but that we haven’t noticed yet, or have noticed but haven’t managed to fix -- gaps where your own skills, talents, and energy can come into play.
Apply especially if:
- You have start-up experience or are otherwise an instigator: someone who can walk into an unstructured environment and create useful projects for yourself and others;
- You’re skilled at creating community; you have an open heart; you can learn rapidly and create contexts for others to learn; you have a serious interest in pioneering more effective ways of thinking;
- You care about existential risk, and are searching for long-term career paths that might help;
- You have high analytic intelligence, a tendency to win math competitions, or background and thinking skill around AI, probability, anthropics, simulation scenarios, rationality, existential risk, and related topics (a background in math, compsci, physics, or analytic philosophy is also a plus);
- You have a specific background that is likely to prove helpful: academic research experience; teaching or writing skill; strong personal productivity; programming fluency; a cognitive profile that differs from the usual LW mold; or strong talent of some other sort, in an area we need, that we may not have realized we need.
(You don’t need all of the above; some is fine.)
Don’t be intimidated -- SIAI contains most of the smartest people I’ve ever met, but we’re also a very open community. Err on the side of sending in an application; then, at least we’ll know each other. (Applications for fall and beyond are also welcome; we’re taking Fellows on a rolling basis.)
If you’d like a better idea of what SIAI is, and what we’re aimed at, check out:
1. SIAI's Brief Introduction;
2. The Challenge projects;
3. Our 2009 accomplishments;
4. Videos from past Singularity Summits (the 2010 Summit will happen during this summer’s program, Aug 14-15 in SF; visiting Fellows will assist);
5. Comments from our last Call for Visiting Fellows; and/or
6. Bios of the 2009 Summer Fellows.
Or just drop me a line. Our application process is informal -- just send me an email at anna at singinst dot org with: (1) a resume/c.v. or similar information; and (2) a few sentences on why you’re applying. And we’ll figure out where to go from there.
Looking forward to hearing from you.
What would be wrong with an AI based on our revealed preferences? It sounds like an easy question, but somehow I'm having a hard time coming up with an answer.
What an AI is based on determines what the world will actually be like, so by building an AI with a given preference, you are inevitably answering my question about what to do with the world. Using revealed preference for an AI is wrong to exactly the extent that revealed preference gives the wrong answer to my question. You seem to agree that the correct answer to my question has little to do with revealed preference; that amounts to seeing revealed preference as the wrong thing to imprint an AI with.