Now is the very last minute to apply for a Summer 2010 Visiting Fellowship. If you’ve been interested in SIAI for a while, but haven’t quite managed to make contact -- or if you’re just looking for a good way to spend a week or more of your summer -- drop us a line. See what an SIAI summer might do for you and the world.
(SIAI’s Visiting Fellow program brings volunteers to SIAI for anywhere from a week to three months, to learn, teach, and collaborate. Flights and room and board are covered. We’ve been rolling since June of 2009, with good success.)
Apply because:
- SIAI is tackling the world’s most important task -- the task of shaping the Singularity. The task of averting human extinction. We aren’t the only people tackling this, but the total set is frighteningly small.
- When numbers are this small, it’s actually plausible that you can tip the balance.
- SIAI has some amazing people to learn from -- many report learning and growing more here than in any other period of their lives.
- SIAI also has major gaps, and much that desperately needs doing but that we haven’t noticed yet, or have noticed but haven’t managed to fix -- gaps where your own skills, talents, and energy can come into play.
Apply if:
- You have start-up experience or are otherwise an instigator: someone who can walk into an unstructured environment and create useful projects for yourself and others;
- You’re skilled at creating community; you have an open heart; you can learn rapidly, and create contexts for others to learn; you have a serious interest in pioneering more effective ways of thinking;
- You care about existential risk, and are searching for long-term career paths that might help;
- You have high analytic intelligence, a tendency to win math competitions, or background and thinking skill around AI, probability, anthropics, simulation scenarios, rationality, existential risk, and related topics (a background in math, compsci, physics, or analytic philosophy is also a plus);
- You have a specific background that is likely to prove helpful: academic research experience; teaching or writing skill; strong personal productivity; programming fluency; a cognitive profile that differs from the usual LW mold; or strong talent of some other sort, in an area we need but may not yet have realized we need.
(You don’t need all of the above; some is fine.)
Don’t be intimidated -- SIAI contains most of the smartest people I’ve ever met, but we’re also a very open community. Err on the side of sending in an application; then, at least we’ll know each other. (Applications for fall and beyond are also welcome; we’re taking Fellows on a rolling basis.)
If you’d like a better idea of what SIAI is, and what we’re aimed at, check out:
1. SIAI's Brief Introduction;
2. The Challenge projects;
3. Our 2009 accomplishments;
4. Videos from past Singularity Summits (the 2010 Summit will happen during this summer’s program, Aug 14-15 in SF; visiting Fellows will assist);
5. Comments from our last Call for Visiting Fellows; and/or
6. Bios of the 2009 Summer Fellows.
Or just drop me a line. Our application process is informal -- just send me an email at anna at singinst dot org with: (1) a resume/c.v. or similar information; and (2) a few sentences on why you’re applying. And we’ll figure out where to go from there.
Looking forward to hearing from you.
At this point, I think I can provide a definitive answer to your earlier question, and it is ... wait for it ... "It depends on what you mean by revealed preference." (Raise your hand if you saw that one coming! I'll be here all week, folks!)
Specifically: if the AI is to do the "right thing," then it has to get its information about "rightness" from somewhere, and given that moral realism is false (or however you want to talk about it), that information is going to have to come from humans, whether by scanning our brains directly or just superintelligently analyzing our behavior. Whether you call this revealed preference or Friendliness doesn't matter; the technical challenge remains the same.
One argument against using the term revealed preference in this context is that the way the term gets used in economics fails to capture some of the key subtleties of the superintelligence problem. We want the AI to preserve all the things we care about, not just the most conspicuous things. We want it to consider not just that Lucas ate this-and-such, but also that he regretted it afterwards, where it should be stressed that regret is no less real a phenomenon than eating is. But because economists often use their models to study big public things like the trade of money for goods and services, economic concepts are, in the popular imagination, associated with those kinds of big public things, and not with small private things like feeling regretful -- even though you could make a case that the underlying decision-theoretic principles are general enough to cover everything.
If the math only says to maximize u(x) subject to p · x = y, there's no reason things like ethical concerns or the wish to be a better person can't be among the x_i or p_j, but because most people think economics is about money, they're less likely to realize this when you say revealed preference. They'll object, "Oh, but what about the time I did this-and-such, but I wish I were the sort of person who did such-and-that?" You could say, "Well, you revealed your preference to do such-and-that in your other actions, at some other moments of your life," or you could just choose a different word. Again, I'm not sure it matters.
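To make that concrete, here's a minimal sketch of the textbook problem: maximize u(x) subject to the budget constraint p · x = y, where nothing in the math dictates what the goods are. One of the two goods below is ordinary consumption and the other is a stand-in for "being the sort of person I want to be." The goods, prices, and Cobb-Douglas weights are all illustrative assumptions on my part, not anything from the discussion above:

```python
# Minimal sketch: maximize u(x) subject to p . x = y, where one "good" is
# ordinary consumption and the other is a hypothetical "acting in line with
# my ideals" good. All names and numbers are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

p = np.array([1.0, 2.0])      # prices: [consumption, time spent on ideals]
y = 10.0                      # budget (money, hours -- whatever the constraint is in)
alpha = np.array([0.6, 0.4])  # Cobb-Douglas weights: how much the agent cares

def neg_utility(x):
    # Cobb-Douglas utility u(x) = sum_i alpha_i * log(x_i), negated for minimize.
    return -np.sum(alpha * np.log(x))

result = minimize(
    neg_utility,
    x0=np.array([1.0, 1.0]),
    constraints={"type": "eq", "fun": lambda x: p @ x - y},  # budget: p . x = y
    bounds=[(1e-9, None), (1e-9, None)],  # goods can't be consumed in negative amounts
)
print(result.x)
```

For Cobb-Douglas utility the closed-form solution is x_i = alpha_i * y / p_i, so the solver should return roughly [6.0, 2.0]. The point is just that "ethical concerns" slot into the x_i exactly as easily as apples do; the formalism is indifferent to whether a good is big and public or small and private.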