Today I was appointed the new Executive Director of Singularity Institute.
Because I care about transparency, one of my first projects as an intern was to begin work on the organization's first Strategic Plan. I researched how to write a strategic plan, tracked down the strategic plans of similar organizations, and met with each staff member, progressively iterating the document until it was something everyone could get behind.
I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! More than 100 hours of my own work, plus dozens of hours from others, went into the strategic plan before it was finished and ratified by the board. It doesn't accomplish much by itself, but it's one important stepping stone toward building an organization that is more productive, more trusted, and more likely to help solve the world's biggest problems.
I spent just two months as a researcher before being appointed Executive Director.
In further pursuit of transparency, I'd like to answer (on video) submitted questions from the Less Wrong community just as Eliezer did two years ago.
The Rules
1) One question per comment (to allow voting to carry more information about people's preferences).
2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask it in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).
3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov.
4) If your question references something that is available online, please provide a link.
5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin recording video responses for.
I might respond to certain questions within the comments thread and not on video; for example, when there is a one-word answer.
Thanks. You didn't answer my questions directly, but it sounds like things are proceeding more or less according to expectations. I have a couple of follow-up questions.
At what level of talent do you think an attempt to build an FAI would start to do more (expected) good than harm? For simplicity, feel free to ignore the opportunity cost of spending financial and human resources on this project, and just consider the potential direct harmful effects, like accidentally creating a UFAI while experimenting to better understand AGI, or building a would-be FAI that turns out to be a UFAI due to a philosophical, theoretical, or programming error, or leaking AGI advances that allow others to build a UFAI, or starting an AGI arms race.
I have a serious concern that if SIAI ever manages to obtain abundant funding and a team of "pretty competent researchers" (or even "world-class talent", since I'm not convinced that even a team of world-class talent trying to build an FAI will do more good than harm), it will proceed with an FAI project without adequate analysis of the costs and benefits of doing so, or without continuously reevaluating the decision in light of new information. Do you think this concern is reasonable?
If so, I think it would help a lot if SIAI got into the habit of making its strategic thinking more transparent. It could post answers to questions like the ones I asked in the grandparent comment without having to be prompted. It could publish the reasons behind every major strategic decision, and the metrics it keeps to evaluate its initiatives. (One way to do this, if such strategic thinking often occurs or is presented at board meetings, would be to publish the meeting minutes, as I suggested in another comment.)
I'm not sure that scientific talent is the relevant variable here. More talented folk are more likely to achieve both positive and negative outcomes. I would place more weight on epistemic rationality, motivations (personality, background checks), institutional setup and culture, and the strategy of first testing the tractability of robust FAI theory and then advancing FAI theory before code (with emphasis on the more-FAI-less-AGI problems first).