Today I was appointed the new Executive Director of Singularity Institute.
Because I care about transparency, one of my first projects as an intern was to begin work on the organization's first Strategic Plan. I researched how to write a strategic plan, tracked down the strategic plans of similar organizations, and met with each staff member, iterating on the document until it was something everyone could get behind.
I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! After 100+ hours of my own work, plus dozens of hours from others, the strategic plan was finally finished and ratified by the board. It doesn't accomplish much by itself, but it's one important stepping stone in building an organization that is more productive, more trusted, and more likely to help solve the world's biggest problems.
After that, I spent two months as a researcher before being appointed Executive Director.
In further pursuit of transparency, I'd like to answer (on video) questions submitted by the Less Wrong community, just as Eliezer did two years ago.
The Rules
1) One question per comment (to allow voting to carry more information about people's preferences).
2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask it in a separate post. Make sure your comment contains an actual question somewhere (you can bold it to make it easier to scan).
3) I will generally answer the top-voted questions, though I will skip some. I will favor questions about Singularity Institute as an organization over questions about the technical details of particular research; you can read some details of the Friendly AI research program in my interview with Michael Anissimov.
4) If your question references something available online, please provide a link.
5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin recording video responses for.
I may answer some questions in the comments thread rather than on video; for example, when the answer is only a word or two.
If someone as capable as Terence Tao approached SIAI asking to work full-time, for free, on Friendly AI, what would you tell them to do? In other words, are there any known FAI sub-problems that demand expertise SIAI currently lacks?
What message about FAI/MIRI should I take away from the fact that this very important question has gone unanswered?