Today I was appointed the new Executive Director of Singularity Institute.
Because I care about transparency, one of my first projects as an intern was to begin work on the organization's first Strategic Plan. I researched how to write a strategic plan, tracked down the strategic plans of similar organizations, and met with each staff member, progressively iterating the document until it was something everyone could get behind.
I quickly learned why there isn't more of this kind of thing: transparency is a lot of work! After more than 100 hours of my own work, plus dozens of hours from others, the strategic plan was finally finished and ratified by the board. It doesn't accomplish much by itself, but it's one important stepping stone in building an organization that is more productive, more trusted, and more likely to help solve the world's biggest problems.
I spent two months as a researcher, and was then appointed Executive Director.
In further pursuit of transparency, I'd like to answer (on video) submitted questions from the Less Wrong community just as Eliezer did two years ago.
The Rules
1) One question per comment (to allow voting to carry more information about people's preferences).
2) Try to be as clear and concise as possible. If your question can't be condensed into one paragraph, you should probably ask in a separate post. Make sure you have an actual question somewhere in there (you can bold it to make it easier to scan).
3) I will generally answer the top-voted questions, but will skip some of them. I will tend to select questions about Singularity Institute as an organization, not about the technical details of some bit of research. You can read some of the details of the Friendly AI research program in my interview with Michael Anissimov.
4) If your question references something that is online, please provide a link.
5) This thread will be open to questions and votes for 7 days, at which time I will decide which questions to begin recording video responses for.
I may respond to certain questions in the comments thread rather than on video; for example, when the answer is a single word.
Louie Helm is Singularity Institute's Director of Development. He manages donor relations, grant writing, and talent recruitment.
Here are some of the actions that I would take as a director of development:
Michael Anissimov is responsible for compiling, distributing, and promoting SIAI media materials.
What I would do:
Anna Salamon is a full-time SIAI researcher.
What is she researching right now? With all due respect, the Uncertain Future web project doesn't look like something that a researcher who is capable of making progress on the FAI problem could work on for three years.
Eliezer Yudkowsky is the foremost researcher on Friendly AI and recursive self-improvement.
He's still writing his book on rationality? How is it going? Is he planning a book tour? Does he already know who he will send free copies to, e.g. Richard Dawkins or other people who could promote it on their blogs?
Edwin Evans is the Chairman of the Singularity Institute Board of Directors.
I have no clue what he is doing, or could be doing, right now.
Ray Kurzweil
It looks like he isn't doing anything beyond appearing on the team page.
Amy Willey, J.D., is the Singularity Institute's Chief Operating Officer, and is responsible for institute operations and legal matters.
What I would do:
Michael Vassar is SIAI's President, and provides overall leadership of SIAI.
As president, one of the first actions I would take is to talk with everyone about the importance of data security. I would also make sure that there are encrypted backups of my organization's work on different continents and under different jurisdictions, so that various kinds of catastrophes, including a government-imposed disclosure obligation, can be mitigated or avoided.