Transcript below.
Intro
Hi everyone. I’m Luke Muehlhauser, the new Executive Director of Singularity Institute.
Literally hours after being appointed Executive Director, I posted a call for questions about the organization on the LessWrong.com community website, saying I would answer many of them on video — and this is that video.
I’m doing this because I think transparency and communication are important.
In fact, when I began as an intern with Singularity Institute, one of my first projects was to spend over a hundred hours working with everyone in the organization to write its first strategic plan, which the board ratified and you can now read on our website.
When I was hired as a researcher, I gave a long text-only interview with Michael Anissimov, where I answered 30 questions about my personal background, the mission of Singularity Institute, our technical research program, the unsolved problems we work on, and the value of rationality training.
After becoming Executive Director, I immediately posted that call for questions — a few of which I will now answer.
Staff Changes
First question. Less Wrong user ‘wedrifid’ asks:
The staff and leadership at [Singularity Institute] seem to be undergoing a lot of changes recently. Is instability in the organisation something to be concerned about?
On this, I should address the specific staff changes that wedrifid is talking about. At the end of summer 2011, Jasen Murray — who was running the visiting fellows program — resigned in order to pursue a business opportunity related to his passion for improving people’s effectiveness. At that same time, I was hired as a researcher after working as an intern for a few months, and Louie Helm was hired as Director of Development after having done significant volunteer work for Singularity Institute for even longer than that. Carl Shulman was also hired as a researcher at this time, and had also done lots of volunteer work before that, including publishing papers like “Arms Control and Intelligence Explosions,” “Implications of a Software-Limited Singularity,” and “Basic AI Drives and Catastrophic Risks,” among others.
Another change is that our President, Michael Vassar, is launching a personalized medicine company that we’re all pretty excited about. It has a lot of promise, so we’re excited to see him do that. He’ll still retain the title of President because he will, really, continue to do quite a lot of good work for us — networking and spreading our mission wherever he goes. But he will no longer take a salary from Singularity Institute, and that was his own idea, proposed several months ago.
But we needed somebody to run the organization, and I was the favorite choice for the job.
So, should you be worried about instability? Well... I'm excited about the way the organization is taking shape, but I will say that we need more people. In particular, our research team took a hit when I moved from Researcher to Executive Director. So if you care about our mission and you can work with us to write working papers and other documents, you should contact me! My email is luke@intelligence.org.
And I’ll say one other thing. Do not fall prey to the sin of underconfidence. When I was living in Los Angeles I assumed I wasn’t special enough to apply even as an unpaid visiting fellow, and Louie Helm had to call me on Skype and talk me into it. I thought, “What the hell, it can’t hurt to contact Singularity Institute,” and within 9 months of that first contact I went from intern to researcher to Executive Director. So don't underestimate your potential — contact us, and let us be the ones who say "No."
And I suppose now would be a good time to answer another question, this one asked by ‘JoshuaZ’, who asks:
Are you concerned about potential negative signaling/status issues that will occur if [Singularity Institute] has as an executive director someone who was previously just an intern?
Not really. And the problem isn’t that I used to be an unpaid Visiting Fellow, it’s just that I went from Visiting Fellow to Executive Director so quickly. But that's... one of the beauties of Singularity Institute. Singularity Institute is not a place where you need to “pay your dues,” or something. If you’re hard-working and competent and you get along with people and you’re clearly committed to rationality and to reducing existential risk, then the leadership of the organization will put you where you can do the most good and be the most effective, regardless of irrelevant factors like duration of employment.
Rigorous Research
Next question. Less Wrong user ‘quartz’ asks:
How are you going to address the perceived and actual lack of rigor associated with [Singularity Institute]?
Now, what I initially thought quartz was talking about was Singularity Institute’s relative lack of publications in academic journals like Risk Analysis or Minds and Machines, so let me respond to that interpretation of the question first.
Luckily, I am probably the perfect person to answer this question, because when I first became involved with Singularity Institute this was precisely my own largest concern, but I changed my mind when I learned the reasons why Singularity Institute does not push harder than it does to publish in academic journals.
So. Here’s the story. In March 2011, before I was even an intern, I wrote a discussion post on Less Wrong called ‘How [Singularity Institute] could publish in mainstream cognitive science journals.’ I explained in detail not only what the right style is for mainstream journals, but also why Singularity Institute should publish in mainstream journals. My four reasons were:
- Some donors will take Singularity Institute more seriously if it publishes in mainstream journals.
- Singularity Institute would look a lot more credible in general.
- Singularity Institute would spend less time answering the same questions again and again if it publishes short, well-referenced responses to such questions.
- Writing about these problems in the common style... will help other smart researchers to understand the relevant problems and perhaps contribute to solving them.
Then, in April 2011, I moved to the Bay Area and began to realize why exerting a lot of effort to publish in mainstream journals probably isn’t the right way to go for Singularity Institute, and I wrote a discussion post called ‘Reasons for [Singularity Institute] to not publish in mainstream journals.’
What are those reasons?
The first one is that more people read, for example, Yudkowsky’s thoughtful blog posts or Nick Bostrom’s pre-prints from his website... than the actual journals.
The second reason is that in many cases, most of a writer’s time is invested after the article is accepted to a journal, which means that most of the work comes after you’ve done the most important part and written up all the core ideas. Most of the work is tweaking. Those are dozens and dozens of hours not spent on finding new safety strategies, writing new working papers, and so on.
A third reason is that publishing in mainstream journals requires you to jump through lots of hoops and deal with obstacles like reviewer bias and the usual aversion to ideas that sound weird.
A fourth reason is that publishing in mainstream journals involves a pretty large delay in publication, somewhere between 4 months and 2 years.
So: If you’re a mainstream academic seeking tenure, publishing in mainstream journals is what you need to do, because that’s how the system is set up. If you’re trying to solve hard problems very quickly, publishing in mainstream journals can sometimes be something of a lost purpose.
If you’re trying to solve hard problems in mathematics and philosophy, why would you spend most of your limited resources tweaking sentences rather than getting the important ideas out there for yourself or others to improve and build on? Why would you accept delays of 4 months to 2 years?
At Singularity Institute, we’re not trying to get tenure. We don’t need you to have a Ph.D. We don’t care if you work at Princeton or at Brown Community College. We need you to help us solve the most important problems in mathematics, computer science, and philosophy, and we need to do that quickly.
That said, it will sometimes be worth it to develop a working paper into something that can be published in a mainstream journal, if the effort required and the time delay are not too great.
But just to drive my point home, let me read from the opening chapter of the new book Reinventing Discovery, by Michael Nielsen, the co-author of the leading textbook on quantum computation. It's a really great passage:
Tim Gowers is not your typical blogger. A mathematician at Cambridge University, Gowers is a recipient of the highest honor in mathematics, the Fields Medal, often called the Nobel Prize of mathematics. His blog radiates mathematical ideas and insight.
In January 2009, Gowers decided to use his blog to run a very unusual social experiment. He picked out an important and difficult unsolved mathematical problem, a problem he said he’d “love to solve.” But instead of attacking the problem on his own, or with a few close colleagues, he decided to attack the problem completely in the open, using his blog to post ideas and partial progress. What’s more, he issued an open invitation asking other people to help out. Anyone could follow along and, if they had an idea, explain it in the comments section of the blog. Gowers hoped that many minds would be more powerful than one, that they would stimulate each other with different expertise and perspectives, and collectively make easy work of his hard mathematical problem. He dubbed the experiment the Polymath Project.
The Polymath Project got off to a slow start. Seven hours after Gowers opened up his blog for mathematical discussion, not a single person had commented. Then a mathematician named Jozsef Solymosi from the University of British Columbia posted a comment suggesting a variation on Gowers’s problem, a variation which was easier, but which Solymosi thought might throw light on the original problem. Fifteen minutes later, an Arizona high-school teacher named Jason Dyer chimed in with a thought of his own. And just three minutes after that, UCLA mathematician Terence Tao—like Gowers, a Fields medalist—added a comment. The comments erupted: over the next 37 days, 27 people wrote 800 mathematical comments, containing more than 170,000 words. Reading through the comments you see ideas proposed, refined, and discarded, all with incredible speed. You see top mathematicians making mistakes, going down wrong paths, getting their hands dirty following up the most mundane of details, relentlessly pursuing a solution. And through all the false starts and wrong turns, you see a gradual dawning of insight. Gowers described the Polymath process as being “to normal research as driving is to pushing a car.” Just 37 days after the project began Gowers announced that he was confident the polymaths had solved not just his original problem, but a harder problem that included the original as a special case. He described it as “one of the most exciting six weeks of my mathematical life.” Months’ more cleanup work remained to be done, but the core mathematical problem had been solved.
That is what working for rapid progress on problems rather than for tenure looks like.
And here’s the kicker. We’ve already done this at Singularity Institute! This is what happened, though not quite as fast, when Eliezer Yudkowsky made a few blog posts about open problems in decision theory, and the community rose to the challenge, proposed solutions, and iterated and iterated. That work continued with a decision theory workshop and a mailing list that is still active, where original progress in decision theory is being made quite rapidly, and with none of it going through the hoops and delays of publishing in mainstream journals.
Now, I do think that Singularity Institute needs to publish more research, both in and out of mainstream journals. But most of what we publish should be blog posts and working papers, because our goal is to solve problems quickly, not to wait 4 months to 2 years to go through a mainstream publisher and garner tenure and prestige and so on.
That said, I’m quite happy when people do publish on these subjects in mainstream journals, because prestige is useful for bringing attention to overlooked topics, and because hopefully these instances of publishing in mainstream journals are occurring when it isn’t a huge waste of time and effort to do so. For example, I love the work being done by our frequent collaborators at the Future of Humanity Institute at Oxford, and I always look forward to what they're doing next.
Now, back to quartz's original question about rigorous research. I asked for clarification on what quartz meant, and here's what he said:
In 15 years, I want to see a textbook on the mathematics of FAI that I can put on my bookshelf next to Pearl's Causality, Sipser's Introduction to the Theory of Computation and MacKay's Information Theory, Inference, and Learning Algorithms. This is not going to happen if research of sufficient quality doesn't start soon.
Now, that sounds wonderful, and I agree that the community of researchers working to reduce existential risks, including Singularity Institute, will need to ramp up their research efforts to achieve that kind of goal.
I will offer just one qualification that I don't think will be very controversial. I think most people would agree that if a scientist happened to create a synthetic virus that was airborne and could kill hundreds of millions of people if released into the wild, we wouldn't want the instructions for creating that synthetic virus to be published in the open for terrorist groups or hawkish governments to use. And for the same reasons, we wouldn't want a Friendly AI textbook to explain how to build highly dangerous AI systems. But excepting that, I would love to see a rigorously technical textbook on friendliness theory, and I agree that friendliness research will need to increase for us to see that textbook be written in 15 years. Luckily, the Future of Humanity Institute is putting a special emphasis on AI risks for the next little while, and Singularity Institute is ramping up its own research efforts.
But the most important thing I want to say is this. If you can take ideas and arguments that already exist in blog posts, emails, and human brains (for example at Singularity Institute) and turn them into working papers or maybe even journal articles, and you care about navigating the Singularity successfully, please contact me. My email address is luke@intelligence.org. If you're the kind of person who can do that kind of work, I really want to talk to you.
I’d estimate we have something like 30-40 papers just waiting to be written. The conceptual work has been done, we just need more researchers who can write this stuff up. So if you can do that, you should contact me: luke@intelligence.org.
Friendly AI Sub-Problems
Next question. Less Wrong user ‘XiXiDu’ asks:
If someone as capable as Terence Tao approached [Singularity Institute], asking if they could work full-time and for free on friendly AI, what would you tell them to do? In other words, are there any known FAI sub-problems that demand some sort of expertise that [Singularity Institute] is currently lacking?
Terence Tao is a mathematician at UCLA who was a child prodigy and is considered by some people to be one of the smartest people on the planet. He is exactly the kind of person we need to successfully navigate the Singularity, and in particular to solve open problems in Friendly AI theory.
I explained in my text-only interview with Michael Anissimov in September 2011 that the problem of Friendly AI breaks down into a large number of smaller and better-defined technical sub-problems. Some of the open problems I listed in that interview are the ones I’d love somebody like Terence Tao to work on. For example:
How can an agent make optimal decisions when it is capable of directly editing its own source code, including the source code of the decision mechanism? How can we get an AI to maintain a consistent utility function throughout updates to its ontology? How do we make an AI with preferences about the external world instead of about a reward signal? How can we generalize the theory of machine induction — called Solomonoff induction — so that it can use higher-order logics and reason correctly about observation selection effects? How can we approximate such ideal processes such that they are computable?
(That was a quote from the text-only interview.)
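To give a rough sense of what that last pair of questions is about, here is a minimal sketch of standard Solomonoff induction, in my own notation rather than quoted from the interview: given a universal prefix Turing machine $U$, the Solomonoff prior weights an observed bit string $x$ by the total weight of programs whose output begins with $x$, and prediction is just conditioning on that prior:

    M(x) \;=\; \sum_{p \,:\, U(p) = x*} 2^{-\ell(p)}, \qquad M(b \mid x) \;=\; \frac{M(xb)}{M(x)}

where $\ell(p)$ is the length of program $p$ in bits and $U(p) = x*$ means that $U$'s output on $p$ begins with $x$. This ideal prior is incomputable, which is why the questions above ask both how to generalize it (to higher-order logics and observation selection effects) and how to approximate it with something computable.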
Even before that, we’d really like to write up explanations of these problems in all their technical detail, but again, that takes researchers and funding, and we’re short on both. For now, I’ll point you to Eliezer’s talk at Singularity Summit 2011, which you can Google for.
But yeah, we have a lot of technical problems whose nature we'd like to clarify so that we can have researchers working on them. So we do need potential researchers to contact us.
I loved watching Batman and Superman cartoons when I was a kid, but as it turns out, the heroes who can save the world are not those who have incredible strength or the power of flight. They are mathematicians and computer scientists.
Singularity Institute needs heroes. If you are a brilliant mathematician or computer scientist and you want a shot at saving the world, contact me: luke@intelligence.org.
I know it sounds corny, but I mean it. The world needs heroes.
Improved Funding
Next, Less Wrong user ‘XiXiDu’ asks:
What would [Singularity Institute] do given various amounts of money? Would it make a difference if you had 10 or 100 million dollars at your disposal...?
Yes it would. Absolutely. If Bill Gates decided tomorrow that he wanted to save not just a billion people but the entire human race, and he gave us 100 million dollars, we would hire more researchers and figure out the best way to spend that money. That's a pretty big project in itself.
But right now, my bet on how we’d end up spending that money is that we would personally argue for our mission to each of the world’s top mathematicians, AI researchers, physicists, and formal philosophers. The Terence Taos and Judea Pearls of the world. And for any of them who could be convinced, we’d be able to offer them enough money to work for us. We’d also hire several successful Oppenheimer-type research administrators who could help us bring these brilliant minds together to work on these problems.
As nice as it is to have people from all over the world solving problems in mathematics, decision theory, agent architectures, and other fields collaboratively over the internet, there are a lot of things you can make move faster when you bring the smartest people in the world into one building and allow them to do nothing else but solve the world's most important problems.
Rationality
Next. Less Wrong user ‘JoshuaZ’ asks:
A lot of Eliezer's work has been not at all related strongly to FAI but has been to popularizing rational thinking. In your view, should [Singularity Institute] focus exclusively on AI issues or should it also care about rational issues? In that context, how does Eliezer's ongoing work relate to [Singularity Institute]?
Yes, it’s a great question. Let me begin with the rationality work.
I was already very interested in rationality before I found Less Wrong and Singularity Institute, but when I first encountered the arguments about intelligence explosion, one of my first thoughts was, “Uh-oh. Rationality is much more important than I had originally thought.”
Why? Intelligence explosion is a mind-warping, emotionally dangerous, intellectually difficult, and very uncertain field in which we don’t get to do a dozen experiments so that reality can beat us over the head with the correct answer. Instead, when it comes to intelligence explosion scenarios, in order to get this right we have to transcend the normal human biases, emotions, and confusions of the human mind, and make the right predictions before we can run any experiments. We can't try an intelligence explosion and see how it turns out.
Moreover, to even understand what the problem is, you’ve got to get past a lot of the usual biases and false but common beliefs. So we need a saner world to solve these problems, and we need a saner world to have a larger community of support for addressing these issues.
And, Eliezer’s choice to work on rationality has paid off. The Sequences, and the Less Wrong community that grew out of them, have been successful. We now have a large and active community of people growing in rationality and spreading it to others, and a subset of that community contributes to progress on problems related to AI. Even Eliezer’s choice to write a rationality fanfiction, Harry Potter and the Methods of Rationality, has — contrary to my expectations — had quite an impact. It is now the most popular Harry Potter fan fiction, I think, and it was responsible for perhaps ¼ or ⅕ of the money raised during the 2011 summer matching challenge, and has brought several valuable new people into our community. Eliezer’s forthcoming rationality books might have a similar type of effect.
But we understand that many people don’t see the connection between rationality and navigating the Singularity successfully the way that we do, so in our strategic plan we explained that we’re working to spin off most of the rationality work to a separate organization. It doesn’t have a name yet, but internally we just call it ‘Rationality Org.’ That way, Singularity Institute can focus on Singularity issues, and the Rationality Org (whatever it comes to be called) can focus on rationality, and people can support them independently. That’s something else Eliezer has been working on, along with a couple of others.
Of course, Eliezer does spend some of his time on AI issues, and he plans to return full-time to AI once Rationality Org is launched. But we need more talented researchers, and other contributions, in order to succeed on AI. Rationality has been helpful in attracting and enhancing a community that helps with those things.
Changing Course
Next. Less Wrong user ‘JoshuaZ’ asks:
...are there specific sets of events (other than the advent of a Singularity) which you think will make [Singularity Institute] need to essentially reevaluate its goals and purpose at a fundamental level?
Yes, and I can give a few examples that I wrote down.
Right now we’re focused on what happens when smarter-than-human intelligence arrives, because the evidence available suggests to us that AI will be more important than other crucial considerations. But suppose we made a series of discoveries that made it unlikely that AI would arrive anytime soon, but very likely that catastrophic biological terrorism was only a decade or two away, for example. In that situation, Singularity Institute would shift its efforts quite considerably.
Another example: If other organizations were doing our work, including Friendly AI, and with better efficiency and scale, then it would make sense to fold Singularity Institute and transfer resources, donors, and staff to these other, more efficient and effective organizations.
If it could be shown that some other process was much better at mobilizing efforts to address core issues, then focusing there for a while could make sense. For example, if Giving What We Can (an organization focused on optimal philanthropy) continued doubling each year and spinning off large numbers of skilled people to work on existential risk reduction, it might make sense to strip away outreach functions from [Singularity Institute], perhaps leaving a core FAI team, and leave outreach to the optimal philanthropy community or something like that.
So, those are just three ways that things could change or we could make some discoveries, and that would radically shift the strategy that we have at Singularity Institute.
Experimental Research
Next. User ‘XiXiDu’ asks:
Is [Singularity Institute] willing to pursue experimental AI research or does it solely focus on hypothetical aspects?
Experimental research would, at this point, be a diversion from work on the most important problems related to our mission, which are technical problems in mathematics, computer science, and philosophy. If experimental research becomes more important than those problems in math, computer science, and philosophy, and if we had the funding available to do experiments, we would do experimental research at that time, or fund somebody else to do it. But those aren't the most important or most urgent problems that we need to solve.
Winning Without Friendly AI
Next. Less Wrong user ‘Wei_Dai’ asks:
Much of [Singularity Institute’s] research [is] focused not directly on [Friendly AI] but more generally on better understanding the dynamics of various scenarios that could lead to a Singularity. Such research could help us realize a positive Singularity through means other than directly building a [Friendly AI].
Does [Singularity Institute] have any plans to expand such research activities, either in house, or by academia or independent researchers?
The answer to that question is 'Yes'.
Singularity Institute does not put all its eggs in the ‘Friendly AI’ basket. Intelligence explosion scenarios are complicated, the future is uncertain, and the feasibility of many possible strategies is unknown. Both Singularity Institute and our friends at the Future of Humanity Institute at Oxford have done quite a lot of work on these kinds of strategic considerations, things like differential technological development. It’s important work, so we plan to do more of it.
Most of this work, however, hasn’t been published. So if you want to see it published, put us in contact with people who are good at rapidly taking ideas and arguments out of different people's heads and putting them on paper. Or maybe you are that person! Right now we just don’t have enough researchers to write these things up as much as we'd like. So contact me: luke@intelligence.org.
Conclusion
Well, that’s it! I'm sorry I can’t answer all the questions. Doing this takes a lot more work than you might think, but if it is appreciated, and especially if it grows and encourages the community of people who are trying to make the world a better place and reduce existential risk, then I may try to do something like this — maybe without the video, maybe with the video — with some regularity.
Keep in mind that I do have a personal feedback form at tinyurl.com/luke-feedback, where you can send me feedback on myself and Singularity Institute. You can also check the Less Wrong page that will be dedicated to this Q&A and leave some comments there.
Thanks for listening and watching. This is Luke Muehlhauser, signing off.