Geoff Anders asked me to post this introduction to Leverage Research. Several friends of the Singularity Institute are now with Leverage Research, and we have overlapping goals.
Hello Less Wrong! I'm Geoff Anders, founder of Leverage Research. Many Less Wrong readers are already familiar with Leverage. But many are not, and because of our ties to the Less Wrong community and our deep interest in rationality, I thought it would be good to formally introduce ourselves.
I founded Leverage at the beginning of 2011. At that time we had six members. Now we have a team of more than twenty. Over half of our people come from the Less Wrong / Singularity Institute community. One of our members is Jasen Murray, the leader of the Singularity Institute's recent Rationality Boot Camp. Another is Justin Shovelain, a two-year Visiting Fellow at SIAI and the former leader of their intelligence amplification research. A third is Adam Widmer, a former co-organizer of the New York Less Wrong group.
Our goal at Leverage is to make the world a much better place, using the most effective means we can. So far, our conclusion has been that the most effective way to change the world is by means of high-value projects: projects that will have extremely positive effects if they succeed and that have at least a fair probability of success.
One of our projects is existential risk reduction. We have conducted a study of the efficacy of methods for persuading people to take the risks of artificial general intelligence (AGI) seriously. We have begun a detailed analysis of AGI catastrophe scenarios. We are working with risk analysts inside and outside of academia. Ultimately, we intend to achieve a comprehensive understanding of AGI and other global risks, develop response plans, and then enact those plans.
A second project is intelligence amplification. We have reviewed the existing research and analyzed current approaches. We then created an initial list of research priorities, ranking techniques by likelihood of success, likely size of effect, safety, cost and so on. We plan to start testing novel techniques soon.
These are just two of our projects. We have several others, including the development of a rationality training program, the construction and testing of theories of the human mind, and an investigation of the laws of idea propagation.
Changing the world is a complex task, so we have a plan that guides our efforts. We know that to succeed, we need to become better than we are, and we therefore take training and self-improvement very seriously. We also know that to succeed, we need more talented people. If you want to significantly improve the world, are serious about self-improvement, and believe that changing the world means we need to work together, contact us. We're looking for people who are interested in our current projects or who have ideas of their own.
We've been around for just over a year. In that time we've gotten many of our projects underway. We doubled once in our first six months and again in our second six months. And we have just set up our first physical location, in New York City.
If you want to learn more, visit our website. If you want to get involved, want to send a word of encouragement, or if you have suggestions for how we can improve, write to us.
With hope for the future,
Geoff Anders, on behalf of the Leverage Team
Hi Luke,
I'm happy to talk about these things.
First, in answer to your third question, Leverage is methodologically pluralistic. Different members of Leverage have different views on scientific methodology and philosophical methodology, and we have ongoing discussions about these things. My guess is that two or three of our more than twenty members share my views on scientific and philosophical methodology.
If there’s anything methodological we tend to agree on, it’s a process: writing drafts, getting feedback, paying close attention to detail, being systematic, and putting in many, many hours of effort. When you imagine Leverage, don’t imagine a bunch of people thinking with a single mind. Imagine a large number of interacting parallel processes, aimed at a single goal.
Now, I’m happy to discuss my personal views on method. In a nutshell: my philosophical method is essentially Cartesian; in science, I judge theories on the basis of elegance and fit with the evidence. (“Elegance”, in my lingo, is like Occam’s razor, so in practice you and I actually both take Occam’s razor seriously.) My views aren’t the views of Leverage, though, so I’m not sure I should try to give an extended defense here. I’m going to write up some philosophical material for a blog soon, though, so people who are interested in my personal views should check that out.
As for Connection Theory, I could say a bit about where it came from. But the important thing here is why I use it. The primary reason I use CT is that I’ve used it to predict a number of antecedently unlikely phenomena, and the predictions appear to have come true at a very high rate. Of course, I recognize that I might have made some errors somewhere in collecting or assessing the evidence. This is one reason I’m continuing to test CT.
Just as with methodology, people in Leverage have different views on CT. Some people believe it is true. (Not me, actually. I believe it is false; my concern is with how useful it is.) Others believe it is useful in particular contexts. Some think it’s worth investigating; others think it’s unlikely to be useful and not worth examining. A person who thought CT was not useful and who wanted to change the world by figuring out how the mind really works would be welcome at Leverage.
So, in sum, there are many views at Leverage on methodology and CT. We discuss these topics, but no one insists on any particular view and we’re all happy to work together.
I'm glad you like that we're producing public-facing documents. Actually, we're going to be posting a lot more stuff in the relatively near future.
I do believe Peirce is either rolling over in his grave, or doing whatever the opposite of that is.