Why CFAR's Mission?
---
Q: Why not focus exclusively on spreading altruism? Or else on "raising awareness" for some particular known cause?
Briefly put: because historical roads to hell have been powered in part by good intentions; because the contemporary world seems bottlenecked by its ability to figure out what to do and how to do it (i.e. by ideas/creativity/capacity) more than by folks' willingness to sacrifice; and because rationality skill and epistemic hygiene seem like skills that may distinguish actually useful ideas from ineffective or harmful ones in a way that "good intentions" cannot.
Q: Even given the above -- why focus extra on sanity, or true beliefs? Why not focus instead on, say, competence/usefulness as the key determinant of how much do-gooding impact a motivated person can have? (Also, have you ever met a Less Wronger? I hear they are annoying and have lots of problems with “akrasia”, even while priding themselves on their high “epistemic” skills; and I know lots of people who seem “less rational” than Less Wrongers on some axes who would nevertheless be more useful in many jobs; is this “epistemic rationality” thingy actually the thing we need for this world-impact thingy?...)
This is an interesting one, IMO.
Basically, it seems to me that epistemic rationality, and skills for forming accurate explicit world-models, become more useful the more ambitious and confusing a problem one is tackling.
For example:
Why CFAR? The view from 2015
In this post, we:
- Revisit CFAR’s mission, and why that mission matters today;
- Review our progress to date;
- Offer a look at our financial overview;
- Share our ambitions for 2016; and
- Ask for your help, via donations and other means.
We are in the middle of our matching fundraiser; so if you’ve been considering donating to CFAR this year, now is an unusually good time.
Rationality Cardinality
Rationality Cardinality is a card game that takes memes and concepts from the rationality/Less Wrong sphere and mixes them with jokes. After nearly two years of card-creation, playtesting and development, today I'm taking the "beta" label off the web-based version of Rationality Cardinality. Go to the website, and if at least two other people visit at the same time, you can play against them.
I've put a lot of thought and a lot of work into the cards, and they're not just about humor; I also went systematically through blog posts and glossaries collecting terms and concepts that I think people should know about and be reminded of, and wrote concise explanations for them. It provides an easy way for everyone to quickly learn the jargon that's floating around, in a fun way; and it provides spaced repetition for concepts that might not otherwise have sunk in.
Rationality Cardinality will also soon have a print version. The catch is that in order to mass-produce it, I need to be sure there's enough demand. So, here's the deal: once enough people have played the online version, I'll launch a Kickstarter to sell print copies. You can speed this up by inviting people who might not otherwise see it to play.

Rationality Cardinality is somewhat inspired by Cards Against Humanity. Software for the web-based implementation is based on Cards for Humanity, with modifications.
Deliberate Grad School
Among my friends interested in rationality, effective altruism, and existential risk reduction, I often hear: "If you want to have a real positive impact on the world, grad school is a waste of time. It's better to use deliberate practice to learn whatever you need instead of working within the confines of an institution."
While I'd agree that grad school won't make you do good for the world by default, if you're a self-driven person who can spend time in a PhD program deliberately acquiring skills and connections for making a positive difference, I think grad school can be a highly productive path, perhaps more so than many alternatives. In this post, I want to share some advice that I've been repeating a lot lately on how to do this:
- Find a flexible program. PhD programs in mathematics, statistics, philosophy, and theoretical computer science tend to give you a great deal of free time and flexibility, provided you can pass the various qualifying exams without too much studying. By contrast, sciences like biology and chemistry can require time-consuming laboratory work that you can't always speed through by being clever.
- Choose high-impact topics to learn about. AI safety and existential risk reduction are my favorite examples, but there are others, and I won't spend more time here arguing their case. If you can't make your thesis directly about such a topic, choosing a related more popular topic can give you valuable personal connections, and you can still learn whatever you want during the spare time a flexible program will afford you.
- Teach classes. Grad programs that let you teach undergraduate tutorial classes provide a rare opportunity to practice engaging a non-captive audience. If you just want to work on general presentation skills, maybe you practice on your friends... but your friends already like you. If you want to learn to win over a crowd that isn't particularly interested in you, try teaching calculus! I've found this skill particularly useful when presenting AI safety research that isn't yet mainstream, which requires carefully stepping through arguments that are unfamiliar to the audience.
- Use your freedom to accomplish things. I used my spare time during my PhD program to cofound CFAR, the Center for Applied Rationality. Alumni of our workshops have gone on to do such awesome things as creating the Future of Life Institute and sourcing a $10MM donation from Elon Musk to fund AI safety research. I never would have had the flexibility to volunteer for weeks at a time if I'd been working at a typical 9-to-5 or a startup.
- Organize a graduate seminar. Organizing conferences is critical to getting the word out on important new research, and in fact, running a conference on AI safety in Puerto Rico is how FLI was able to bring so many researchers together on its Open Letter on AI Safety. It's also where Elon Musk made his donation. During grad school, you can get lots of practice organizing research events by running seminars for your fellow grad students. In fact, several of the organizers of the FLI conference were grad students.
- Get exposure to experts. A top 10 US school will have professors around that are world-experts on myriad topics, and you can attend departmental colloquia to expose yourself to the cutting edge of research in fields you're curious about. I regularly attended cognitive science and neuroscience colloquia during my PhD in mathematics, which gave me many perspectives that I found useful working at CFAR.
- Learn how productive researchers get their work done. Grad school surrounds you with researchers, and by getting exposed to how a variety of researchers do their thing, you can pick and choose from their methods and find what works best for you. For example, I learned from my advisor Bernd Sturmfels that, for me, quickly passing a draft back and forth with a coauthor can get a paper written much more quickly than agonizing about each revision before I share it.
- Remember you don't have to stay in academia. If you limit yourself to only doing research that will get you good post-doc offers, you might find you aren't able to focus on what seems highest impact (because often what makes a topic high impact is that it's important and neglected, and if a topic is neglected, it might not be trendy enough to land you a good post-doc). But since grad school is run by professors, becoming a professor is usually the most salient path forward for most grad students, and you might end up pressuring yourself to follow the standards of that path. When I graduated, I got my top choice of post-doc, but then I decided not to take it and to instead try earning to give as an algorithmic stock trader, and now I'm a research fellow at MIRI. In retrospect, I might have done more valuable work during my PhD itself if I'd decided in advance not to do a typical post-doc.
That's all I have for now. The main sentiment behind most of this, I think, is that you have to be deliberate to get the most out of a PhD program, rather than passively expecting it to make you into anything in particular. Grad school still isn't for everyone, far from it. But if you were seriously considering it at some point, and "do something more useful" felt like a compelling reason not to go, be sure to first consider the most useful version of grad school that you could reliably make for yourself... and then decide whether or not to do it.
Please email me (lastname@thisdomain.com) if you have more ideas for getting the most out of grad school!
Probabilities Small Enough To Ignore: An attack on Pascal's Mugging
Summary: the problem with Pascal's Mugging arguments is that, intuitively, some probabilities are just too small to care about. There might be a principled reason for ignoring some probabilities, namely that they violate an implicit assumption behind expected utility theory. This suggests a possible approach for formally defining a "probability small enough to ignore", though there's still a bit of arbitrariness in it.
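The "probability small enough to ignore" idea can be sketched numerically. Here's a minimal illustration of how an expected-utility calculation changes when outcomes below a probability floor are discarded; the payoff numbers and the specific floor are invented for the example, not taken from the post:

```python
# Sketch: expected utility with and without a probability floor.
# All numbers below are invented for illustration.

def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs."""
    return sum(p * u for p, u in outcomes)

def expected_utility_with_floor(outcomes, floor=1e-9):
    """Same calculation, but ignore any outcome whose probability
    falls below `floor`."""
    return sum(p * u for p, u in outcomes if p >= floor)

# A Pascal's-Mugging-style gamble: pay a small cost (utility -1) for a
# vanishingly small chance of an astronomically large payoff.
mugging = [(1e-20, 1e30), (1.0 - 1e-20, -1.0)]

print(expected_utility(mugging))             # dominated by the tiny-probability term
print(expected_utility_with_floor(mugging))  # tiny-probability term is ignored
```

The naive calculation says to accept the gamble, because the 10^-20 probability is multiplied by a 10^30 payoff; with the floor, the gamble just looks like a sure small loss. The remaining question, which the post addresses, is whether any particular floor can be chosen in a principled rather than arbitrary way.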
Future of Life Institute is hiring
I am a co-founder of the Future of Life Institute based in Boston, and we are looking to fill two job openings that some LessWrongers might be interested in. We are a mostly volunteer-run organization working to reduce catastrophic and existential risks, and increase the chances of a positive future for humanity. Please consider applying and pass this posting along to anyone you think would be a good fit!
PROJECT COORDINATOR
Technology has given life the opportunity to flourish like never before - or to self-destruct. The Future of Life Institute is a rapidly growing non-profit organization striving for the former outcome. We are fortunate to be supported by an inspiring group of people, including Elon Musk, Jaan Tallinn and Stephen Hawking, and you may have heard of our recent efforts to keep artificial intelligence beneficial.
You are idealistic, hard-working and well-organized, and want to help our core team carry out a broad range of projects, from organizing events to coordinating media outreach. Living in the greater Boston area is a major advantage, but not an absolute requirement.
If you are excited about this opportunity, then please send an email to jobs@futureoflife.org with your cv and a brief statement of why you want to work with us. The title of your email must be 'Project coordinator'.
NEWS WEBSITE EDITOR
There is currently huge public interest in the question of how upcoming technology (especially artificial intelligence) may transform our world, and what should be done to seize opportunities and reduce risks.
You are idealistic and ambitious, and want to lead our effort to transform our fledgling news site into the number one destination for anyone seeking up-to-date and in-depth information on this topic, and anybody eager to join what is emerging as one of the most important conversations of our time.
You love writing and have the know-how and drive needed to grow and promote a website. You are self-motivated and enjoy working independently rather than being closely mentored. You are passionate about this topic, and look forward to the opportunity to engage with our second-to-none global network of experts and use it to generate ideas and add value to the site. You look forward to developing and executing your vision for the website using the resources at your disposal, which include both access to experts and funds for commissioning articles, improving the website user interface, etc. You look forward to making use of these resources and making things happen rather than waiting for others to take the initiative.
If you are excited about this opportunity, then please send an email to jobs@futureoflife.org with your cv and answers to these questions:
- Briefly, what is your vision for our site? How would you improve it?
- What other site(s) (please provide URLs) have attributes that you'd like to emulate?
- How would you generate the required content?
- How would you increase traffic to the site, and what do you view as realistic traffic goals for January 2016 and January 2017?
- What budget do you need to succeed, not including your own salary?
- What past experience do you have with writing and/or website management? Please include a selection of URLs that showcase your work.
The title of your application email must be 'Editor'. You can live anywhere in the world. A science background is a major advantage, but not a strict requirement.
Why people want to die
Over and over again, someone says that living for a very long time would be a bad thing, and then some futurist tries to persuade them that their reasoning is faulty. The futurist tells them that they may think that way now, but they'll change their minds when they're older.
The thing is, I don't see that happening. I live in a small town full of retirees, and those few I've asked about it are waiting for death peacefully. When I ask them about their ambitions, or things they still want to accomplish, they have none.
Suppose that people mean what they say. Why do they want to die?
Predict - "Log your predictions" app
As an exercise on programming Android, I've made an app to log predictions you make and keep score of your results. Like PredictionBook, but taking more of a personal daily exercise feel, in line with this post.
The "statistics" right now are only a score I copied from the old Credence calibration game, and a calibration bar chart.
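For readers unfamiliar with that kind of score: the Credence game used a logarithmic scoring rule, scaled so that a 50% prediction earns zero points. The app's exact formula may differ, but a sketch of that style of rule looks like this:

```python
import math

def log_score(p, outcome):
    """Logarithmic score for a prediction made with probability p
    (0 < p < 1) that `outcome` would happen. Scaled so that a 50%
    prediction scores 0; in the style of the old Credence calibration
    game, though the app's exact formula may differ.
    """
    q = p if outcome else 1.0 - p
    return 100.0 * math.log2(2.0 * q)

print(log_score(0.5, True))    # 0.0: a 50% guess carries no information
print(log_score(0.9, True))    # positive: confident and right
print(log_score(0.9, False))   # heavily negative: confident and wrong
```

A rule like this is what makes the score a calibration incentive: overconfidence gets punished much harder than underconfidence is rewarded, so your long-run total is maximized by reporting your honest probabilities.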
I'm hoping for suggestions for features and criticism of the app design.
Here's the link for the apk (v0.4), and here's the source code repository. You can also download it from the Google Play Store.
Pending/Possible/Requested Features:
- Set check-in dates for predictions
- Tags (and stats by tag)
- Stats by timeframe
- Beeminder integration
- Trivia questions you can answer if you don't have any personal prediction to make
- Ring pie chart to choose probability
Edit:
2015-08-26 - Fixed bug that broke on Android 5.0.2 (thanks Bobertron)
2015-08-28 - Changed layout for landscape mode, and added a better icon
2015-08-31 -
- Daily notifications
- Buttons at the expanded-item-layout (ht dutchie)
- Show points won/lost in the snackbar when a prediction is answered
- Translation to Portuguese
An overview of the mental model theory
There is dispute about what exactly a “mental model” is, and the concepts related to it often aren't clarified well. One generally accepted feature is that “the structure of mental models ‘mirrors’ the perceived structure of the external system being modelled” (Doyle & Ford, 1998, p. 17). So, as a starting definition, we can say that mental models are, in general, representations in the mind of real or imaginary situations. A full definition won’t be attempted, because there is too much contention about which features mental models do and do not have. The accepted features will be described in detail, which will hopefully lead you to gain an intuitive understanding of what mental models are probably like.
MIRI's Approach
MIRI's summer fundraiser is ongoing. In the meantime, we're writing a number of blog posts to explain what we're doing and why, and to answer a number of common questions. This post is one I've been wanting to write for a long time; I hope you all enjoy it. For earlier posts in the series, see the bottom of the above link.
MIRI’s mission is “to ensure that the creation of smarter-than-human artificial intelligence has a positive impact.” How can we ensure any such thing? It’s a daunting task, especially given that we don’t have any smarter-than-human machines to work with at the moment. In a previous post to the MIRI Blog I discussed four background claims that motivate our mission; in this post I will describe our approach to addressing the challenge.
This challenge is sizeable, and we can only tackle a portion of the problem. For this reason, we specialize. Our two biggest specializing assumptions are as follows:
1. We focus on scenarios where smarter-than-human machine intelligence is first created in de novo software systems (as opposed to, say, brain emulations). This is in part because it seems difficult to get all the way to brain emulation before someone reverse-engineers the algorithms used by the brain and uses them in a software system, and in part because we expect that any highly reliable AI system will need to have at least some components built from the ground up for safety and transparency. Nevertheless, it is quite plausible that early superintelligent systems will not be human-designed software, and I strongly endorse research programs that focus on reducing risks along the other pathways.
2. We specialize almost entirely in technical research. We select our researchers for their proficiency in mathematics and computer science, rather than forecasting expertise or political acumen. I stress that this is only one part of the puzzle: figuring out how to build the right system is useless if the right system does not in fact get built, and ensuring AI has a positive impact is not simply a technical problem. It is also a global coordination problem, in the face of short-term incentives to cut corners. Addressing these non-technical challenges is an important task that we do not focus on.
In short, MIRI does technical research to ensure that de novo AI software systems will have a positive impact. We do not further discriminate between different types of AI software systems, nor do we make strong claims about exactly how quickly we expect AI systems to attain superintelligence. Rather, our current approach is to select open problems using the following question:
What would we still be unable to solve, even if the challenge were far simpler?
For example, we might study AI alignment problems that we could not solve even if we had lots of computing power and very simple goals.
We then filter on problems that are (1) tractable, in the sense that we can do productive mathematical research on them today; (2) uncrowded, in the sense that the problems are not likely to be addressed during normal capabilities research; and (3) critical, in the sense that they could not be safely delegated to a machine unless we had first solved them ourselves.1
These three filters are usually uncontroversial. The controversial claim here is that the above question — “what would we be unable to solve, even if the challenge were simpler?” — is a generator of open technical problems for which solutions will help us design safer and more reliable AI software in the future, regardless of their architecture. The rest of this post is dedicated to justifying this claim, and describing the reasoning behind it.