Q: Why not focus exclusively on spreading altruism? Or else on "raising awareness" for some particular known cause?
Briefly put: because historical roads to hell have been powered in part by good intentions; because the contemporary world seems bottlenecked more by its ability to figure out what to do and how to do it (i.e. by ideas/creativity/capacity) than by folks' willingness to sacrifice; and because rationality and epistemic hygiene seem like skills that can distinguish actually useful ideas from ineffective or harmful ones in a way that "good intentions" cannot.
Q: Even given the above -- why focus extra on sanity, or true beliefs? Why not focus instead on, say, competence/usefulness as the key determinant of how much do-gooding impact a motivated person can have? (Also, have you ever met a Less Wronger? I hear they are annoying and have lots of problems with “akrasia”, even while priding themselves on their high “epistemic” skills; and I know lots of people who seem “less rational” than Less Wrongers on some axes who would nevertheless be more useful in many jobs; is this “epistemic rationality” thingy actually the thing we need for this world-impact thingy?...)
This is an interesting one, IMO.
Basically, it seems to me that epistemic rationality, and skills for forming accurate explicit world-models, become more useful the more ambitious and confusing a problem one is tackling.
If I have one floor to sweep, it would be best to hire a person who has pre-existing skill at sweeping floors.
If I have 250 floors to sweep, it would be best to have someone energetic and perceptive, who will stick to the task, notice whether they are succeeding, and improve their efficiency over time. An "all-round competent human being", maybe.
If I have 10^25 floors to sweep, it would... be rather difficult to win at all, actually. But if I can win, it probably isn't by harnessing my pre-existing skill at floor-sweeping, nor even (I'd claim) my pre-existing skill at "general human competence". It's probably by using the foundations of science and/or politics to (somehow) create some totally crazy method of getting the floors swept (a process that would probably require actually accurate beliefs, and thus epistemic rationality).
The world's most important problems look to me more like that third example. And, again, it seems to me that to solve problems of that sort -- to iterate through many wrong guesses and somehow piece together an accurate model until one finds a workable pathway for doing what originally looked impossible -- without getting stuck in dead ends or wrong turns or inborn or societal prejudices -- it is damn helpful to have something like epistemic rationality. (Competence is pretty darn helpful too -- it's good to e.g. be able to go out there and get data; to be able to form networking relations with folks who already know things; etc. -- but epistemic rationality is necessary in a more fundamental way.)
For the sake of concreteness, I will claim that AI-related existential risk is among humanity's most important problems, and that it is damn confusing, damn hard, and really really needs something like epistemic rationality and not just something like altruism and competence if one is to impact it positively, rather than just, say, randomly impacting it. I'd be glad to discuss in the comments.
Q: Why suppose “sanity skill” can be increased?
Let’s start with an easier question: why suppose thinking skills (of any sort) can be increased?
The answer to that one is easy: Because we see it done all the time.
The math student who arrives at college and does math for the first time with others is absorbing a kind of thinking skill; thus mathematicians discuss a person’s “mathematical maturity”, as a property distinct from (although related to) their learning of this and that math theorem.
Similarly, the coder who hacks her way through a bunch of software projects and learns several programming languages will have a much easier time learning her 8th language than she did her first; basically because, somewhere along the line, she learned to “think like a computer scientist”...
The claim that “sanity skill” is a type of thinking skill and that it can be increased is somewhat less obvious. I am personally convinced that the LW Sequences / AI to Zombies gave me something, and gave something similar to others I know, and that hanging out in person with Eliezer Yudkowsky, Michael Vassar, Carl Shulman, Nick Bostrom, and others gave me more of that same thing: a “same thing” that included e.g. actually trying to figure it out; making beliefs pay rent in anticipated experience; using arithmetic to entangle different pieces of my beliefs; and so on.
I similarly have the strong impression that e.g. Feynman’s and Munger’s popular writings often pass on pieces of this same thing; that the convergence between the LW Sequences and Tetlock’s Superforecasting training is non-coincidental; that the convergence between CFAR’s workshop contents and a typical MBA program’s contents is non-coincidental (though we were unaware of it when creating our initial draft); and more generally that there are many types of thinking skill that are routinely learned/taught and that non-trivially aid the process of coming to accurate beliefs in tricky domains. I update toward this partly from the above convergences; from the fact that Tetlock's training seems to work; from the fact that e.g. Feynman and Munger (and for that matter Thiel, Ray Dalio, Francis Bacon, and a number of others) were shockingly conventionally successful and advocated similar things; and from the fact that there is quite a bit of "sanity" advice that is obviously correct once stated, but that we don't automatically follow (advice like "bother to look at the data, and try to update if the data doesn't match your predictions").
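To make that parenthetical advice concrete, here is a minimal sketch in Python of the kind of arithmetic that "using arithmetic to entangle different pieces of my beliefs" points at. The scenario, numbers, and function are my own toy illustration, not anything from CFAR's or Tetlock's actual training materials:

```python
# A minimal, purely illustrative sketch (toy scenario and made-up numbers)
# of the arithmetic behind "look at the data, and update if it doesn't
# match your predictions".

def bayes_update(prior: float, p_data_if_true: float, p_data_if_false: float) -> float:
    """Return P(hypothesis | data) via Bayes' rule."""
    numerator = prior * p_data_if_true
    return numerator / (numerator + (1 - prior) * p_data_if_false)

# I believe my project is on schedule with probability 0.8. I predicted a
# milestone would be done by now: 90% likely if on schedule, 30% if not.
# The milestone has slipped -- so I condition on "milestone slipped".
posterior = bayes_update(
    prior=0.8,
    p_data_if_true=1 - 0.9,   # P(slip | on schedule) = 0.1
    p_data_if_false=1 - 0.3,  # P(slip | behind) = 0.7
)
print(f"P(on schedule | milestone slipped) = {posterior:.2f}")  # ~0.36
```

The point is not the particular numbers but the entanglement: writing the update down forces the prior and the likelihoods to constrain one another, and a drop from 0.8 to roughly 0.36 is exactly the kind of shift that is easy to under-do when updating by gut feel.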
So, yes, I suspect that there is some portion of sanity that can sometimes be learned and taught. And I suspect this portion can be increased further with work.
Q. Even if you can train skills: Why go through all the trouble and complications of trying to do this, rather than trying to find and recruit people who already have the skills?
The main goal is thinking skill. Specifically, thinking skill among those most likely to successfully use it to positively impact the world.
Competence and caring are relevant secondary goals: some of us have a conjecture that deep epistemic rationality can be useful for creating competence and caring, and of course competence and caring about the world are also directly useful for impacting the world's problems. But CFAR wants to increase competence and caring via teaching relevant pieces of thinking skill, and not via special-case hacks. For example, we want to help people stay tuned into what they care about even when this is painful, and to help people notice their aversions and sort through which of their aversions are and aren't based in accurate implicit models. We do not want to use random emotional appeals to boost specific cause areas, nor to use other special-case hacks that happen to boost efficacy in a manner opaque to participants.
Why focus primarily on thinking skill? Partly so we can have focus enough as an organization so as to actually do anything at all. (Organizations that try to accomplish several things at once risk accomplishing none -- and "epistemic rationality" is more of a single thing.) Partly so our workshop participants and other learners can similarly have focus as learners. And partly because, as discussed above, it is very very hard to intervene in global affairs in such a way as to actually have positive outcomes, and not merely outcomes one pretends will be positive; and focusing on actual thinking skill seems like a better bet for problems as confusing as e.g. existential risk.
Why include competence and caring at all, then? Because high-performing humans make use of large portions of their minds (I think), and if we focus only on "accurate beliefs" in a narrow sense (e.g., doing analogs of Tetlock's forecasting training and nothing else), we are apt to generate "straw lesswrongers" whose "rationality" applies mainly to their explicit beliefs... people who can nitpick incorrect statements and can in this way attempt accurate verbal statements, but who are not creatively generative, lack the energy, competence, and rapid iteration required to launch a startup, and cannot exercise good, fast, real-time social skills. We aim to do better. And we suspect that working to hit competence and caring via what one might call "deep epistemic rationality" is a route in.
Q. Can a small organization realistically do all that without losing Pomodoro virtue? (By "Pomodoro virtue", I mean the ability to focus on one thing at a time and so to actually make progress, instead of losing oneself amidst the distraction of 20 goals.)
We think so, and we think the new core/labs division within CFAR will help. Basically, Core will be working to scale up the workshops and related infrastructure, which should give a nice trackable set of numbers to optimize -- numbers that, if grown, will enable better financial health for CFAR and will also enable a much larger set of people to train in rationality.
Labs will be focusing on impacting smaller numbers of people who are poised to impact existential risk (mainly), and on seeing whether our "impacts" on these folk do in fact seem to help with their impact on the world.
We will continue to work together on many projects and to trade ideas frequently, but I suspect that this separation into two goals will give more "Pomodoro virtue" to the whole organization.
Q. What is CFAR's relationship to existential risk? And what should it be?
CFAR's mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world -- via whatever cause areas may be most important. Many of us suspect that AI-related existential risk is an area with huge potential for useful impact; and so we are focusing partly on helping meet talent gaps in that field. This focus also gives us more "pomodoro virtue" -- it is easier to track whether e.g. the MIRI Summer Fellows Program helped boost research on AI safety, than it is to track whether a workshop had "good impacts on the world" in some more general sense.
It is important to us that the focus remain on "high impact pathways, whatever those turn out to be", that we do not propagandize for particular pre-set answers (rather, that we assist folks in thinking things through in an unhindered way), and that we work toward a kind of thinking skill that may let people better assess what paths are actually high impact for having positive effects in the world, and to overcome flaws in our current thinking.
Q. Should I do “Earning to Give”? Also: I heard that there are big funders around now and so “earning to give” is no longer a sensible thing for most people to do; is that true? And what does all this have to do with CFAR?
I don't think that's true. In my experience spending time with rationalists and studying aspects of rationality myself, I have found that rationalists separate themselves from the general population in many ways that would make it hard for them to convince non-rationalists. These aspects are things that rationalists cultivate partly in an effort to improve their thinking, but also in order to signal membership in the rationalist tribe (rationalists are human, after all), and they are not things that rationalists can easily turn on and off. I can identify three general groups of aspects that many rationalists seem to have:
1) The use of esoteric language. Rationalists tend to use a lot of language that is unfamiliar to others. Rationalists "update" their beliefs. They fight "akrasia". They "install" new habits. If you spend any time in rationalist circles, you will have heard those terms used in those ways very frequently. This is of course not bad in and of itself. But it marks one as a member of the rationalist tribe and even someone who does not know about rationalists will be able to identify the speaker who uses this terminology as alien and "weird". My first encounter with rationalists was indeed of this type. All I knew was that they seemed to speak in a very strange manner.
2) Rationalists, at least the ones in this community, hold a variety of unusual beliefs. I actually find it hard to identify those beliefs because I hold many of them. Nonetheless, a chat with most other human beings regarding theory of mind, metaphysics, morality, etc. will reveal gaps the size of the Grand Canyon between the average rationalist and the average person. Maybe at some level there is agreement, but when it comes to object-level issues, the disagreement is immense.
3) Rationalists think very differently from the way most other people think. That is, after all, the point. However, it means that arguments that convince rationalists will frequently fail to convince an average person. For instance, arguing that the effects of brain damage show there is no soul in the conventional sense will get you nowhere with an average person, while many rationalists see it as a very persuasive, if not conclusive, argument.
I claim that to convince another human being, you must be able to model their cognitive processes. As many rationalists realize, humans have a tendency to model other humans as similar to themselves. Doing otherwise is incredibly difficult, and the difficulty grows exponentially with your difference from the other human. This is, after all, unsurprising: to model an identical copy of yourself, you need only feed yourself fake sensory inputs and see what the output would be; to model someone different from yourself, you must essentially replicate their brain within your own. That is very effortful and error-prone. It is hard enough that it is difficult even to replicate the processes that led you to believe something you no longer believe -- and you had access to the brain that held those now-discarded beliefs!
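As a purely hypothetical toy sketch of that asymmetry (the belief tables and function below are my own invention, not the commenter's): the same inference routine yields different verdicts depending on whose beliefs it runs over, so predicting another person means explicitly reconstructing their belief table, and every entry you guess wrong corrupts the prediction.

```python
# Hypothetical toy model (my own construction) of modeling another mind.

ARGUMENT = "brain damage alters personality, so there is no separate soul"

def verdict(beliefs: dict, argument: str) -> bool:
    """A cartoonishly simple inference routine run over a belief table."""
    if argument == ARGUMENT:
        return beliefs["neuroscience_bears_on_souls"]
    return False

# Modeling a copy of myself is cheap: feed the argument to my own inference.
MY_BELIEFS = {"neuroscience_bears_on_souls": True}
print(verdict(MY_BELIEFS, ARGUMENT))    # True: I find the argument persuasive

# Modeling someone different means guessing *their* belief table explicitly;
# if I lazily reuse my own entries, my prediction of their response is wrong.
THEIR_BELIEFS = {"neuroscience_bears_on_souls": False}
print(verdict(THEIR_BELIEFS, ARGUMENT)) # False: the argument gets no traction
```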
I do not claim it is an impossible task. But I do claim that the better you are at rationality, the worse you will be at understanding non-rationalists and at figuring out how to convince them of anything. For one thing, as a good rationalist, you will have learned to flinch away from lines of reasoning that rest on common cognitive errors. But cognitive errors are an integral part of the way most people live their lives, so if you flinch away from such things, you will miss lines of reasoning that would be very fruitful for convincing others of the correctness of your beliefs.
Let me provide an example. I recently discussed abortion with a non-rationalist but very intelligent friend. I pointed out that within the framework of fetuses being humans deserving of rights, abortion is obviously murder, and that he was missing the point of his opponents. The responses I got were riddled with fallacies -- most interestingly, the idea that science has determined that fetuses are not humans. I tried to explain that science can certainly tell us what is going on at various stages of development, but that it cannot tell us what counts as a "human deserving of rights", as that is a purely moral category. This was to no avail. People (even very intelligent people) hang their beliefs and actions on such fallacy-riddled lines of reasoning all the time. If you train yourself to avoid such lines of reasoning, you will have great difficulty convincing others without first turning them into yourself.
If I'm chatting with other rationalists I will use a term like "akrasia", but in other contexts I will say "procrastination". I'm perfectly able to use different words in different social contexts.
There are ways of studying rationality that do have those effects, but I don't think going to a CFAR workshop will make a person less able to convince the average person.