---
Q: Why not focus exclusively on spreading altruism? Or else on "raising awareness" for some particular known cause?
Briefly put: because historical roads to hell have been powered in part by good intentions; because the contemporary world seems bottlenecked more by its ability to figure out what to do and how to do it (i.e., by ideas/creativity/capacity) than by folks' willingness to sacrifice; and because rationality and epistemic hygiene seem like skills that may distinguish actually useful ideas from ineffective or harmful ones in a way that "good intentions" cannot.
Q: Even given the above -- why focus extra on sanity, or true beliefs? Why not focus instead on, say, competence/usefulness as the key determinant of how much do-gooding impact a motivated person can have? (Also, have you ever met a Less Wronger? I hear they are annoying and have lots of problems with “akrasia”, even while priding themselves on their high “epistemic” skills; and I know lots of people who seem “less rational” than Less Wrongers on some axes but who would nevertheless be more useful in many jobs. Is this “epistemic rationality” thingy actually the thing we need for this world-impact thingy?...)
This is an interesting one, IMO.
Basically, it seems to me that epistemic rationality, and skills for forming accurate explicit world-models, become more useful the more ambitious and confusing a problem one is tackling.
If I have one floor to sweep, it would be best to hire a person who has pre-existing skill at sweeping floors.
If I have 250 floors to sweep, it would be best to have someone energetic and perceptive, who will stick to the task, notice whether they are succeeding, and improve their efficiency over time. An "all-round competent human being", maybe.
If I have 10^25 floors to sweep, it would... be rather difficult to win at all, actually. But if I can win, it probably isn't by harnessing my pre-existing skill at floor-sweeping, nor even (I'd claim) my pre-existing skill at "general human competence". It's probably by using the foundations of science and/or politics to (somehow) create some totally crazy method of getting the floors swept (a process that would probably require actually accurate beliefs, and thus epistemic rationality).
The world's most important problems look to me more like that third example. And, again, it seems to me that to solve problems of that sort -- to iterate through many wrong guesses and somehow piece together an accurate model until one finds a workable pathway for doing what originally looked impossible -- without getting stuck in dead ends, wrong turns, or inborn or societal prejudices -- it is damn helpful to have something like epistemic rationality. (Competence is pretty darn helpful too -- it's good to e.g. be able to go out there and get data, to be able to build relationships with folks who already know things, etc. -- but epistemic rationality is necessary in a more fundamental way.)
For the sake of concreteness, I will claim that AI-related existential risk is among humanity's most important problems, and that it is damn confusing, damn hard, and really really needs something like epistemic rationality and not just something like altruism and competence if one is to impact it positively, rather than just, say, randomly impacting it. I'd be glad to discuss in the comments.
Q: Why suppose “sanity skill” can be increased?
Let’s start with an easier question: why suppose thinking skills (of any sort) can be increased?
The answer to that one is easy: Because we see it done all the time.
The math student who arrives at college and does math for the first time with others is absorbing a kind of thinking skill; thus mathematicians speak of a person’s “mathematical maturity” as a property distinct from (although related to) their knowledge of this or that particular theorem.
Similarly, the coder who hacks her way through a bunch of software projects and learns several programming languages will have a much easier time learning her 8th language than she did her first; basically because, somewhere along the line, she learned to “think like a computer scientist”...
The claim that “sanity skill” is a type of thinking skill and that it can be increased is somewhat less obvious. I am personally convinced that the LW Sequences / AI to Zombies gave me something, and gave something similar to others I know, and that hanging out in person with Eliezer Yudkowsky, Michael Vassar, Carl Shulman, Nick Bostrom, and others gave me more of that same thing; a “same thing” that included, e.g., actually trying to figure it out; making beliefs pay rent in anticipated experience; using arithmetic to entangle different pieces of my beliefs; and so on.
I similarly have the strong impression that e.g. Feynman’s and Munger’s popular writings often pass on pieces of this same thing; that the convergence between the LW Sequences and Tetlock’s Superforecasting training is non-coincidental; that the convergence between CFAR’s workshop contents and a typical MBA program’s contents is non-coincidental (though we were unaware of it when creating our initial draft); and more generally that there are many types of thinking skill that are routinely learned/taught and that non-trivially aid the process of coming to accurate beliefs in tricky domains. I update toward this partly from the above convergences; from the fact that Tetlock's training seems to work; from the fact that e.g. Feynman and Munger (and for that matter Thiel, Ray Dalio, Francis Bacon, and a number of others) were shockingly conventionally successful and advocated similar things; and from the fact that there is quite a bit of "sanity" advice that is obviously correct once stated, but that we don't automatically do (advice like "bother to look at the data; and try to update if the data doesn't match your predictions").
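To make “using arithmetic to entangle different pieces of my beliefs” a bit more concrete, here is one minimal sketch of the kind of move I have in mind: odds-form Bayesian updating, where a hunch gets turned into odds and a piece of evidence shifts those odds by a likelihood ratio. (The code below is purely illustrative -- the function name and the numbers are made up for this example, not drawn from any CFAR material.)

```python
def posterior_probability(prior: float, likelihood_ratio: float) -> float:
    """Return P(H|E) given prior P(H) and likelihood ratio P(E|H) / P(E|not-H)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Made-up numbers: start 20% confident in some hypothesis, then observe evidence
# judged to be 3x more likely if the hypothesis is true than if it is false.
print(round(posterior_probability(prior=0.20, likelihood_ratio=3.0), 2))  # -> 0.43
```

The point is not the particular numbers; it is that once beliefs are stated numerically, they become entangled -- they can agree or clash with one another and with the data in ways that are hard to ignore.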
So, yes, I suspect that there is some portion of sanity that can sometimes be learned and taught. And I suspect this portion can be increased further with work.
Q. Even if you can train skills: Why go through all the trouble and complications of trying to do this, rather than trying to find and recruit people who already have the skills?
The main goal is thinking skill. Specifically, thinking skill among those most likely to successfully use it to positively impact the world.
Competence and caring are relevant secondary goals: some of us have a conjecture that deep epistemic rationality can be useful for creating competence and caring, and of course competence and caring about the world are also directly useful for impacting the world's problems. But CFAR wants to increase competence and caring via teaching relevant pieces of thinking skill, and not via special-case hacks. For example, we want to help people stay tuned into what they care about even when this is painful, and to help people notice their aversions and sort through which of their aversions are and aren't based in accurate implicit models. We do not want to use random emotional appeals to boost specific cause areas, nor to use other special-case hacks that happen to boost efficacy in a manner opaque to participants.
Why focus primarily on thinking skill? Partly so that we, as an organization, can have enough focus to actually accomplish anything at all. (Organizations that try to accomplish several things at once risk accomplishing none -- and "epistemic rationality" is more of a single thing.) Partly so our workshop participants and other learners can similarly have focus as learners. And partly because, as discussed above, it is very very hard to intervene in global affairs in such a way as to actually have positive outcomes, and not merely outcomes one pretends will be positive; and focusing on actual thinking skill seems like a better bet for problems as confusing as e.g. existential risk.
Why include competence and caring at all, then? Because high-performing humans make use of large portions of their minds (I think), and if we focus only on "accurate beliefs" in a narrow sense (e.g., doing analogs of Tetlock's forecasting training and nothing else), we are apt to generate "straw lesswrongers" whose "rationality" applies mainly to their explicit beliefs... people who can nitpick incorrect statements and thereby manage accurate verbal claims, but who are not creatively generative, lack the energy/competence/rapid iteration required to launch a startup, and cannot exercise good, fast, realtime social skills. We aim to do better. And we suspect that working to hit competence and caring via what one might call "deep epistemic rationality" is a route in.
Q. Can a small organization realistically do all that without losing Pomodoro virtue? (By "Pomodoro virtue", I mean the ability to focus on one thing at a time and so to actually make progress, instead of losing oneself amidst the distraction of 20 goals.)
We think so, and we think the new Core/Labs division within CFAR will help. Basically, Core will be working to scale up the workshops and related infrastructure, which should give a nice trackable set of numbers to optimize -- numbers that, if grown, will enable better financial health for CFAR and will also enable a much larger set of people to train in rationality.
Labs will be focusing on impacting smaller numbers of people who are poised to impact existential risk (mainly), and on seeing whether our "impacts" on these folk do in fact seem to help with their impact on the world.
We will continue to work together on many projects and to trade ideas frequently, but I suspect that this division into two teams with distinct goals will give more "Pomodoro virtue" to the whole organization.
Q. What is CFAR's relationship to existential risk? And what should it be?
CFAR's mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world -- via whatever cause areas may be most important. Many of us suspect that AI-related existential risk is an area with huge potential for useful impact, and so we are focusing partly on helping fill talent gaps in that field. This focus also gives us more "Pomodoro virtue" -- it is easier to track whether e.g. the MIRI Summer Fellows Program helped boost research on AI safety than it is to track whether a workshop had "good impacts on the world" in some more general sense.
It is important to us that the focus remain on "high impact pathways, whatever those turn out to be"; that we do not propagandize for particular pre-set answers (rather, that we assist folks in thinking things through in an unhindered way); and that we work toward a kind of thinking skill that may let people better assess which paths actually have a high positive impact on the world, and that may help us overcome flaws in our current thinking.
Q. Should I do “Earning to Give”? Also: I heard that there are big funders around now and so “earning to give” is no longer a sensible thing for most people to do; is that true? And what does all this have to do with CFAR?
Q. I've never seen anything to imply that multi-day workshops are effective methods of learning. Going further, I'm not sure how Less Wrong can support spaced repetition and distributed practice on the one hand, while also supporting an organization whose primary outreach seems to be crash courses. It's as if Less Wrong is exhibiting a forum-wide cognitive dissonance that nobody notices.
There is substantial follow-up after the workshop (emails, group Skype calls, 1-on-1 follow-up sessions, and accountability buddies), and the workshops are for many an entry point into the alumni community and a longer-term community of practice (with many participating in the Google group, attending our weekly alumni dojo, attending yearly alumni reunions and occasional advanced workshops, etc.).
(Even so, our methodology is not what I would pick if our goal were to help participants memorize rote facts. But for teaching ways of thinking, it seems to work better than anything else we've found. So far.)