Several free CFAR summer programs on rationality and AI safety
Consider having sparse insides
It's easier to seek true beliefs if you keep your (epistemic) identity small. (E.g., if you avoid beliefs like "I am a Democrat", and say only "I am a seeker of accurate world-models, whatever those turn out to be".)
It seems analogously easier to seek effective internal architectures if you also keep non-epistemic parts of your identity small -- not "I am a person who enjoys nature", nor "I am someone who values mathematics" nor "I am a person who aims to become good at email" but only "I am a person who aims to be effective, whatever that turns out to entail (and who is willing to let much of my identity burn in the process)".
There are obviously hazards as well as upsides that come with this; still, the upsides seem worth putting out there.
The two biggest exceptions I would personally make, which seem to mitigate the downsides: "I am a person who keeps promises" and "I am a person who is loyal to [small set of people] and who can be relied upon to cooperate more broadly -- whatever that turns out to entail".
Thoughts welcome.
The correct response to uncertainty is *not* half-speed
Related to: Half-assing it with everything you've got; Wasted motion; Say it Loud.
Once upon a time (true story), I was on my way to a hotel in a new city. I knew the hotel was many miles down this long, branchless road. So I drove for a long while.

After a while, I began to worry I had passed the hotel.

So, instead of proceeding at 60 miles per hour the way I had been, I continued in the same direction for several more minutes at 30 miles per hour, wondering if I should keep going or turn around.
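(Why is that silly? A minimal sketch, with invented numbers: suppose the hotel is 2 miles ahead with some probability p, or 2 miles behind with probability 1 - p, and that a wrong guess gets noticed and corrected at full speed. Half speed then loses for every value of p, because slowing down delays arrival in both cases.)

```python
# A toy calculation (my numbers, not from the story): the hotel is `dist`
# miles ahead with probability p, or `dist` miles behind with probability
# 1 - p. Policy: drive forward at `speed`; if the guess was wrong, notice
# after `dist` miles and double back at full speed (60 mph).

def expected_hours(p, speed, dist=2.0, full=60.0):
    time_if_ahead = dist / speed                      # guessed right
    time_if_behind = dist / speed + 2 * dist / full   # overshoot, double back
    return p * time_if_ahead + (1 - p) * time_if_behind

for p in (0.3, 0.5, 0.7):
    print(f"p={p}: full speed {expected_hours(p, 60):.3f} h, "
          f"half speed {expected_hours(p, 30):.3f} h")
```

Whatever your credence that the hotel is still ahead, the right move is to commit: either keep going at 60, or turn around at 60. The same kind-of-doing-it pattern shows up elsewhere: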

- I wasn't sure if I was a good enough writer to write a given doc myself, or if I should try to outsource it. So, I sat there kind-of-writing it while also fretting about whether I should be writing it at all.
- (Solution: Take a minute out to think through the relevant heuristics. Then, either: (1) write the doc at full speed; or (2) try to outsource it; or (3) write full force for some fixed time period, and then pause and evaluate.)
- I wasn't sure (back in early 2012) that CFAR was worthwhile. So, I kind-of worked on it.
- An old friend came to my door unexpectedly, and I was tempted to hang out with her, but I also thought I should finish my work. So I kind-of hung out with her while feeling bad and distracted about my work.
- A friend of mine, when teaching me math, seems to mumble specifically those words that he doesn't expect me to understand (in a sort of compromise between saying them and not saying them)...
- Duncan reports that novice parkour students are unable to safely undertake certain sorts of jumps, because they risk aborting the move midstream, after the actual last safe stopping point (apparently kind-of-attempting these jumps is more dangerous than either attempting or not attempting them).
- It is said that startup founders need to be irrationally certain that their startup will succeed, lest they be unable to do more than kind-of work on it...

Why CFAR's Mission?
---
Q: Why not focus exclusively on spreading altruism? Or else on "raising awareness" for some particular known cause?
Briefly put: because historical roads to hell have been powered in part by good intentions; because the contemporary world seems bottlenecked more by its ability to figure out what to do and how to do it (i.e., by ideas/creativity/capacity) than by folks' willingness to sacrifice; and because rationality and epistemic hygiene seem like skills that can distinguish actually useful ideas from ineffective or harmful ones in a way that good intentions alone cannot.
Q: Even given the above -- why focus extra on sanity, or true beliefs? Why not focus instead on, say, competence/usefulness as the key determinant of how much do-gooding impact a motivated person can have? (Also, have you ever met a Less Wronger? I hear they are annoying and have lots of problems with “akrasia”, even while priding themselves on their high “epistemic” skills; and I know lots of people who seem “less rational” than Less Wrongers on some axes who would nevertheless be more useful in many jobs; is this “epistemic rationality” thingy actually the thing we need for this world-impact thingy?...)
This is an interesting one, IMO.
Basically, it seems to me that epistemic rationality, and skills for forming accurate explicit world-models, become more useful the more ambitious and confusing a problem one is tackling.
Why startup founders have mood swings (and why they may have uses)
(This post was collaboratively written together with Duncan Sabien.)
Startup founders stereotypically experience some pretty serious mood swings. One day, their product seems destined to be bigger than Google, and the next, it’s a mess of incoherent, unrealistic nonsense that no one in their right mind would ever pay a dime for. Many of them spend half of their time full of drive and enthusiasm, and the other half crippled by self-doubt, despair, and guilt. Often this rollercoaster ride goes on for years before the company either finds its feet or goes under.
Well, sure, you might say. Running a startup is stressful. Stress comes with mood swings.
But that’s not really an explanation—it’s like saying stuff falls when you let it go. There’s something about the “launching a startup” situation that induces these kinds of mood swings in many people, including plenty who would otherwise be entirely stable.
Two Growth Curves
Sometimes, it helps to take a model that part of you already believes, and to make a visual image of it so that more of you can see it.
One of my all-time favorite examples of this:
I used to often hesitate to ask dumb questions, to publicly try skills I was likely to be bad at, or to visibly/loudly put forward my best guesses in areas where others knew more than me.
I was also frustrated with this hesitation, because I could feel it hampering my skill growth. So I would try to convince myself not to care about what people thought of me. But that didn't work very well, partly because what folks think of me is in fact somewhat useful/important.
Then, I got out a piece of paper and drew how I expected the growth curves to go.

In blue, I drew the apparent-coolness level that I could achieve if I stuck with the "try to look good" strategy. In brown, I drew the apparent-coolness level I'd have if I instead made mistakes as quickly and loudly as possible -- I'd look worse at first, but then I'd learn faster, eventually overtaking the blue line.
Suddenly, instead of pitting my desire to become smart against my desire to look good, I could pit my desire to look good now against my desire to look good in the future :)
I return to this image of two growth curves often when I'm faced with an apparent tradeoff between substance and short-term appearances. (E.g., I used to often find myself scurrying to get work done, or to look productive / not-horribly-behind today, rather than trying to build the biggest chunks of capital for tomorrow. I would picture these growth curves.)
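For concreteness, here's a toy numerical version of the picture (a sketch with invented numbers; the exact functions don't matter, only that one curve starts higher while the other grows faster):

```python
# A toy model of the two growth curves (all numbers invented for
# illustration; only the crossover matters).

def look_good(t):
    """Apparent coolness from the "try to look good" strategy:
    starts higher, improves slowly."""
    return 5 + 0.2 * t

def loud_mistakes(t):
    """Apparent coolness from the "make mistakes quickly and loudly"
    strategy: starts lower, compounds faster."""
    return 2 * 1.08 ** t

for t in range(0, 41, 10):  # e.g., weeks of practice
    print(f"t={t:2d}: look-good={look_good(t):5.1f}, "
          f"loud-mistakes={loud_mistakes(t):5.1f}")
```

In this toy version the loud-mistakes curve overtakes around t = 20; the crossover, not the particular functions, is what the drawing made vivid.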
CFAR-run MIRI Summer Fellows program: July 7-26
CFAR will be running a three-week summer program this July for MIRI, designed to increase participants' ability to do technical research into the superintelligence alignment problem.
The intent of the program is to boost participants as far as possible in four skills:
- The CFAR “applied rationality” skillset, including both what is taught at our intro workshops and more advanced material from our alumni workshops.
- “Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems” -- i.e., the skillset taught in the core LW Sequences. (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.)
- The long-term impacts of AI, and strategies for intervening (e.g., the content discussed in Nick Bostrom’s book Superintelligence).
- The basics of AI safety-relevant technical research. (Decision theory, anthropics, and similar; with folks trying their hand at doing actual research, and reflecting also on the cognitive habits involved.)
The program will be offered free to invited participants, and partial or full scholarships for travel expenses will be offered to those with exceptional financial need.
If you're interested (or possibly-interested), sign up for an admissions interview ASAP at this link (takes 2 minutes): http://rationality.org/miri-summer-fellows-2015/
Also, please forward this post, or the page itself, to anyone you think should come; the skill and talent humanity brings to bear on the superintelligence alignment problem may determine how well we navigate it, and sharing this opportunity with strong potential contributors may be a high-leverage way to increase that talent.
Attempted Telekinesis
Related to: Compartmentalization in epistemic and instrumental rationality; That other kind of status.
How to learn soft skills
Acquiring some skills is mostly about deliberate, explicit information transfer. For example, one might explicitly learn the capital of Missouri, or the number of miles one can drive before needing an oil change, or how to use the quadratic formula to solve quadratic equations.
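(For instance, here is a sketch of that last one as pure explicit transfer: the quadratic formula can be handed over completely in a few lines, with nothing left tacit. Assumes a ≠ 0 and real roots, for simplicity.)

```python
# The quadratic formula, written out explicitly; assumes a != 0 and a
# non-negative discriminant.
import math

def solve_quadratic(a, b, c):
    """Real roots of a*x**2 + b*x + c = 0."""
    disc = b * b - 4 * a * c
    root = math.sqrt(disc)
    return (-b + root) / (2 * a), (-b - root) / (2 * a)

print(solve_quadratic(1, -3, 2))  # -> (2.0, 1.0)
```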
For other skills, practitioners' skill rests largely on semi-conscious, non-explicit patterns of perception and action. I have in mind here such skills as:
- Managing your emotions and energy levels;
- Building strong relationships;
- Making robust plans;
- Finding angles of attack on a mathematical problem;
- Writing persuasively;
- Thinking through charged subjects without bias;
and so on. Experts in these skills will often be unable to accurately and explicitly describe how to do what they do, but they will be skilled nonetheless.
I'd like to share some thoughts on how to learn such "soft skills".
CFAR fundraiser far from filled; 4 days remaining
We're 4 days from the end of our matching fundraiser, and still only about a third of the way to our target (and to the point where pledged funds would cease being matched).
If you'd like to support the growth of rationality in the world, please consider donating, or ask me any questions you may have. I'd love to talk. I suspect funds donated to CFAR between now and Jan 31 are quite high-impact.
As a random bonus, I promise that if we meet the $120k matching challenge, I'll publish at least two posts with some never-before-shared (on here) rationality techniques that we've been playing with around CFAR.