The barriers to the task
Original post: http://bearlamp.com.au/the-barriers-to-the-task/
For about two months now I have been putting in effort to run in the mornings. To make this happen, I had to take away all the barriers to me wanting to do that. There were plenty of them, and I failed to leave my house plenty of times. Some examples are:
Needing the right clothes - I leave my house shirtless and barefoot, and grab my key on the way out.
Pre-commitment to run - I take my shirt off when getting into bed the night before, so I don't even have to consider the action in the morning when I roll out of bed.
Being busy in the morning - I no longer plan any appointments before 11am. Depending on the sunrise (I don't use alarms), I wake up, spend some time reading, then roll out of bed, go to the toilet, and leave the house. In Sydney we just passed the depths of winter and it's beginning to get light earlier and earlier in the morning. That makes things easy now, but it was harder when getting up at 7 meant getting up in the dark.
There were days when I would wake up at 8am, stay in bed until 9am, then realise that if I left for a run (which takes around an hour - 10am), came back for a shower (which takes 20 minutes - 10:20), then travelled to my first meeting (which can take 30 minutes - 10:50), anything going wrong would make me late to an 11am appointment. And if I have a 10am meeting, I have to skip my run entirely to get there on time.
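The back-of-the-envelope arithmetic above can be sketched in code. The durations are the hypothetical ones from my example, not a general rule:

```python
from datetime import datetime, timedelta

# Durations from the example above (mine; yours will differ)
RUN = timedelta(hours=1)        # the run itself
SHOWER = timedelta(minutes=20)  # shower afterwards
TRAVEL = timedelta(minutes=30)  # travel to the first meeting

def latest_departure(first_meeting: datetime) -> datetime:
    """Latest time I can leave for a run and still make the meeting."""
    return first_meeting - (RUN + SHOWER + TRAVEL)

meeting = datetime(2016, 8, 1, 11, 0)  # an 11am appointment
print(latest_departure(meeting).strftime("%H:%M"))  # 09:10
```

Leaving at 9am, as in the example, leaves only ten minutes of slack for anything going wrong; push the meeting back to 10am and the run no longer fits at all.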
Going to bed at a reasonable hour - I am still getting used to deciding not to work myself ragged. I decided to accept that sleep is important, and to trust my body to sleep as long as it needs. Keeping healthy sleep habits sometimes even earns me bonus time. But if I go to sleep after midnight I might not get up until later, which means I compromise my running time by shoving it into other habits.
Deciding where to run - Google Maps: look for local parks, plan a route with the fewest roads and least traffic. I did this once and then it was done. It was also exciting to measure the route and be able to run further and further each day/week/month.
What's in your way?
If you are not doing something that you think is good and right (or healthy, or otherwise desirable), there are likely things in your way. If you just found out about an action that is good, well, and right, and there is nothing stopping you from doing it: great. You are lucky this time - Just.Do.It.
If you are one of the rest of us; who know that:
- daily exercise is good for you
- the right amount of sleep is good for you
- eating certain foods is better than eating others
- certain social habits are better than others
- certain hobbies are more fulfilling (to our needs or goals) than others
and have known this for a while but still find yourself not taking the actions you want - it's time to start asking what is in your way. You might find it on someone else's list, but you are looking for a needle in a haystack.
You are much better off doing this (System 2 exercise):
- take 15 minutes with pencil and paper.
- At the top write, "I want to ______________".
- If you know that's true, you might not need this step; if you are not sure, write out why it might or might not be true.
- Write down the barriers that are in the way of you doing the thing. Think:
- "can I do this right now?" (might not always be an action you can take while sitting around thinking about it - i.e. eating different foods)
- "why can't I just do this at every opportunity that arises?"
- "how do I increase the frequency of opportunities?"
- Write out the things you are doing instead of that thing.
These things are barriers in your way as well. For each point, consider what you are going to do about it.
Questions:
- What actions have you tried to take on?
- What barriers have you encountered in doing so?
- How did you solve that barrier?
- What are you struggling to take on in the future?
Meta: this borrows from the Immunity to Change process, best read about in the book Right Weight, Right Mind. It also borrows from CFAR-style techniques like resolve cycles (also known as focused grit), Hamming questions, and Murphyjitsu.
Meta: this took one hour to write.
Cross-posted to LessWrong: http://lesswrong.com/lw/nuq
Review and Thoughts on Current Version of CFAR Workshop
Outline: I will discuss my background and how I prepared for the workshop, along with how I would prepare differently if I could do it again; then my experience at the CFAR workshop, and what I would do differently there; then my take-aways from the workshop and what I am doing to integrate CFAR strategies into my life; and finally my assessment of its benefits and what others who attend the workshop might expect to get.
Acknowledgments: Thanks to fellow CFAR alumni and CFAR staff for feedback on earlier versions of this post.
Introduction
Many aspiring rationalists have heard about the Center for Applied Rationality, an organization devoted to teaching applied rationality skills to help people improve their thinking, feeling, and behavior patterns. This nonprofit does so primarily through its intense workshops, and is funded by donations and revenue from its workshops. It fulfills its social mission by conducting rationality research and by giving discounted or free workshops to people its staff judge as likely to help make the world a better place, mainly those associated with various Effective Altruist cause areas, especially existential risk.
To be fully transparent: even before attending the workshop, I already had a strong belief that CFAR is a great organization and have been a monthly donor to CFAR for years. So keep that in mind as you read my description of my experience (you can become a donor here).
Preparation
First, some background about myself, so you know where I’m coming from in attending the workshop. I’m a professor specializing in the intersection of history, psychology, behavioral economics, sociology, and cognitive neuroscience. I discovered the rationality movement several years ago through a combination of my research and attending a LessWrong meetup in Columbus, OH, and so come from a background of both academic and LW-style rationality. Since discovering the movement, I have become an activist in the movement as the President of Intentional Insights, a nonprofit devoted to popularizing rationality and effective altruism (see here for our EA work). So I came to the workshop with some training and knowledge of rationality, including some CFAR techniques.
To help myself prepare for the workshop, I reviewed existing posts about CFAR materials, being careful not to assume that the actual techniques would match their descriptions in those posts.
I also delayed a number of tasks until after the workshop, tying up loose ends. In retrospect, I wish I had not left myself ongoing tasks to do during the workshop. As part of my leadership of InIn, I coordinate about 50 volunteers, and I wish I had placed those responsibilities on someone else for the duration.
Before the workshop, I worked intensely on finishing up some projects. In retrospect, it would have been better to get some rest and come to the workshop as fresh as possible.
There were some communication snafus with logistics details before the workshop. It all worked out in the end, but in retrospect I would have hammered out the logistics in advance to avoid anxiety about how to get there.
Experience
The classes were well put together, had interesting examples, and provided useful techniques. For what it's worth, my experience was that reading about these techniques in advance was not harmful, but the versions taught in the CFAR classes were quite a bit better than the existing posts about them, so don't assume you can get the same benefits from reading posts as from attending the workshop. While I was aware of the techniques, the classes presented clearly optimized versions of them - maybe because of a "broken telephone" effect in the posts, or maybe because CFAR has refined them over previous workshops; I'm not sure which. I was glad to learn that CFAR considers the workshop it gave us in May polished enough to scale up its workshops, while still improving the content over time.
Just as useful as the classes were the conversations held in between and after the official classes ended. Talking about them with fellow aspiring rationalists and seeing how they were thinking about applying these to their lives was helpful for sparking ideas about how to apply them to my life. The latter half of the CFAR workshop was especially great, as it focused on pairing off people and helping others figure out how to apply CFAR techniques to themselves and how to address various problems in their lives. It was especially helpful to have conversations with CFAR staff and trained volunteers, of whom there were plenty - probably about 20 volunteers/staff for the 50ish workshop attendees.
Another super-helpful aspect of the conversations was networking and community building. This may have been more useful to some participants than others, so YMMV. As an activist in the movement, I talked to many folks at the CFAR workshop about promoting EA and rationality to a broad audience. I was happy to introduce some people to EA; my most positive conversation there encouraged someone to switch his x-risk efforts from nuclear disarmament to AI safety research as a means of addressing long/medium-term risk, and to promote rationality as a means of addressing short/medium-term risk. Others who were already familiar with EA were interested in ways of promoting it broadly, while some aspiring rationalists expressed enthusiasm about becoming rationality communicators.
Looking back at my experience, I wish I had been more aware of the benefits of these conversations. I went to sleep early the first couple of nights; knowing what I know now, I would have taken supplements to stay awake and have more conversations instead.
Take-Aways and Integration
The aspects of the workshop that I think will help me most are what CFAR staff called "5-second" strategies - brief tactics and techniques that can be executed in 5 seconds or less to address various problems. The material we learned at the workshop that I was already familiar with - Trigger Action Plans, Goal Factoring, Murphyjitsu, Pre-Hindsight - requires time to learn and practice, often with pen and paper as part of the work. However, with sufficient practice, one can develop brief techniques that mimic aspects of the more thorough ones, and apply them quickly to in-the-moment decision-making.
Now, this doesn’t mean that the longer techniques are not helpful. They are very important, but they are things I was already generally familiar with, and already practice. The 5-second versions were more of a revelation for me, and I anticipate will be more helpful for me as I did not know about them previously.
Now, CFAR does a very nice job of helping people integrate the techniques into daily life, since a common failure mode of attendees is going home and not practicing the techniques. They run 6 Google Hangouts with CFAR staff and all attendees who want to participate, offer 4 one-on-one sessions with CFAR-trained volunteers or staff, and pair each attendee with another for post-workshop conversations. I plan to take advantage of all of these, although my pairing did not work out.
For integrating CFAR techniques into my life, I found the CFAR strategy of "overlearning" especially helpful. Overlearning means applying a single technique intensely for a while to all aspects of one's activities, so that it gets internalized thoroughly. Following CFAR's advice, I will first focus on overlearning Trigger Action Plans.
I also plan to teach CFAR techniques in my local rationality dojo, as teaching is a great way to learn.
Finally, I plan to integrate some CFAR techniques into Intentional Insights content, at least the simpler techniques that are a good fit for the broad audience with which InIn is communicating.
Benefits
I have a strong probabilistic belief that having attended the workshop will improve my capacity to be a person who achieves my goals for doing good in the world. I anticipate I will be able to figure out better whether the projects I am taking on are the best uses of my time and energy. I will be more capable of avoiding procrastination and other forms of akrasia. I believe I will be more capable of making better plans, and acting on them well. I will also be more in touch with my emotions and intuitions, and be able to trust them more, as I will have more alignment among different components of my mind.
Another benefit is meeting the many other people at CFAR who have similar mindsets. Here in Columbus we have a flourishing rationality community, but it's still relatively small. Getting to know the 70-odd attendees and staff/volunteers passionate about rationality was a blast. It was especially great to see people involved in creating new rationality strategies, something I am engaged in myself in addition to popularizing rationality - it's really heartening to envision how the rationality movement is growing.
These benefits should resonate strongly with aspiring rationalists, but they are really important for EA participants as well. I think one of the best things that EA movement members can do is study rationality, and it's something we promote to the EA movement as part of InIn's work. What we offer are articles and videos, but coming to a CFAR workshop is a much more intense and cohesive way of getting these benefits. Imagine all the good you can do for the world if you are better at planning, organizing, and enacting EA-related tasks. Rationality is what has helped me and other InIn participants make the impact we have been able to make, and a number of EA movement members with rationality training have reported similar benefits. Remember, as an EA participant you can likely get a scholarship covering part or all of the regular $3,900 price of the workshop, as I did myself, and you are highly likely to be able to save more lives as a result of attending, even if you pay some costs upfront.
Hope these thoughts prove helpful to you all, and please contact me at gleb@intentionalinsights.org if you want to chat with me about my experience.
How to be skeptical
Community
The Center For Applied Rationality (CFAR) checklist is a heuristic for assessing the admissibility of one's own testimony.
What of the challenge of evaluating the testimony of others?
Slapping the label of a bias on a situation?
Arguing at the object level, by providing evidence to the contrary?
The latter risks a Gish gallop. For those who prefer to pick their battles, I committed my time to this post: a structural intervention into the information ecosystem.
We need not reinvent the wheel, for legal theorists have researched this issue for years, and practitioners and courts have identified heuristics useful to laypeople interested in this field.
Precedent
The Daubert standard provides a rule of evidence regarding the admissibility of expert witnesses' testimony during United States federal legal proceedings. Pursuant to this standard, a party may raise a Daubert motion, which is a special case of motion in limine raised before or during trial to exclude the presentation of unqualified evidence to the jury. The Daubert trilogy refers to the three United States Supreme Court cases that articulated the Daubert standard:
-https://en.wikipedia.org/wiki/Daubert_standard
Further reading on the case is available on Google Scholar.
Practice
How can this be applied in practice?
What is the first principle of skepticism? It's effectively synonymous with the word itself: 'question'.
What question? This isn't the 5 W's of primary school, after all.
To get the ball rolling, I have summarized critical questions from one reading here:
Issues to consider when contesting and evaluating expert opinion evidence
A. Relevance (on the voir dire)
I accept that you are highly qualified and have extensive experience, but how do we know that your level of performance regarding . . . [the task at hand — eg, voice comparison] is actually better than that of a lay person (or the jury)?
What independent evidence... [such as published studies of your technique and its accuracy] can you direct us to that would allow us to answer this question?
What independent evidence confirms that your technique works?
Do you participate in a blind proficiency testing program?
Given that you undertake blind proficiency exercises, are these exercises also given to lay persons to determine if there are significant differences in results, such that your asserted expertise can be supported?
B. Validation
Do you accept that techniques should be validated?
Can you direct us to specific studies that have validated the technique that you used?
What precisely did these studies assess (and is the technique being used in the same way in this case)?
Have you ever had your ability formally tested in conditions where the correct answer was known? (ie, not a previous investigation or trial)
Might different analysts using your technique produce different answers?
Has there been any variation in the result on any of the validation or proficiency tests you know of or participated in?
Can you direct us to the written standard or protocol used in your analysis?
Was it followed?
C. Limitations and errors
Could you explain the limitations of this technique?
Can you tell us about the error rate or potential sources of error associated with this technique?
Can you point to specific studies that provide an error rate or an estimation of an error rate for your technique?
How did you select what to examine?
Were there any differences observed when making your comparison . . . [eg, between two fingerprints], but which you ultimately discounted? On what basis were these discounted?
Could there be differences between the samples that you are unable to observe?
Might someone using the same technique come to a different conclusion?
Might someone using a different technique come to a different conclusion?
Did any of your colleagues disagree with you?
Did any express concerns about the quality of the sample, the results, or your interpretation?
Would some analysts be unwilling to analyse this sample (or produce such a confident opinion)?
...
D. Personal proficiency
...
Have you ever had your own ability... [doing the specific task/using the technique] tested in conditions where the correct answer was known?
If not, how can we be confident that you are proficient?
If so, can you provide independent empirical evidence of your performance?
E. Expressions of opinion
...
Can you explain how you selected the terminology used to express your opinion? Is it based on a scale or some calculation?
If so, how was the expression selected?
Would others analyzing the same material produce similar conclusions, and a similar strength of opinion? How do you know?
Is the use of this terminology derived from validation studies?
Did you report all of your results?
You would accept that forensic science results should generally be expressed in non-absolute terms?
More
For further reading, I recommend the seminal 1903 text on cross-examination, The Art of Cross-Examination.
The full text is available for free on Project Gutenberg.
Other countries use different standards, such as the Opinion Rule in Australia.
Forecasting and recursive inhibition within a decision cycle
When we anticipate the future, we gain the opportunity to inhibit behaviours which we anticipate will lead to counterfactual outcomes. Those of us with sufficiently low latencies in our decision cycles may recursively anticipate the consequences of counterfactuating (a neologism) interventions, and so recursively intervene against our own interventions.
This may be difficult for some. Try modelling that decision cycle as a nano-scale approximation of time travel. One relevant paradox from popular culture is the farther-future paradox described in the TV cartoon Family Guy.
Watch this clip: https://www.youtube.com/watch?v=4btAggXRB_Q
Relating the satire back to our abstraction of the decision cycle, one may ponder:
What is a satisfactory stopping rule for the far anticipation of self-referential consequence?
That is:
(1) what are the inherent harmful implications of inhibiting actions in and of themselves: stress?
(2) what are their inherent merits: self-determination?
and (3) what are the favourable and unfavourable consequences at x points into the future, given y points of self-reference at points z, a, b and c?
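As an illustration only (the payoff numbers and per-round cost below are invented, and this is my sketch rather than a solution from the post), one candidate stopping rule is to stop a further round of self-referential reconsideration once its marginal gain no longer covers its inherent cost (e.g. stress):

```python
def rounds_of_reconsideration(payoffs, reflection_cost):
    """payoffs[k]: estimated payoff of the plan after k rounds of
    self-referential reconsideration; reflection_cost: the inherent
    cost (e.g. stress) of each further round.
    Returns how many rounds to perform before acting."""
    best_k = 0
    for k in range(1, len(payoffs)):
        # Stopping rule: another round must pay for its own cost.
        if payoffs[k] - payoffs[best_k] <= reflection_cost:
            break
        best_k = k
    return best_k

# Payoffs improve quickly, then plateau; stop at the plateau.
print(rounds_of_reconsideration([0.0, 5.0, 7.0, 7.5, 7.6], 1.0))  # 2
```

This greedy rule deliberately ignores cases where payoffs dip and then rise again (inhibiting an inhibition), which is exactly the self-referential complication the questions above are asking about; it is a toy, not an answer.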
I see no ready solution to this problem in terms of human rationality, and I see no corresponding treatment of the problem in artificial intelligence, where it would also apply. Given the relevance to MIRI (since CFAR doesn't seem to work on open problems in the same way), I would also like to take this opportunity to open an experimental thread for the community to generate a list of "open problems" in human rationality that are otherwise scattered across the community blog and wiki.
[Link] 10 Tips from CFAR: My Business Insider article
My Business Insider article, titled "10 tips from a Silicon Valley bootcamp that aims to make smart, successful people more productive."
Speculative rationality skills and appropriable research or anecdote
Is rationality training in its infancy? I'd like to think so, given the paucity of novel, usable information produced by rationalists since the Sequences days. I like to model the rationalist body of knowledge as a superset of pertinent fields such as decision analysis, educational psychology and clinical psychology. This reductionist model enables rationalists to examine the validity of rationalist constructs while standing on the shoulders of giants.
CFAR's obscurantism (and subsequent price gouging) capitalises on our [fear of missing out](https://en.wikipedia.org/wiki/Fear_of_missing_out). They brand established techniques - mindfulness as 'againstness', reference class forecasting as 'hopping' - as if they were of their own genesis, spiting academic tradition and cultivating an insular community. In short, LessWrongers predictably flout [cooperative principles](https://en.wikipedia.org/wiki/Cooperative_principle).
This thread is to encourage you to speculate on potential rationality techniques underdetermined by existing research, which might be a useful area for rationalist individuals and organisations to explore. I feel this may be a better use of rationality training organisations' time than gatekeeping information.
To get this thread started, I've posted a speculative rationality skill I've been working on. I'd appreciate any comments about it or experiences with it. However, this thread is about working towards the generation of rationality skills more broadly.
Min/max goal factoring and belief mapping exercise
Edit 3: Removed description of previous edits and added the following:
This thread used to contain the description of a rationality exercise.
I have removed it and plan to rewrite it better.
I will repost it here, or delete this thread and repost in the discussion.
Thank you.
CFAR-run MIRI Summer Fellows program: July 7-26
CFAR will be running a three week summer program this July for MIRI, designed to increase participants' ability to do technical research into the superintelligence alignment problem.
The intent of the program is to boost participants as far as possible in four skills:
- The CFAR “applied rationality” skillset, including both what is taught at our intro workshops, and more advanced material from our alumni workshops;
- “Epistemic rationality as applied to the foundations of AI, and other philosophically tricky problems” -- i.e., the skillset taught in the core LW Sequences. (E.g.: reductionism; how to reason in contexts as confusing as anthropics without getting lost in words.)
- The long-term impacts of AI, and strategies for intervening (e.g., the content discussed in Nick Bostrom’s book Superintelligence).
- The basics of AI safety-relevant technical research. (Decision theory, anthropics, and similar; with folks trying their hand at doing actual research, and reflecting also on the cognitive habits involved.)
The program will be offered free to invited participants, and partial or full scholarships for travel expenses will be offered to those with exceptional financial need.
If you're interested (or possibly-interested), sign up for an admissions interview ASAP at this link (takes 2 minutes): http://rationality.org/miri-summer-fellows-2015/
Also, please forward this post, or the page itself, to anyone you think should come; the skills and talent that humanity brings to bear on the superintelligence alignment problem may determine our skill at navigating it, and sharing this opportunity with good potential contributors may be a high-leverage way to increase that talent.
Against the internal locus of control
What do you think about these pairs of statements?
- People's misfortunes result from the mistakes they make
- Many of the unhappy things in people's lives are partly due to bad luck
- In the long run, people get the respect they deserve in this world.
- Unfortunately, an individual's worth often passes unrecognized no matter how hard he tries.
- Becoming a success is a matter of hard work; luck has little or nothing to do with it.
- Getting a good job mainly depends on being in the right place at the right time.
They share a theme: the first statement in each pair suggests that an outcome (misfortune, respect, or a good job) is the result of a person's own action or volition. The second assigns the outcome to some external factor like bad luck.(1)
People who tend to think their own attitudes or efforts can control what happens to them are said to have an internal locus of control, those who don't, an external locus of control. (Call them 'internals' and 'externals' for short).
Internals seem to do better at life, pace obvious confounding: maybe instead of internals doing better by virtue of their internal locus of control, being successful inclines you to attribute success to internal factors and so become more internal, and vice versa if you fail.(2) If you don't think the relationship is wholly confounded, then there is some prudential benefit to becoming more internal.
Yet internal versus external is not just a matter of taste, but a factual claim about the world. Do people, in general, get what their actions deserve, or is it generally thanks to matters outside their control?
Why the external view is right
Here are some reasons in favour of an external view:(3)
- Global income inequality is marked (e.g. someone in the bottom 10% of the US population by income is still richer than two-thirds of the world population - more here). The main predictor of your income is your country of birth, which is thought to explain around 60% of the variance: not only more important than any other factor, but more important than all other factors put together.
- Of course, the 'remaining' 40% might not be solely internal factors either. Another external factor we could put in would be parental class. Include that, and the two factors explain 80% of variance in income.
- Even conditional on being born in the right country (and to the right class), success may still not be a matter of personal volition. One robust predictor of success (grades in school, job performance, income, and so on) is IQ. The precise determinants of IQ remain controversial, but it is known to be highly heritable, and the proposed 'non-genetic' factors (early childhood environment, intra-uterine environment, etc.) are similarly outside one's locus of control.
On cursory examination, the contours of how our lives turn out are set by factors outside our control, merely by where we are born and who our parents are. Even after this, various predictors, similarly outside (or mostly outside) our control, exert their effects on how our lives turn out: IQ is one, but we could throw in personality traits, mental health, height, attractiveness, etc.
So to 'What determined how I turned out, compared to everyone else on the planet?', the answer surely has to be primarily about external factors, with our internal drive or will relegated a long way down the list. Even for narrower questions, like 'What has made me turn out the way I am, versus all the other people who were likewise born in rich countries in comfortable circumstances?', it is still unclear whether the locus of control resides within our will: perhaps a combination of our IQ, height, gender, race, risk of mental illness, and so on will still do the bulk of the explanatory work.(4)
Bringing the true and the prudentially rational together again
If folks with an internal locus of control succeed more, yet the external view is generally closer to the truth of the matter, this is unfortunate. What is true and what is prudentially rational seem to diverge, such that it might be in your interests not to know the evidence supporting an external locus of control, as deluding yourself into an internal view would lead to greater success.
Yet it is generally better not to believe falsehoods. Further, the internal view may have some costs. One possibility is fueling a just-world fallacy: if one thinks that outcomes are generally internally controlled, then a corollary is that when bad things happen to someone or they fail at something, it was primarily their fault rather than their being a victim of circumstance.
So what next? Perhaps the right view is: although most important things are outside our control, not everything is. Insofar as we do the best with the things we can control, we make our lives go better. And the scope of internal factors - albeit conditional on being a rich westerner, etc. - may be quite large: it might determine whether you get through medical school, publish a paper, or put in enough work to do justice to your talents. All are worth doing.

Acknowledgements
Inspired by Amanda MacAskill's remarks, and in partial response to Peter McIntyre. Neither is responsible for what I've written, and neither the former's agreement nor the latter's disagreement with this post should be assumed.
1) Some ground-clearing: free will can begin to loom large here - after all, maybe my actions are just a result of my brain's particular physical state, and my brain's particular physical state at t depends on its state at t-1, and so on and so forth all the way to the big bang. If so, there is no 'internal willer' for my internal locus of control to reside in.
However, even if that is so, we can parse things in a compatibilist way: 'internal' factors are those which my choices can affect; external factors are those which my choices cannot affect. "Time spent training" is an internal factor as to how fast I can run, as (borrowing Hume), if I wanted to spend more time training, I could spend more time training, and vice versa. In contrast, "Hemiparesis secondary to birth injury" is an external factor, as I had no control over whether it happened to me, and no means of reversing it now. So the first set of answers imply support for the results of our choices being more important; whilst the second set assign more weight to things 'outside our control'.
2) In fairness, there's a pretty good story as to why there should be 'forward action': in the cases where outcome is a mix of 'luck' factors (which are a given to anyone), and 'volitional ones' (which are malleable), people inclined to think the internal ones matter a lot will work hard at them, and so will do better when this is mixed in with the external determinants.
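The 'forward action' story in footnote 2 can be illustrated with a toy simulation (the weights and effort levels below are invented for illustration): when an outcome mixes a luck term, which is a given, with a volitional term, agents who believe effort matters - and therefore exert more of it - do better on average, even when luck carries most of the weight.

```python
import random

random.seed(0)

def outcome(effort, luck_weight=0.7):
    """Outcome is a weighted mix of luck (a given) and effort (malleable)."""
    luck = random.gauss(0, 1)
    return luck_weight * luck + (1 - luck_weight) * effort

# Agents with an internal locus of control put in more effort.
internal = [outcome(effort=2.0) for _ in range(100_000)]
external = [outcome(effort=0.5) for _ in range(100_000)]

mean_internal = sum(internal) / len(internal)
mean_external = sum(external) / len(external)

# The gap is driven purely by effort: about (1 - 0.7) * (2.0 - 0.5) = 0.45.
print(mean_internal - mean_external)
```

Note that the gap persists however small the effort weight is, which is why belief in internal factors can pay off even in a luck-dominated world.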
3) This ignores edge cases where we can clearly see the external factors dominate - e.g. getting childhood leukaemia, getting struck by lightning etc. - I guess sensible proponents of an internal locus of control would say that there will be cases like this, but for most people, in most cases, their destiny is in their hands. Hence I focus on population level factors.
4) Ironically, one may wonder to what extent having an internal versus external view is itself an external factor.
An alarming fact about the anti-aging community
Past and Present
Ten years ago teenager me was hopeful. And stupid.
The world neglected aging as a disease, and Aubrey had barely started spreading memes - to the point that it was worth his while to let me work remotely to help the Methuselah Foundation. They had not even received that initial $1,000,000 donation from an anonymous donor. The Methuselah Prize was worth less than $400,000, if I remember correctly. Still, I was a believer.
Now we live in the age of Larry Page's Calico, with $100,000,000 aimed at the problem, besides many other amazing initiatives: the research paid for by the Life Extension Foundation and Bill Faloon, and scholars at top universities like Steve Garan and Kenneth Hayworth, fixing everything from our models of aging to plastination techniques. Yet I am much more skeptical now.
Individual risk
I am skeptical because I could not find a single individual who has already used a simple technique that could certainly save you many years of healthy life. I could not even find a single individual who looked into it and decided it wasn't worth it, or was too pricey, or something of that sort.
That technique is freezing some of your cells now.
Freezing cells is not a far-future hope; it is something that already exists and has been possible for decades. The reason you would want to freeze them, in case you haven't thought of it, is that they are getting older every day - so the ones you have now are the youngest ones you'll ever be able to use.
Using these cells to create new organs is not something that may help you only if medicine and technology keep progressing according to the law of accelerating returns for another 10 or 30 years. We already know how to make organs out of your cells, right now. Some organs live longer, some shorter, but it can be done - bladders, for instance - and it is being done.
Hope versus Reason
Now, you'd think that if there were an almost non-invasive technique, already shown to work in humans, that can preserve many years of your life and involves only a few trivial inconveniences - compared to changing your diet or exercising, for instance - the whole longevist/immortalist crowd would be lining up for it and keeping backup tissue samples all over the place.
Well, I've asked them. I've asked some of the adamant researchers, and I've asked the superwealthy; I've asked the cryonicists and the supplement gorgers; I've asked those who work on this eight hours a day, every day, and I've asked those who pay others to do so. I asked mostly for selfish reasons. I saw the TED talks by Juan Enriquez and Anthony Atala and thought: hey look, a clearly beneficial expected increase in life length, yay! Let me call someone who found this out before me - anyone, I'm probably the last one, silly me - and fix this.
I've asked them all, and I have nothing to show for it.
My takeaway lesson is: whatever it is that other people are doing to solve their own impending death, they are far from doing it rationally, and maybe most of the money and psychology involved in this whole business is about buying hope, not about staring into the void and finding out the best ways of dodging it. Maybe people are not in fact going to go all-in if the opportunity comes.
How to fix this?
Let me disclose first that I have no idea how to fix this problem. I don't mean the problem of getting all longevists to freeze their cells; I mean the problem of getting them to take information from the world of science and biomedicine and apply it to themselves. To become users of the technology they boast about. To behave rationally in a CFAR, or even homo economicus, sense.
I was hoping for a grandiose idea for this last paragraph, but it didn't come. I'll go with a quote from an emotional song we sang during last year's Secular Solstice celebration:
Do you realize? that everyone, you know, someday will die...
And instead of sending all your goodbyes
CFAR fundraiser far from filled; 4 days remaining
We're 4 days from the end of our matching fundraiser, and still only about a third of the way to our target (and to the point where pledged funds would cease being matched).
If you'd like to support the growth of rationality in the world, do please consider donating, or asking me about any questions/etc. you may have. I'd love to talk. I suspect funds donated to CFAR between now and Jan 31 are quite high-impact.
As a random bonus, I promise that if we meet the $120k matching challenge, I'll post at least two posts with some never-before-shared (on here) rationality techniques that we've been playing with around CFAR.
Harper's Magazine article on LW/MIRI/CFAR and Ethereum
Cover title: “Power and paranoia in Silicon Valley”; article title: “Come with us if you want to live: Among the apocalyptic libertarians of Silicon Valley” (mirrors: 1, 2, 3), by Sam Frank; Harper’s Magazine, January 2015, pp. 26-36 (~8,500 words). The beginning/ending are focused on Ethereum and Vitalik Buterin, so I'll excerpt the LW/MIRI/CFAR-focused middle:
…Blake Masters - the name was too perfect - had, obviously, dedicated himself to the command of self and universe. He did CrossFit and ate Bulletproof, a tech-world variant of the paleo diet. On his Tumblr’s About page, since rewritten, the anti-belief belief systems multiplied, hyperlinked to Wikipedia pages or to the confoundingly scholastic website Less Wrong: “Libertarian (and not convinced there’s irreconcilable fissure between deontological and consequentialist camps). Aspiring rationalist/Bayesian. Secularist/agnostic/ignostic . . . Hayekian. As important as what we know is what we don’t. Admittedly eccentric.” Then: “Really, really excited to be in Silicon Valley right now, working on fascinating stuff with an amazing team.” I was startled that all these negative ideologies could be condensed so easily into a positive worldview. …I saw the utopianism latent in capitalism - that, as Bernard Mandeville had it three centuries ago, it is a system that manufactures public benefit from private vice. I started CrossFit and began tinkering with my diet. I browsed venal tech-trade publications, and tried and failed to read Less Wrong, which was written as if for aliens.
…I left the auditorium of Alice Tully Hall. Bleary beside the silver coffee urn in the nearly empty lobby, I was buttonholed by a man whose name tag read MICHAEL VASSAR, METAMED research. He wore a black-and-white paisley shirt and a jacket that was slightly too big for him. “What did you think of that talk?” he asked, without introducing himself. “Disorganized, wasn’t it?” A theory of everything followed. Heroes like Elon and Peter (did I have to ask? Musk and Thiel). The relative abilities of physicists and biologists, their standard deviations calculated out loud. How exactly Vassar would save the world. His left eyelid twitched, his full face winced with effort as he told me about his “personal war against the universe.” My brain hurt. I backed away and headed home. But Vassar had spoken like no one I had ever met, and after Kurzweil’s keynote the next morning, I sought him out. He continued as if uninterrupted. Among the acolytes of eternal life, Vassar was an eschatologist. “There are all of these different countdowns going on,” he said. “There’s the countdown to the broad postmodern memeplex undermining our civilization and causing everything to break down, there’s the countdown to the broad modernist memeplex destroying our environment or killing everyone in a nuclear war, and there’s the countdown to the modernist civilization learning to critique itself fully and creating an artificial intelligence that it can’t control. There are so many different - on different time-scales - ways in which the self-modifying intelligent processes that we are embedded in undermine themselves. I’m trying to figure out ways of disentangling all of that. . . .I’m not sure that what I’m trying to do is as hard as founding the Roman Empire or the Catholic Church or something. But it’s harder than people’s normal big-picture ambitions, like making a billion dollars.” Vassar was thirty-four, one year older than I was. 
He had gone to college at seventeen, and had worked as an actuary, as a teacher, in nanotech, and in the Peace Corps. He’d founded a music-licensing start-up called Sir Groovy. Early in 2012, he had stepped down as president of the Singularity Institute for Artificial Intelligence, now called the Machine Intelligence Research Institute (MIRI), which was created by an autodidact named Eliezer Yudkowsky, who also started Less Wrong. Vassar had left to found MetaMed, a personalized-medicine company, with Jaan Tallinn of Skype and Kazaa, $500,000 from Peter Thiel, and a staff that included young rationalists who had cut their teeth arguing on Yudkowsky’s website. The idea behind MetaMed was to apply rationality to medicine - “rationality” here defined as the ability to properly research, weight, and synthesize the flawed medical information that exists in the world. Prices ranged from $25,000 for a literature review to a few hundred thousand for a personalized study. “We can save lots and lots and lots of lives,” Vassar said (if mostly moneyed ones at first). “But it’s the signal - it’s the ‘Hey! Reason works!’ - that matters. . . . It’s not really about medicine.” Our whole society was sick - root, branch, and memeplex - and rationality was the only cure. …I asked Vassar about his friend Yudkowsky. “He has worse aesthetics than I do,” he replied, “and is actually incomprehensibly smart.” We agreed to stay in touch.
One month later, I boarded a plane to San Francisco. I had spent the interim taking a second look at Less Wrong, trying to parse its lore and jargon: “scope insensitivity,” “ugh field,” “affective death spiral,” “typical mind fallacy,” “counterfactual mugging,” “Roko’s basilisk.” When I arrived at the MIRI offices in Berkeley, young men were sprawled on beanbags, surrounded by whiteboards half black with equations. I had come costumed in a Fermat’s Last Theorem T-shirt, a summary of the proof on the front and a bibliography on the back, printed for the number-theory camp I had attended at fifteen. Yudkowsky arrived late. He led me to an empty office where we sat down in mismatched chairs. He wore glasses, had a short, dark beard, and his heavy body seemed slightly alien to him. I asked what he was working on. “Should I assume that your shirt is an accurate reflection of your abilities,” he asked, “and start blabbing math at you?” Eight minutes of probability and game theory followed. Cogitating before me, he kept grimacing as if not quite in control of his face. “In the very long run, obviously, you want to solve all the problems associated with having a stable, self-improving, beneficial-slash-benevolent AI, and then you want to build one.” What happens if an artificial intelligence begins improving itself, changing its own source code, until it rapidly becomes - foom! is Yudkowsky’s preferred expression - orders of magnitude more intelligent than we are? A canonical thought experiment devised by Oxford philosopher Nick Bostrom in 2003 suggests that even a mundane, industrial sort of AI might kill us. Bostrom posited a “superintelligence whose top goal is the manufacturing of paper-clips.” For this AI, known fondly on Less Wrong as Clippy, self-improvement might entail rearranging the atoms in our bodies, and then in the universe - and so we, and everything else, end up as office supplies. 
Nothing so misanthropic as Skynet is required, only indifference to humanity. What is urgently needed, then, claims Yudkowsky, is an AI that shares our values and goals. This, in turn, requires a cadre of highly rational mathematicians, philosophers, and programmers to solve the problem of “friendly” AI - and, incidentally, the problem of a universal human ethics - before an indifferent, unfriendly AI escapes into the wild.
Among those who study artificial intelligence, there’s no consensus on either point: that an intelligence explosion is possible (rather than, for instance, a proliferation of weaker, more limited forms of AI) or that a heroic team of rationalists is the best defense in the event. That MIRI has as much support as it does (in 2012, the institute’s annual revenue broke $1 million for the first time) is a testament to Yudkowsky’s rhetorical ability as much as to any technical skill. Over the course of a decade, his writing, along with that of Bostrom and a handful of others, has impressed the dangers of unfriendly AI on a growing number of people in the tech world and beyond. In August, after reading Superintelligence, Bostrom’s new book, Elon Musk tweeted, “Hope we’re not just the biological boot loader for digital superintelligence. Unfortunately, that is increasingly probable.” In 2000, when Yudkowsky was twenty, he founded the Singularity Institute with the support of a few people he’d met at the Foresight Institute, a Palo Alto nanotech think tank. He had already written papers on “The Plan to Singularity” and “Coding a Transhuman AI,” and posted an autobiography on his website, since removed, called “Eliezer, the Person.” It recounted a breakdown of will when he was eleven and a half: “I can’t do anything. That’s the phrase I used then.” He dropped out before high school and taught himself a mess of evolutionary psychology and cognitive science. He began to “neuro-hack” himself, systematizing his introspection to evade his cognitive quirks. Yudkowsky believed he could hasten the singularity by twenty years, creating a superhuman intelligence and saving humankind in the process. He met Thiel at a Foresight Institute dinner in 2005 and invited him to speak at the first annual Singularity Summit. The institute’s paid staff grew. 
In 2006, Yudkowsky began writing a hydra-headed series of blog posts: science-fictionish parables, thought experiments, and explainers encompassing cognitive biases, self-improvement, and many-worlds quantum mechanics that funneled lay readers into his theory of friendly AI. Rationality workshops and Meetups began soon after. In 2009, the blog posts became what he called Sequences on a new website: Less Wrong. The next year, Yudkowsky began publishing Harry Potter and the Methods of Rationality at
fanfiction.net. The Harry Potter category is the site’s most popular, with almost 700,000 stories; of these, HPMoR is the most reviewed and the second-most favorited. The last comment that the programmer and activist Aaron Swartz left on Reddit before his suicide in 2013 was on /r/hpmor. In Yudkowsky’s telling, Harry is not only a magician but also a scientist, and he needs just one school year to accomplish what takes canon-Harry seven. HPMoR is serialized in arcs, like a TV show, and runs to a few thousand pages when printed; the book is still unfinished. Yudkowsky and I were talking about literature, and Swartz, when a college student wandered in. Would Eliezer sign his copy of HPMoR? “But you have to, like, write something,” he said. “You have to write, ‘I am who I am.’ So, ‘I am who I am’ and then sign it.” “Alrighty,” Yudkowsky said, signed, continued. “Have you actually read Methods of Rationality at all?” he asked me. “I take it not.” (I’d been found out.) “I don’t know what sort of a deadline you’re on, but you might consider taking a look at that.” (I had taken a look, and hated the little I’d managed.) “It has a legendary nerd-sniping effect on some people, so be warned. That is, it causes you to read it for sixty hours straight.” The nerd-sniping effect is real enough. Of the 1,636 people who responded to a 2013 survey of Less Wrong’s readers, one quarter had found the site thanks to HPMoR, and many more had read the book. Their average age was 27.4, their average IQ 138.2. Men made up 88.8% of respondents; 78.7% were straight, 1.5% transgender, 54.7% American, 89.3% atheist or agnostic. The catastrophes they thought most likely to wipe out at least 90% of humanity before the year 2100 were, in descending order, pandemic (bioengineered), environmental collapse, unfriendly AI, nuclear war, pandemic (natural), economic/political collapse, asteroid, nanotech/gray goo.
Forty-two people, 2.6%, called themselves futarchists, after an idea from Robin Hanson, an economist and Yudkowsky’s former coblogger, for reengineering democracy into a set of prediction markets in which speculators can bet on the best policies. Forty people called themselves reactionaries, a grab bag of former libertarians, ethno-nationalists, Social Darwinists, scientific racists, patriarchists, pickup artists, and atavistic “traditionalists,” who Internet-argue about antidemocratic futures, plumping variously for fascism or monarchism or corporatism or rule by an all-powerful, gold-seeking alien named Fnargl who will free the markets and stabilize everything else. At the bottom of each year’s list are suggestive statistical irrelevancies: “every optimizing system’s a dictator and i’m not sure which one i want in charge,” “Autocracy (important: myself as autocrat),” “Bayesian (aspiring) Rationalist. Technocratic. Human-centric Extropian Coherent Extrapolated Volition.” “Bayesian” refers to Bayes’s Theorem, a mathematical formula that describes uncertainty in probabilistic terms, telling you how much to update your beliefs when given new information. This is a formalization and calibration of the way we operate naturally, but “Bayesian” has a special status in the rationalist community because it’s the least imperfect way to think. “Extropy,” the antonym of “entropy,” is a decades-old doctrine of continuous human improvement, and “coherent extrapolated volition” is one of Yudkowsky’s pet concepts for friendly artificial intelligence. Rather than our having to solve moral philosophy in order to arrive at a complete human goal structure, C.E.V. would computationally simulate eons of moral progress, like some kind of Whiggish Pangloss machine.
As Yudkowsky wrote in 2004, “In poetic terms, our coherent extrapolated volition is our wish if we knew more, thought faster, were more the people we wished we were, had grown up farther together.” Yet can even a single human’s volition cohere or compute in this way, let alone humanity’s? We stood up to leave the room. Yudkowsky stopped me and said I might want to turn my recorder on again; he had a final thought. “We’re part of the continuation of the Enlightenment, the Old Enlightenment. This is the New Enlightenment,” he said. “Old project’s finished. We actually have science now, now we have the next part of the Enlightenment project.”
In 2013, the Singularity Institute changed its name to the Machine Intelligence Research Institute. Whereas MIRI aims to ensure human-friendly artificial intelligence, an associated program, the Center for Applied Rationality, helps humans optimize their own minds, in accordance with Bayes’s Theorem. The day after I met Yudkowsky, I returned to Berkeley for one of CFAR’s long-weekend workshops. The color scheme at the Rose Garden Inn was red and green, and everything was brocaded. The attendees were mostly in their twenties: mathematicians, software engineers, quants, a scientist studying soot, employees of Google and Facebook, an eighteen-year-old Thiel Fellow who’d been paid $100,000 to leave Boston College and start a company, professional atheists, a Mormon turned atheist, an atheist turned Catholic, an Objectivist who was photographed at the premiere of Atlas Shrugged II: The Strike. There were about three men for every woman. At the Friday-night meet and greet, I talked with Benja, a German who was studying math and behavioral biology at the University of Bristol, whom I had spotted at MIRI the day before. He was in his early thirties and quite tall, with bad posture and a ponytail past his shoulders. He wore socks with sandals, and worried a paper cup as we talked. Benja had felt death was terrible since he was a small child, and wanted his aging parents to sign up for cryonics, if he could figure out how to pay for it on a grad-student stipend. He was unsure about the risks from unfriendly AI - “There is a part of my brain,” he said, “that sort of goes, like, ‘This is crazy talk; that’s not going to happen’” - but the probabilities had persuaded him. He said there was only about a 30% chance that we could make it another century without an intelligence explosion. He was at CFAR to stop procrastinating. Julia Galef, CFAR’s president and cofounder, began a session on Saturday morning with the first of many brain-as-computer metaphors. 
We are “running rationality on human hardware,” she said, not supercomputers, so the goal was to become incrementally more self-reflective and Bayesian: not perfectly rational agents, but “agent-y.” The workshop’s classes lasted six or so hours a day; activities and conversations went well into the night. We got a condensed treatment of contemporary neuroscience that focused on hacking our brains’ various systems and modules, and attended sessions on habit training, urge propagation, and delegating to future selves. We heard a lot about Daniel Kahneman, the Nobel Prize-winning psychologist whose work on cognitive heuristics and biases demonstrated many of the ways we are irrational. Geoff Anders, the founder of Leverage Research, a “meta-level nonprofit” funded by Thiel, taught a class on goal factoring, a process of introspection that, after many tens of hours, maps out every one of your goals down to root-level motivations - the unchangeable “intrinsic goods,” around which you can rebuild your life. Goal factoring is an application of Connection Theory, Anders’s model of human psychology, which he developed as a Rutgers philosophy student disserting on Descartes, and Connection Theory is just the start of a universal renovation. Leverage Research has a master plan that, in the most recent public version, consists of nearly 300 steps. It begins from first principles and scales up from there: “Initiate a philosophical investigation of philosophical method”; “Discover a sufficiently good philosophical method”; have 2,000-plus “actively and stably benevolent people successfully seek enough power to be able to stably guide the world”; “People achieve their ultimate goals as far as possible without harming others”; “We have an optimal world”; “Done.” On Saturday night, Anders left the Rose Garden Inn early to supervise a polyphasic-sleep experiment that some Leverage staff members were conducting on themselves.
It was a schedule called the Everyman 3, which compresses sleep into three twenty-minute REM naps each day and three hours at night for slow-wave. Anders was already polyphasic himself. Operating by the lights of his own best practices, goal-factored, coherent, and connected, he was able to work 105 hours a week on world optimization. For the rest of us, for me, these were distant aspirations. We were nerdy and unperfected. There was intense discussion at every free moment, and a genuine interest in new ideas, if especially in testable, verifiable ones. There was joy in meeting peers after years of isolation. CFAR was also insular, overhygienic, and witheringly focused on productivity. Almost everyone found politics to be tribal and viscerally upsetting. Discussions quickly turned back to philosophy and math. By Monday afternoon, things were wrapping up. Andrew Critch, a CFAR cofounder, gave a final speech in the lounge: “Remember how you got started on this path. Think about what was the time for you when you first asked yourself, ‘How do I work?’ and ‘How do I want to work?’ and ‘What can I do about that?’ . . . Think about how many people throughout history could have had that moment and not been able to do anything about it because they didn’t know the stuff we do now. I find this very upsetting to think about. It could have been really hard. A lot harder.” He was crying. “I kind of want to be grateful that we’re now, and we can share this knowledge and stand on the shoulders of giants like Daniel Kahneman . . . I just want to be grateful for that. . . . And because of those giants, the kinds of conversations we can have here now, with, like, psychology and, like, algorithms in the same paragraph, to me it feels like a new frontier. . . . Be explorers; take advantage of this vast new landscape that’s been opened up to us in this time and this place; and bear the torch of applied rationality like brave explorers. 
And then, like, keep in touch by email.” The workshop attendees put giant Post-its on the walls expressing the lessons they hoped to take with them. A blue one read RATIONALITY IS SYSTEMATIZED WINNING. Above it, in pink: THERE ARE OTHER PEOPLE WHO THINK LIKE ME. I AM NOT ALONE.
That night, there was a party. Alumni were invited. Networking was encouraged. Post-its proliferated; one, by the beer cooler, read SLIGHTLY ADDICTIVE. SLIGHTLY MIND-ALTERING. Another, a few feet to the right, over a double stack of bound copies of Harry Potter and the Methods of Rationality: VERY ADDICTIVE. VERY MIND-ALTERING. I talked to one of my roommates, a Google scientist who worked on neural nets. The CFAR workshop was just a whim to him, a tourist weekend. “They’re the nicest people you’d ever meet,” he said, but then he qualified the compliment. “Look around. If they were effective, rational people, would they be here? Something a little weird, no?” I walked outside for air. Michael Vassar, in a clinging red sweater, was talking to an actuary from Florida. They discussed timeless decision theory (approximately: intelligent agents should make decisions on the basis of the futures, or possible worlds, that they predict their decisions will create) and the simulation argument (essentially: we’re living in one), which Vassar traced to Schopenhauer. He recited lines from Kipling’s “If-” in no particular order and advised the actuary on how to change his life: Become a pro poker player with the $100k he had in the bank, then hit the Magic: The Gathering pro circuit; make more money; develop more rationality skills; launch the first Costco in Northern Europe. I asked Vassar what was happening at MetaMed. He told me that he was raising money, and was in discussions with a big HMO. He wanted to show up Peter Thiel for not investing more than $500,000. “I’m basically hoping that I can run the largest convertible-debt offering in the history of finance, and I think it’s kind of reasonable,” he said. “I like Peter. I just would like him to notice that he made a mistake . . . I imagine a hundred million or a billion will cause him to notice . . . I’d like to have a pi-billion-dollar valuation.” I wondered whether Vassar was drunk. 
He was about to drive one of his coworkers, a young woman named Alyssa, home, and he asked whether I would join them. I sat silently in the back of his musty BMW as they talked about potential investors and hires. Vassar almost ran a red light. After Alyssa got out, I rode shotgun, and we headed back to the hotel.
It was getting late. I asked him about the rationalist community. Were they really going to save the world? From what? “Imagine there is a set of skills,” he said. “There is a myth that they are possessed by the whole population, and there is a cynical myth that they’re possessed by 10% of the population. They’ve actually been wiped out in all but about one person in three thousand.” It is important, Vassar said, that his people, “the fragments of the world,” lead the way during “the fairly predictable, fairly total cultural transition that will predictably take place between 2020 and 2035 or so.” We pulled up outside the Rose Garden Inn. He continued: “You have these weird phenomena like Occupy where people are protesting with no goals, no theory of how the world is, around which they can structure a protest. Basically this incredibly, weirdly, thoroughly disempowered group of people will have to inherit the power of the world anyway, because sooner or later everyone older is going to be too old and too technologically obsolete and too bankrupt. The old institutions may largely break down or they may be handed over, but either way they can’t just freeze. These people are going to be in charge, and it would be helpful if they, as they come into their own, crystallize an identity that contains certain cultural strengths like argument and reason.” I didn’t argue with him, except to press, gently, on his particular form of elitism. His rationalism seemed so limited to me, so incomplete. “It is unfortunate,” he said, “that we are in a situation where our cultural heritage is possessed only by people who are extremely unappealing to most of the population.” That hadn’t been what I’d meant. I had meant rationalism as itself a failure of the imagination. “The current ecosystem is so totally fucked up,” Vassar said. 
“But if you have conversations here”-he gestured at the hotel-“people change their mind and learn and update and change their behaviors in response to the things they say and learn. That never happens anywhere else.” In a hallway of the Rose Garden Inn, a former high-frequency trader started arguing with Vassar and Anna Salamon, CFAR’s executive director, about whether people optimize for hedons or utilons or neither, about mountain climbers and other high-end masochists, about whether world happiness is currently net positive or negative, increasing or decreasing. Vassar was eating and drinking everything within reach. My recording ends with someone saying, “I just heard ‘hedons’ and then was going to ask whether anyone wants to get high,” and Vassar replying, “Ah, that’s a good point.” Other voices: “When in California . . .” “We are in California, yes.”
…Back on the East Coast, summer turned into fall, and I took another shot at reading Yudkowsky’s Harry Potter fanfic. It’s not what I would call a novel, exactly; rather, an unending, self-satisfied parable about rationality and transhumanism, with jokes.
…I flew back to San Francisco, and my friend Courtney and I drove to a cul-de-sac in Atherton, at the end of which sat the promised mansion. It had been repurposed as cohousing for children who were trying to build the future: start-up founders, singularitarians, a teenage venture capitalist. The woman who coined the term “open source” was there, along with a Less Wronger and Thiel Capital employee who had renamed himself Eden. The Day of the Idealist was a day for self-actualization and networking, like the CFAR workshop without the rigor. We were to set “mega goals” and pick a “core good” to build on in the coming year. Everyone was a capitalist; everyone was postpolitical. I squabbled with a young man in a Tesla jacket about anti-Google activism. No one has a right to housing, he said; programmers are the people who matter; the protesters’ antagonistic tactics had totally discredited them.
…Thiel and Vassar and Yudkowsky, for all their far-out rhetoric, take it on faith that corporate capitalism, unchecked just a little longer, will bring about this era of widespread abundance. Progress, Thiel thinks, is threatened mostly by the political power of what he calls the “unthinking demos.”
Pointer thanks to /u/Vulture.
My experience of the recent CFAR workshop
---
I just got home from a four-day rationality workshop in England that was organized by the Center for Applied Rationality (CFAR). It covered a lot of content, but if I had to choose a single theme that united most of it, it was listening to your emotions.
That might sound like a weird focus for a rationality workshop, but cognitive science has shown that the intuitive and emotional part of the mind (“System 1”) is both in charge of most of our behavior and also carries out a great deal of valuable information-processing of its own (it’s great at pattern-matching, for example). Much of the workshop material was aimed at helping people reach a greater harmony between their System 1 and their verbal, logical System 2. Many motivational troubles come from the goals of the two systems being somehow at odds with each other, and we were taught how to get our two systems into a better dialogue, harmonizing their desires and making it easier for information to cross from one system to the other and back.
To give a more concrete example, there was the technique of goal factoring. You take a behavior that you often do but aren’t sure why, or which you feel might be wasted time. Suppose that you spend a lot of time answering e-mails that aren’t actually very important. You start by asking yourself: what’s good about this activity, that makes me do it? Then you try to listen to your feelings in response to that question, and write down what you perceive. Maybe you conclude that it makes you feel productive, and it gives you a break from tasks that require more energy to do.
Next you look at the things that you came up with, and consider whether there’s a better way to accomplish them. There are two possible outcomes here. Either you conclude that the behavior is an important and valuable one after all, meaning that you can now be more motivated to do it. Alternatively, you find that there would be better ways of accomplishing all the goals that the behavior was aiming for. Maybe taking a walk would make for a better break, and answering more urgent e-mails would provide more value. If you were previously using two hours per day on the unimportant e-mails, possibly you could now achieve more in terms of both relaxation and actual productivity by spending an hour on a walk and an hour on the important e-mails.
At this point, you consider your new plan, and again ask yourself: does this feel right? Is this motivating? Are there any slight pangs of regret about giving up my old behavior? If you still don’t want to shift your behavior, chances are that the old behavior serves some motive you have missed, and the feelings of productivity and relaxation aren’t quite enough to cover it. In that case, go back to the step of listing motives.
Or, if you feel happy and content about the new direction that you’ve chosen, victory!
Notice how this technique is all about moving information from one system to another. System 2 notices that you’re doing something but it isn’t sure why that is, so it asks System 1 for the reasons. System 1 answers, “here’s what I’m trying to do for us, what do you think?” Then System 2 does what it’s best at, taking an analytic approach and possibly coming up with better ways of achieving the different motives. Then it gives that alternative approach back to System 1 and asks, would this work? Would this give us everything that we want? If System 1 says no, System 2 gets back to work, and the dialogue continues until both are happy.
Again, I emphasize the collaborative aspect between the two systems. They’re allies working for common goals, not enemies. Too many people tend towards one of two extremes: either thinking that their emotions are stupid and something to suppress, or completely disdaining the use of logical analysis. Both extremes miss out on the strengths of the system that is neglected, and make it unlikely for the person to get everything that they want.
As I was heading back from the workshop, I considered doing something that I noticed feeling uncomfortable about. Previous meditation experience had already made me more likely to just attend to the discomfort rather than trying to push it away, but inspired by the workshop, I went a bit further. I took the discomfort, considered what my System 1 might be trying to warn me about, and concluded that it might be better to err on the side of caution this time around. Finally – and this wasn’t a thing from the workshop, it was something I invented on the spot – I summoned a feeling of gratitude and thanked my System 1 for having been alert and giving me the information. That might have been a little overblown, since neither system should actually be sentient by itself, but it still felt like a good mindset to cultivate.
Although it was never mentioned in the workshop, what comes to mind is the concept of wu-wei from Chinese philosophy, a state of “effortless doing” where all of your desires are perfectly aligned and everything comes naturally. In the ideal form, you never need to force yourself to do something you don’t want to do, or to expend willpower on an unpleasant task. Either you want to do something and do it, or you don’t want to do it, and don’t.
A large number of the workshop’s classes – goal factoring, aversion factoring and calibration, urge propagation, comfort zone expansion, inner simulation, making hard decisions, Hamming questions, againstness – were aimed at more or less this. Find out what System 1 wants, find out what System 2 wants, dialogue, aim for a harmonious state between the two. Then there were a smaller number of other classes that might be summarized as being about problem-solving in general.
The classes about the different techniques were interspersed with “debugging sessions” of various kinds. At the beginning of the workshop, we listed different bugs in our lives – anything about our lives that we weren’t happy with – the suggested example bugs being things like “every time I talk to so-and-so I end up in an argument”, “I think that I ‘should’ do something but don’t really want to”, and “I’m working on my dissertation and everything is going fine – but when people ask me why I’m doing a PhD, I have a hard time remembering why I wanted to”. After we’d had a class or a few, we’d apply the techniques we’d learned to solving those bugs, either individually, in pairs, or in small groups with a staff member or volunteer TA assisting us. Then a few more classes on techniques and more debugging, classes and debugging, and so on.
The debugging sessions were interesting. Often when you ask someone for help on something, they will answer with direct object-level suggestions – if your problem is that you’re underweight and you would like to gain some weight, try this or that. Here, the staff and TAs would eventually get to the object-level advice as well, but first they would ask – why don’t you want to be underweight? Okay, you say that you’re not completely sure, but based on the other things that you said, here’s a stupid and quite certainly wrong theory of what your underlying reasons might be; how does that theory feel? Okay, you said that it’s mostly on the right track, so now tell me what’s wrong with it. If you feel that gaining weight would make you more attractive, do you feel that this is the most effective way of achieving that?
Only after you and the facilitator had reached some kind of consensus on why you thought that something was a bug, and made sure that solving the problem you were discussing was actually the best way to address those reasons, would it be time for the more direct advice.
At first, I had felt that I didn’t have very many bugs to address, and that I had mostly gotten reasonable advice for them that I might try. But then the workshop continued, and there were more debugging sessions, and I had to keep coming up with bugs. And then, under the gentle poking of others, I started finding the underlying, deep-seated problems, and some things that had been motivating my actions for the last several months without me always fully realizing it. At the end, when I looked at my initial list of bugs that I’d come up with in the beginning, most of the first items on the list looked hopelessly shallow compared to the later ones.
Often in life you feel that your problems are silly, and that you are affected by small stupid things that “shouldn’t” be a problem. There was none of that at the workshop: it was tacitly acknowledged that being unreasonably hindered by “stupid” problems is just something that brains tend to do. Valentine, one of the staff members, gave a powerful speech about “alienated birthrights” – things that all human beings should be capable of engaging in and enjoying, but which have been taken from people because they have internalized beliefs and identities that say things like “I cannot do that” or “I am bad at that”. Things like singing, dancing, athletics, mathematics, romantic relationships, actually understanding the world, heroism, tackling challenging problems. To use his analogy, we might not be good at these things at first, and may have to grow into them and master them the way that a toddler grows to master her body. And like a toddler who’s taking her early steps, we may flail around and look silly when we first start doing them, but these are capacities that – barring any actual disabilities – are a part of our birthright as human beings, which anyone can ultimately learn to master.
Then there were the people, and the general atmosphere of the workshop. People were intelligent, open, and motivated to work on their problems, help each other, and grow as human beings. After a long, cognitively and emotionally exhausting day at the workshop, people would then shift to entertainment ranging from wrestling to telling funny stories of their lives to Magic: the Gathering. (The game of “bunny” was an actual scheduled event on the official agenda.) And just plain talk with each other, in a supportive, non-judgemental atmosphere. It was the people and the atmosphere that made me the most reluctant to leave, and I miss them already.
Would I recommend CFAR’s workshops to others? Although my above description may sound rather gushingly positive, my answer still needs to be a qualified “mmmaybe”. The full price tag is quite hefty, though financial aid is available and I personally got a very substantial scholarship, with the agreement that I would pay it at a later time when I could actually afford it.
Still, the biggest question is, will the changes from the workshop stick? I feel like I have gained a valuable new perspective on emotions, a number of useful techniques, made new friends, strengthened my belief that I can do the things that I really set my mind on, and refined the ways by which I think of the world and any problems that I might have – but aside from the new friends, all of that will be worthless if it fades away in a week. If it does, I would have to judge even my steeply discounted price as “not worth it”. That said, the workshops do have a money-back guarantee if you’re unhappy with the results, so if it really feels like it wasn’t worth it, I can simply choose to not pay. And if all the new things do end up sticking, it might still turn out that it would have been worth paying even the full, non-discounted price.
CFAR does have a few ways by which they try to make the things stick. There will be Skype follow-ups with their staff, for talking about how things have been going since the workshop. There is a mailing list for workshop alumni, and the occasional events, though the physical events are very US-centric (and in particular, San Francisco Bay Area-centric).
The techniques that we were taught are still all more or less experimental, and are being constantly refined and revised according to people’s experiences. I have already been thinking of a new skill that I had been playing with for a while before the workshop, and which has a bit of that ”CFAR feel” – I will aim to have it written up soon and sent to the others, and maybe it will eventually make its way to the curriculum of a future workshop. That should help keep me engaged as well.
We shall see. Until then, as they say at CFAR – to victory!
The January 2013 CFAR workshop: one-year retrospective
About a year ago, I attended my first CFAR workshop and wrote a post about it here. I mentioned in that post that it was too soon for me to tell if the workshop would have a large positive impact on my life. In the comments to that post, I was asked to follow up on that post in a year to better evaluate that impact. So here we are!
Very short summary: overall I think the workshop had a large and persistent positive impact on my life.
Important caveat
However, anyone using this post to evaluate the value of going to a CFAR workshop themselves should be aware that I'm local to Berkeley and have had many opportunities to stay connected to CFAR and the rationalist community. More specifically, in addition to the January workshop, I also
- visited the March workshop (and possibly others),
- attended various social events held by members of the community,
- taught at the July workshop, and
- taught at SPARC.
These experiences all helped me digest and reinforce the workshop material (which was also improving over time), and a typical workshop participant might not have these advantages.
Answering a question
pewpewlasergun wanted me to answer the following question:
I'd like to know how many techniques you were taught at the meetup you still use regularly. Also which has had the largest effect on your life.
The short answer is: in some sense very few, but a lot of the value I got out of attending the workshop didn't come from specific techniques.
In more detail: to be honest, many of the specific techniques are kind of a chore to use (at least as of January 2013). I experimented with a good number of them in the months after the workshop, and most of them haven't stuck (but that isn't so bad; the cost of trying a technique and finding that it doesn't work for you is low, while the benefit of trying a technique and finding that it does work for you can be quite high!). One that has stuck is the idea of a next action, which I've found incredibly useful. Next actions are the things that to-do list items should be, say in the context of using Remember The Milk. Many to-do list items you might be tempted to write down are difficult to actually do because they're either too vague or too big and hence trigger ugh fields. For example, you might have an item like
- Do my taxes
that you don't get around to until right before you have to because you have an ugh field around doing your taxes. This item is both too vague and too big: instead of writing this down, write down the next physical action you need to take to make progress on this item, which might be something more like
- Find tax forms and put them on desk
which is both concrete and small. Thinking in terms of next actions has been a huge upgrade to my GTD system (as was Workflowy, which I also started using because of the workshop) and I do it constantly.
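The next-action idea can be sketched in a few lines of Python. This is purely an illustration of my own (the names and the `Task` type are not from any CFAR material or GTD tool): each vague outcome is paired with a concrete next physical step, and only the latter goes on the to-do list.

```python
from dataclasses import dataclass

@dataclass
class Task:
    """A to-do item paired with its concrete next physical action."""
    outcome: str      # the vague, often ugh-field-inducing goal
    next_action: str  # a small, physical, immediately doable step

def next_actions(tasks):
    """What actually belongs on the to-do list: only the concrete steps."""
    return [t.next_action for t in tasks]

taxes = Task(outcome="Do my taxes",
             next_action="Find tax forms and put them on desk")
print(next_actions([taxes]))  # ['Find tax forms and put them on desk']
```

Once the tax forms are on the desk, the item is replaced with whatever the new next physical action is, and the vague "Do my taxes" never has to appear on the list at all.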
But as I mentioned, a lot of the value I got out of attending the workshop was not from specific techniques. Much of the value comes from spending time with the workshop instructors and participants, which had effects that I find hard to summarize, but I'll try to describe some of them below:
Emotional attitudes
The workshop readjusted my emotional attitudes towards several things for the better, and at several meta levels. For example, a short conversation with a workshop alum completely readjusted my emotional attitude towards both nutrition and exercise, and I started paying more attention to what I ate and going to the gym (albeit sporadically) for the first time in my life not long afterwards. I lost about 15 pounds this way (mostly from the eating part, not the gym part, I think).
At a higher meta level, I did a fair amount of experimenting with various lifestyle changes (cold showers, not shampooing) after the workshop and overall they had the effect of readjusting my emotional attitude towards change. I find it generally easier to change my behavior than I used to because I've had a lot of practice at it lately, and am more enthusiastic about the prospect of such changes.
(Incidentally, I think emotional attitude adjustment is an underrated component of causing people to change their behavior, at least here on LW.)
Using all of my strength
The workshop is the first place I really understood, on a gut level, that I could use my brain to think about something other than math. It sounds silly when I phrase it like that, but at some point in the past I had incorporated into my identity that I was good at math but absentminded and silly about real-world matters, and I used it as an excuse not to fully engage intellectually with anything that wasn't math, especially anything practical. One way or another the workshop helped me realize this, and I stopped thinking this way.
The result is that I constantly apply optimization power to situations I wouldn't have even tried to apply optimization power to before. For example, today I was trying to figure out why the water in my bathroom sink was draining so slowly. At first I thought it was because the strainer had become clogged with gunk, so I cleaned the strainer, but then I found out that even with the strainer removed the water was still draining slowly. In the past I might've given up here. Instead I looked around for something that would fit farther into the sink than my fingers and saw the handle of my plunger. I pumped the handle into the sink a few times and some extra gunk I hadn't known was there came out. The sink is fine now. (This might seem small to people who are more domestically talented than me, but trust me when I say I wasn't doing stuff like this before last year.)
Reflection and repair
Thanks to the workshop, my GTD system is now robust enough to consistently enable me to reflect on and repair my life (including my GTD system). For example, I'm quicker to attempt to deal with minor medical problems I have than I used to be. I also think more often about what I'm doing and whether I could be doing something better. In this regard I pay a lot of attention in particular to what habits I'm forming, although I don't use the specific techniques in the relevant CFAR unit.
For example, at some point I had recorded in RTM that I was frustrated by the sensation of hours going by without remembering how I had spent them (usually because I was mindlessly browsing the internet). In response, I started keeping a record of what I was doing every half hour and categorizing each hour according to a combination of how productively and how intentionally I spent it (in the first iteration it was just how productively I spent it, but I found that this was making me feel too guilty about relaxing). For example:
- a half-hour intentionally spent reading a paper is marked green.
- a half-hour half-spent writing up solutions to a problem set and half-spent on Facebook is marked yellow.
- a half-hour intentionally spent playing a video game is marked with no color.
- a half-hour mindlessly browsing the internet when I had intended to do work is marked red.
The act of doing this every half hour itself helps make me more mindful about how I spend my time, but having a record of how I spend my time has also helped me notice interesting things, like how less of my time is under my direct control than I had thought (but instead is taken up by classes, commuting, eating, etc.). It's also easier for me to get into a success spiral when I see a lot of green.
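The half-hour coloring scheme can be written as a small function. This is a sketch of my own; the numeric thresholds are my guesses, since the post only gives four examples rather than exact rules.

```python
def color(productive_fraction, intentional):
    """Label a half-hour block by productivity and intentionality.

    productive_fraction: rough share of the half hour spent on work (0.0-1.0).
    intentional: whether the time was spent the way it was intended.
    Thresholds are illustrative assumptions, not from the post.
    """
    if productive_fraction >= 0.75:
        return "green" if intentional else "yellow"
    if productive_fraction >= 0.25:
        return "yellow"                      # e.g. half problem set, half Facebook
    return "none" if intentional else "red"  # deliberate play vs. mindless browsing

# The four examples from the post:
print(color(1.0, True))   # intentionally reading a paper -> green
print(color(0.5, True))   # half work, half Facebook -> yellow
print(color(0.0, True))   # intentionally playing a video game -> none
print(color(0.0, False))  # mindless browsing instead of intended work -> red
```

The point of making the rule explicit is only that the same block always gets the same label, so trends in the log (like a run of red evenings) are easy to spot.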
Stimulation
Being around workshop instructors and participants is consistently intellectually stimulating. I don't have a tactful way of saying what I'm about to say next, but: two effects of this are that I think more interesting thoughts than I used to and also that I'm funnier than I used to be. (I realize that these are both hard to quantify.)
etc.
I worry that I haven't given a complete picture here, but hopefully anything I've left out will be brought up in the comments one way or another. (Edit: this totally happened! Please read Anna Salamon's comment below.)
Takeaway for prospective workshop attendees
I'm not actually sure what you should take away from all this if your goal is to figure out whether you should attend a workshop yourself. My thoughts are roughly this: I think attending a workshop is potentially high-value and therefore that even talking to CFAR about any questions you might have is potentially high-value, in addition to being relatively low-cost. If you think there's even a small chance you could get a lot of value out of attending a workshop I recommend that you at least take that one step.
CFAR is looking for a videographer for next Wednesday
Hi all, CFAR is looking for a videographer in the Bay Area to shoot and edit a 1-minute video introducing us. Do you know anyone?
Developmental Thinking Shout-out to CFAR
Preamble
Before I make my main point, I want to acknowledge that curriculum development is hard. It's even harder when you're trying to teach the unteachable. And it's even harder when you're in the process of bootstrapping. I am aware of the Kahneman inside/outside curriculum design story. And, I myself have taught 200+ hours of my own computer science curricula to middle-school students. So this "open letter" is not some sort of criticism of CFAR's curriculum; it's a "Hey, check out this cool stuff eventually when you have time" letter. I just wanted to put all this out there, to possibly influence the next five years of CFAR.
Curriculum development is hard.
So, anyway, I don't personally know any of the people involved in CFAR, but I do know you're all great.
A case for developmental thinking
Below is an annotated bibliography of some of my personal touchstones in the development literature, books that are foundational or books that synthesize decades of research about the developmental aspects of entrepreneurial, executive, educational, and scientific thinking, as well as the developmental aspects of emotion and cognition. Note that this is a personal, idiosyncratic, non-exhaustive list.
And, to qualify, I have epistemological and ontological issues with plenty of the stuff below. But some of these authors are brilliant, and the rest are smart, meticulous, and values-driven. Lots of these authors deeply care about empirically identifying, targeting, accelerating, and stabilizing skills ahead of schedule, or helping skills manifest when they wouldn't have otherwise appeared at all. Quibbles and double-takes aside, there is a lot of signal here, even if it's not seated in a modern framework (which would of course increase the value and accessibility of what's below).
There are clues, even neon signs, here for isolating fine-grained, trainable stuff to be incorporated into curricula. Even if an intervention was designed for kids, a lot of adults still won't perform consistently prior to said intervention. And these researchers have spent thousands of collective hours thinking about how to structure assessments, interventions, and validations which may be extendable to more advanced scenarios.
So the material below is not only useful for thinking about remedial or grade-school situations, or for adding more tools to a cognitive toolbox; it could be useful for radically transforming a person's thinking style at a deep level.
Consider:
child:adult :: adult: ?
This has everything to do with the "Outside the Box" Box. Really. One author below has been collecting data for decades to attempt to describe individuals that may represent far less than one percent of the population.
0. Protocol analysis
Everyone knows that people are poor reporters of what goes on in their heads. But this is a straw man. A tremendous amount of research has gone into understanding what conditions, tasks, types of cognitive routines, and types of cognitive objects foster reliable introspective reporting. Introspective reporting can be reliable and useful. Granddaddy Herbert Simon (who coined the term "bounded rationality") devotes an entire book to it. The preface (I think) is a great overview. I wanted to mention this first because lots of the researchers below use verbal reports in their work.
http://www.amazon.com/Protocol-Analysis-Edition-Verbal-Reports/dp/0262550237/
1. Developmental aspects of scientific thinking
Deanna Kuhn and colleagues develop and test fine-grained interventions to promote transfer of various aspects of causal inquiry and reasoning in middle school students. In her words, she wants to "[develop] students' meta-level awareness and management of their intellectual processes." Kuhn believes that inquiry and argumentation skills, carefully defined and empirically backed, should be emphasized over specific content in public education. That sounds like vague and fluffy marketing-speak, but if you drill down to the specifics of what she's doing, her work is anything but. (That goes for all of these 50,000 foot summaries. These people are awesome.)
http://www.amazon.com/Education-Thinking-Deanna-Kuhn/dp/0674027450/
http://www.tc.columbia.edu/academics/index.htm?facid=dk100
http://www.educationforthinking.org/
David Klahr and colleagues emphasize how children and adults compare in coordinated searches of a hypothesis space and experiment space. He believes that scientific thinking is not different in kind from everyday thinking. Klahr gives an integrated account of all the current approaches to studying scientific thinking. Herbert Simon was Klahr's dissertation advisor.
http://www.amazon.com/Exploring-Science-Cognition-Development-Discovery/dp/0262611767
http://www.psy.cmu.edu/~klahr/
2. Developmental aspects of executive or instrumental thinking
Ok, I'll say it: Elliott Jaques was a psychoanalyst, among other things. And the guy makes weird analogies between thinking styles and truth tables. But his methods are rigorous. He has found possible discontinuities in how adults process information in order to achieve goals, and in how these differences relate to an individual's "time horizon," or maximum time length over which an individual can comfortably execute a goal. Additionally, he has explored how these factors predictably change over a lifespan.
http://www.amazon.com/Human-Capability-Individual-Potential-Application/dp/0962107077/
3. Developmental aspects of entrepreneurial thinking
Saras Sarasvathy and colleagues study the difference between novice entrepreneurs and expert entrepreneurs. Sarasvathy wants to know how people function under conditions of goal ambiguity ("We don't know the exact form of what we want"), environmental isotropy ("The levers to affect the world, in our concrete situation, are non-obvious"), and enaction ("When we act we change the world"). Herbert Simon was her advisor. Her thinking predates and goes beyond the lean startup movement.
"What effectuation is not" http://www.effectuation.org/sites/default/files/research_papers/not-effectuation.pdf
Related: http://lesswrong.com/r/discussion/lw/hcb/book_suggestion_diaminds_is_worth_reading/
4. General Cognitive Development
Jane Loevinger and colleagues' work has inspired scores of studies. Loevinger discovered potentially stepwise changes in "ego level" over a lifespan. Ego level is an archaic-sounding term that might be defined as one's ontological, epistemological, and metacognitive stance towards self and world. Loevinger's methods are rigorous, with good inter-rater reliability, Bayesian scoring rules incorporating base rates, and so forth.
http://www.amazon.com/Measuring-Ego-Development-Volume-Construction/dp/0875890598/
http://www.amazon.com/Measuring-Development-Scoring-Manual-Women/dp/0875890695/
Here is a woo-woo description of the ego levels, but note that these descriptions are based on decades of experience and have a repeatedly validated empirical core. The author of this document, Susanne Cook-Greuter, received her doctorate from Harvard by extending Loevinger's model, and it's well worth reading all the way through:
http://www.cook-greuter.com/9%20levels%20of%20increasing%20embrace%20update%201%2007.pdf
Here is a recent look at the field:
http://www.amazon.com/The-Postconventional-Personality-Researching-Transpersonal/dp/1438434642/
By the way, having explicit cognitive goals predicts an increase in ego level, three years later, but not an increase in subjective well-being. (Only the highest ego levels are discontinuously associated with increased wellbeing.) Socio-emotional goals do predict an increase in subjective well-being, three years later. Great study:
Bauer, Jack J., and Dan P. McAdams. "Eudaimonic growth: Narrative growth goals predict increases in ego development and subjective well-being 3 years later." Developmental Psychology 46.4 (2010): 761.
5. Bridging symbolic and non-symbolic cognition
[Related: http://wiki.lesswrong.com/wiki/A_Human's_Guide_to_Words]
Eugene Gendlin and colleagues developed a "[...] theory of personality change [...] which involved a fundamental shift from looking at content [to] process [...]. From examining hundreds of transcripts and hours of taped psychotherapy interviews, Gendlin and Zimring formulated the Experiencing Level variable. [...]"
The "focusing" technique was designed as a trainable intervention to influence an individual's Experiencing Level.
Marion N. Hendricks reviews 89 studies, concluding that [I quote]:
- Clients who process in a High Experiencing manner or focus do better in therapy according to client, therapist and objective outcome measures.
- Clients and therapists judge sessions in which focusing takes place as more successful.
- Successful short term therapy clients focus in every session.
- Some clients focus immediately in therapy; Others require training.
- Clients who process in a Low Experiencing manner can be taught to focus and increase in Experiencing manner, either in therapy or in a separate training.
- Therapist responses deepen or flatten client Experiencing. Therapists who focus effectively help their clients do so.
- Successful training in focusing is best maintained by those clients who are the strongest focusers during training.
http://www.focusing.org/research_basis.html
http://www.amazon.com/Focusing-Eugene-T-Gendlin/dp/0553278339/
http://www.amazon.com/Focusing-Oriented-Psychotherapy-Manual-Experiential-Method/dp/157230376X/
http://www.amazon.com/Self-Therapy-Step-By-Step-Wholeness-Cutting-Edge-Psychotherapy/dp/0984392777/ [IFS is very similar to focusing]
http://www.amazon.com/Emotion-Focused-Therapy-Coaching-Clients-Feelings/dp/1557988811/ [more references, similar to focusing]
http://www.amazon.com/Experiencing-Creation-Meaning-Philosophical-Psychological/dp/0810114275/ [favorite book of all time, by the way]
6. Rigorous Instructional Design
Siegfried Engelmann (http://www.zigsite.com/) and colleagues are dedicated to dramatically accelerating cognitive skill acquisition in disadvantaged children. In addition to his peer-reviewed research, he specializes in unambiguously decomposing cognitive learning tasks and designing curricula. Engelmann's methods were validated as part of Project Follow Through, the "largest and most expensive experiment in education funded by the U.S. federal government that has ever been conducted," according to Wikipedia. Engelmann contends that the data show that Direct Instruction outperformed all other methods:
http://www.zigsite.com/prologue_NeedyKids_chapter_5.html
http://en.wikipedia.org/wiki/Project_Follow_Through
Here, he systematically eviscerates an example of educational material that doesn't meet his standards:
http://www.zigsite.com/RubricPro.htm
And this is his instructional design philosophy:
http://www.amazon.com/Theory-Instruction-Applications-Siegfried-Engelmann/dp/1880183803/
Conclusion
In conclusion, lots of scientists have cared for decades about describing the cognitive differences between children, adults, and expert or developmentally advanced adults. And lots of scientists care about making those differences happen ahead of schedule or happen when they wouldn't have otherwise happened at all. This is a valuable and complementary perspective to what seems to be CFAR's current approach. I hope CFAR will eventually consider digging into this line of thinking, though maybe they're already on top of it or up to something even better.
Book Suggestion: "Diaminds" is worth reading (CFAR-esque)
The reason for this submission is that otherwise I don't think anyone who visits this website will ever read the book described below. And that's a shame.
Simply stated, I think CFAR curriculum designers and people who like CFAR's approach should check out this book:
Diaminds: Decoding the Mental Habits of Successful Thinkers by Mihnea Moldoveanu
I claim that you will find illustrations of high-utility thinking styles and potentially useful exercises within. Yes, I am attempting to promote some random, highly questionable book to your attention.
You contemptuously object:
- beware of other optimizing,
- does Moldoveanu even have a secret identity?,
- "decoding mental habits"?! People can't introspect,
- anyone who entitles their book "Diaminds" can't be that smart,
- and, what are you selling?
- If you dig around a little bit online you'll see that the second author writes highly rated popular business books.
- If you read a little bit of the book, you'll hear a lot about Nassim Nicholas Taleb, black swans, poorly justified claims about how the mind uses branching tree searches, and other assorted suspicious physical, mathematical, and computational analogies for how the mind works.
- He even asserts that "death is inevitable" (or something like that) in the introduction. *Gasp!*
- "There are 65 million titles out there. What are the chances that this particular crackpot book will be useful to me or CFAR?"
CFAR is hiring a logistics manager
CFAR is hiring an additional logistics manager. Please click on our form for more information, or to fill out an application:
https://docs.google.com/forms/d/1ACTvM1oYsw1zzHMumrLzffCVVak3eA5A-5uJzyIYOKM/viewform
We hope to choose a candidate within the next week or so, so if you're interested, do apply ASAP.
Rationality Habits I Learned at the CFAR Workshop
Recently Leah Libresco asked attendees at the January CFAR Workshop, "What habits have people installed after workshops?" and that got me thinking that now was a good time to write up and review what I learned (or learned and already forgot). I thought that might be of some interest to folks here, and this is what follows.
What I Learned and Implemented
The most immediately useful thing I learned was the Pomodoro Technique, as I've written about here before. In addition to that, there were a number of small items that I'm continuing to work on.
First, I've become quite fond of the question "Does future me have a comparative advantage?" Especially for small items, if the answer is "No" (and it's no far more often than it's yes) then just do it right now. The more trivial the task, the more useful it is. For instance, today I asked myself that while standing in the bedroom wondering whether to take 30 seconds to move my ExOfficio Bugproof socks from the dresser to the correct box in the closet. (Answer from a few minutes ago: if I don't take my dog for a walk right now, he's going to pee all over the floor. Future me does have a comparative advantage of not having to clean up pee on the floor. The socks can wait.)
I've begun to notice my confusion and call it to conscious attention more often, though I suspect I learned this first from HPMoR and the Sequences before the workshop. Example: when Leonard Susskind states that conservation of information is a fundamental principle of quantum mechanics, I notice that I am confused because (a) I have never heard of any such fundamental law of physics as information conservation, and (b) every definition of information I have ever heard indicates that information most certainly can be destroyed. So just what the heck is he talking about anyway? I am now making a conscious effort to research this topic rather than letting it slide by.
The workshop introduced me to the concepts of System 1 and System 2. System 1 is the faster, reactive, intuitive mind that uses heuristics and experience to react quickly. System 2 is the slower, analytical, logical, mathematical mind. I didn't immediately grok this or see how to apply it. However, the workshop did convince me to read Daniel Kahneman's Thinking, Fast and Slow, and I'm beginning to follow this. It could be useful going forward. I particularly like the examples given at the end of each chapter.
Similarly, I completely did not understand the concepts of inside view vs. outside view at the workshop; and worse yet, I don't think that I even realized that I didn't understand them. However, now that I've read Thinking, Fast and Slow, the lightbulb has gone on. Inside view is simply me judging how likely I (or my team) am to accomplish something based on my assessment of the problem and our capabilities. Outside view is a statistical question about how people and teams like us have done when confronted with similar problems in the past. As long as there are similar teams and similar problems to compare with, the outside view is likely to be much more accurate.
During conversation, Julia Galef and I came up with the idea of *********. It turned out it already exists, and I'm planning to start attending these events locally soon. I've also joined my local LessWrong meetup group.
Stare into Ugh fields. Difficult conversations are an Ugh field for me. Recognizing this and bringing it to conscious attention has made it somewhat easier to manage these conversations. Example: when I went to the workshop I had been putting off contacting my dentist for months, not because of the usual reasons people don't like going to the dentist, but simply because I was uncomfortable telling her that the second (and third) opinion I had gotten on a dental issue disagreed with her about the proper course of treatment. Post-workshop, I finally called her (though it still took me two more weeks to do this. Clearly I have a lot of work left to do here.)
Consider whether the sources of my information may be correlated and by how much. I.e. Evaluating Advice. For instance, if two dentists who share an office give me the same advice, even assuming no prior disposition to agree with each other simply out of friendship, how likely is it that they share the same background and information that dentists in a different office do not?
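The dentist example above can be made concrete with a toy Bayesian sketch. All the numbers below are invented for illustration; the point is just that two agreeing advisors who share an office are closer to one piece of evidence than two:

```python
# Toy model (all numbers invented): how much should two agreeing dentists
# shift my belief, depending on whether their opinions are independent?

def posterior(prior, likelihood_ratios):
    """Bayesian update on the odds scale, applying each likelihood ratio."""
    odds = prior / (1 - prior)
    for lr in likelihood_ratios:
        odds *= lr
    return odds / (1 + odds)

prior = 0.5  # prior that the recommended treatment is right
lr = 3.0     # each dentist's advice is 3x likelier if the treatment is right

# Dentists from different offices: treat their opinions as independent,
# so each one contributes its own likelihood ratio.
independent = posterior(prior, [lr, lr])

# Dentists who share an office, training, and X-rays: their agreement is
# highly correlated, so it counts as roughly one piece of evidence.
correlated = posterior(prior, [lr])

print(independent)  # 0.9
print(correlated)   # 0.75
```

Fully correlated agreement is the extreme case; real advisors fall somewhere between the two numbers, which is exactly why it's worth asking how much background they share.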
COZE (Comfort Zone Expansion) exercises have pushed me to talk more to "strangers" and be intentionally more extroverted. On a recent trip to Latin America, I even made an effort to use what little Spanish I possess. I've had some small success, though this has led to no obvious major improvements in my life yet.
Thought experiments conducted at the workshop were very helpful in untangling some of my goals and plans. Going forward though this hasn't made a huge difference in my day-to-day life. That is, it hasn't led me to seek different paths than what I'm on right now.
What I Learned and Forgot
Going over my notes now, there was a lot of material, some of it potentially useful, that has fallen by the wayside and may be worth a second look. This includes:
- Geoff Anders introduced us to yEd, a nice open source diagram editor. I still prefer StencilIt or OmniGraffle though. He also used it to show us a really neat way of graphing, well, something. Goals maybe? I remember it seemed really useful and significant at the time, but for the life of me I can't remember exactly what it was or what it was supposed to show us. I'll have to go back to my notes. This is why we write things down. (Update: I suspect this was about Goal Factoring.)
- Anticipation vs. Profession (though from time to time I do find myself asking what odds I'd be willing to bet on certain beliefs)
- The Planning Kata.
What I Learned But Didn't Implement
Value of Information calculations seem too meta and too wishy-washy to be of much use. They attempt to attach quantitative numbers to information that's far too imprecise to allow even order-of-magnitude accuracy. I'm better off just keeping things I need to consider in my GTD system and periodically reviewing it.
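For readers who haven't seen the technique being dismissed here, a minimal value-of-information sketch looks something like the following. Every number is an invented guess, which illustrates the author's complaint: the output is only as good as inputs you mostly have to make up.

```python
# Minimal value-of-information sketch with invented numbers: is it worth
# paying $50 for a second dental opinion before a $1200 treatment?

def expected_value_of_information(p_right, cost_if_wrong, test_accuracy, test_cost):
    """Expected savings from a check that catches a wrong recommendation,
    minus the cost of the check itself. All inputs are rough guesses."""
    p_wrong = 1 - p_right
    expected_savings = p_wrong * test_accuracy * cost_if_wrong
    return expected_savings - test_cost

# 80% confident the first dentist is right; a wrong treatment wastes $1200;
# a second opinion catches a wrong call 90% of the time and costs $50.
print(expected_value_of_information(0.8, 1200, 0.9, 50))  # 166.0
```

A positive result says the extra information is worth buying; the fragility is that nudging the guessed 0.8 or 0.9 by a little can flip the conclusion.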
Similarly, opportunities for Bayesian Strength of Evidence calculations just don't seem to come up in my day-to-day life. The question for me is more commonly "Given that the situation is what it is, what actions should I take to accomplish my goals?" The outside view is useful for this. Figuring out why the situation is what it is rarely seems to be especially helpful.
Turbocharging Training may be helpful but the evidence seems to me to be lacking. I'd like to see some strong proof that this works in particular areas; e.g. foreign languages, sports, or mathematics. Furthermore, it's not clear that it's applicable to anything I'm working on learning at this time. It seems very System 1 focused, and not especially helpful with the sort of fundamentally System 2 tasks I take on.
I have begun to declare "Victory!" at the end of a meeting/discussion. It's a bit of fun, but has limited effect. Beyond that I don't seem to reward myself for noticing things, or as a means of installing habits.
What I Didn't Learn
Getting Things Done (GTD), Remember the Milk, BeeMinder, Anki, Cultivating Curiosity, Overcoming Procrastination, and Winning at Arguments.
GTD I didn't learn because I've used it for years now or at least the parts of it that really work for me (lists and calendars mostly, and to a lesser extent filing).
Remember the Milk because my employer's security policy prohibits us from using it, and too much of my life happens at my day job to make maintaining two separate systems worthwhile.
BeeMinder and Anki because I just don't have anything that seems it could benefit from being stored in those systems right now. All of these might be more beneficial to someone in different circumstances.
Cultivating Curiosity because I am already a very naturally curious person, and have been for as long as I can remember. I don't need help with this. Indeed if anything I need to tamp down on this tendency and focus more on accomplishing things rather than merely learning them.
Similarly, Overcoming Procrastination didn't help a lot because I don't have a big procrastination problem, at least not compared to what I had when I was younger. Of course, I do say that in full knowledge that right this minute writing this article is a form of structured procrastination to avoid doing my taxes. :-)
Winning at Arguments, I am already very, very good at when I want to be, which is rare these days. It took me many years to realize that even though I "won" almost every argument I cared about, winning the argument wasn't usually all that useful. Winning an argument is the wrong goal to have for almost any purpose, and rarely leads to the outcomes I desire.
Unofficial ideas from fellow attendees:
Polyphasic sleep: I'm going to let the younger, more pioneering attendees experiment with this one. Even if it does work (which seems far from obvious) I don't see how one could integrate it into a conventional day job and family.
At breakfast one morning, a fellow attendee (Hunter?) suggested putting unsalted butter in my coffee to add more fat to my diet. It's not as crazy as it sounds. After all butter is little more than clarified cream, which I do like in my coffee. I tried this once and I still prefer cream, but I may give it another shot.
Finally, I've referred two workshop attendees to my employer as potential hires. If anyone else from the workshop is looking for a job, especially in tech, sales, or legal, drop me a line privately. For that matter if any Less Wronger is looking for a job, drop me a line privately. We have hundreds of open positions in major cities around the world. Quite a few LessWrongers already work there, and there's room for many more.
What the workshop didn't teach
There were a few techniques that were conspicuous by their absence. In particular I think the CFAR/LessWrong and Agile/XP communities have a lot to teach each other. I was surprised that no one at the workshop seemed to have heard of Kanban or Scrum, much less practiced them. Burndown charts and point-based estimation are a really interesting modification of the outside view: they compare your team to your team in the past, rather than to other teams.
Pairing is also a useful technique beyond programming as at least Eliezer (not present at the workshop) has discovered. Pairing is an incredibly effective way to overcome akrasia and procrastination.
In reverse, I am considering what the craft of software development has to learn from CFAR style rationality, more specifically epistemic rationality. I have begun to notice my confusion during conversations with users, product managers, and tech leads and call it to conscious attention. I less frequently let unclear specs and goals pass without comment. Rather, I ask for examples and drill down into them until I feel my confusion has been conquered.
So far these techniques seem very useful in analysis and requirements gathering. I've found them less obviously useful (though certainly not harmful in any way) during coding, debugging, and testing. In these stages there's simply too much to be confused by to address it all, and whatever I'm confused by that's relevant to the task at hand rapidly calls itself to my attention. For instance, when a bug shows up in a production system, the very first and natural question to ask is "How the hell did the system do that?!" On the other hand, the planning kata may be very helpful with the early stages of system design, though I haven't yet had an opportunity to try that out.
Was it Worth $3900?
Overall, I found the workshop to be a worthwhile experience, if an expensive one; and I recommend it to you if you have the opportunity and resources to attend. There are a lot of practical techniques to be learned, and you only need one or two of them to pay off to cover the cost and time. Even if the primary value is simply introducing you to books and techniques you explore further after the workshop, such as Getting Things Done or Thinking, Fast and Slow, that may be enough. Most knowledge workers are operating far below the level of which we're capable, and expanding our effectiveness can pay for itself.
Before attending, it is worth asking yourself whether there's an opportunity to learn this material at lower cost. For instance, did I really need to spend $3900 and 4 days to learn about Pomodoro? Apparently so, since I'd heard about Pomodoro for years and paid no attention to it until January. On the other hand, a $20 book I read on the subway was fully sufficient for me to learn and implement Getting Things Done. You'll have to judge this one for yourself.
The Singularity Wars
(This is an introduction, for those not immersed in the Singularity world, to the history of and relationships between SU, SIAI [SI, MIRI], SS, LW, CSER, FHI, and CFAR. It also has some opinions, which are strictly my own.)
The good news is that there were no Singularity Wars.
The Bay Area had a Singularity University and a Singularity Institute, each going in a very different direction. You'd expect to see something like the People's Front of Judea and the Judean People's Front, burning each other's grain supplies as the Romans moved in.
Thoughts on the January CFAR workshop
So, the Center for Applied Rationality just ran another workshop, which Anna kindly invited me to. Below I've written down some thoughts on it, both to organize those thoughts and because it seems other LWers might want to read them. I'll also invite other participants to write down their thoughts in the comments. Apologies if what follows isn't particularly well-organized.
Feelings and other squishy things
The workshop was totally awesome. This is admittedly not strong evidence that it accomplished its goals (cf. Yvain's comment here), but being around people motivated to improve themselves and the world was totally awesome, and learning with and from them was also totally awesome, and that seems like a good thing.
Also, the venue was fantastic. CFAR instructors reported that this workshop was more awesome than most, and while I don't want to discount improvements in CFAR's curriculum and its selection process for participants, I think the venue counted for a lot. It was uniformly beautiful and there were a lot of soft things to sit down or take naps on, and I think that helped everybody be more comfortable with and relaxed around each other.
Main takeaways
Here are some general insights I took away from the workshop. Some of them I had already been aware of on some abstract intellectual level but hadn't fully processed and/or gotten drilled into my head and/or seen the implications of.
- Epistemic rationality doesn't have to be about big things like scientific facts or the existence of God, but can be about much smaller things like the details of how your particular mind works. For example, it's quite valuable to understand what your actual motivations for doing things are.
- Introspection is unreliable. Consequently, you don't have direct access to information like your actual motivations for doing things. However, it's possible to access this information through less direct means. For example, if you believe that your primary motivation for doing X is that it brings about Y, you can perform a thought experiment: imagine a world in which Y has already been brought about. In that world, would you still feel motivated to do X? If so, then there may be reasons other than Y that you do X.
- The mind is embodied. If you consistently model your mind as separate from your body (I have in retrospect been doing this for a long time without explicitly realizing it), you're probably underestimating the powerful influence of your mind on your body and vice versa. For example, dominance of the sympathetic nervous system (which governs the fight-or-flight response) over the parasympathetic nervous system is unpleasant, unhealthy, and can prevent you from explicitly modeling other people. If you can notice and control it, you'll probably be happier, and if you get really good, you can develop aikido-related superpowers.
- You are a social animal. Just as your mind should be modeled as a part of your body, you should be modeled as a part of human society. For example, if you don't think you care about social approval, you are probably wrong, and thinking that will cause you to have incorrect beliefs about things like your actual motivations for doing things.
- Emotions are data. Your emotional responses to stimuli give you information about what's going on in your mind that you can use. For example, if you learn that a certain stimulus reliably makes you angry and you don't want to be angry, you can remove that stimulus from your environment. (This point should be understood in combination with point 2 so that it doesn't sound trivial: you don't have direct access to information like what stimuli make you angry.)
- Emotions are tools. You can trick your mind into having specific emotions, and you can trick your mind into having specific emotions in response to specific stimuli. This can be very useful; for example, tricking your mind into being more curious is a great way to motivate yourself to find stuff out, and tricking your mind into being happy in response to doing certain things is a great way to condition yourself to do certain things. Reward your inner pigeon.
Here are some specific actions I am going to take / have already taken because of what I learned at the workshop.
- Write a lot more stuff down. What I can think about in my head is limited by the size of my working memory, but a piece of paper or a WorkFlowy document doesn't have this limitation.
- Start using a better GTD system. I was previously using RTM, but badly. I was using it exclusively from my iPhone, and when adding something to RTM from an iPhone the due date defaults to "today." When adding something from a browser, the due date defaults to "never," but since I had never used the browser version, I didn't even realize that "never" was an option. As a result, due dates got attached to RTM items that didn't actually have them, so I stopped trusting my own due dates, and I became reluctant to add items that really didn't have due dates (e.g. "look at this interesting thing sometime"), which meant RTM wasn't collecting a lot of things.
- Start using Boomerang to send timed email reminders to future versions of myself. I think this might work better than using, say, calendar alerts because it should help me conceptualize past versions of myself as people I don't want to break commitments to.
I'm also planning to take various actions that I'm not writing above but instead putting into my GTD system, such as practicing specific rationality techniques (the workshop included many useful worksheets for doing this) and investigating specific topics like speed-reading and meditation.
The arc word (TVTropes warning) of this workshop was "agentiness." ("Agentiness" is more funtacular than "agency.") The CFAR curriculum as a whole could be summarized as teaching a collection of techniques to be more agenty.
Miscellaneous
A distinguishing feature the people I met at the workshop seemed to have in common was the ability to go meta. This is not a skill which was explicitly mentioned or taught (although it was frequently implicit in the kind of jokes people told), but it strikes me as an important foundation for rationality: it seems hard to progress with rationality unless the thought of using your brain to improve how you use your brain, and also to improve how you improve how you use your brain, is both understandable and appealing to you. This probably eliminates most people as candidates for rationality training unless it's paired with or maybe preceded by meta training, whatever that looks like.
One problem with the workshop was lack of sleep, which seemed to wear out both participants and instructors by the last day (classes started early in the day and conversations often continued late into the night because they were unusually fun / high-value). Offering everyone modafinil or something at the beginning of future workshops might help with this.
Overall
Overall, while it's too soon to tell how big an impact the workshop will have on my life, I anticipate a big impact, and I strongly recommend that aspiring rationalists attend future workshops.
CFAR and SI MOOCs: a Great Opportunity
Massive open online courses seem to be marching towards total world domination like some kind of educational singularity (at least in the case of Coursera). At the same time, there are still relatively few courses available, and each new added course is a small happening in the growing MOOC community.
Needless to say, this seems like a perfect opportunity for SI and CFAR to advance their goals via this new education medium. Some people seem to have already seen the potential and taken advantage of it:
One interesting trend that can be seen is companies offering MOOCs to increase the adoption of their tools/technologies. We have seen this with 10gen offering Mongo courses and to a lesser extent with Coursera's 'Functional Programming in Scala' taught by Martin Odersky
(from the above link to the Class Central Blog)
So the question is, are there any online courses already planned by CFAR and/or SI? And if not, when will it happen?
Edit: This is not a "yes or no" question, albeit formulated as one. I've searched the archives and did not find any mention of MOOCs as a potentially crucial device for spreading our views. If any such courses are already being developed or at least planned, I'll be happy to move this post to the open thread, as some have requested, or delete it entirely. If not, please view this as a request for discussion and brainstorming.
P.S.: Sorry, I don't have the time to write a good article on this topic.
[Link] Article about rationality and CFAR
http://issuu.com/nervemag/docs/issue-2?mode=window&pageNumber=18
A friend of mine runs Nerve, the new science magazine at the university where I work, and I offered to write about rationality for their second issue. The article is just out, with some quotes from some people you might recognise! Enjoy.
EDIT: the Wordpress version is now up, for those allergic to Flash.
http://nervemag.wordpress.com/2012/09/11/why-are-smart-people-so-stupid/
Take Part in CFAR Rationality Surveys
Posted By: Dan Keys, CFAR Survey Coordinator
The Center for Applied Rationality is trying to develop better methods for measuring and studying the benefits of rationality. We want to be able to test if this rationality stuff actually works.
One way that the Less Wrong community can help us with this process is by taking part in online surveys, which we can use for a variety of purposes including:
- seeing what rationality techniques people actually use in their day-to-day lives
- developing & testing measures of how rational people are, and seeing if potential rationality measures correlate with the other variables that you'd expect them to
- comparing people who attend a minicamp with others in the LW community, so that we can learn what value-added the minicamps provide beyond what you get elsewhere
- trying out some of the rationality techniques that we are trying to teach, so we can see how they work
We have a couple of surveys ready to go now which cover some of these bullet points, and will be developing other surveys over the coming months.
If you're interested in taking part in online surveys for CFAR, please go here to fill out a brief form with your contact info; then we will contact you about participating in specific surveys.
If you have previously filled out a form like this one to participate in CFAR surveys, then we already have your information so you don't need to sign up again.
Questions/Issues can be posted in the comments here, PMed to me, or emailed to us at CFARsurveys@gmail.com.