Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 10:21AM
@@ -8,7 +8,7 @@
<li>Ask for your help, via donations and other means.</li>
</ul>
<div>
-<p>We are in the middle of our&nbsp;<a href="http://rationality.org/fundraiser2015/">matching fundraiser</a>; so if you&rsquo;ve been considering donating to CFAR this year, now is an unusually good time.</p>
+<p>We are in the middle of our&nbsp;<a href="http://rationality.org/fundraiser2015/">matching fundraiser</a>; so if you&rsquo;ve been considering donating to CFAR this year, now is an unusually good time.<a id="more"></a></p>
</div>
<h2><a id="CFARs_mission_and_why_that_mission_matters_today_18"></a>CFAR&rsquo;s mission, and why that mission matters today</h2>
<p>CFAR&rsquo;s mission is to help people develop the abilities that let them meaningfully assist with the world&rsquo;s most important problems, by improving their ability to arrive at accurate beliefs, act effectively in the real world, and sustainably care about that world.</p>

Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 10:20AM
@@ -128,7 +128,7 @@
<p>To plan and manage all these alumni events, we&rsquo;re looking for a capable community manager.</p>
<h4><a id="Directly_Addressing_Talent_Gaps_200"></a>Directly Addressing Talent Gaps</h4>
<p>In addition to our classic workshops and general education alumni programs, we&rsquo;ll also be attempting to ramp up our targeted workshops meant to fill talent gaps for specific organizations.</p>
-<p>For example, we&rsquo;ll run our second MIRI Summer Fellows Program, as well as a program funded by a grant from the Future of Life Institute to help promising upcoming AI researchers think about AI safety. We&rsquo;re in conversation with other organizations, and it&rsquo;s our intention to have an increasing number of these workshops that focus on thinking skills needed for particular tasks in order to help fill critical gaps in important organizations on very short time horizons.</p>
+<p>For example, we&rsquo;ll run our second MIRI Summer Fellows Program, as well as a program funded by a <a href="https://drive.google.com/file/d/0B3oM_ZsIBBwEWVg4RkxVZkJxMUE/view?usp=sharing">grant</a> from the Future of Life Institute to help promising upcoming AI researchers think about AI safety. We&rsquo;re in conversation with other organizations, and it&rsquo;s our intention to have an increasing number of these workshops that focus on thinking skills needed for particular tasks in order to help fill critical gaps in important organizations on very short time horizons.</p>
<p>If funding permits and our experiments in this area go well, we intend to make these types of workshops more frequent, and perhaps expand on past success with programs like a European SPARC, and possible &ldquo;summer camp&rdquo; style events where we try to identify particularly talented high school students for training and recruitment into existential risk research.</p>
<h4>Labs: &nbsp;Informal experimentation toward a better "Applied Rationality"</h4>
<p>The split between Core and Labs doesn't only allow focus on operations&mdash;it also allows our Lab folk to invest in the informal experiments, arguments, data-gathering, etc. that seem, over time, to conduce to a better applied rationality.</p>

Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 10:03AM
@@ -155,7 +155,7 @@
<p>There are at least four major ways to help:</p>
<ol>
<li>Donate directly to our <a href="http://rationality.org/fundraiser2015/">winter fundraising drive</a>. This is the most straightforward way to help, and makes a categorical difference in our ability to execute the mission. &nbsp;(A large majority of our funding comes from small donors.)</li>
-<li>If you&rsquo;re interested in rationality, or in the larger questions of humanity&rsquo;s future and existential risk, try exploring the <a href="https://intelligence.org/rationality-ai-zombies/">Sequences</a> or Harry Potter and the Methods of Rationality.</li>
+<li>If you&rsquo;re interested in rationality, or in the larger questions of humanity&rsquo;s future and existential risk, consider reading the&nbsp;<a href="https://intelligence.org/rationality-ai-zombies/">Sequences</a>, or otherwise working to improve your thinking and world-modeling skill. &nbsp;(Strong community epistemology is extremely helpful.)</li>
<li>We&rsquo;re always looking for new alumni, particularly those who care about both rationality and the world. If you haven&rsquo;t been, consider <a href="http://rationality.org/workshops/">applying to a CFAR workshop</a>; and if you have been, consider mentioning it to people who fit said description.</li>
<li>If you&rsquo;re interested in joining us for the long haul, we&rsquo;re currently <a href="http://rationality.org/hiring">looking to hire</a> a sales manager, a community manager, and an office assistant (funding permitting). We&rsquo;ve identified these three roles as the highest-impact additions to the CFAR staff, and are eager to hear from enthusiastic and qualified candidates.</li>
</ol>

Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 09:52AM
@@ -150,12 +150,13 @@
<h2><a id="The_path_forward_and_how_you_can_help_226"></a>The path forward, and how you can help</h2>
<p>CFAR&rsquo;s mission is to gather together people with the potential for real and meaningful impact, and to cause them to come closer to meeting that potential. It doesn&rsquo;t much matter whether you think we&rsquo;re under a ticking clock of existential risk, or you&rsquo;re concerned about a million humans dying every week, or you&rsquo;re simply grumpy that we haven&rsquo;t gotten a human past low Earth orbit since 1972&mdash;our individual and collective thinking skill is a key bottleneck on our future.</p>
<p>Applied rationality, more than almost anything else, has a shot at being a <em>truly</em> all-purpose tool in humanity&rsquo;s toolkit, and the bigger the problems on the horizon, the more vital that tool becomes.</p>
-<p>2016 will be a particularly critical year in CFAR&rsquo;s history. We&rsquo;re restructuring our team in pretty major ways, and finding the right team members (or not) will determine our ability to get the right character and culture from the beginning. The world of AI risk is changing rapidly, and decisions made over the coming months will shape the future of the field. The momentum we will have going into the spring is likely to be the difference between a CFAR that actually matters, and one that sounds good but is ultimately irrelevant.</p>
+<p>2016 will be a particularly critical year in CFAR&rsquo;s history. We&rsquo;re restructuring our team in pretty major ways, and finding the right team members (or not) will determine our ability to get the right character and culture from this new beginning. In the last eight months we've had at least three good people whom we wanted to hire, and who wanted to work for us, but who required salaries we couldn't afford. &nbsp;Beginnings are far easier times in which to make change, and this is the closest we've come to a fresh beginning&mdash;and the time we've most expected differential impact from marginal donations&mdash;since our inaugural fundraiser of late 2012.</p>
+<p>The world of AI risk is changing rapidly, and decisions made over the coming months will shape the future of the field&mdash;it would be well to get relevant training programs going <em>now</em>, and not to wait for some later, hard-won new beginning for CFAR in 2018 or something. &nbsp;The strategic competence we will have going into the spring is likely to be the difference between a CFAR that actually matters, and one that sounds good but is ultimately irrelevant.</p>
<p>There are at least four major ways to help:</p>
<ol>
-<li>Donate directly to our <a href="http://rationality.org/fundraiser2015/">winter fundraising drive</a>. This is the most straightforward way to help, and makes a categorical difference in our ability to execute the mission.</li>
+<li>Donate directly to our <a href="http://rationality.org/fundraiser2015/">winter fundraising drive</a>. This is the most straightforward way to help, and makes a categorical difference in our ability to execute the mission. &nbsp;(A large majority of our funding comes from small donors.)</li>
<li>If you&rsquo;re interested in rationality, or in the larger questions of humanity&rsquo;s future and existential risk, try exploring the <a href="https://intelligence.org/rationality-ai-zombies/">Sequences</a> or Harry Potter and the Methods of Rationality.</li>
<li>We&rsquo;re always looking for new alumni, particularly those who care about both rationality and the world. If you haven&rsquo;t been, consider <a href="http://rationality.org/workshops/">applying to a CFAR workshop</a>; and if you have been, consider mentioning it to people who fit said description.</li>
<li>If you&rsquo;re interested in joining us for the long haul, we&rsquo;re currently <a href="http://rationality.org/hiring">looking to hire</a> a sales manager, a community manager, and an office assistant (funding permitting). We&rsquo;ve identified these three roles as the highest-impact additions to the CFAR staff, and are eager to hear from enthusiastic and qualified candidates.</li>
</ol>
-<p>This is the mission; these are the steps. CFAR has made substantial progress on building a talent pipeline for clear thinkers and world changers, in large part thanks to generous contributions of time, money, energy, and insight from people like you. We&rsquo;d like to see a world where this goal has been achieved, and your support is what gets us there. Thanks for reading, and thanks for your help.</p>
+<p>This is the mission; these are the steps. CFAR has made substantial progress on building a talent pipeline for clear thinkers and world changers, in large part thanks to generous contributions of time, money, energy, and insight from people like you. We&rsquo;d like to see a world where this goal has been achieved, and your support is what gets us there. Thanks for reading; do send us any thoughts; and do please consider <a href="http://rationality.org/donate-2015/">donating now</a>.</p>

Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 08:38AM
@@ -130,15 +130,20 @@
<p>In addition to our classic workshops and general education alumni programs, we&rsquo;ll also be attempting to ramp up our targeted workshops meant to fill talent gaps for specific organizations.</p>
<p>For example, we&rsquo;ll run our second MIRI Summer Fellows Program, as well as a program funded by a grant from the Future of Life Institute to help promising upcoming AI researchers think about AI safety. We&rsquo;re in conversation with other organizations, and it&rsquo;s our intention to have an increasing number of these workshops that focus on thinking skills needed for particular tasks in order to help fill critical gaps in important organizations on very short time horizons.</p>
<p>If funding permits and our experiments in this area go well, we intend to make these types of workshops more frequent, and perhaps expand on past success with programs like a European SPARC, and possible &ldquo;summer camp&rdquo; style events where we try to identify particularly talented high school students for training and recruitment into existential risk research.</p>
-<h4><a id="Increasingly_High_Quality_Instruction_208"></a>Increasingly High Quality Instruction</h4>
-<p>The split between Core and Labs doesn&rsquo;t only allow focus on operations&ndash;it also allows our researchers to focus on developing the more advanced aspects of the art, more thoughtful public communication about the nature and nuances of rationality, and time to run higher variance pedagogical experiments.</p>
-<p>To those ends, Labs is currently developing:</p>
+<h4>Labs: &nbsp;Informal experimentation toward a better "Applied Rationality"</h4>
+<p>The split between Core and Labs doesn't only allow focus on operations&mdash;it also allows our Lab folk to invest in the informal experiments, arguments, data-gathering, etc. that seem, over time, to conduce to a better applied rationality.</p>
+<p>(This process is messy. &nbsp;Rationality today is not at the level of Newton. &nbsp;It isn't even at the level of Ptolemy, who, despite the mockability of the nested-epicycles method, could predict the motions of the planets with great precision. &nbsp;Rationality is more at the level of a toddler running around, putting everything in its mouth, and ending up thereby with a more integrated informal world-model by having examined many example-objects through several senses each. &nbsp;Our aim this year in Labs is basically to put many, many things in our mouths rapidly, and to argue about models in between, and to especially expose ourselves to people who are working on issues that matter in already-very-competent ways, whom we can nevertheless try to make better, and to try in this way to get a better sense of the higher-end parts of "rationality".)</p>
+<p>Toward this end, Labs is currently:</p>
<ul>
-<li>The next generation of theory on rationality, with more robust and explicit models of the underlying mechanisms that create drive, scientific and epistemic skill, and relevant real-world competence;</li>
-<li>New written rationality sequences meant to expand upon, augment, and improve the original sequences that brought so many people into the culture of being &ldquo;less wrong,&rdquo; and oriented them around audacious goals that actually make a difference;</li>
-<li>Experimental workshops with new material and novel training approaches that may create major breakthroughs in our ability to transmit the core of what we mean when we say &ldquo;rationality.&rdquo;</li>
+<li>Offering one-on-one coaching to quite a few individuals who seem to be contributing to the world in a high-end way, and trying to figure out how they're doing what they're doing, and what pieces may help them contribute more;</li>
+<li>Working toward more robust and explicit models of the underlying mechanisms that create drive, scientific and epistemic skill, and relevant real-world competence (and how to intervene upon them);</li>
+<li>Creating new written rationality sequences meant to expand upon, augment, and improve the original sequences that brought so many people into the culture of being &ldquo;less wrong,&rdquo; and oriented them around audacious goals that actually make a difference;</li>
+<li>Planning experimental workshops of varied sorts, aiming to boost people further toward "actually useful skill-levels in applied rationality".&nbsp;</li>
</ul>
+<div>We are very excited, and expect that art development will be much easier now that we have a subteam that is free to just actually focus on it. &nbsp;(Last year, we were all doing workshop admissions, logistics, accounting, ...)</div>
+<div>
<h4><a id="Limitations_and_Updates_218"></a>Limitations and Updates</h4>
+</div>
<p>The primary limiting factor in these plans is our ability to attract a truly excellent salesperson or team. With sufficient workshop participation, cashflow bottlenecks are broken, and we&rsquo;ll achieve economies of scale that will fundamentally transform our operations.</p>
<p>Failing that recruitment, the next best alternative is to grow organically through the MTP and other community programs. That is a much slower process, but pushes us in the same fundamental direction.</p>
<p>And as always, our plans coming into contact with the reality of 2016 will correctly cause us to update, iterate, and potentially pivot given new evidence and insight.</p>

Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 07:26AM
@@ -46,10 +46,10 @@
<p>Here are some brief highlights of the new <em>Art of Rationality</em> that we&rsquo;re currently seeing:</p>
<ul>
<li><strong>One pillar, not three.</strong> CFAR has long talked about wanting to boost three distinct things in our participants (competence, epistemic rationality, and do-gooding). But we&rsquo;ve had the strong sense that there were ways to strengthen all three through the practice of a single, unified art of &ldquo;applied rationality&rdquo; (for instance, a deep understanding of reductionism seems to help with all three). Recently, we&rsquo;ve gotten better at articulating <em>how</em> this link works. For example:</li>
-<li><strong><a href="https://docs.google.com/presentation/d/1CDA5GWVvM0ioIpRTJ83N4QNeXSV7l1TRPA50xVXEmUU/edit?usp=gmail">Double Crux</a></strong> is a structured format for collaboratively finding the truth in cases where two people disagree. Instead of non-interactively offering pieces of their respective platforms, people jointly seek the actual question at the crux of the disagreement&mdash;the root uncertainty that has the potential to affect <em>both</em> of their beliefs. &nbsp;We introduced this as an epistemic rationality technique, and used in in this way at e.g. EA Global, where people argued about cause priortization; it then made its way also into our material on competence and on how to sustainably care deeply about the world. &nbsp;(See the next two bullet points.)</li>
+<li><strong><a href="https://docs.google.com/presentation/d/1CDA5GWVvM0ioIpRTJ83N4QNeXSV7l1TRPA50xVXEmUU/edit?usp=gmail">Double Crux</a></strong> is a structured format for collaboratively finding the truth in cases where two people disagree. Instead of non-interactively offering pieces of their respective platforms, people jointly seek the actual question at the crux of the disagreement&mdash;the root uncertainty that has the potential to affect <em>both</em> of their beliefs. &nbsp;We introduced this as an epistemic rationality technique, and used in in this way at e.g. EA Global, where people argued about cause prioritization; it then made its way also into our material on competence and on how to sustainably care deeply about the world. &nbsp;(See the next two bullet points.)</li>
<li><strong>Competence <em>as</em> &ldquo;deep/internal epistemic rationality.&rdquo;</strong>&nbsp; If I am frequently late to appointments and &ldquo;don&rsquo;t want to be,&rdquo; one can frame this as stemming from an inaccurate anticipation somewhere in my mind&mdash;perhaps I mis-anticipate whether my actions will make me late, or perhaps I disagree with myself as to whether lateness in fact harms my goals. Either way, it can be helpful (in our experience) to &ldquo;internally double crux&rdquo; the apparent disagreement (i.e., to play the double crux game between two different models within my own head, working until I have both a better model and a better actual outcome). More generally, we are increasingly making headway on &ldquo;competence&rdquo; or &ldquo;instrumental rationality&rdquo; problems via techniques aimed at integrating accurate beliefs into all parts of one&rsquo;s psyche. &nbsp;</li>
<li><strong>Do-gooding and epistemic rationality.</strong>&nbsp;&ldquo;Do-gooding&rdquo; would seem to be a goal that some have and others don&rsquo;t, and it would seem odd to try to shift <em>goals</em> by learning epistemic rationality. But it seems to many of us (informally, anecdotally) that there is a kind of &ldquo;deep epistemic rationality&rdquo; that doesn&rsquo;t <em>change</em> one&rsquo;s goals, but <em>does</em> help one make actual contact with what is at stake in the world, and with the parts of one's psyche that <em>already</em>&nbsp;care about those stakes... and this can sometimes help in practice to build deep, sustainable caring. &nbsp;The idea is again to e.g. notice a part of you that thinks the world matters, and a part of you that is afraid to look in that direction, and help these parts trade model-pieces and update back and forth (double crux, again). For an early attempt to articulate pieces of this "art of connecting to deep caring", see Val&rsquo;s <a href="/lw/n2x/the_art_of_grieving_well/">recent post on grieving</a>.</li>
-<li><strong>Teaching the synthesis.</strong> Workshops are made of techniques, which are like sounding out words a letter at a time (C-A-T&hellip;C&hellip;Ca&hellip;Cat!). This year, we stuffed the After years of pointing at the deeper skill (Cat! Hat! Antidisestablishmentarianism!), we&rsquo;ve finally found framings and explanations (like this one) that actually bridge the gap. Those, plus an explicit emphasis on synthesis and the addition of peer-to-peer tutoring, have successfully transformed the techniques into stepping stones toward the actual art.</li>
+<li><strong>Teaching the synthesis.</strong> Our pre-2015 workshops were made of techniques, which was like sounding out words a letter at a time (C-A-T&hellip;C&hellip;Ca&hellip;Cat!). After years of trying to use these techniques to point at the deeper skill (Cat! Hat! Antidisestablishmentarianism!), we&rsquo;ve finally found framings and explanations (like this one) that actually bridge the gap. Those framings, plus an explicit emphasis on synthesis and the addition of peer-to-peer tutoring, have successfully transformed the techniques into stepping stones toward the actual art. &nbsp;(The techniques are now stuffed into the first two days; the synthesis, and the rhythms of using applied rationality in practice, now occupy the second half of the workshop and give people a better sense of the lived feeling of the art. &nbsp;We think.)</li>
</ul>
<p>This is the beginning of work that we&rsquo;re poised to expand and improve in the coming year via our new Labs group.</p>
<h2><a id="Financial_Retrospective_for_2015_86"></a>Financial Retrospective for 2015</h2>

Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 07:03AM
@@ -49,7 +49,7 @@
<li><strong><a href="https://docs.google.com/presentation/d/1CDA5GWVvM0ioIpRTJ83N4QNeXSV7l1TRPA50xVXEmUU/edit?usp=gmail">Double Crux</a></strong> is a structured format for collaboratively finding the truth in cases where two people disagree. Instead of non-interactively offering pieces of their respective platforms, people jointly seek the actual question at the crux of the disagreement&mdash;the root uncertainty that has the potential to affect <em>both</em> of their beliefs. &nbsp;We introduced this as an epistemic rationality technique, and used in in this way at e.g. EA Global, where people argued about cause priortization; it then made its way also into our material on competence and on how to sustainably care deeply about the world. &nbsp;(See the next two bullet points.)</li>
<li><strong>Competence <em>as</em> &ldquo;deep/internal epistemic rationality.&rdquo;</strong>&nbsp; If I am frequently late to appointments and &ldquo;don&rsquo;t want to be,&rdquo; one can frame this as stemming from an inaccurate anticipation somewhere in my mind&mdash;perhaps I mis-anticipate whether my actions will make me late, or perhaps I disagree with myself as to whether lateness in fact harms my goals. Either way, it can be helpful (in our experience) to &ldquo;internally double crux&rdquo; the apparent disagreement (i.e., to play the double crux game between two different models within my own head, working until I have both a better model and a better actual outcome). More generally, we are increasingly making headway on &ldquo;competence&rdquo; or &ldquo;instrumental rationality&rdquo; problems via techniques aimed at integrating accurate beliefs into all parts of one&rsquo;s psyche. &nbsp;</li>
<li><strong>Do-gooding and epistemic rationality.</strong>&nbsp;&ldquo;Do-gooding&rdquo; would seem to be a goal that some have and others don&rsquo;t, and it would seem odd to try to shift <em>goals</em> by learning epistemic rationality. But it seems to many of us (informally, anecdotally) that there is a kind of &ldquo;deep epistemic rationality&rdquo; that doesn&rsquo;t <em>change</em> one&rsquo;s goals, but <em>does</em> help one make actual contact with what is at stake in the world, and with the parts of one's psyche that <em>already</em>&nbsp;care about those stakes... and this can sometimes help in practice to build deep, sustainable caring. &nbsp;The idea is again to e.g. notice a part of you that thinks the world matters, and a part of you that is afraid to look in that direction, and help these parts trade model-pieces and update back and forth (double crux, again). For an early attempt to articulate pieces of this "art of connecting to deep caring", see Val&rsquo;s <a href="/lw/n2x/the_art_of_grieving_well/">recent post on grieving</a>.</li>
-<li><strong>Teaching the synthesis.</strong> Workshops are made of techniques, which are like sounding out words a letter at a time (C-A-T&hellip;C&hellip;Ca&hellip;Cat!). After years of pointing at the deeper skill (Cat! Hat! Antidisestablishmentarianism!), we&rsquo;ve finally found framings and explanations (like this one) that actually bridge the gap. Those, plus an explicit emphasis on synthesis and the addition of peer-to-peer tutoring, have successfully transformed the techniques into stepping stones toward the actual art.</li>
+<li><strong>Teaching the synthesis.</strong> Workshops are made of techniques, which are like sounding out words a letter at a time (C-A-T&hellip;C&hellip;Ca&hellip;Cat!). This year, we stuffed the After years of pointing at the deeper skill (Cat! Hat! Antidisestablishmentarianism!), we&rsquo;ve finally found framings and explanations (like this one) that actually bridge the gap. Those, plus an explicit emphasis on synthesis and the addition of peer-to-peer tutoring, have successfully transformed the techniques into stepping stones toward the actual art.</li>
</ul>
<p>This is the beginning of work that we&rsquo;re poised to expand and improve in the coming year via our new Labs group.</p>
<h2><a id="Financial_Retrospective_for_2015_86"></a>Financial Retrospective for 2015</h2>
@@ -145,7 +145,7 @@
<h2><a id="The_path_forward_and_how_you_can_help_226"></a>The path forward, and how you can help</h2>
<p>CFAR&rsquo;s mission is to gather together people with the potential for real and meaningful impact, and to cause them to come closer to meeting that potential. It doesn&rsquo;t much matter whether you think we&rsquo;re under a ticking clock of existential risk, or you&rsquo;re concerned about a million humans dying every week, or you&rsquo;re simply grumpy that we haven&rsquo;t gotten a human past low Earth orbit since 1972&mdash;our individual and collective thinking skill is a key bottleneck on our future.</p>
<p>Applied rationality, more than almost anything else, has a shot at being a <em>truly</em> all-purpose tool in humanity&rsquo;s toolkit, and the bigger the problems on the horizon, the more vital that tool becomes.</p>
-<p>2016 will be a particularly critical year in CFAR&rsquo;s history. We&rsquo;re restructuring our team in pretty major ways, and finding the right people (or not) will determine our ability to get the right character and culture from the beginning. The world of AI risk is changing rapidly, and decisions made over the coming months will shape the future of the field. The momentum we will have going into the spring is likely to be the difference between a CFAR that actually matters, and one that sounds good but is ultimately irrelevant.</p>
+<p>2016 will be a particularly critical year in CFAR&rsquo;s history. We&rsquo;re restructuring our team in pretty major ways, and finding the right team members (or not) will determine our ability to get the right character and culture from the beginning. The world of AI risk is changing rapidly, and decisions made over the coming months will shape the future of the field. The momentum we will have going into the spring is likely to be the difference between a CFAR that actually matters, and one that sounds good but is ultimately irrelevant.</p>
<p>There are at least four major ways to help:</p>
<ol>
<li>Donate directly to our <a href="http://rationality.org/fundraiser2015/">winter fundraising drive</a>. This is the most straightforward way to help, and makes a categorical difference in our ability to execute the mission.</li>

Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 06:03AM
@@ -47,8 +47,8 @@
<ul>
<li><strong>One pillar, not three.</strong> CFAR has long talked about wanting to boost three distinct things in our participants (competence, epistemic rationality, and do-gooding). But we&rsquo;ve had the strong sense that there were ways to strengthen all three through the practice of a single, unified art of &ldquo;applied rationality&rdquo; (for instance, a deep understanding of reductionism seems to help with all three). Recently, we&rsquo;ve gotten better at articulating <em>how</em> this link works. For example:</li>
<li><strong><a href="https://docs.google.com/presentation/d/1CDA5GWVvM0ioIpRTJ83N4QNeXSV7l1TRPA50xVXEmUU/edit?usp=gmail">Double Crux</a></strong> is a structured format for collaboratively finding the truth in cases where two people disagree. Instead of non-interactively offering pieces of their respective platforms, people jointly seek the actual question at the crux of the disagreement&mdash;the root uncertainty that has the potential to affect <em>both</em> of their beliefs. &nbsp;We introduced this as an epistemic rationality technique, and used in in this way at e.g. EA Global, where people argued about cause priortization; it then made its way also into our material on competence and on how to sustainably care deeply about the world. &nbsp;(See the next two bullet points.)</li>
-<li><strong>Competence <em>as</em> &ldquo;deep/internal epistemic rationality.&rdquo;</strong>&nbsp; If I am frequently late to appointments and &ldquo;don&rsquo;t want to be,&rdquo; one can frame this as stemming from an inaccurate anticipation somewhere in my mind&mdash;perhaps I mis-anticipate whether my actions will make me late, or perhaps I disagree with myself as to whether lateness in fact harms my goals. Either way, it can be helpful to &ldquo;internally double crux&rdquo; the apparent disagreement, leading to both a better model and a better actual outcome. More generally, we are increasingly making headway on &ldquo;competence&rdquo; or &ldquo;instrumental rationality&rdquo; problems via techniques aimed at integrating accurate beliefs into all parts of one&rsquo;s psyche. &nbsp;</li>
-<li><strong>Do-gooding and epistemic rationality.</strong>&nbsp;&ldquo;Do-gooding&rdquo; would seem to be a goal that some have and others don&rsquo;t, and it would seem odd to try to shift <em>goals</em> by learning epistemic rationality. But it seems to many of us that there is a kind of &ldquo;deep epistemic rationality&rdquo; that doesn&rsquo;t <em>change</em> one&rsquo;s goals, but <em>does</em> help one make actual contact with the deep caring that already exists within a person. Empirically, it also seems that when real humans do this, many of them end up caring more about the state of the world. For an early attempt to articulate pieces of this "art of connecting to deep caring", see Val&rsquo;s <a href="/lw/n2x/the_art_of_grieving_well/">recent post on grieving</a>.</li>
+<li><strong>Competence <em>as</em> &ldquo;deep/internal epistemic rationality.&rdquo;</strong>&nbsp; If I am frequently late to appointments and &ldquo;don&rsquo;t want to be,&rdquo; one can frame this as stemming from an inaccurate anticipation somewhere in my mind&mdash;perhaps I mis-anticipate whether my actions will make me late, or perhaps I disagree with myself as to whether lateness in fact harms my goals. Either way, it can be helpful (in our experience) to &ldquo;internally double crux&rdquo; the apparent disagreement (i.e., to play the double crux game between two different models within my own head, working until I have both a better model and a better actual outcome). More generally, we are increasingly making headway on &ldquo;competence&rdquo; or &ldquo;instrumental rationality&rdquo; problems via techniques aimed at integrating accurate beliefs into all parts of one&rsquo;s psyche. &nbsp;</li>
+<li><strong>Do-gooding and epistemic rationality.</strong>&nbsp;&ldquo;Do-gooding&rdquo; would seem to be a goal that some have and others don&rsquo;t, and it would seem odd to try to shift <em>goals</em> by learning epistemic rationality. But it seems to many of us (informally, anecdotally) that there is a kind of &ldquo;deep epistemic rationality&rdquo; that doesn&rsquo;t <em>change</em> one&rsquo;s goals, but <em>does</em> help one make actual contact with what is at stake in the world, and with the parts of one's psyche that <em>already</em>&nbsp;care about those stakes... and this can sometimes help in practice to build deep, sustainable caring. &nbsp;The idea is again to e.g. notice a part of you that thinks the world matters, and a part of you that is afraid to look in that direction, and help these parts trade model-pieces and update back and forth (double crux, again). For an early attempt to articulate pieces of this "art of connecting to deep caring", see Val&rsquo;s <a href="/lw/n2x/the_art_of_grieving_well/">recent post on grieving</a>.</li>
<li><strong>Teaching the synthesis.</strong> Workshops are made of techniques, which are like sounding out words a letter at a time (C-A-T&hellip;C&hellip;Ca&hellip;Cat!). After years of pointing at the deeper skill (Cat! Hat! Antidisestablishmentarianism!), we&rsquo;ve finally found framings and explanations (like this one) that actually bridge the gap. Those, plus an explicit emphasis on synthesis and the addition of peer-to-peer tutoring, have successfully transformed the techniques into stepping stones toward the actual art.</li>
</ul>
<p>This is the beginning of work that we&rsquo;re poised to expand and improve in the coming year via our new Labs group.</p>

Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 05:33AM
@@ -46,9 +46,9 @@
<p>Here are some brief highlights of the new <em>Art of Rationality</em> that we&rsquo;re currently seeing:</p>
<ul>
<li><strong>One pillar, not three.</strong> CFAR has long talked about wanting to boost three distinct things in our participants (competence, epistemic rationality, and do-gooding). But we&rsquo;ve had the strong sense that there were ways to strengthen all three through the practice of a single, unified art of &ldquo;applied rationality&rdquo; (for instance, a deep understanding of reductionism seems to help with all three). Recently, we&rsquo;ve gotten better at articulating <em>how</em> this link works. For example:</li>
-<li><strong><a href="https://docs.google.com/presentation/d/1CDA5GWVvM0ioIpRTJ83N4QNeXSV7l1TRPA50xVXEmUU/edit?usp=gmail">Double Crux</a></strong> is a structured format for collaboratively finding the truth in cases where two people disagree. Instead of non-interactively offering pieces of their respective platforms, people jointly seek the actual question at the crux of the disagreement&mdash;the root uncertainty that has the potential to affect <em>both</em> of their beliefs.</li>
-<li><strong>Competence <em>as</em> &ldquo;deep/internal epistemic rationality.&rdquo;</strong>&nbsp;if I am frequently late to appointments and &ldquo;don&rsquo;t want to be,&rdquo; one can frame this as stemming from an inaccurate anticipation somewhere in my mind&mdash;perhaps I mis-anticipate whether my actions will make me late, or perhaps I disagree with myself as to whether lateness in fact harms my goals. Either way, it can be helpful to &ldquo;internally double crux&rdquo; the apparent disagreement, leading to both a better model and a better actual outcome. In general, we are increasingly making headway on &ldquo;competence&rdquo; or &ldquo;instrumental rationality&rdquo; problems via techniques aimed at integrating accurate beliefs into all parts of one&rsquo;s psyche.</li>
-<li><strong>Do-gooding and epistemic rationality.</strong>&nbsp;&ldquo;Do-gooding&rdquo; would seem to be a goal that some have and others don&rsquo;t, and it would seem odd to try to shift <em>goals</em> by learning epistemic rationality. But it seems to many of us that there is a kind of &ldquo;deep epistemic rationality&rdquo; that doesn&rsquo;t <em>change</em> one&rsquo;s goals, but <em>does</em> help one make actual contact with the deep caring that already exists within a person. Empirically, it also seems that when real humans do this, many of them end up caring more about the state of the world. For an early attempt to articulate pieces of this art, see Val&rsquo;s <a href="/lw/n2x/the_art_of_grieving_well/">recent post on Grieving</a>.</li>
+<li><strong><a href="https://docs.google.com/presentation/d/1CDA5GWVvM0ioIpRTJ83N4QNeXSV7l1TRPA50xVXEmUU/edit?usp=gmail">Double Crux</a></strong> is a structured format for collaboratively finding the truth in cases where two people disagree. Instead of non-interactively offering pieces of their respective platforms, people jointly seek the actual question at the crux of the disagreement&mdash;the root uncertainty that has the potential to affect <em>both</em> of their beliefs. &nbsp;We introduced this as an epistemic rationality technique, and used in in this way at e.g. EA Global, where people argued about cause priortization; it then made its way also into our material on competence and on how to sustainably care deeply about the world. &nbsp;(See the next two bullet points.)</li>
+<li><strong>Competence <em>as</em> &ldquo;deep/internal epistemic rationality.&rdquo;</strong>&nbsp; If I am frequently late to appointments and &ldquo;don&rsquo;t want to be,&rdquo; one can frame this as stemming from an inaccurate anticipation somewhere in my mind&mdash;perhaps I mis-anticipate whether my actions will make me late, or perhaps I disagree with myself as to whether lateness in fact harms my goals. Either way, it can be helpful to &ldquo;internally double crux&rdquo; the apparent disagreement, leading to both a better model and a better actual outcome. More generally, we are increasingly making headway on &ldquo;competence&rdquo; or &ldquo;instrumental rationality&rdquo; problems via techniques aimed at integrating accurate beliefs into all parts of one&rsquo;s psyche. &nbsp;</li>
+<li><strong>Do-gooding and epistemic rationality.</strong>&nbsp;&ldquo;Do-gooding&rdquo; would seem to be a goal that some have and others don&rsquo;t, and it would seem odd to try to shift <em>goals</em> by learning epistemic rationality. But it seems to many of us that there is a kind of &ldquo;deep epistemic rationality&rdquo; that doesn&rsquo;t <em>change</em> one&rsquo;s goals, but <em>does</em> help one make actual contact with the deep caring that already exists within a person. Empirically, it also seems that when real humans do this, many of them end up caring more about the state of the world. For an early attempt to articulate pieces of this "art of connecting to deep caring", see Val&rsquo;s <a href="/lw/n2x/the_art_of_grieving_well/">recent post on grieving</a>.</li>
<li><strong>Teaching the synthesis.</strong> Workshops are made of techniques, which are like sounding out words a letter at a time (C-A-T&hellip;C&hellip;Ca&hellip;Cat!). After years of pointing at the deeper skill (Cat! Hat! Antidisestablishmentarianism!), we&rsquo;ve finally found framings and explanations (like this one) that actually bridge the gap. Those, plus an explicit emphasis on synthesis and the addition of peer-to-peer tutoring, have successfully transformed the techniques into stepping stones toward the actual art.</li>
</ul>
<p>This is the beginning of work that we&rsquo;re poised to expand and improve in the coming year via our new Labs group.</p>

Why CFAR? The view from 2015

Edited by AnnaSalamon ( PeteMichaud ) 21 December 2015 05:08AM
@@ -39,13 +39,13 @@
<p>There is the process by which we improve a workshop, and there is the process by which we improve our understanding of how rationality works at its core. The two processes don&rsquo;t always help one another, but this year they did.</p>
<p>How we got there:</p>
<ul>
-<li>As it turns out, attempting to create AI risk scientists (as opposed to boosting the scientist-nature of everyday people) put a subtle but very different spin on the teaching of Sequences-style epistemic rationality. The MIRI Summer Fellows Program was a catalyst for the development of new rationality theory, both because the researchers were themselves trying to model mind-like processes and because they stubbornly insisted on actual clarity and cohesion in ways that forced us to find it even for the less-settled parts of our curriculum.</li>
+<li>As it turns out, attempting to create AI risk scientists (as opposed to boosting the scientist-nature of everyday people) put a subtle but very different spin on the teaching of Sequences-style epistemic rationality. &nbsp;It helped that the researchers were themselves trying to model mind-like processes and that they stubbornly insisted on building related models of what the heck we were trying to convey.</li>
<li>MIRI Summer Fellows was also a project we could just actually see mattered, and there's nothing quite like&nbsp;<a href="/lw/nb/something_to_protect/">actual stakes</a>&nbsp;when it comes to creating a sense of drive and purpose, and being willing to update. &nbsp;</li>
<li>Improving organizational capital created a positive feedback loop. Working to make our workshops &ldquo;crisp&rdquo;&mdash;to clean up the methods and metaphors that weren&rsquo;t pulling their weight&mdash;helped make more of what we knew more visible.</li>
</ul>
<p>Here are some brief highlights of the new <em>Art of Rationality</em> that we&rsquo;re currently seeing:</p>
<ul>
-<li><strong>One pillar, not three.</strong> CFAR has long talked about wanting to boost three distinct things in our participants (competence, epistemic rationality, and do-gooding). But we&rsquo;ve had the strong sense that there were ways to strengthen all three through the practice of a single, unified art of &ldquo;applied rationality&rdquo; (for instance, a deep understanding of reductionism seems to help with all three). Recently, we&rsquo;ve gotten much better at articulating <em>how</em> this link works. For example:</li>
+<li><strong>One pillar, not three.</strong> CFAR has long talked about wanting to boost three distinct things in our participants (competence, epistemic rationality, and do-gooding). But we&rsquo;ve had the strong sense that there were ways to strengthen all three through the practice of a single, unified art of &ldquo;applied rationality&rdquo; (for instance, a deep understanding of reductionism seems to help with all three). Recently, we&rsquo;ve gotten better at articulating <em>how</em> this link works. For example:</li>
<li><strong><a href="https://docs.google.com/presentation/d/1CDA5GWVvM0ioIpRTJ83N4QNeXSV7l1TRPA50xVXEmUU/edit?usp=gmail">Double Crux</a></strong> is a structured format for collaboratively finding the truth in cases where two people disagree. Instead of non-interactively offering pieces of their respective platforms, people jointly seek the actual question at the crux of the disagreement&mdash;the root uncertainty that has the potential to affect <em>both</em> of their beliefs.</li>
<li><strong>Competence <em>as</em> &ldquo;deep/internal epistemic rationality.&rdquo;</strong>&nbsp;if I am frequently late to appointments and &ldquo;don&rsquo;t want to be,&rdquo; one can frame this as stemming from an inaccurate anticipation somewhere in my mind&mdash;perhaps I mis-anticipate whether my actions will make me late, or perhaps I disagree with myself as to whether lateness in fact harms my goals. Either way, it can be helpful to &ldquo;internally double crux&rdquo; the apparent disagreement, leading to both a better model and a better actual outcome. In general, we are increasingly making headway on &ldquo;competence&rdquo; or &ldquo;instrumental rationality&rdquo; problems via techniques aimed at integrating accurate beliefs into all parts of one&rsquo;s psyche.</li>
<li><strong>Do-gooding and epistemic rationality.</strong>&nbsp;&ldquo;Do-gooding&rdquo; would seem to be a goal that some have and others don&rsquo;t, and it would seem odd to try to shift <em>goals</em> by learning epistemic rationality. But it seems to many of us that there is a kind of &ldquo;deep epistemic rationality&rdquo; that doesn&rsquo;t <em>change</em> one&rsquo;s goals, but <em>does</em> help one make actual contact with the deep caring that already exists within a person. Empirically, it also seems that when real humans do this, many of them end up caring more about the state of the world. For an early attempt to articulate pieces of this art, see Val&rsquo;s <a href="/lw/n2x/the_art_of_grieving_well/">recent post on Grieving</a>.</li>
