“Flinching away from truth” is often about *protecting* the epistemology

71 AnnaSalamon 20 December 2016 06:39PM

Related to: Leave a line of retreat; Categorizing has consequences.

There’s a story I like, about this little kid who wants to be a writer.  So she writes a story and shows it to her teacher.  

“You misspelt the word ‘ocean’”, says the teacher.  

“No I didn’t!”, says the kid.  

The teacher looks a bit apologetic, but persists:  “‘Ocean’ is spelt with a ‘c’ rather than an ‘sh’; this makes sense, because the ‘e’ after the ‘c’ changes its sound…”  

“No I didn’t!” interrupts the kid.  

“Look,” says the teacher, “I get it that it hurts to notice mistakes.  But that which can be destroyed by the truth should be!  You did, in fact, misspell the word ‘ocean’.”  

“I did not!” says the kid, whereupon she bursts into tears, and runs away and hides in the closet, repeating again and again: “I did not misspell the word!  I can too be a writer!”.

continue reading »

Further discussion of CFAR’s focus on AI safety, and the good things folks wanted from “cause neutrality”

35 AnnaSalamon 12 December 2016 07:39PM

Follow-up to: CFAR’s new focus, and AI Safety.

In the days since we published our previous post, a number of people have come up to me and expressed concerns about our new mission.  Several of these had the form “I, too, think that AI safety is incredibly important — and that is why I think CFAR should remain cause-neutral, so it can bring in more varied participants who might be made wary by an explicit focus on AI.”

I would here like to reply to these people and others, and to clarify what is and isn’t entailed by our new focus on AI safety.

continue reading »

[Link] CFAR's new mission statement (on our website)

7 AnnaSalamon 10 December 2016 08:37AM

CFAR’s new focus, and AI Safety

30 AnnaSalamon 03 December 2016 06:09PM

A bit about our last few months:

  • We’ve been working on getting a simple clear mission and an organization that actually works.  We think of our goal as analogous to the transition that the old Singularity Institute underwent under Lukeprog (during which chaos was replaced by a simple, intelligible structure that made it easier to turn effort into forward motion).
  • As part of that, we’ll need to find a way to be intelligible.
  • This is the first of several blog posts aimed at making our new form visible from outside.  (If you're in the Bay Area, you can also come meet us at tonight's open house.)  (We'll be talking more about the causes of this mission-change, the extent to which it is in fact a change, etc., in an upcoming post.)

Here's a short explanation of our new mission:
  • We care a lot about AI Safety efforts in particular, and about otherwise increasing the odds that humanity reaches the stars.

  • Also, we[1] believe such efforts are bottlenecked more by our collective epistemology than by the number of people who verbally endorse or act on "AI Safety", or any other "spreadable viewpoint" disconnected from its derivation.

  • Our aim is therefore to find ways of improving both individual thinking skill, and the modes of thinking and social fabric that allow people to think together.  And to do this among the relatively small sets of people tackling existential risk. 


continue reading »

On the importance of Less Wrong, or another single conversational locus

82 AnnaSalamon 27 November 2016 05:13PM
Epistemic status: My actual best bet.  But I used to think differently; and I don't know how to fully explicate the updating I did (I'm not sure what fully formed argument I could give my past self, that would cause her to update), so you should probably be somewhat suspicious of this until explicated.  And/or you should help me explicate it.

It seems to me that:
  1. The world is locked right now in a deadly puzzle, and needs something like a miracle of good thought if it is to have the survival odds one might wish the world to have.

  2. Despite all priors and appearances, our little community (the "aspiring rationality" community; the "effective altruist" project; efforts to create an existential win; etc.) has a shot at seriously helping with this puzzle.  This sounds like hubris, but it is at this point at least partially a matter of track record.[1]

  3. To aid in solving this puzzle, we must probably find a way to think together, accumulatively.

continue reading »

Several free CFAR summer programs on rationality and AI safety

18 AnnaSalamon 14 April 2016 02:35AM
CFAR will be running several free programs this summer, and applications are currently open.  Please apply if you’re interested, and forward the programs to anyone else who may be a good fit!
continue reading »

Consider having sparse insides

12 AnnaSalamon 01 April 2016 12:07AM

It's easier to seek true beliefs if you keep your (epistemic) identity small. (E.g., if you avoid beliefs like "I am a democrat", and say only "I am a seeker of accurate world-models, whatever those turn out to be".)

It seems analogously easier to seek effective internal architectures if you also keep non-epistemic parts of your identity small -- not "I am a person who enjoys nature", nor "I am someone who values mathematics" nor "I am a person who aims to become good at email" but only "I am a person who aims to be effective, whatever that turns out to entail (and who is willing to let much of my identity burn in the process)".

There are obviously hazards as well as upsides that come with this; still, the upsides seem worth putting out there.

The two biggest exceptions I would personally make, which seem to mitigate the downsides: "I am a person who keeps promises" and "I am a person who is loyal to [small set of people] and who can be relied upon to cooperate more broadly -- whatever that turns out to entail".

 

Thoughts welcome.

The correct response to uncertainty is *not* half-speed

77 AnnaSalamon 15 January 2016 10:55PM

Related to: Half-assing it with everything you've got; Wasted motion; Say it Loud.

Once upon a time (true story), I was on my way to a hotel in a new city.  I knew the hotel was many miles down this long, branchless road.  So I drove for a long while.

After a while, I began to worry I had passed the hotel.

 

 

So, instead of proceeding at 60 miles per hour the way I had been, I continued in the same direction for several more minutes at 30 miles per hour, wondering if I should keep going or turn around.

After a while, I realized: I was being silly!  If the hotel was ahead of me, I'd get there fastest if I kept going 60mph.  And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction.  And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.  

Either way, full speed was best.  My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward.  So, since I'm uncertain, I should go forward at half-speed!"  But averages don't actually work that way.[1]
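To see why no probability assignment rescues half-speed, here is a minimal sketch in Python.  The specifics -- a hotel 5 miles ahead or 3 miles behind, a 10-mile turnaround rule, and the helper functions `miles_driven` and `expected_hours` -- are made up for illustration and are not from the actual drive.  The point is that for a fixed search pattern, the miles you drive before reaching the hotel depend only on where the hotel is, not on your speed; so time is distance divided by speed, and 60 mph beats 30 mph under every probability you might assign to "the hotel is still ahead of me."

```python
# Toy model of the hotel story (all numbers invented for illustration):
# the hotel is either `ahead` miles in front of me or `behind` miles
# behind me.  My search pattern is "drive forward up to `check` more
# miles, then turn around."  The miles driven depend only on the pattern
# and on where the hotel actually is -- not on my speed.

def miles_driven(hotel_offset, check):
    """Miles driven before reaching the hotel.

    hotel_offset: hotel position relative to me (positive = ahead,
    negative = behind); check: how far forward I search before turning."""
    if 0 <= hotel_offset <= check:
        return hotel_offset                    # found it on the way out
    return check + (check - hotel_offset)      # drove out, then doubled back

def expected_hours(p_ahead, speed_mph, ahead=5.0, behind=3.0, check=10.0):
    """Expected hours to reach the hotel at a constant speed."""
    hours_if_ahead = miles_driven(ahead, check) / speed_mph
    hours_if_behind = miles_driven(-behind, check) / speed_mph
    return p_ahead * hours_if_ahead + (1 - p_ahead) * hours_if_behind

for p in (0.2, 0.5, 0.8):
    print(f"p(ahead)={p}:  60 mph -> {expected_hours(p, 60):.3f} h,"
          f"  30 mph -> {expected_hours(p, 30):.3f} h")
# For every p, the 30 mph time is exactly double the 60 mph time:
# no probabilistic mixture of the two models makes half-speed optimal.
```

Choosing the search pattern itself (the value of N, `check` above) is a separate optimization; the claim is only that for any fixed pattern and any mixture of the two models, full speed dominates half speed.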

Following this, I started noticing lots of hotels in my life (and, perhaps less tactfully, in my friends' lives).  For example:
  • I wasn't sure if I was a good enough writer to write a given doc myself, or if I should try to outsource it.  So, I sat there kind-of-writing it while also fretting about whether I should be outsourcing it instead.
    • (Solution:  Take a minute out to think through the relevant heuristics.  Then, either: (1) write the doc at full speed; or (2) try to outsource it; or (3) write full force for some fixed time period, and then pause and evaluate.)
  • I wasn't sure (back in early 2012) that CFAR was worthwhile.  So, I kind-of worked on it.
  • An old friend came to my door unexpectedly, and I was tempted to hang out with her, but I also thought I should finish my work.  So I kind-of hung out with her while feeling bad and distracted about my work.
  • A friend of mine, when teaching me math, seems to mumble specifically those words that he doesn't expect me to understand (in a sort of compromise between saying them and not saying them)...
  • Duncan reports that novice parkour students are unable to safely undertake certain sorts of jumps, because they risk aborting the move mid-stream, after the actual last safe stopping point (apparently kind-of-attempting these jumps is more dangerous than either attempting or not attempting them).
  • It is said that start-up founders need to be irrationally certain that their startup will succeed, lest they be unable to do more than kind-of work on it...

That is, it seems to me that often there are two different actions that would make sense under two different models, and we are uncertain which model is true... and so we find ourselves taking an intermediate, half-speed action... even when that action makes no sense under any probabilistic mixture of the two models.



You might try looking out for such examples in your life.


[1] Edited to add: The hotel example has received much nitpicking in the comments.  But: (A) the actual example was legit, I think.  Yes, stopping to think has some legitimacy, but driving slowly for a long time because of uncertainty does not optimize for thinking.  Similarly, it may make sense to drive slowly to stare at the buildings in some contexts... but I was on a very long empty country road, with no buildings anywhere (true historical fact), and I was not squinting carefully at the scenery.  The thing I needed to do was to execute an efficient search pattern, with a threshold for a future time at which to switch from full speed in one direction to full speed in the other.  Also: (B) consider some of the other examples: "kind of working", "kind of hanging out with my friend", etc. seem to be common behaviors that are mostly not all that useful in the usual case.

Why CFAR's Mission?

38 AnnaSalamon 02 January 2016 11:23PM

Related to:


Briefly put, CFAR's mission is to improve the sanity/thinking skill of those who are most likely to actually usefully impact the world.

I'd like to explain what this mission means to me, and why I think a high-quality effort of this sort is essential, possible, and urgent.

I used a Q&A format (with imaginary Q's) to keep things readable; I would also be very glad to Skype 1-on-1 if you'd like something about CFAR to make sense, as would Pete Michaud.  You can schedule a conversation automatically with me or Pete.

---

Q:  Why not focus exclusively on spreading altruism?  Or else on "raising awareness" for some particular known cause?

Briefly put: because historical roads to hell have been powered in part by good intentions; because the contemporary world seems bottlenecked by its ability to figure out what to do and how to do it (i.e. by ideas/creativity/capacity) more than by folks' willingness to sacrifice; and because rationality skill and epistemic hygiene seem like skills that may distinguish actually useful ideas from ineffective or harmful ones in a way that "good intentions" cannot.

Q:  Even given the above -- why focus extra on sanity, or true beliefs?  Why not focus instead on, say, competence/usefulness as the key determinant of how much do-gooding impact a motivated person can have?  (Also, have you ever met a Less Wronger?  I hear they are annoying and have lots of problems with “akrasia”, even while priding themselves on their high “epistemic” skills; and I know lots of people who seem “less rational” than Less Wrongers on some axes who would nevertheless be more useful in many jobs; is this “epistemic rationality” thingy actually the thing we need for this world-impact thingy?...)

This is an interesting one, IMO.

Basically, it seems to me that epistemic rationality, and skills for forming accurate explicit world-models, become more useful the more ambitious and confusing a problem one is tackling.

For example:

continue reading »
