Comment author: rpmcruz 14 April 2016 10:58:05PM *  0 points [-]

By the way, when do we get acceptance/rejection notifications? (And do we? :))

I have applied to the Fellows program, but I would need to know the outcome of my application in order to buy flights that aren't terribly expensive, and to book holidays for that period. It's really useful for those of us who live on the other side of the planet to know. :) (I am not complaining, I appreciate the enormous work you guys are putting in, including this free workshop, but it would be good to hear something about the application process. It is fine if I was rejected (well, I will be sad :)), but please let us know.)

Comment author: AnnaSalamon 16 April 2016 06:04:31AM 0 points [-]

Working through these slowly; should be up to date by 4/24.

Several free CFAR summer programs on rationality and AI safety

18 AnnaSalamon 14 April 2016 02:35AM
CFAR will be running several free summer programs, which are currently taking applications.  Please apply if you're interested, and forward the programs also to anyone else who may be a good fit!
Comment author: negamuhia 11 April 2016 02:33:39PM 0 points [-]

I signed up for a CFAR workshop, and got a scholarship, but couldn't travel for financial reasons. Is there a way to get travel assistance for either WAISS or the MIRI Fellowship program? I'll just apply for both.

Comment author: AnnaSalamon 11 April 2016 06:51:28PM 2 points [-]

WAISS, MSFP, CfML, and (for high-school-aged folk) EuroSPARC all have some ability to apply for travel assistance.

Comment author: rpmcruz 10 April 2016 11:41:16AM *  1 point [-]

It is unfortunate that the CFAR for ML Researchers workshop collides with the European LW yearly meetup. I am an ML researcher, and I would love to go to San Francisco, but I don't want to miss the European meetup either. :)

Comment author: AnnaSalamon 10 April 2016 07:02:45PM 1 point [-]

Alas, yes; I found that unfortunate as well, since I, too, had wanted to attend both!

Consider having sparse insides

12 AnnaSalamon 01 April 2016 12:07AM

It's easier to seek true beliefs if you keep your (epistemic) identity small. (E.g., if you avoid beliefs like "I am a Democrat", and say only "I am a seeker of accurate world-models, whatever those turn out to be".)

It seems analogously easier to seek effective internal architectures if you also keep non-epistemic parts of your identity small -- not "I am a person who enjoys nature", nor "I am someone who values mathematics" nor "I am a person who aims to become good at email" but only "I am a person who aims to be effective, whatever that turns out to entail (and who is willing to let much of my identity burn in the process)".

There are obviously hazards as well as upsides that come with this; still, the upsides seem worth putting out there.

The two biggest exceptions I would personally make, which seem to mitigate the downsides: "I am a person who keeps promises" and "I am a person who is loyal to [small set of people] and who can be relied upon to cooperate more broadly -- whatever that turns out to entail".

 

Thoughts welcome.

In response to Why CFAR's Mission?
Comment author: AnnaSalamon 01 February 2016 01:26:08AM 1 point [-]

The fundraiser closes today at midnight Pacific time; if you've been planning to donate, now is the moment. Marginal funds seem to me to be extremely impactful this year; I'd be happy to discuss. http://rationality.org/donate-2015/

In response to Why CFAR's Mission?
Comment author: Squark 17 January 2016 06:50:19AM 0 points [-]

It feels like there is an implicit assumption in CFAR's agenda that most of the important things are going to happen in one or two decades from now. Otherwise it would make sense to place more emphasis on creating educational programs for children where the long term impact can be larger (I think). Do you agree with this assessment? If so, how do you justify the short term assumption?

In response to comment by Squark on Why CFAR's Mission?
Comment author: AnnaSalamon 26 January 2016 09:02:01AM 2 points [-]

It feels like there is an implicit assumption in CFAR's agenda that most of the important things are going to happen in one or two decades from now.

I don't think this; it seems to me that the next decade or two may be pivotal, but they may well not be, and the rest of the century matters quite a bit as well in expectation.

There are three main reasons we've focused mainly on adults:

  1. Adults can contribute more rapidly, and so can be part of a process of compounding careful-thinking resources in a shorter-term way. E.g. if adults are hired now by MIRI, they improve the ratio of thoughtfulness within those thinking about AI safety, and this can in turn impact the culture of the field, the quality of future years’ research, etc.

  2. For reasons resembling (1), adults provide a faster “grounded feedback cycle”. E.g., adults who come in with business or scientific experience can tell us right away whether the curricula feel promising to them; students and teens are more likely to be indiscriminately enthusiastic.

  3. Adults can often pay their own way at the workshops; children can’t; we therefore cannot afford to run very many workshops for kids until we acquire more donations or other financial resources.

Nevertheless, I agree with you that programs targeting children can be higher impact per person and are extremely worthwhile in the medium to long run. This is indeed part of the motivation for SPARC, and expanding such programs is key to our long-term aims; marginal donations are key to our ability to do this quickly, and not just eventually.

Comment author: 27chaos 16 January 2016 12:45:44AM 1 point [-]

Either way, full speed was best. My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward. So, since I'm uncertain, I should go forward at half-speed!" But averages don't actually work that way.

Averages don't work that way because you did the math wrong: you should have stopped! I understand the point that you're trying to make with this post, but there are many cases in which uncertainty really does mean you should stop and think, or hedge your bets, rather than go full speed ahead. It's true there are situations in which this isn't the case, but I think they're rare enough that it's worth acknowledging the value of hesitation in many cases and trying to be clear about distinguishing valid from invalid hesitation.

Comment author: AnnaSalamon 16 January 2016 02:19:39AM 3 points [-]

It's true there are situations in which this isn't the case, but I think they're rare enough that it's worth acknowledging the value of hesitation in many cases and trying to be clear about distinguishing valid from invalid hesitation.

It seems to me that thinking through uncertainties and scenarios is often really really important, as is making specific safeguards that will help you if your model turns out to be wrong; but I claim that there is a different meaning of "hesitation" that is like "keeping most of my psyche in a state of roadblock while I kind-of hang out with my friend while also feeling anxious about my paper", or something, that is very different from actually concretely picturing the two scenarios, and figuring out how to create an outcome I'd like given both possibilities. I'm not expressing it well, but does the distinction I am trying to gesture at make sense?

Comment author: AnnaSalamon 16 January 2016 02:16:36AM 4 points [-]

If you take a weighted sum of (75% likely 60 mph forward) + (25% likely 60 mph backward), you get (30 mph forward).

Stopping briefly to choose a plan might've been sensible, if it was easier to think while holding still; stopping after that (I had no GPS or navigation ability) wouldn't have helped; I had to proceed in some direction to find out where the hotel was, and there was no point in doing that at anything less than full speed.

Often a person should hedge bets in some fashion, or should take some action under uncertainty that differs from the action one would take if one were certain of model 1 or of model 2. The point is that "hedging" or "acting under uncertainty" in this way differs in many particulars from the sort of "kind of working" that people often end up accidentally doing from a naive sort of averaging. Often it e.g. involves running info-gathering tests at full speed, one after another. Or e.g., betting "blue" each time in this experiment, while also attempting to form better models.
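The "bet blue" point can be made with a couple of lines of arithmetic. The experiment isn't specified in the comment, so this sketch assumes the classic probability-matching setup: a light flashes blue 70% of the time and red 30%, and you bet on each flash (the 0.7 figure is an assumption for illustration).

```python
# Assumed setup: the light is blue with probability 0.7, red with 0.3.
p_blue = 0.7

# Strategy 1: always bet blue -- correct exactly when the light is blue.
always_blue_accuracy = p_blue

# Strategy 2: "probability matching" -- bet blue 70% of the time and red
# 30% of the time; this is the betting analog of driving at half speed.
matching_accuracy = p_blue * p_blue + (1 - p_blue) * (1 - p_blue)

print(always_blue_accuracy)          # 0.7
print(round(matching_accuracy, 2))   # 0.58
```

Committing fully to the better model wins (70% vs. 58%), even while you keep gathering evidence about whether the model is right.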

The correct response to uncertainty is *not* half-speed

77 AnnaSalamon 15 January 2016 10:55PM

Related to: Half-assing it with everything you've got; Wasted motion; Say it Loud.

Once upon a time (true story), I was on my way to a hotel in a new city.  I knew the hotel was many miles down this long, branchless road.  So I drove for a long while.

After a while, I began to worry I had passed the hotel.

So, instead of proceeding at 60 miles per hour the way I had been, I continued in the same direction for several more minutes at 30 miles per hour, wondering if I should keep going or turn around.

After a while, I realized: I was being silly!  If the hotel was ahead of me, I'd get there fastest if I kept going 60mph.  And if the hotel was behind me, I'd get there fastest by heading at 60 miles per hour in the other direction.  And if I wasn't going to turn around yet -- if my best bet given the uncertainty was to check N more miles of highway first, before I turned around -- then, again, I'd get there fastest by choosing a value of N, speeding along at 60 miles per hour until my odometer said I'd gone N miles, and then turning around and heading at 60 miles per hour in the opposite direction.  

Either way, full speed was best.  My mind had been naively averaging two courses of action -- the thought was something like: "maybe I should go forward, and maybe I should go backward.  So, since I'm uncertain, I should go forward at half-speed!"  But averages don't actually work that way.[1]
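To make the averaging mistake concrete, here is a minimal sketch (the 10-mile distance, the 75/25 odds, and the turn-around-after-N-miles plan are all assumed for illustration). Averaging the two candidate velocities does give "30 mph forward", but the expected time to the hotel under any fixed search plan simply doubles when you halve your speed, so half speed loses in every scenario.

```python
# Hypothetical numbers: hotel is 10 miles ahead with probability 0.75,
# or 10 miles behind with probability 0.25.
P_AHEAD, DIST = 0.75, 10.0

def expected_hours(speed_mph, search_miles=10.0):
    """Expected time for the plan: drive forward `search_miles`; if the
    hotel wasn't ahead, turn around and drive back past the start."""
    t_if_ahead = DIST / speed_mph                         # found on the way out
    t_if_behind = (2 * search_miles + DIST) / speed_mph   # out, back, 10 more
    return P_AHEAD * t_if_ahead + (1 - P_AHEAD) * t_if_behind

# Naive average of velocities: 0.75*(+60) + 0.25*(-60) = +30 mph ...
avg_velocity = P_AHEAD * 60 + (1 - P_AHEAD) * (-60)
print(avg_velocity)        # 30.0

# ... but halving the speed exactly doubles the expected travel time,
# whatever turnaround point you pick:
print(expected_hours(60))  # 0.25
print(expected_hours(30))  # 0.5
```

The "30 mph forward" average describes no good plan: whichever model is true, the half-speed driver arrives strictly later than the full-speed driver following the same route.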

Following this, I started noticing lots of hotels in my life (and, perhaps less tactfully, in my friends' lives).  For example:
  • I wasn't sure if I was a good enough writer to write a given doc myself, or if I should try to outsource it.  So, I sat there kind-of-writing it while also fretting about whether the task was correct.
    • (Solution:  Take a minute out to think through heuristics.  Then, either: (1) write the doc at full speed; or (2) try to outsource it; or (3) write full force for some fixed time period, and then pause and evaluate.)
  • I wasn't sure (back in early 2012) that CFAR was worthwhile.  So, I kind-of worked on it.
  • An old friend came to my door unexpectedly, and I was tempted to hang out with her, but I also thought I should finish my work.  So I kind-of hung out with her while feeling bad and distracted about my work.
  • A friend of mine, when teaching me math, seems to mumble specifically those words that he doesn't expect me to understand (in a sort of compromise between saying them and not saying them)...
  • Duncan reports that novice Parkour students are unable to safely attempt certain sorts of jumps, because they risk aborting the move mid-stream, after the actual last safe stopping point (apparently kind-of-attempting these jumps is more dangerous than either attempting or not attempting them).
  • It is said that start-up founders need to be irrationally certain that their startup will succeed, lest they be unable to do more than kind-of work on it...

That is, it seems to me that often there are two different actions that would make sense under two different models, and we are uncertain which model is true... and so we find ourselves taking an intermediate of half-speed action... even when that action makes no sense under any probabilistic mixture of the two models.



You might try looking out for such examples in your life.


[1] Edited to add: The hotel example has received much nitpicking in the comments.  But: (A) the actual example was legit, I think.  Yes, stopping to think has some legitimacy, but driving slowly for a long time because uncertain does not optimize for thinking.  Similarly, it may make sense to drive slowly to stare at the buildings in some contexts... but I was on a very long empty country road, with no buildings anywhere (true historical fact), and also I was not squinting carefully at the scenery.  The thing I needed to do was to execute an efficient search pattern, with a threshold for a future time at which to switch from full-speed in some direction to full-speed in the other.  Also: (B) consider some of the other examples; "kind of working", "kind of hanging out with my friend", etc. seem to be common behaviors that are mostly not all that useful in the usual case.
