Comment author: gjm 20 December 2015 10:13:58PM 6 points [-]

I find this bit quite alarming:

We hit some of our concrete goals for 2015, and pivoted away from others.

We created a metric for strategic usefulness, hitting the first goal; we started tracking that metric, hitting the second goal.

We chose to change focus from boosting alumni scores on these components, however. [...] Focusing on boosting those components no longer made sense, and we transitioned away from that target.

because it seems to me to amount to this: "Our goals for the year included putting in place metrics by which we could tell whether we were actually achieving what we want. So we did that. And then we decided we didn't want to track those, so we threw them away again."

... which is, to be sure, a reasonable course of action if you discover that you were measuring the wrong thing -- but is also exactly what you'd see if CFAR had found (or guessed) that it wasn't making progress according to those metrics, and didn't want that fact to be too visible.

Combined with an apparent shift from "hold workshops that enhance people's instrumental rationality" in the direction of "hold workshops that funnel people into MIRI", and the discovery that real rationality apparently necessarily involves "deep caring" ... I dunno, maybe it's all absolutely fine, but it looks just a little too much like a transition from "rationality enhancer" to "cult recruitment vehicle".

Comment author: AnnaSalamon 21 December 2015 08:51:25AM *  4 points [-]

Sorry. The original phrasing around how we were going to measure was pretty bad, I agree. I just edited it. I had been bothered by the very text you quoted, and we had an internal thread where we all discussed it and agreed that the phrasing was wrong... but we were slow about that, and you commented while we were discussing! The new text more closely reflects the actual structure of how we've been thinking about it all.

It's a bit tricky to publish a long post with many co-editors without letting something inaccurate through (at least in a sleep-deprived marathon like we very rationally used before publishing this one; there were a bunch of us working collaboratively on the text). We probably should have edited a bit more before posting; anyhow, my apologies for editing the text out from under you after you commented.

Comment author: AnnaSalamon 20 December 2015 02:48:29AM *  14 points [-]

I would be extremely glad to talk to anyone about CFAR, the impact of marginal CFAR donations on the world's talent bottlenecks, or any related topics. (If you like, we can try the double crux game.) You can book time with me here: http://www.meetme.so/cfar-anna

In response to comment by gwillen on LessWrong 2.0
Comment author: Kaj_Sotala 03 December 2015 12:56:52PM 18 points [-]

I agree with this. I might not get as much value from LW as I used to, but it being up and running is still positive net value for me.

In response to comment by Kaj_Sotala on LessWrong 2.0
Comment author: AnnaSalamon 14 December 2015 04:49:06AM 5 points [-]

Me, too.

In response to LessWrong 2.0
Comment author: AnnaSalamon 14 December 2015 04:45:15AM 26 points [-]

I need to think about this more, but my present impression is in favor of keeping LW and making it good. It seems to me that we gain quite a bit from having a Schelling place to post, such that evidence and arguments posted to that Schelling location become common knowledge.

I agree that LW has been doing fairly badly lately, but I am making a serious attempt to craft a post sequence designed to combat that; if not LW, I'd favor some other method that has a single secure Schelling spot.

My impression is that we know a fair bit of applied rationality that was not successfully conveyed by Eliezer's original Sequences (although a fair chunk of it seems to me to be implicit in those Sequences), and that we are now in a position to make a more serious attempt to convey it in writing.

Why startup founders have mood swings (and why they may have uses)

47 AnnaSalamon 09 December 2015 06:59PM

(This post was collaboratively written together with Duncan Sabien.)


Startup founders stereotypically experience some pretty serious mood swings.  One day, their product seems destined to be bigger than Google, and the next, it’s a mess of incoherent, unrealistic nonsense that no one in their right mind would ever pay a dime for.  Many of them spend half of their time full of drive and enthusiasm, and the other half crippled by self-doubt, despair, and guilt.  Often this rollercoaster ride goes on for years before the company either finds its feet or goes under.


Well, sure, you might say.  Running a startup is stressful.  Stress comes with mood swings.  


But that’s not really an explanation—it’s like saying stuff falls when you let it go.  There’s something about the “launching a startup” situation that induces these kinds of mood swings in many people, including plenty who would otherwise be entirely stable.


continue reading »
In response to Two Growth Curves
Comment author: Gunnar_Zarncke 02 October 2015 10:38:47PM *  3 points [-]

How long has it been since you first drew these graphs? Have you since considered where you stand on the graph? Does the graph look (or rather feel) the way you initially thought it would?

Comment author: AnnaSalamon 03 October 2015 09:26:02PM *  1 point [-]

I think so, roughly, although I don't have anything like metrics. (It's been 5 or 6 years.)

Two Growth Curves

35 AnnaSalamon 02 October 2015 12:59AM

Sometimes, it helps to take a model that part of you already believes, and to make a visual image of your model so that more of you can see it.

One of my all-time favorite examples of this: 

I used to often hesitate to ask dumb questions, to publicly try skills I was likely to be bad at, or to visibly/loudly put forward my best guesses in areas where others knew more than me.

I was also frustrated with this hesitation, because I could feel it hampering my skill growth.  So I would try to convince myself not to care about what people thought of me.  But that didn't work very well, partly because what folks think of me is in fact somewhat useful/important.

Then, I got out a piece of paper and drew how I expected the growth curves to go.

In blue, I drew the apparent-coolness level that I could achieve if I stuck with the "try to look good" strategy.  In brown, I drew the apparent-coolness level I'd have if I instead made mistakes as quickly and loudly as possible -- I'd look worse at first, but then I'd learn faster, eventually overtaking the blue line.

Suddenly, instead of pitting my desire to become smart against my desire to look good, I could pit my desire to look good now against my desire to look good in the future :)

I return to this image of two growth curves often when I'm faced with an apparent tradeoff between substance and short-term appearances.  (E.g., I used to often find myself scurrying to get work done, or to look productive / not-horribly-behind today, rather than trying to build the biggest chunks of capital for tomorrow.  I would picture these growth curves.)
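The picture above can also be turned into a toy calculation. Here is a minimal sketch in Python; all the numbers are made up for illustration (the post only draws the shapes of the curves, not values), but it shows the crossover structure the image relies on:

```python
# Toy model of the two growth curves.
# Blue: "try to look good" -- starts with higher apparent coolness, grows slowly.
# Brown: "make mistakes quickly and loudly" -- starts lower, but learns faster.
# All parameters are hypothetical.

def blue(t):
    return 5 + 0.5 * t   # higher start, slow growth

def brown(t):
    return 2 + 1.5 * t   # lower start, faster growth

# First time step at which the loud-mistakes curve overtakes.
crossover = next(t for t in range(100) if brown(t) > blue(t))
print(crossover)  # 4
```

The interesting quantity is the crossover point: before it, the "look good now" strategy wins on appearances; after it, the faster-learning strategy wins on both substance and appearances.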

Comment author: AnnaSalamon 13 May 2015 08:31:50PM 2 points [-]

The program is no longer conditional; we're on. The group looks awesome, and applications are still welcome.

Comment author: Dorikka 01 May 2015 12:33:45AM 1 point [-]

It may help to mention in what way the event is conditional. Summer is a rather valuable time for many who may attend, and some types of backup plans (e.g. internships) are hard to make.

Comment author: AnnaSalamon 01 May 2015 03:45:52AM 1 point [-]

The event is conditional on finding 14+ good participants. Applications are looking good, and I'm optimistic, but it's not certain yet. We will try to finalize things as soon as we can.

Comment author: AlexMennen 30 April 2015 03:30:55PM 1 point [-]

Why does this program rely on AI risk being within the Overton window? I would guess that the majority of people interested in this were already interested in AI risk before it went mainstream.

Comment author: AnnaSalamon 30 April 2015 07:48:08PM 5 points [-]

First, because the high-math community seems to contain many who are interested now (and have applied), who it would've been harder to interest before. Second, because running such a program for MIRI is more compatible with CFAR's branding, and CFAR's ability to appeal to a wide audience, now than before.
