I would be extremely glad to talk to anyone about CFAR, the impact of marginal CFAR donations on the world's talent bottlenecks, or any related things. (If you like, we can try the double crux game.) You can book time with me here: http://www.meetme.so/cfar-anna
I agree with this. I might not get as much value from LW as I used to, but it being up and running is still positive net value for me.
I need to think about this more, but my present impression is in favor of keeping LW and making it good. It seems to me that we gain quite a bit from having a Schelling place to post, such that evidence and arguments posted to that Schelling location become common knowledge.
I agree that LW has been doing fairly badly lately, but I am fairly seriously attempting to craft a post sequence designed to combat that; if not LW, I'd favor some other method that has a single secure Schelling spot.
My impression is that we know a fair bit of applied rationality that was not successfully conveyed by Eliezer's original Sequences (although a fair chunk of it seems to me to be implicit in those Sequences), and that we are now in a position to make a more serious attempt to convey it in writing.
Why startup founders have mood swings (and why they may have uses)
(This post was collaboratively written together with Duncan Sabien.)
Startup founders stereotypically experience some pretty serious mood swings. One day, their product seems destined to be bigger than Google, and the next, it’s a mess of incoherent, unrealistic nonsense that no one in their right mind would ever pay a dime for. Many of them spend half of their time full of drive and enthusiasm, and the other half crippled by self-doubt, despair, and guilt. Often this rollercoaster ride goes on for years before the company either finds its feet or goes under.
Well, sure, you might say. Running a startup is stressful. Stress comes with mood swings.
But that’s not really an explanation—it’s like saying stuff falls when you let it go. There’s something about the “launching a startup” situation that induces these kinds of mood swings in many people, including plenty who would otherwise be entirely stable.
How long since you first drew these graphs? Have you since considered where you are standing on the graph? Does the graph look (or rather feel) like you initially thought?
I think so, roughly, although it's not like I have anything like metrics. (It's been 5 or 6 years.)
Two Growth Curves
Sometimes, it helps to take a model that part of you already believes, and to make a visual image of your model so that more of you can see it.
One of my all-time favorite examples of this:
I used to often hesitate to ask dumb questions, to publicly try skills I was likely to be bad at, or to visibly/loudly put forward my best guesses in areas where others knew more than me.
I was also frustrated with this hesitation, because I could feel it hampering my skill growth. So I would try to convince myself not to care about what people thought of me. But that didn't work very well, partly because what folks think of me is in fact somewhat useful/important.
Then, I got out a piece of paper and drew how I expected the growth curves to go.

In blue, I drew the apparent-coolness level that I could achieve if I stuck with the "try to look good" strategy. In brown, I drew the apparent-coolness level I'd have if I instead made mistakes as quickly and loudly as possible -- I'd look worse at first, but then I'd learn faster, eventually overtaking the blue line.
Suddenly, instead of pitting my desire to become smart against my desire to look good, I could pit my desire to look good now against my desire to look good in the future :)
I return to this image of two growth curves often when I'm faced with an apparent tradeoff between substance and short-term appearances. (E.g., I used to often find myself scurrying to get work done, or to look productive / not-horribly-behind today, rather than trying to build the biggest chunks of capital for tomorrow. I would picture these growth curves.)
The program is no longer conditional; we're on; group looks awesome; applications still welcome.
It may help to mention in what way the event is conditional. Summer is a rather valuable time for many who may attend, and some types of backup plans (e.g., internships) are hard to make.
The event is conditional on finding 14+ good participants. Applications are looking good, and I'm optimistic, but it's not certain yet. We will try to finalize things as soon as we can.
Why does this program rely on AI risk being within the Overton window? I would guess that the majority of people interested in this were already interested in AI risk before it went mainstream.
First, because the high-math community seems to contain many who are interested now (and have applied), who it would've been harder to interest before. Second, because running such a program for MIRI is more compatible with CFAR's branding, and CFAR's ability to appeal to a wide audience, now than before.
I find this bit quite alarming:
because it seems to me to amount to this: "Our goals for the year included putting in place metrics by which we could tell whether we were actually achieving what we want. So we did that. And then we decided we didn't want to track those, so we threw them away again."
... which is, to be sure, a reasonable course of action if you discover that you were measuring the wrong thing -- but is also exactly what you'd see if CFAR had found (or guessed) that it wasn't making progress according to those metrics, and didn't want that fact to be too visible.
Combined with an apparent shift from "hold workshops that enhance people's instrumental rationality" in the direction of "hold workshops that funnel people into MIRI", and the discovery that real rationality apparently necessarily involves "deep caring" ... I dunno, maybe it's all absolutely fine, but it looks just a little too much like a transition from "rationality enhancer" to "cult recruitment vehicle".
Sorry. The original phrasing around how we were now going to measure was pretty bad, I agree. I just edited it. I had been bothered by the very text you quoted, and we had an internal thread where we all discussed it and agreed that the phrases were wrong... but we were slow about that, and you commented while we were discussing! The new text more closely reflects the actual structure of how we've been thinking about it all.
It's a bit tricky to publish a long post with many co-editors without letting something inaccurate through (at least in a sleep-deprived marathon like the one we very rationally used before publishing this one; a bunch of us were working collaboratively on the text). But we should probably have edited a bit more before posting. Anyhow, my apologies for editing this text out from under you after you commented.