I can imagine it being useful, but it's hard to say without trying to organize a solstice. I think if I used the product I would be happy to upload my program through it. I will say that most of the organizational effort was in people, not program, but I can imagine this simplifying a decent amount of program work. My big fear about projects like this is that if I don't have ultimate control over the content (e.g., I want to change a lyric or a chord, and the app doesn't let me do that), then I have to either give up on my change or copy everything into Google Docs anyway, and I'd probably end up doing the latter and getting no value from the product.
I used a main Google Doc with several tabs: "Plan" for organizers' use, with a bullet outline; "Booklet" for the printable handout with all the song lyrics; and "Speeches" for the text of the speeches. Each bullet item in the Plan had specific timings and linked to the sheet music and the best YouTube recording of the song. The Plan also had a bunch of notes/comments on how to execute each element. Separately, I created a Google Slides presentation with all the lyrics to be projected.
I didn't manage to read this in 2024, but I was linked to it from Anna's end-of-2025 CFAR posts, and reading it now, a bunch of lightbulbs went on for me that I wasn't yet ready for in 2024.
I did experience something like burnout around then and am now much more recovered, and I resonate a lot with the beginning especially: the idea that orgs need to be paying attention to the world, that there aren't very good forcing functions for them to do so, that burnout is the pushing-on-a-string feeling caused by that, and that recovery looks like putting yourself in places where the string has tension.
The model seems incomplete somehow; I'm not exactly sure what's missing from it, but regardless it is useful and resonant to me.
This is pretty useful!
I note that it assigns infinite badness to going bankrupt (e.g., if you set the cost of any event to >= your wealth, it always recommends taking the insurance). But in real life, going bankrupt is not infinitely bad, and there are definitely some insurances you don't want to pay for even if the loss would bankrupt you. It is not immediately obvious to me how to improve the app to take this into account, other than warning the user that they're in that situation. Anyway, it's still useful, but I figured I'd flag it.
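For concreteness, here's a minimal sketch of why this happens, assuming the app is comparing expected log wealth (Kelly-style utility); the function names and numbers below are mine for illustration, not the app's:

```python
import math

def expected_log_wealth_uninsured(wealth, loss, p_loss):
    # Skip insurance: with probability p_loss we bear the loss ourselves.
    # If the loss is >= wealth, log wealth goes to -infinity: the
    # "infinite badness" of going bankrupt under log utility.
    remaining = wealth - loss
    log_bad = math.log(remaining) if remaining > 0 else float("-inf")
    return p_loss * log_bad + (1 - p_loss) * math.log(wealth)

def expected_log_wealth_insured(wealth, premium):
    # Take insurance: pay the premium for certain, and the loss is covered.
    return math.log(wealth - premium)

# Even a one-in-a-billion chance of a wealth-wiping loss makes the
# uninsured side -inf, so any premium short of total wealth "wins".
print(expected_log_wealth_uninsured(100_000, 100_000, 1e-9))  # -inf
print(expected_log_wealth_insured(100_000, 99_999))           # 0.0, finite
```

If that assumption about the utility function is right, the -inf branch dominates the comparison no matter how small the probability, which is why a warning may be the only easy fix.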
Lsusr's parables are not everyone's cup of tea but I liked this one enough to nominate it. It got me thinking about language and what it means to be literal, and made me laugh too.
I quite liked this post, and strong-upvoted it at the time. I honestly don't remember reading it, but rereading it now, I think I learned a lot, both from the explanation of the feedback loops and, especially, from the predictions in the "what to expect" section, which I found insightful.
Looking back now, the post seems obvious, but I think its content was not obvious (to me) at the time, hence my nominating it for the LW Review.
(Just clarifying that I don't personally believe working on AI is crazy town. I'm quoting a thing that made an impact on me awhile back and I still think is relevant culturally for the EA movement.)
I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the Ponzi-scheme-style recruiting is all aimed at AIS and meta.
I agree, and am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also about doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it's not just OK but good for people to (with a nod to Ajeya) "get off the crazy train" at different points along the EA journey. We currently have too many people taking it all the way into AI town. Again, I don't know what to do to fix it.
(Commenting as myself, not representing any org)
Thanks Elizabeth and Timothy for doing this! Lots of valuable ideas in this transcript.
I felt excited, sad, and also a bit confused, since the piece feels both slightly resonant and somewhat disconnected from my experience of EA. Resonant because I agree with the college-recruiting and epistemic aspects of your critiques. Disconnected because, while collectively the community doesn't seem to be going in the direction I would hope, I do see many individuals in EA leadership positions whom I deeply respect and trust to have good individual views and good process, and I'm sad you don't see them (maybe they are people who aren't at their best online, and mostly aren't in the Bay).
I am pretty worried about the Forum and social media more broadly. We need better forms of engagement online, like this article and your other critiques. In the last few years, it's become clearer and clearer to me that EA's online strategy is not really serving the community well. If I knew what the right strategy was, I would try to nudge things toward it. Regardless, I still see lots of good in EA's work and overall trajectory.
[my critiques] dropped like a stone through water
I dispute this. Maybe you just don't see the effects yet? It takes a long time for things to take effect, even internally in places you wouldn't have access to, and even longer for the effects to become externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and I occasionally cite it to others in EA leadership world. That's part of why I'm pretty sure your work has had nontrivial impact; I'm just not too surprised that the impact hasn't become apparent to you.
Personally, I'm still struggling with my own relationship to EA. I've been on the EV board for over a year, an influential role at the most influential meta org, and I still don't understand how to use that role to impact EA. I see the problems more clearly than I did before, which is great, but I don't yet see solutions or great ways forward, and I sense that nobody really does. We're mostly working on staying afloat rather than on high-level navigation.
I liked Zach's recent talk/Forum post about EA's commitment to principles first. I hope it's at least a bit heartening, since I get the sense that a big part of your critique is that EA has lost its principles.
Okay. I get where you're coming from, but man, this seems like a naive take!
You've cited an interesting-to-me set of people: people who have all been extraordinarily successful at getting an impactful message out to new audiences, in particular audiences who wouldn't have been drawn into the existing LW sphere on their own. Piper, MacAskill, and Ball are all great communicators, and a big part of that communication skill is matching your message to its audience. My hypothesis is that the behavior you're critiquing is driven by their skill at figuring out what people need to hear, and when, to excite them and move them in the direction they need to go to make the world a better place. Maybe sometimes they're not perfect at it, but they're a hell of a lot better than you and me.
Why do you think this will work? In the political sphere especially, motivations are extraordinarily scrutinized. Maybe Piper and MacAskill could get away with this, but I suspect Ball could not; maybe he can sneak a few things in now that he's proven himself, but his ability to say anything about his motivations now speaks to the extraordinary communication skill that got him where he is. (Also, you shouldn't assume Ball was spinning before and is now telling the truth: he knows that "I'm a straight shooter" is what 80k's audience wants to hear, and he may be forced to spin whatever he says to the folks he works with, etc.)
I don't disagree that people should lean further in the direction of truthfulness about motivations that you're pushing for. I also want people to lean in that direction, and I try to hold myself and my teams to a higher standard of truthfulness in comms every day. But it's very much a spectrum, not black and white, and it depends on the audience. In my speaking and writing I've ~always had to adapt my message to the forum, and I think the best communicators are the ones who know this intimately and craft a message that people both need to hear and want to hear.