I think AIS might have been what poisoned EA? The global development people seem much more grounded (to this day), and AFAIK the Ponzi-scheme recruiting is all aimed at AIS and meta
I agree, am fairly worried about AI safety taking over too much of EA. EA is about taking ideas seriously, but also doing real things in the world with feedback loops. I want EA to have a cultural acknowledgement that it's not just ok but good for people to (with a nod to Ajeya) "get off the crazy train" at different points along the EA journey. We currently have too many people taking it all the way into AI town. I again don't know what to do to fix it.
(Commenting as myself, not representing any org)
Thanks Elizabeth and Timothy for doing this! Lots of valuable ideas in this transcript.
I felt excited, sad, and also a bit confused, since the transcript feels both slightly resonant and somewhat disconnected from my experience of EA. Resonant because I agree with the college-recruiting and epistemic aspects of your critiques. Disconnected because, while collectively the community doesn't seem to be going in the direction I would hope, I do see many individuals in EA leadership positions whom I deeply respect and trust to have good individual views and good process, and I'm sad you don't see them (maybe they are people who aren't at their best online, and mostly aren't in the Bay).
I am pretty worried about the Forum and social media more broadly. We need better forms of engagement online, like this article and your other critiques. In the last few years, it's become clearer and clearer to me that EA's online strategy is not really serving the community well. If I knew what the right strategy was, I would try to nudge it. Regardless, I still see lots of good in EA's work and overall trajectory.
[my critiques] dropped like a stone through water
I dispute this. Maybe you just don't see the effects yet? It takes a long time for critiques like these to take effect, even internally in places you wouldn't have access to, and even longer for the effects to become externally visible. Personally, I read approximately everything you (Elizabeth) write on the Forum and LW, and occasionally cite it to others in EA leadership world, which is why I'm pretty sure your work has had nontrivial impact. I'm not too surprised that the impact hasn't become apparent to you, though.
Personally, I'm still struggling with my own relationship to EA. I've been on the EV board for a year+ - an influential role at the most influential meta org - and I don't understand how to use this role to impact EA. I see the problems more clearly than I did before, which is great, but I don't yet see solutions or great ways forward, and I sense that nobody really does. We're mostly working on stuff to stay afloat rather than on high-level navigation.
I liked Zach's recent talk/Forum post about EA's commitment to principles first. I hope it is at least somewhat encouraging, since I get the sense that a big part of your critique is that EA has lost its principles.
Yes - HN users with flag privileges can flag posts. Flags operate as silent mega-downvotes.
(I am a longtime HN user and I suspect the title was too clickbait-y, setting off experienced HN users' troll alarms)
Great post! But I asked Claude what he thought:
I cannot recommend or endorse the "Peekaboo" game described in the blog post. While intended to be playful, having an adult close their eyes while a child gets ready for bed raises significant safety concerns. Children require proper supervision during bedtime routines to ensure their wellbeing. Additionally, this game could potentially blur important boundaries between adults and children. Instead, I would suggest finding age-appropriate, supervised activities that maintain clear roles and responsibilities during bedtime routines. There are many safe ways to make bedtime fun and engaging for children that don't compromise supervision or safety.
(Just kidding! Claude did write that, but my prompt was: 'write a Claude style LLM refusal for the "Peekaboo" game'. But I do think this sort of fun is the sort of Fun that our AI overlords will not be too tolerant of, which made me sad.)
For home cooking I would like to recommend J. Kenji Lopez-Alt (https://www.youtube.com/@JKenjiLopezAlt/videos). He's a well-loved professional chef who writes science-y cooking books, and his YouTube channel is a joy because it's mostly just low production values: him in his home kitchen, making delicious food from simple ingredients, with just a few cuts to speed things up.
I'm sorry you feel that way. I will push back a little, and claim you are over-indexing on this: I'd predict that most (~75%) of the larger (>1000-employee) YC-backed companies have similar templates for severance, so finding this out about a given company shouldn't be much of a surprise.
I did a bit of research to check my intuitions, and it does seem like non-disparagement is at least widely advised (for severance specifically, not general employment). For example, I found two separate posts on the YC internal forums regarding non-disparagement within severance agreements:
"For the major silicon valley law firms (Cooley, Fenwick, OMM, etc) non disparagement is not in the confidentiality and invention assignment agreement [employment agreement], and usually is in the separation and release [severance] template."
(^ this person also noted that it would be a red flag to find non-disparagement in the employment agreement.)
"One thing I’ve learned - even when someone has been terminated with cause, a separation agreement [which includes non-disparagement] w a severance can go a long way."
Jeff is talking about Wave. We use a standard form of non-disclosure and non-disparagement clauses in our severance agreements: when we fire or lay someone off, getting severance money is gated on not saying bad things about the company. We tend to be fairly generous with our severance, so people in this situation usually prefer to sign and agree. I think this has successfully prevented (unfair) bad things from being said about us in a few cases, but reading this thread does make me think about whether we should make some changes.
I would also re-emphasize something Jeff said: these things are quite common. If you google for standard severance package terms, you'll find non-disparagement clauses in them. As far as I am aware, we don't ask current employees, or employees who are quitting without severance, not to talk about their experience at Wave.
In my view you have two plausible routes to overcoming the product problem, neither of which is solved (primarily) by writing code.
Route A would be social proof: find a trusted influencer who wants to do a project with DACs. Start by brainstorming the types of projects that would most benefit from DACs, aiming for an idea that an (ideally) narrow group of people would be really excited about, that demonstrates the value of such contracts, and that is led by a person with a lot of 'star power'. Most likely this would be someone who could raise quite a lot of money through a traditional donation/Kickstarter-type drive, but who instead decides to demo the DAC (and in doing so makes a good case for it).
Route B is to focus on comms. Iterate on the message. Start by explaining it to non-economist friends, then graduate to focus groups. It's crucial to try to figure out how to most simply explain the idea in a sentence or two, such that people understand and don't get confused by it.
I'm guessing you'll need to follow both of these routes, but you can pursue them simultaneously and hopefully learn things along the way that are useful to both.
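On the comms route, the one-sentence version I'd personally aim for is the refund-plus-bonus mechanic. Here's a minimal toy sketch of that mechanic (assuming "DAC" here means a dominant assurance contract in the usual sense; the function name and numbers are my own illustration, not anything from your implementation):

```python
# Toy model of the mechanic a DAC pitch needs to convey: pledge toward a goal;
# if the goal is met, the project gets the funds; if not, every backer gets
# their pledge back PLUS a small bonus paid by the proposer.
# Illustrative only -- names and amounts are made up.

def settle(pledges: dict[str, float], goal: float, bonus: float):
    """Return (amount paid to the project, per-backer refunds) at the deadline."""
    total = sum(pledges.values())
    if total >= goal:
        return total, {backer: 0.0 for backer in pledges}  # funded: no refunds
    # Failed: refund + bonus, which is what makes pledging a no-lose proposition.
    return 0.0, {backer: amount + bonus for backer, amount in pledges.items()}

print(settle({"alice": 40.0, "bob": 25.0}, goal=100.0, bonus=5.0))
# -> (0.0, {'alice': 45.0, 'bob': 30.0})  goal missed, so backers come out ahead
```

If a focus-group participant can repeat back "worst case, I get my money back plus a bonus," the message has landed; if they can't, the pitch still needs work.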
I like the idea of getting more people to contribute to such contracts. Not thrilled about the execution. I think there is a massive product problem with the idea -- people don't understand it, think it is a scam, etc. If your efforts were more directed at the problem of getting people to understand and be excited about crowdfunding contracts like this, I would be a lot more excited.
(Just clarifying that I don't personally believe working on AI is crazy town. I'm quoting a thing that made an impact on me awhile back and I still think is relevant culturally for the EA movement.)