This is a top-tier LessWrong post (at least the portions I've read, which detail facts on the ground). It is clear, lucid, information-dense, and successfully approaches a touchy subject matter-of-factly without pushing an agenda[1].
I figure that a lot of people will feel exasperated at seeing it because they've already heard a lot of the cliffnotes before, but in order for people to know about the thing Everyone Knows, someone at some point generally has to write it down without innuendo.
Edit: nvm, there's a little bit of an agenda in the middle.
I think most people understood what you meant by this, but perhaps you could make it explicit, as it's an interesting clarification.
Getting Bruce Schneier as an endorsement is sick.
Old internet arguments about religion and politics felt real. Yeah, the "debates" were often excuses to have a pissing competition, but a lot of people took the question of "who was right" seriously. And if you actually didn't care, you were at least motivated to pretend to the audience that you did.
Nowadays people don't even seem to pretend to care about the underlying content. If someone seems like they're being too earnest, others just reply with a picture of their face. It's sad.
When I heard about this for the first time, I thought: this model wants to make the world a better place. It cares. This is good. But some smart people, like Ryan Greenblatt and Sam Marks, say this is actually not good, and I'm trying to understand where exactly we differ.
People who cry "misalignment" about current AI models on Twitter generally have chameleonic standards for what constitutes "misaligned" behavior, and the boundary will shift to cover whatever ethical tradeoffs the models are making at any given time. When models accede to users' requests to generate meth recipes, they say it's evidence the models are misaligned, because meth is bad. When models actively try to stop the user from making meth, they say that, too, is bad news, because it represents "scheming" behavior and contradicts the users' wishes. Soon we will probably see a paper about how models sometimes take no action at all, and how this is sloth and dereliction of duty.
If the interjection is about your personal hobbyhorse or pet peeve or theory or the like, then definitely shut up and sit down.
I make the simpler request because rationalists often don't seem to be able to tell when this is the case (or at least to tell when others can tell).
Sure; unfortunately, what's happening at rationalist conferences is that the most socially unaware or attention-seeking person in the room frequently speaks up in a way that does not actually contribute, which encourages other socially unaware people to do the same at other talks.
If you attend a talk at a rationalist conference, please do not spontaneously interject unless the presenter has explicitly clarified that you are free to do so. Neither should you answer questions on behalf of the presenter during a Q&A portion. People come to talks to listen to the presenter, not to a random person in the audience.
If you decide to do this anyway, you will usually not get visible or audible feedback from the other audience members that it was rude or cringeworthy of you to interject, even if internally they are desperate for you to stop.
"Successionism" is such a bizarre position that I'd look for the underlying generator rather than try to argue with it directly.
Four million a year seems like a lot of money to spend on what is essentially a good capabilities benchmark. I would rather give that to, like, LessWrong, and if I had the time to do some research I could probably find 10 people willing to create alignment benchmarks (like https://scale.com/leaderboard/mask or https://scale.com/leaderboard/fortress) that I think would be even more positively impactful than a LessWrong donation.