LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
I'd expect your goals to have a pretty noticeable effect on how you should optimize your Anki setup.
I think these mostly don't take the form of "posts" because the work mostly involves actually going out and forming organizations, coordinating, and running stuff. (Maybe see Dark Forest Theories.)
There was a lot more explicit discussion of this sort of thing 10 years ago, during the early days of the EA movement. Right now I think it's a combination of a) most of those conversations turned into professional orgs doing stuff, and b) we're also in a period where it's more obvious that there were significant problems with this focus, so there's a bit of a reaction against it.
Also, note: if your plan to recruit more people is working, you should still expect to see mostly posts on the object level. Like, if you didn't successfully get 10x or 100x the people working on the object level, that would indicate your plan to scale had failed.
One thing I aim for is to do most of the practicing several months in advance (ideally right about now, if you're running this year's Solstice), and then a few more times a couple of months later (still a couple of months before Solstice). That way, when Solstice hits, instead of feeling overrehearsed, the material feels more "internalized," and there's a better chance you can actually feel it emotionally in a natural way.
I often like to give talks in an improvised way. I think that works a lot less well for the middle act of a large public Solstice, where you can't really see the audience and interaction usually doesn't make much sense. (Small Solstices are a different game.)
One thing I've noticed is that when I improvise a solstice speech on the fly... usually it ends up with more words, and the words are mostly filler, and it makes it worse.
I mean more like MIRI apologists who didn’t notice that the Death with Dignity post really ought to be a halt, melt, catch fire moment.
I wasn't sure what you meant here. Two guesses are: "the models/appeals in Death with Dignity are basically accurate, but should prompt a deeper 'what went wrong with LW or MIRI's collective past thinking and decisionmaking?'" and "the models/appeals in Death with Dignity are suspicious or wrong, and we should be halt-melting-catching-fire about the fact that Eliezer is saying them."
This was the final straw that broke the camel's back and led me to retitle the post.
Well, I think this is where the distinction between the two styles of rationality (cognitive algorithm development vs. winning) matters a lot. If you want to solve alignment and want to be efficient about it, it seems obvious that there are better strategies than researching the problem yourself: don't spend 3+ years on a PhD (cognitive rationality), but instead get 10 other people to work on the issue (winning rationality). That already 10xs your efficiency.
[...]
This is especially true when other strategies get you orders of magnitude more leverage on the problem. To pick an extreme example, who do you think has more capacity to solve alignment, Paul Christiano, or Elon Musk? (hint: Elon Musk can hire a lot of AI alignment researchers).
a) what makes you think this isn't already what's happening? (I think it's actually happened a lot)
b) I think we've historically actually overindexed on the kinds of things you talk about here. Much of it has turned out to be very bad for the world IMO, and the good parts of it are still much harder and more complicated than you're implying.
(This comment ended up a little more aggro than I meant it to. I think it's fairly reasonable to come in with the question you're asking here, but I do think the underlying assumption is fairly wrong on two levels.)
There's been a bunch of fieldbuilding work, starting with the MIRI (then Singularity Institute) Summer Fellows program, in many ways the founding of CFAR, and later AIRCS, MATS, and PIBBSS. (CFAR both included a fairly major focus on "winning" and was also, in significant part, an effort to recruit people capable of working on the alignment problem.)
In 2014, this included getting Elon Musk involved, which AFAICT contributed nontrivially to OpenAI getting created, which was IMO very bad for the world. Later, the person who seemed maybe on track to have a lot of real-world power/winning was Sam Bankman-Fried, who turned out to destroy $8 billion, burn a lot of bridges, and be hugely net negative.
It's not enough to say "work on AI and alignment"; you need to successfully convey the subtleties of what that actually means. Today there are programs that scale the reasonably-scalable parts of the AI safety field, but those parts generally aren't the most difficult and bottlenecky parts. And it's still a fairly common outcome for people in those programs to end up joining frontier labs, doing work that is IMO net negative.
The work that needs doing for alignment is just actually very hard; many people working on the harder parts have tried and failed to scale the work.
(Also, note: the whole reason I wrote Rationality is not (exactly) Winning is that this was a very common focus that needed to be argued against. It turns out that when you focus on winning, you get power-seeking and bad epistemics fairly quickly.)
None of this is to say winning isn't important or even in some sense the most important part of rationality, just that overly focusing on it has predictable problems.
See:
Curated. I've periodically attempted Anki and bounced off. This post got me somewhat excited to try again, feeling a bit more in control of it this time.
The "keep it short" advice certainly resonates. The ideas on how to handle "knowledge thickets" felt practical and useful.
Ironically, I do think it might have made more sense to break it into multiple posts, not only because it's quite long and each section felt reasonably self-contained, but also because I think there's a "spaced-repetition"-like principle on LessWrong, where people are more likely to absorb an overall idea if it's repeated across multiple days/weeks.
I do wish this post were more oriented around why you might want to be using Anki. I like the focus on "what's a realistic trigger," but most of the examples felt kind of random, and not like facts I'd actually want to remember. (By contrast, I liked Turntrout's old post on Self Teaching, which presented Anki in a more goal-directed way.)
I found this review from another participant useful. I particularly resonate with the "generative AI slot machine effect."
We like to say that LLMs are tools, but treat them more like a magic bullet. Literally any dev can attest to the satisfaction from finally debugging a thorny issue. LLMs are a big dopamine shortcut button that may one-shot your problem. Do you keep pressing the button that has a 1% chance of fixing everything? It's a lot more enjoyable than the grueling alternative, at least to me.
FYI I reviewed and approved this user's first post because it seemed much more specific/actually-making-claims than most of our other possibly-crank posts. I am interested in whether downvotes are more like "this is crank" or "this is AI capabilities."