If it’s worth saying, but not worth its own post, here's a place to put it.
If you are new to LessWrong, here's the place to introduce yourself. Personal stories, anecdotes, or just general comments on how you found us and what you hope to get from the site and community are invited. This is also the place to discuss feature requests and other ideas you have for the site, if you don't want to write a full top-level post.
If you want to explore the community more, I recommend reading the Library, checking recent Curated posts, seeing if there are any meetups in your area, and checking out the Getting Started section of the LessWrong FAQ. If you want to orient to the content on the site, you can also check out the new Concepts section.
The Open Thread tag is here. The Open Thread sequence is here.
This is a crosspost of my comment on the post Brave Little Humans. The open thread seems better suited for visibility and for general discussion related to the metacrisis.
AI doom seems to fit the category of "races to the bottom with unintended consequences" (and it isn't the only existential risk in that category). As such, its desperate urgency is downstream of the metacrisis (or the meaning crisis, as John Vervaeke calls it). Resolving or mitigating the metacrisis would give much-needed breathing room for studying AI alignment, while exacerbating the metacrisis would seem to increase AI risk further.
I personally happened to fall into studying the metacrisis rather than AI, and my estimate is that the metacrisis is more tractable and has aspects that seem relevant to understanding cognitive agency and intelligence in general. The two problems are linked closely enough that I believe both merit attention and may benefit from cross-pollination.