LessWrong team member / moderator. I've been a LessWrong organizer since 2011, with roughly equal focus on the cultural, practical and intellectual aspects of the community. My first project was creating the Secular Solstice and helping groups across the world run their own version of it. More recently I've been interested in improving my own epistemic standards and helping others to do so as well.
(as long as you are careful and opinionated about only selling environments you think are good for alignment, not just good for capabilities)
I'm interested in your take on what the differences are here.
Which post are you referring to re: psychology and alignment?
(This sounds like a good blogpost title-concept btw, maybe for a slightly different post, i.e. "Decisionmakers need to understand the illegible problems of AI")
I'm mostly going off intuitions. One bit of data you might look over is the titles of the Best of LessWrong section, which is what people turned out to remember and find important.
I think there is something virtuous about the sort of title you make, but, also a different kind of virtue in writing to argue for specific points or concepts you want in people's heads. (In this case, the post does get "Illegible problems" into people's heads, it's just that I think people mostly already have heard of those, or think they have)
(I think an important TODO is for someone to find a compelling argument that people who are skeptical about "work on illegible stuff" would find persuasive)
The version of this I would say is "Heroic responsibility is not a thing that's handed to you. It's a thing you decide on."
Every problem in the world exists. You could choose to take heroic responsibility for any of them, or opt for an EA-style "systematically go down the list of things that seem like they need doing and do them in order of priority." But, you don't have to! (If you choose not to take heroic responsibility for things, well, they might not get done, but that doesn't imply anything else, like 'you failed in a responsibility')
I'm guessing they are something like "I disagree that this is the right question to be asking."
I do think this is pretty interesting – I agree a lot of this is imaginable if you think about it, and I'm excited by the exercise of people trying to one-shot solutions to complex problems.
I am curious whether you think you could make some kind of equivalent prediction, about a random facet of the evolved world that is not the Bear Fat thing in particular, that you don't already know about?
Your description makes this feel plausible, but, it's a lot easier when you get to look at the evidence in hindsight.
It's done by the Lightcone team using various flavors of AI art tool.
Seems reasonable, but, I think it is still just false that "math" is a word that means "the concept I was getting at in this post." (Certainly it's related, but, if you're smushing the above together with math, I think you're missing an important distinction)
(fyi I think it probably makes somewhat more sense to consolidate this in Solstice Season Megameetups, which is pinned and more comprehensive, but, seems fine meanwhile if people like this better for whatever reason)
I do highly suggest people make explicit LessWrong Events for their solstice, because some new UI is about to go up that highlights them more.