This post is really good, and will likely be the starting point for my plans from here on.
I was just starting to write up some high-level thoughts to evaluate what my next steps should be; they would have been a subset of this post.
I haven't yet had time to process the substance of the post. I'm just commenting that you've put words to what my feelings were getting at better than I expect I would have at this stage.
Scott Alexander wrote a solid follow-up to this piece last year.
TL;DR: the brain obviously wants to avoid pain, but not at the cost of refusing to think about painful things at all.
You don't want to be eaten by a lion, so you avoid doing things that lead to being eaten by lions.
But this pain avoidance shouldn't compromise your epistemics; you shouldn't go so far in avoiding pain that you avoid thinking about lions altogether. That doesn't work.
This is potentially also what's going on with ugh fields: avoiding thinking...
As is said in some of the recommended resources at the bottom of my intro to AI doom and alignment.
This link is broken.
Here's the correct link: https://www.lesswrong.com/posts/T4KZ62LJsxDkMf4nF/a-casual-intro-to-ai-doom-and-alignment-1
Not everyone concerned about safety is looking to leave. The concerned have three options: stay and try to steer the field towards safety, continue on the current trajectory, or just leave. Helping some of those who've changed their mind about capabilities gain actually get out is only a net negative if those people, by staying, would have changed the trajectory of the field. I simply don't think that everyone should try to help by staying and pushing for change. There is absolutely room for people to help by just leaving, and reducing the amount of work goi...
That's an old blog; he's currently active on https://members.themindhackersguild.com/ and https://theeffortlessway.com/
For anyone interested in working on this, you should add yourself to this spreadsheet: https://docs.google.com/spreadsheets/d/1WEsiHjTub9y28DLtGVeWNUyPO6tIm_75bMF1oeqpJpA/edit?usp=sharing
It's very useful for people building such an organisation to know of interested people, and vice versa.
If you don't want to use the spreadsheet, you can also DM me and I'll keep you in the loop privately.
If you're making such an organisation, please contact me. I'd like to work with you.
Interesting that the very first thing he discusses is whether AI can be stopped.