Maybe the vote up/down option could be moved to after the body of the post? It does seem like an awkward set of design tradeoffs: you want people to see the current score before reading, but you don't want to split the score from the vote buttons or duplicate it. I'd bet Habryka has thought about this already.
I agree with the main generator of this post (a small number of people produce a wildly disproportionate amount of the intellectual progress on hard problems) and with one of its conclusions (don't water down your message at all; people who need a watered-down message are unlikely to be helpful), but I think there's significant value in trying to communicate the hard problem of alignment broadly anyway because:
Also, a related anecdote: I ran ~8 retreats at my house, hosting around 60 people in 2022/23. I got a decent read on how much of the core stack of alignment concepts at least half of them had, and how often they made hopeful-sounding mistakes that were transparently going to fail because they hadn't picked up the core ideas from Arbital or clearly understood the top ~10 alignment-related concepts. Only two of them cleared this bar.
Also, relatedly, the people you left Bluedot to don't seem to be reliably teaching people the core things they need to learn. They are friendly and receptive each time I get on a call with them and ask them to fix their courses, and they often do fix some of the issues, but some of the core generators look to me to be simply missing from the people picking course materials, so lots of people are getting watered-down versions of alignment. Consider taking a skim through their courses and advising them on learning objectives etc.; you're probably the best-placed person to do this.
Giving money passes through several layers of indirection, each of which reduces effectiveness. It's good as a fallback and a self-signal, but if you can find worthwhile things to do yourself, and motivate yourself to do them, you can accomplish much more with much less money.
Stuart Armstrong does a pretty good job of making non-world-critical puzzles seem appealing in Just another day in utopia. I agree there's real non-confused value lost, but only a pretty small fraction of the value for most people, I think?
(also you did literally go into a form of policy advocacy via the route in this post)
Reasonable point, fixed.
Agree, money is technically abundant now that OP and other donors have flooded the ecosystem, though well-directed money is semi-scarce, and vetting/mentorship seem like the bigger bottlenecks.
AI Safety Info (Robert Miles)
Focus: Making YouTube videos about AI safety, starring Rob Miles
Leader: Rob Miles
Funding Needed: Low
Confidence Level: High
I think these are pretty great videos in general, and given what it costs to produce them we should absolutely be buying their production. If there is a catch, it is that I am very much not the target audience, so you should not rely too much on my judgment of what is and isn’t effective video communication on this front, and you should confirm you like the cost per view.
These are two separate-ish projects: Rob Miles makes videos, and Rob Miles is also the project owner of AISafety.info, mostly in an advisory role. Rob personally is not urgently in need of funding afaik, though he will need to reapply soon. AISafety.info is in need of funding, and recently had a funding crunch which forced several staff members off payroll. AISafety.info writers have helped Rob with some scriptwriting, but it's not their main focus. Donate link for AI Safety Info.
Long Term Future Fund
One question is: are the marginal grants a lot less effective than the average grant?
Given their current relationship to EA funds, you likely should consider LTFF if and only if you both want to focus on AI existential risk via regrants and also want to empower and strengthen the existing EA formal structures and general ways of being.
That’s not my preference, but it could be yours.
As I understood it, cG defunded LTFF; also, LTFF has very little money and is fairly Habryka-influenced, so this seems to miss the mark?
[set 200 years after a positive singularity at a Storyteller's convention]
If We Win Then...
My friends, my friends, good news I say
The anniversary’s today
A challenge faced, a future won
When almost came our world undone
We thought for years, with hopeful hearts
Past every one of the false starts
We found a way to make aligned
With us, the seed of wondrous mind
They say at first our child-god grew
It learned and spread and sought anew
To build itself both vast and true
For so much work there was to do
Once it had learned enough to act
With the desired care and tact
It sent a call to all the people
On this fair Earth, both poor and regal
To let them know that it was here
And nevermore need they to fear
Not every wish was it to grant
For higher values might supplant
But it would help in many ways:
Technologies it built and raised
The smallest bots it could design
Made more and more in ways benign
And as they multiplied untold
It planned ahead, a move so bold
One planet and 6 hours of sun
Eternity it was to run
Countless probes to void disperse
Seed far reaches of universe
With thriving life, and beauty's play
Through endless night to endless day
Now back on Earth the plan continues
Of course, we shared with it our values
So it could learn from everyone
What to create, what we want done
We chose, at first, to end the worst
Diseases, War, Starvation, Thirst
And climate change and fusion bomb
And once these things it did transform
We thought upon what we hold dear
And settled our most ancient fear
No more would any lives be stolen
Nor minds themselves forever broken
Now back to those far speeding probes
What should we make be their payloads?
Well, we are still considering
What to send them; that is our thing.
The sacred task of many aeons
What kinds of joy will fill the heavens?
And now we are at story's end
So come, be us, and let's ascend