AI Safety Info (Robert Miles)
Focus: Making YouTube videos about AI safety, starring Rob Miles
Leader: Rob Miles
Funding Needed: Low
Confidence Level: High
I think these are pretty great videos in general, and given what it costs to produce them we should absolutely be buying their production. If there is a catch, it is that I am very much not the target audience, so you should not rely too much on my judgment of what is and isn’t effective video communication on this front, and you should confirm you like the cost per view.
These are two separate-ish projects: Rob Miles makes videos, and Rob Miles is also the project owner of AISafety.info, mostly in an advisory role. Rob Miles personally is not urgently in need of funding afaik, but will need to reapply soon. AISafety.info is in need of funding, and recently had a funding crunch which caused several staff members to have to drop off payroll. AISafety.info writers have helped Rob with scriptwriting some, but it's not their main focus. Donate link for AI Safety Info.
Long Term Future Fund
One question is, are the marginal grants a lot less effective than the average grant?
Given their current relationship to EA funds, you likely should consider LTFF if and only if you both want to focus on AI existential risk via regrants and also want to empower and strengthen the existing EA formal structures and general ways of being.
That’s not my preference, but it could be yours.
As I understood it, cG defunded LTFF, and LTFF has very little money and is fairly Habryka-influenced, so this seems to be missing the mark?
CEEALAR / EA Hotel
I loved the simple core concept of a ‘catered hotel’ where select people can go to be supported in whatever efforts seem worthwhile. They are now broadening their approach, scaling up and focusing on logistical and community supports, incubation and a general infrastructure play on top of their hotel. This feels less unique to me now and more of a typical (EA UK) community play, so you should evaluate it on that basis.
Having gotten a read on the ground, I can say the previous value proposition is still very much going strong, and there are no plans to remove that. As I understand it, the basic mid-term plan is to have a mix of residencies and live-in mentor-ish people who are doing their own things, but add in some drives to pitch seasons for people with overlapping interests so that there's greater opportunity for cross-pollination. There are a bunch of other things that could come together as extras, but the team here is keen to keep the things the community knows and loves.
More significantly, the read on the ground I get is extremely positive, much more than in previous years, now that it's got active full-time management by someone with relevant experience and drive. Multiple people have said things at least as positive as "this place is life-changingly amazing, unblocked me from years-long dips, helped me get way more productive, etc.", and it's pretty clear that the arc of things is spinning up towards people producing much more and better output than in previous years.
Attila's drive to clear the backlog of work needed to get the EA Hotel organised and upgraded as a living space, plus increased intentionality around selection and getting higher deal-flow so everyone here is agentic and competent, is giving this place an increasing amount of momentum. I honestly think the EA Hotel is one of the best-EV places to support in the AI safety space right now. My guess is that within 3 months we will see several counterfactual outputs which would individually justify the relatively small budget of ~$350k/year to support 20-30 people with a low-hassle and awesome environment.
(CoI: I am visiting and have friends here, but I am confident I would make the same claims if this were not true.)
Relatedly: here's my broken ambitious outer alignment plan: Universal Alignment Test. It's not actually written up quite right to be a good exercise for the reader yet, but I mostly removed the spoilers.
If people want spoilers, I can give them, but I do not have the bandwidth to grade your assignments, and on the real test no one will be capable of doing so. Gl :)
In my three calls with cG following my post, which was fairly critical of them (and almost all the other grantmakers), I've updated to something like:
cG is institutionally capable of funding the kinds of things that the people with strong technical models of the hard parts of alignment think might be helpful. They mostly don't, because most of the cG grantmakers don't have those technical models (though some have a fair amount of the picture, including Jake, who is doing this hiring round).
My guess as to why they don't is partly normal organizational inertia, but plausibly mostly that the kinds of conversations which would be needed to change it don't happen very easily. Most of the people talking to them are trying to get money for specific things, so the conversation is not very clean for general-purpose information transfer, since one party has an extremely strong interest in the object-level outcome. Also, most of the people who have the kinds of technical models I think are needed to make good calls are not very good at passing the ITT of prosaic empirical work, so the cG grantmakers probably feel frustrated and won't rate the incoming models highly enough.
My guess is that getting a single cG grantmaker who deeply gets it, has grounded confidence and a kind of truth-seeking that holds up even when people around them disagree, and can engage flexibly and with good humor to convey the models that a bunch of the most experienced people around here hold, would not just do something like double the amount of really well directed dollars, but might also shift other things in cG for the better.
I've sent them the list of my top ~10 picks and reached out to them. Many don't want to drop out of research or other roles entirely, but would be interested in a regranting program, which seems like the best of both worlds.
I'd consider a job which leaves you slack to do other things a reasonable example of a financial safety net. Or even the ability to reliably get one if you needed it. Probably worth specifying in a footnote along with other types of safety net?
Suggest writing an exercise for the reader using this: first write up the core idea, why it seemed hopeful, and the formalism, then say "this is dangerously broken, please find the flaw without reading the spoilers."
More broken ideas should do this; practice red-teaming ambitious theory work is rare and important.
This is the scariest example of nominative determinism I have ever seen.
This seems like both a good process (using your existing knowledge to find good opportunities rather than running normal applications seems in line with my guess at how high-EV grants happen) and a set of grantees I am generally glad to see funded.
[set 200 years after a positive singularity at a Storyteller's convention]
If We Win Then...
My friends, my friends, good news I say
The anniversary’s today
A challenge faced, a future won
When almost came our world undone
We thought for years, with hopeful hearts
Past every one of the false starts
We found a way to make aligned
With us, the seed of wondrous mind
They say at first our child-god grew
It learned and spread and sought anew
To build itself both vast and true
For so much work there was to do
Once it had learned enough to act
With the desired care and tact
It sent a call to all the people
On this fair Earth, both poor and regal
To let them know that it was here
And nevermore need they to fear
Not every wish was it to grant
For higher values might supplant
But it would help in many ways:
Technologies it built and raised
The smallest bots it could design
Made more and more in ways benign
And as they multiplied untold
It planned ahead, a move so bold
One planet and 6 hours of sun
Eternity it was to run
Countless probes to void disperse
Seed far reaches of universe
With thriving life, and beauty's play
Through endless night to endless day
Now back on Earth the plan continues
Of course, we shared with it our values
So it could learn from everyone
What to create, what we want done
We chose, at first, to end the worst
Diseases, War, Starvation, Thirst
And climate change and fusion bomb
And once these things it did transform
We thought upon what we hold dear
And settled our most ancient fear
No more would any lives be stolen
Nor minds themselves forever broken
Now back to those far speeding probes
What should we make be their payloads?
Well, we are still considering
What to send them; that is our thing.
The sacred task of many aeons
What kinds of joy will fill the heavens?
And now we are at story's end
So come, be us, and let's ascend