We still need more funding to run another edition. Our fundraiser has raised $6k so far, and will end on February 1st if it doesn't reach the $15k minimum. We need proactive donors.
If we don't get funded this time, there is a good chance we will move on to other work in AI Safety and take on new commitments. That would make it much harder to reassemble the team to run future AISCs, even if the funding situation improves.
Take a look at the track record section and judge for yourself whether it's worth it:
- ≥ $1.4 million granted to projects started at AI Safety Camp
- ≥ 43 jobs in AI Safety taken by alumni
- ≥ 10 organisations started by alumni
Edit to add: Linda just wrote a new post about AISC's theory of change.
You can donate through our Manifund page, where you can also read more about our plans. If you prefer to donate anonymously, Manifund supports that too.
Suggested budget for the next AISC
If you're a large donor (>$15k), we're open to letting you choose what to fund.
Could you be specific here?
You are sharing a negative impression ("gone off the deep end") without saying what it is based on. This leaves me and others unable to tell whether you are, for example, reacting with a quick broad-strokes impression, pointing to specific instances of dialogue that I handled poorly and could improve on, or revealing a fundamental disagreement between us.
For example, is it because I spoke up on Twitter against generative AI models that harm communities, and that seems strategically bad to you? Do you not like the intensity of my messaging? Or do you intuitively disagree with my arguments about AGI being insufficiently controllable?
As it stands, this is dissatisfying. On this forum, I'd hope[1] there is a willingness to discuss differences in views first, before moving to broadcasting subjective judgements[2] about someone.
Even though that would be my hope, it is no longer my expectation. There is an unhealthy dynamic on this forum: on 3+ occasions I have noticed people moving to sideline someone with unpopular ideas, without much care.
To give a clear example, someone else listed vaguely dismissive claims about research I support. Their comment lacked factual grounding but still got upvotes. When I replied to point out things they were missing, my reply got downvoted into the negative.
I guess this is a normal social response on most forums. It was naive of me to hope that LessWrong would be different.
Broadcasting such judgements particularly needs to be done with care when the judgement comes from someone seen as having authority (because others will take it at face value), and when it guards notions held by default in the community (because that reinforces an ideological filter bubble).