I was just thinking about this, and it seems to imply something about AI consciousness, so I want to hear your thoughts:
If LLM output is the LLM roleplaying an AI assistant, that suggests that anything it says about its own consciousness is not evidence about its consciousness: any statement the LLM produces isn't actually a statement about its own consciousness; it's a statement about the AI assistant it's roleplaying as.
Counterpoint: The LLM is, in a way, roleplaying as itself, so statements about its consciousness might be self-describing.
Thank you! Good to hear your perspective.
I was wondering why this comment had a disproportionately large disagree-score, but then I saw it was only one vote. Still, that leaves me wondering why that one person disagreed so strongly.
Obviously you're allowed to vote without commenting; it's just nice to understand how I might be making a mistake when I'm making a significant decision. Perhaps there's a perspective I'm missing.
Update: Having now slept on it, I just donated $10,000.
I'm considering donating $10,000 to CAIP. I am not going to decide anything until tomorrow because I want to sleep on it, but I'll write up my thoughts now.
I wrote about CAIP in 2024, and it was one of my top donation candidates. I haven't reassessed the AI safety landscape since then, but my current best guess is that CAIP is the single best use of marginal funds.
My basic thinking:
My list of concerns is wordier than my list of positives. Lest that give readers the wrong idea, I'd like to note that I think the positives are much stronger than the negatives.
As an aside, I've noticed you routinely go out of your way to help charities get funding when they're in difficult situations, which I think is great for the ecosystem, and I'm glad you do it.
How do Manifund donations interact with your 501(c)(4) status? Are donations through Manifund just as good as unrestricted donations, or are you more constrained on how you can use that money?
As I understand it, PauseAI Global (aka PauseAI) supports protests in most regions, whereas US-based protests are run by PauseAI US, which is a separate group of people.
What can ordinary people do to reduce AI risk? That is, people who don't have expertise in AI research, decision theory, policy, etc.
Some ideas: