MichaelDickens

What can ordinary people do to reduce AI risk? People who don't have expertise in AI research / decision theory / policy / etc.

Some ideas:

  • Donate to orgs that are working to reduce AI risk (which ones, though?)
  • Write letters to policy-makers expressing your concerns
  • Be public about your concerns. Normalize caring about x-risk

I was just thinking about this, and it seems to have implications for AI consciousness, so I'd like to hear your thoughts:

If LLM output is the LLM roleplaying an AI assistant, that suggests that anything it says about its own consciousness is not evidence about its consciousness. Because any statement the LLM produces isn't actually a statement about its own consciousness, it's a statement about the AI assistant that it's roleplaying as.

Counterpoint: The LLM is, in a way, roleplaying as itself, so statements about its consciousness might be self-describing.

I was wondering why this comment had a disproportionately large disagree-score but then I saw it was only one vote. Still, that leaves me wondering why that one person disagreed so strongly.

Obviously you're allowed to vote without commenting; it's just nice to understand how I might be making a mistake when I'm making a significant decision. Perhaps there's a perspective I'm missing.

Update: Having slept on it, I just donated $10,000.

I'm considering donating $10,000 to CAIP. I am not going to decide anything until tomorrow because I want to sleep on it, but I'll write up my thoughts now.

I wrote about CAIP in 2024 and it was one of my top donation candidates. I haven't re-assessed the AI safety landscape since then, but my current best guess is that CAIP is the single best use of marginal funds.

My basic thinking:

Positives

  • I believe international treaties + US regulations are the best strategies for keeping AI safe.
    • Even if they're not the best thing to do, they're good and we should be trying to make them happen.
  • I agree with OP that having good US regulations increases the chance of getting international treaties.
  • A critical step in getting legislation passed is writing and advocating for model legislation.
  • To my knowledge, the only orgs that have written legislation are CAIS and CAIP. It's important that these two orgs continue to exist.
  • There are a handful of other orgs that do political outreach on AI x-risk, and I like most of them too, but none of them are as funding-constrained as CAIP.

Potential concerns

My list of concerns is wordier than my list of positives. Lest that give readers the wrong idea, I'll note that I think the positives are much stronger than the negatives.

  • Why aren't more institutional donors funding CAIP?
    • My best guess is they're (overly) averse to supporting political action, especially because CAIP is open about its concern for x-risk, and I think many donors are timid about pushing the Overton window. That's just a guess; I don't claim to be able to read donors' minds.
    • There could be some externally-illegible reason why CAIP is actually ineffective or counterproductive. CAIP looks quite competent as far as I can tell, but I know little about politics.
  • Why didn't CAIP write this post until they had 30 days of runway left?
    • My guess is that they tried to fundraise from institutional donors first and only wrote this post after that failed.
  • If I donate, CAIP might shut down anyway, and then my donation didn't achieve anything.
    • This is less of a concern if I donate through Manifund because the donation only goes through if the fundraiser reaches its minimum goal.
    • The fact that CAIP can't downscale means it might have continual difficulty raising funding.
    • I think donations still have high expected value: there's a small chance my donation buys CAIP just enough time to build the fundraising momentum it needs, in which case, in some sense, I could claim 100% credit for everything CAIP does after that.
    • My current thinking is that I should donate regardless, but I would feel better about it if there were some mechanism (like what Manifund has) where I only pay if CAIP gets enough donation commitments to sustain itself.

Some personal considerations

  • I wasn't planning on donating for another 4–6 months, but this is a special situation so I may adjust my plans.
  • I may donate more than $10,000, but I'll need to review my financial situation first.
    • I can donate $10,000 immediately out of my bank account. I can donate more than that from my DAF, but the recipient must be a 501(c)(3). (So I could donate via Manifund.)

As an aside, I've noticed you routinely go out of your way to help charities get funding when they're in difficult situations, which I think is great for the ecosystem and I'm glad you do it.

How do Manifund donations interact with your 501(c)(4) status? Are donations through Manifund just as good as unrestricted donations, or are you more constrained on how you can use that money?

As I understand it, PauseAI Global aka PauseAI supports protests in most regions, whereas US-based protests are run by PauseAI US which is a separate group of people.

PauseAI US is a separate entity from PauseAI so I believe it should also be listed.
