Eric Neyman
I work at the Alignment Research Center (ARC). I write a blog on stuff I'm interested in (such as math, philosophy, puzzles, statistics, and elections): https://ericneyman.wordpress.com/

Sequences

Pseudorandomness Contest

Posts

Eric Neyman's Shortform · 6 karma · 2y · 105 comments

Comments
Eric Neyman's Shortform
Eric Neyman · 2d

Oh yup, thanks, this does a good job of illustrating my point. I hadn't seen this graphic!

Eric Neyman's Shortform
Eric Neyman · 2d

This would require a longer post, but roughly speaking, I'd want the people making the most important decisions about how advanced AI is used once it's built to be smart, sane, and selfless. (Huh, that was some convenient alliteration.)

  • Smart: you need to be able to make really important judgment calls quickly. There will be a bunch of actors lobbying for all sorts of things, and you need to be smart enough to figure out what's most important.
  • Sane: smart is not enough. For example, I wouldn't trust Elon Musk with these decisions, because I think that he'd make rash decisions even though he's smart, and even if he had humanity's best interests at heart.
  • Selfless: even a smart and sane actor could curtail the future if they were selfish and opted to e.g. become world dictator.

And so I'm pretty keen on interventions that make it more likely that smart, sane, and selfless people are in a position to make the most important decisions. This includes things like:

  • Doing research to figure out the best way to govern advanced AI once it's developed, and then disseminating those ideas.
  • Helping to positively shape internal governance at the big AI companies (I don't have concrete suggestions in this bucket, but like, whatever led to Anthropic having a Long Term Benefit Trust, and whatever could have led to OpenAI's non-profit board having actual power to fire the CEO).
  • Helping to staff governments with competent people.
  • Helping elect smart, sane, and selfless people to elected positions in governments (see 1, 2).
Eric Neyman's Shortform
Eric Neyman · 2d

People are underrating making the future go well conditioned on no AI takeover.

This deserves a full post, but for now a quick take: in my opinion, P(no AI takeover) = 75%, P(future goes extremely well | no AI takeover) = 20%, and most of the value of the future is in worlds where it goes extremely well (and comparatively little value comes from locking in a world that's good-but-not-great).

Under this view, an intervention is good insofar as it increases P(no AI takeover) * P(things go really well | no AI takeover). Suppose that a given intervention can change P(no AI takeover) and/or P(future goes extremely well | no AI takeover). Then, to first order, the overall effect of the intervention is proportional to ΔP(no AI takeover) * P(things go really well | no AI takeover) + P(no AI takeover) * ΔP(things go really well | no AI takeover).

Plugging in my numbers, this gives us 0.2 * ΔP(no AI takeover) + 0.75 * ΔP(things go really well | no AI takeover).
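As a quick illustration of that arithmetic, here's a minimal sketch using the probabilities quoted above (the function name and the example deltas are purely illustrative, not a claim about any particular intervention):

```python
# Minimal sketch: first-order effect of an intervention on
# P(no AI takeover AND future goes extremely well).
# The two probabilities are the ones quoted above; everything else is illustrative.

P_NO_TAKEOVER = 0.75      # P(no AI takeover)
P_WELL_GIVEN_NO = 0.20    # P(future goes extremely well | no AI takeover)

def intervention_effect(d_no_takeover: float, d_well_given_no: float) -> float:
    """First-order change in P(no takeover) * P(goes extremely well | no takeover)."""
    return d_no_takeover * P_WELL_GIVEN_NO + P_NO_TAKEOVER * d_well_given_no

# A 1-percentage-point reduction in takeover risk...
print(intervention_effect(0.01, 0.0))   # 0.002
# ...vs. a 1-point boost to "goes extremely well", conditional on no takeover.
print(intervention_effect(0.0, 0.01))   # 0.0075
```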

And yet, I think that very little AI safety work focuses on affecting P(things go really well | no AI takeover). Probably Forethought is doing the best work in this space.

(And I don't think it's a tractability issue: I think affecting P(things go really well | no AI takeover) is pretty tractable!)

(Of course, if you think P(AI takeover) is 90%, that would probably be a crux.)

Eric Neyman's Shortform
Eric Neyman · 12d

If you donate through the link on this post, he will know! The /sw_ai at the end is ours -- that's what lets him know.

(The post is now edited to say this, but I should have said it earlier, sorry!)

Consider donating to AI safety champion Scott Wiener
Eric Neyman · 14d

Just so people are aware, I added the following note to the cost-effectiveness analysis. I intend to return to it later:

[Edit: the current cost-effectiveness analysis fails to account for the opportunity cost of Scott Wiener remaining in the State Senate for another two years -- 2027-2028 -- until he needs to leave due to term limits. I think this is an important consideration. My current all-things-considered belief is that this consideration is almost canceled out by the other neglected effect of strengthening ties between AI alignment advocates and Wiener in worlds where he loses and remains in the State Senate for those two years. However, this analysis is subject to change.]

Consider donating to AI safety champion Scott Wiener
Eric Neyman · 14d

Thank you!

Eric Neyman's Shortform
Eric Neyman · 14d

California state senator Scott Wiener, author of AI safety bills SB 1047 and SB 53, just announced that he is running for Congress! I'm very excited about this, and I wrote a blog post about why.

It’s an uncanny, weird coincidence that the two biggest legislative champions for AI safety in the entire country announced their bids for Congress just two days apart. But here we are.*

In my opinion, Scott Wiener has done really amazing work on AI safety. SB 1047 is my absolute favorite AI safety bill, and SB 53 is the best AI safety bill that has passed anywhere in the country. He's been a dedicated AI safety champion who has spent a huge amount of political capital in his efforts to make us safer from advanced AI.

On Monday, I made the case for donating to Alex Bores -- author of the New York RAISE Act -- calling it a "once in every couple of years" opportunity, but flagging that I was also really excited about Scott Wiener.

I plan to have a more detailed analysis posted soon, but my bottom line is that donating to Wiener today is about 75% as good as donating to Bores was on Monday, and that this is also an excellent opportunity that will come up very rarely. (The main reason it looks less good than donating to Bores is that Wiener is running for Nancy Pelosi's seat, and Pelosi hasn't decided whether she'll retire. If not for that, the two donation opportunities would look almost exactly equally good, by my estimates.)

(I think that donating now looks better than waiting for Pelosi to decide whether to retire; if you feel skeptical of this claim, I'll have more soon.)

I have donated $7,000 (the legal maximum) and encourage others to as well. If you're interested in donating, here's a link.

Caveats:

  • If you haven't already donated to Bores, please read about the career implications of political donations before deciding to donate.
  • If you are currently working on federal policy, or think that you might be in the near future, you should consider whether it makes sense to wait to donate to Wiener until Pelosi announces retirement, because backing a challenger to a powerful incumbent may hurt your career.

*So, just to be clear, I think it's unlikely (20%?) that there will be a political donation opportunity at least this good in the next few months.

Consider donating to Alex Bores, author of the RAISE Act
Eric Neyman · 15d

In the past, I've had:

  • One instance of the campaign emailing me to set up a bank transfer. This... seems to have happened 9 months after the candidate lost the primary, actually? Which is honestly absurdly long; I don't know if it's typical.
  • One time, I think the campaign just sent a check to the address I used when I donated? But I don't remember for sure. My guess is that they would have tried to reach me if I didn't cash the check, but I'm not sure. I vaguely recall that the check was sent within a few months of the candidate losing the primary, but I'm not confident.

My suggestion: set a reminder for, like, September 2026 (I'm guessing that the primary will be in June 2026). Reach out to the campaign if you haven't gotten anything by then.

Eric Neyman's Shortform
Eric Neyman · 16d

I think this is just because Ballotpedia hasn't been updated -- he only announced today. See e.g. this NYT article.

Eric Neyman's Shortform
Eric Neyman · 16d

I think that people concerned with AI safety should consider giving to Alex Bores, who's running for Congress.

Alex Bores is the author of the RAISE Act, a piece of AI safety legislation in New York that Zvi profiled positively a few months ago. Today, Bores announced that he's running for Congress.

In my opinion, Bores is one of the best lawmakers anywhere in the country on the issue of AI safety. I wrote a post making the case for donating to his campaign.

If you feel persuaded by the post, here's a link to donate! (But if you think you might want to work in government, then read the section on career capital considerations before donating.)

Note that I expect donations in the first 24 hours to be ~20% better than donations after that, because donations in the first 24 hours will help generate positive press for the campaign. But I don't mean to rush anyone: if you don't feel equipped to assess the donation opportunity on your own terms, you should take your time!

Consider donating to AI safety champion Scott Wiener · 132 karma · 14d · 8 comments
Consider donating to Alex Bores, author of the RAISE Act · 257 karma · 16d · 18 comments
Balancing exploration and resistance to memetic threats after AGI · 26 karma · 3mo · 5 comments
Will Jesus Christ return in an election year? · 407 karma · 6mo · 59 comments
A computational no-coincidence principle (Ω) · 149 karma · 9mo · 39 comments
Which things were you surprised to learn are not metaphors? (Q) · 139 karma · 1y · 91 comments
Seven lessons I didn't learn from election day · 99 karma · 1y · 33 comments
Research update: Towards a Law of Iterated Expectations for Heuristic Estimators (Ω) · 87 karma · 1y · 2 comments
Implications of China's recession on AGI development? (Q) · 41 karma · 1y · 4 comments
My thesis (Algorithmic Bayesian Epistemology) explained in more depth · 82 karma · 1y · 4 comments