
Comments

Rebecca116

Being homeless sucks, it’s pretty legitimate to want to avoid that

Rebecca58

I’ve found that use of the term catastrophe/catastrophic in discussions of SB 1047 makes it harder for me to think about the issue. The scale of the harms captured by SB 1047 has a much, much lower floor than what EAs/AIS people usually term catastrophic risk, like $0.5bn+ vs $100bn+. My view on the necessity of pre-harm enforcement, to take the lens of the Anthropic letter, is very different in each case. Similarly, while the Anthropic letter talks about the bill as focused on catastrophic risk, it also talks about “skeptics of catastrophic risk” - surely this is about e.g. not buying that AI will be used to start a major pandemic, rather than whether e.g. there’ll be an increase in the number of hospital systems subject to ransomware attacks because of AI.

Rebecca30

Perhaps when you share the post with friends you could quote some of the bits focused on progressive concerns?

Rebecca20

a dramatic hardware shift like that is likely going to mean a significant portion of progress up until that shift in topics like interpretability and alignment may be going out the window.

Why is this the case?

Rebecca10

The weights could be stolen as soon as the model is trained though

Rebecca119

unless the nondisparagement provision was mutual

This could be true for most cases though

Rebecca10

That seems like a valuable argument. It might be worth updating the wording under premise 2 to clarify this? To me it reads as saying that the configuration, rather than the aim, of OpenAI was the major red flag.

Rebecca92

My impression is that, post-board drama, they’ve de-emphasised the non-profit messaging. Also, in a more recent interview Sam said basically ‘well I guess it turns out the board can’t fire me’, and that in the long term there should be democratic governance of the company. So I don’t think it’s true that #8-10 are (still) being pushed simultaneously with the others.

I also haven’t seen anything that struck me as communicating #3 or #11, though I agree it would be in OpenAI’s interest to say those things. Can you say more about where you are seeing that?

Rebecca10

So the argument is that Open Phil should only give large sums of money to (democratic) governments? That seems too overpowered for the OpenAI case.

Rebecca20

In that case OP’s argument would be saying that donors shouldn’t give large sums of money to any sort of group of people, which is a much bolder claim.
