Comments

Linch

I'm interested in what people think are the strongest arguments against this view. Here are a few counterarguments that I'm aware of:

1. Empirically, the AI-focused scaling labs seem to care quite a lot about safety and make credible commitments to safety. If anything, they seem to be "ahead of the curve" compared to larger tech companies or governments.

2. Government/intergovernmental agencies, and to a lesser degree larger companies, are bureaucratic, sclerotic, and generally less competent.

3. The AGI safety issues that EAs worry about the most are abstract and speculative, so having a "normal" safety culture isn't as helpful as buying into the more abstract arguments, which you might expect to be easier for newer companies to do.

4. Scaling labs share "my" values. So AI doom aside, all else equal, you might still want scaling labs to "win" over democratically elected governments/populist control.

Linch

(x-posted from the EA Forum)

We should expect the incentives and culture of AI-focused companies to make them uniquely terrible for producing safe AGI.

From a “safety from catastrophic risk” perspective, I suspect an “AI-focused company” (e.g. Anthropic, OpenAI, Mistral) is abstractly pretty close to the worst possible organizational structure for getting us towards AGI. I have two distinct but related reasons:

  1. Incentives
  2. Culture

From an incentives perspective, consider realistic alternative organizational structures to the "AI-focused company" that nonetheless have enough firepower to host multibillion-dollar scientific/engineering projects:

  1. As part of an intergovernmental effort (e.g. CERN’s Large Hadron Collider, the ISS)
  2. As part of a governmental effort of a single country (e.g. Apollo Program, Manhattan Project, China’s Tiangong)
  3. As part of a larger company (e.g. Google DeepMind, Meta AI)

In each of those cases, I claim that there are stronger (though still not ideal) organizational incentives to slow down, pause/stop, or roll back deployment if there is sufficient evidence or reason to believe that further development can result in major catastrophe. In contrast, an AI-focused company has every incentive to go ahead on AI when the cause for pausing is uncertain, and minimal incentive to stop or even take things slowly. 

From a culture perspective, I claim that without knowing any details of the specific companies, you should expect AI-focused companies to be more likely than the plausible alternatives to have the following cultural elements:

  1. Ideological AGI Vision: AI-focused companies may have a large contingent of "true believers" who are ideologically motivated to make AGI at all costs; and
  2. No Pre-existing Safety Culture: AI-focused companies may have minimal or no strong "safety" culture, where people deeply understand, have experience in, and are motivated by a desire to avoid catastrophic outcomes.

The first one should be self-explanatory. The second one is a bit more complicated, but basically I think it’s hard to have a safety-focused culture just by “wanting it” hard enough in the abstract, or by talking a big game. Instead, institutions (relatively) have more of a safe & robust culture if they have previously suffered the (large) costs of not focusing enough on safety.

For example, engineers who aren't software engineers understand fairly deep down that their mistakes can kill people, and that their predecessors' fuck-ups have indeed killed people (think bridges collapsing, airplanes falling, medicines not working, etc.). Software engineers rarely have such experience.

Similarly, governmental institutions have institutional memories of major historical fuckups, in a way that new startups very much don't.

Linch

I can see some arguments in your direction but would tentatively guess the opposite. 

Linch

(not a lawyer) 

My layman's understanding is that managerial employees are unfortunately excluded from that ruling, which I think applies to William_S if I read his comment correctly. (See p. 11, the "Excluded" section of the PDF in your link.)

Linch

"This is more of a comment than a question" as they say

Linch

Yeah that's fair! I agree that they would lose the bet as stated.

Linch

Rebuttal here!

Anyway, if the message someone received from Hanson's writings on medicine was "yay Hanson", and Scott's response was "boo Hanson," then I agree people should wait for Hanson's rebuttal before being like "boo Hanson."

But if the message that people received was "medicine doesn't work" (and it appears that many people did receive that message), then Scott's writings should be a useful update, independent of whether Hanson's-writings-as-intended were actually trying to deliver that message.

Linch

People might appreciate this short (<3 minute) video interview with me about my April 1 startup, Open Asteroid Impact:

[embedded video]

Linch

Alas, I think doing this will be prohibitively expensive/technologically infeasible. We did some BOTECs at the launch party, and even just getting rid of leap seconds was too expensive for us.

That's one of many reasons why I'm trying to raise 7 trillion dollars.

Linch

Thanks for the pro-tip! I'm not much of a geologist, more of an ideas guy[1] myself. 

  1. "I can handle the business end"
